Large value_info.pag file

Subject: Large value_info.pag file
From: "Jewan, P. (Pritesh)" <PriteshJ AT nedcor DOT com>
To: nv-l AT lists.tivoli DOT com
Date: Mon, 3 Apr 2000 10:38:17 +0200
Hello people,

We recently reinstalled our NetView machine with Framework 3.6 and NetView
5.1.2 on AIX 4.2.1. After we rediscovered our network and customised the
maps (which is a massive task for our network), we noticed that the
value_info.pag file was just under 2 GB in size (1,828,765,996 bytes, to
be specific). We managed to back up the database to tape and then ran the
"compress object database" option from the Tivoli desktop. Once the
compress had completed, certain daemons would not start, so we had to
restore from tape. However, every time the tar got to value_info.pag it
complained that there was not enough space in the filesystem to restore
value_info.pag, even though the filesystem we were restoring to had 3 GB
of space. In the end we had to trash the database and start from scratch.
Now that the process has completed again, value_info.pag has shot back up
to more or less 1.8 GB.
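
To illustrate the mismatch we saw during the restore, this is the kind of
check involved (a sketch only; the /usr/OV paths below are the default
locations and assumed, not copied from our box):

    # Free space in the filesystem tar was restoring into
    df -k /usr/OV/databases

    # Apparent size tar has to write back for value_info.pag
    ls -l /usr/OV/databases/openview/ovwdb/current/value_info.pag

    # In our case ls reported just under 2 GB (1,828,765,996 bytes) and
    # the target filesystem had about 3 GB free, yet tar still claimed
    # there was not enough space for this one file.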

We have about 20,000 objects in the database, and 'du -a value_info.pag'
gives 33680. Is "compress object database" the right option to run, and
what else should we have run with it? Why would this option have
corrupted the database? Also, should we try the dbmcompress utility, and
if so, which options should we run it with? Does anyone know why this
file gets so large?
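
For reference, the figures above come from comparing the apparent byte
size with what is actually allocated on disk (again a sketch; the default
ovwdb database path is assumed):

    # Apparent (byte) size of the file, as reported by ls
    ls -l /usr/OV/databases/openview/ovwdb/current/value_info.pag

    # Blocks actually allocated on disk; AIX du reports 512-byte blocks
    # by default, so 33680 blocks is only around 17 MB
    du -a /usr/OV/databases/openview/ovwdb/current/value_info.pag

The gap between those two numbers is part of what puzzles us.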

Any help or suggestions would be greatly appreciated.

Regards
Pritesh Jewan
ESM - Technology & Operations Division 
Nedcor Bank Limited (South Africa)

Tel : +27 - 011 - 320 5417
Cell: +27 - 82 570 5046
Fax : +27 - 011 -  8814743
e-mail : priteshj AT nedcor DOT com
