Networker

Subject: Re: [Networker] mminfo splitting data greater than 2 GB
From: Ty Young <Phillip_Young AT I2 DOT COM>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: Fri, 26 Mar 2004 13:15:15 -0600
I think you're trying to get a "totalsize" number that is all-inclusive for
each save set, right?  If so, NetWorker 5.7 clients and newer do this
"non-chunking" thing: they no longer split large save sets into 2 GB pieces.

If you have the ability to upgrade the clients on these machines to
something newer, i.e. 5.7 (or, better yet, 7.x), I think your problem will go
away.
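
In the meantime, if upgrading isn't an option right away, the manual
aggregation you describe can be scripted against the mminfo output itself.
Below is a rough awk sketch (just post-processing of the report, not an
mminfo feature) that strips the <n> chunk prefix from the save set name,
sums the chunk sizes, and keeps the earliest start / latest completion time
per save set. It assumes the eleven whitespace-separated fields your -r spec
produces (save set names without embedded spaces) and that all records fall
within one year, since the MM/DD/YY timestamps are compared as plain strings.
The file name merge_chunks.awk is just a placeholder:

#!/usr/bin/awk -f
# merge_chunks.awk - recombine the ~2 GB chunks mminfo reports per save set.
# Expected fields: client size group size level cdate ctime edate etime pool name
{
    client = $1; size = $2; group = $3; level = $5
    start  = $6 " " $7; finish = $8 " " $9; pool = $10; name = $11

    sub(/^<[0-9]+>/, "", name)              # <1>/backup -> /backup
    key = client SUBSEP group SUBSEP level SUBSEP pool SUBSEP name

    total[key] += size                      # sum the chunk sizes
    if (!(key in first) || start  < first[key]) first[key] = start
    if (!(key in last)  || finish > last[key])  last[key]  = finish
}
END {
    for (key in total) {
        split(key, f, SUBSEP)
        # client, summed size, group, level, real start, real end, pool, name
        printf "%s %s %s %s %s  %s  %s %s\n", f[1], total[key], f[2], f[3], first[key], last[key], f[4], f[5]
    }
}

Pipe your existing command into it, e.g.:

mminfo -s $svr \
  -r "client,totalsize,group,totalsize,level,sscreate(17),sscomp(17),pool,name" \
  -q "savetime>$d2" \
  | grep -v "index:" | grep -v bootstrap | grep -v "undefined" \
  | awk -f merge_chunks.awk | sort

The <n>-prefixed chunks of /backup should then collapse into one row with the
summed size and the 03/22/04 23:38:17 / 03/23/04 00:42:52 start and end times
from your example.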
-ty

Phillip T. ("Ty") Young, DMA
Backup/Recovery Systems Mgr.
Network Services Group
i2 Technologies, Inc.



From: Nitin Gupta <[email protected]>
Sent by: Legato NetWorker discussion <NETWORKER@LISTMAIL.TEMPLE.EDU>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: 03/26/2004 11:55 AM
Subject: [Networker] mminfo splitting data greater than 2 GB
Please respond to: Legato NetWorker discussion <NETWORKER@LISTMAIL.TEMPLE.EDU>;
Nitin Gupta <[email protected]>

Good day Backup Gurus,



I am designing a web-based Legato backup reporting tool that collects
data daily using mminfo and then uploads it to a database.

I am facing a major problem with the data that mminfo returns.

I am using the following command:

mminfo -s $svr \
  -r "client,totalsize,group,totalsize,level,sscreate(17),sscomp(17),pool,name" \
  -q "savetime>$d2" \
  | grep -v "index:" | grep -v bootstrap | grep -v "undefined" | sort -u

The problem is that mminfo splits any save set larger than 2 GB into
2 GB chunks plus a remainder; e.g. a 10 GB save set comes back as five
separate entries of roughly 2 GB each.

Please see the output below:



doberman           4  RDBMS           4  9  03/22/04 01:11:40  03/22/04 01:12:02  3Month  /usr
doberman           4  RDBMS           4  9  03/22/04 01:12:45  03/22/04 01:12:48  3Month  /opt
doberman        6224  RDBMS        6224  9  03/22/04 01:05:18  03/22/04 01:08:09  3Month  /usr2
doberman        7500  RDBMS        7500  9  03/22/04 01:12:51  03/22/04 01:12:54  3Month  /
doberman     6700916  RDBMS     6700916  9  03/22/04 01:08:12  03/22/04 01:11:37  3Month  /usr1
doberman    12685768  RDBMS    12685768  9  03/22/04 23:58:54  03/22/04 23:59:55  3Month  /orasys
doberman    19287356  RDBMS    19287356  9  03/22/04 01:12:05  03/22/04 01:12:43  3Month  /var
doberman    31482332  RDBMS    31482332  9  03/22/04 01:04:49  03/22/04 01:05:15  3Month  /usr3
doberman   650962236  RDBMS   650962236  9  03/22/04 23:38:17  03/22/04 23:58:51  3Month  /oradata
doberman  2048009264  RDBMS  2048009264  9  03/22/04 23:59:58  03/23/04 00:15:13  3Month  /usr4
doberman  2048016956  RDBMS  2048016956  9  03/23/04 00:09:45  03/23/04 00:21:43  3Month  <3>/backup
doberman  2048046996  RDBMS  2048046996  9  03/22/04 23:38:17  03/22/04 23:48:35  3Month  /backup
doberman  2048047372  RDBMS  2048047372  9  03/22/04 23:48:35  03/22/04 23:59:09  3Month  <1>/backup
doberman  2048047372  RDBMS  2048047372  9  03/22/04 23:59:09  03/23/04 00:09:45  3Month  <2>/backup
doberman  2048047372  RDBMS  2048047372  9  03/23/04 00:15:13  03/23/04 00:26:40  3Month  <1>/usr4
doberman  2048047372  RDBMS  2048047372  9  03/23/04 00:21:43  03/23/04 00:32:25  3Month  <4>/backup
doberman  2048047372  RDBMS  2048047372  9  03/23/04 00:32:25  03/23/04 00:42:52  3Month  <5>/backup


You can see above that the output for the "/backup" path is split into
chunks of roughly 2 GB.

My question is: is there any way to instruct mminfo to report the total
size and the actual start and end times for /backup, rather than us
manually adding up the chunk sizes and taking the start time from the
first /backup entry and the end time from the last one? (Here the actual
start time for /backup is 03/22/04 23:38:17 and the end time is
03/23/04 00:42:52.)



I would be very thankful if any of you could give me a solution to this.







Thanks with Best Regards

Nitin Gupta




--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listmail.temple DOT edu or visit the list's Web site at
http://listmail.temple.edu/archives/networker.html where you can
also view and post messages to the list.
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=