ADSM-L

Re: JBB Question(s)

2006-12-12 10:16:55
From: "Bos, Karel" <Karel.Bos AT ATOSORIGIN DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Tue, 12 Dec 2006 16:14:35 +0100
Hi,

I partly agree with the OS statement. Partly, because I find it difficult
to explain why an OS can host a single disk with more than 6 million
files, yet the backup application cannot get the backup stable
(journaling), or working at all (normal incremental), without resorting
to time-consuming options like memory-efficient backup.

Splitting a disk over multiple nodes means hard-coding the subdirectories
under the root in the opt files. If a system administrator puts new data
in a different folder, that data will be missed. The workaround is to add
an extra node with an exclude.dir for every directory already managed by
the other nodes.
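For illustration, the catch-all node's opt file might look something like
this (node and directory names are made up for the example; the real list
would name every directory the other nodes already cover):

* dsm.opt for the extra catch-all node: pick up anything the
* other nodes do not handle
domain q:
* directories already managed by the other nodes:
exclude.dir q:\users\a-m
exclude.dir q:\users\n-z

Any new folder a system administrator creates under the root then falls
through to this node instead of being skipped entirely.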

But what if the root itself is the container of all the data? Take the
profile disk of a Windows box, with all profiles (6000+) in the root of
the disk. Do I really want to be forced to configure multiple nodes, plus
one extra, to get the backup of this monster running? Memory-efficient
backup runs for over 36 hours, and the journal database grows past 2 GB
within 24 hours.

Regards,

Karel


-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Mark Stapleton
Sent: Tuesday, 12 December 2006 15:46
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: JBB Question(s)

From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Otto Chvosta
>After that dbviewb reports that the journal state is 'not valid'. So we
>tried a further incremental backup (scheduled) to get a valid state of
>the journal database.
>This incremental was stopped with
>
>ANS1999E Incremental processing of '\\fileserver\q$' stopped.
>ANS1030E The operating system refused a TSM request for memory 
>allocation.
>
>We tried it again and again ... same result :-(((

From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Schaub, Steve
>Add this to the dsm.opt file and run the incremental again:
>*======================================================================*
>* Reduce memory usage by processing a directory at a time (slower)    *
>*======================================================================*
>memoryefficientbackup  yes
>
>Large Windows fileservers with deep directory structures often exhaust
>memory trying to traverse the entire filesystem during the initial scan.
>This option scans the filesystem in chunks.

To add a bit of detail:

All modern Windows versions (except possibly Vista) impose a hard limit
on the total memory that can be dedicated to a single process thread.
(I believe it's 192 MB, but don't quote me on that.) It is a hard limit
that cannot be worked around.

Steve's workaround is one option. The other is to use two nodenames for
the same machine, with two option files, two sets of TSM services, and so
on. One node backs up half the machine (by using include/exclude lines in
the option files), and the other node backs up the other half.
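A minimal sketch of the two-node split (nodenames and directory names are
invented for the example; each node runs its own scheduler service
pointing at its own opt file):

* dsm.opt for node FILESRV_A: first half of the disk
nodename  FILESRV_A
domain    q:
exclude.dir q:\data\n-z

* dsm.opt for node FILESRV_B: the other half
nodename  FILESRV_B
domain    q:
exclude.dir q:\data\a-m

Each node then traverses only its own subset of directories, so neither
scan has to hold the entire filesystem tree in memory at once.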

The real fix? Use a real server OS.

--
Mark Stapleton (mark.s AT evolvingsol DOT com)
Senior TSM consultant

