Subject: Re: Journal Based Backups
From: "Bos, Karel" <Karel.Bos AT ATOSORIGIN DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Thu, 15 Jun 2006 00:25:51 +0200
Hi,

Journal-based backups: one of these days, will they be easy to configure
and stable in use?

One of the big pluses of journaling should be on a server with a lot of
files and a small change ratio. 13,000,000 objects to inspect and only
200,000-600,000 to back up (a change rate under 5%) looks like a
candidate.

The problem: the journal service keeps locking up and stops cleaning up
the journal databases (after 2 days they are around 2-4 GB), and
tsmjbbd.exe will not close properly, eating up as much as 750 MB of RAM.
Also, if the database isn't cleaned out once a week (thereby forcing a
normal incremental backup that runs anywhere between 24 and 36 hours),
all the other TSM backups configured on this cluster (node) that do not
use the journal engine will not run either.
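For now the workaround is scripted by hand. A minimal sketch of the
weekly reset, assuming the journal engine runs as the default "TSM
Journal Service" and that JournalDir in tsmjbbd.ini points at
c:\tsmjournal (the service name and path are assumptions, adjust to
your install):

    @echo off
    rem Stop the wedged journal daemon before touching its databases.
    net stop "TSM Journal Service"

    rem Throw away the stale journal databases; the next incremental
    rem then runs as a full progressive incremental and re-seeds them.
    del /q c:\tsmjournal\*

    net start "TSM Journal Service"

    rem Optionally kick off the (24-36 hour) full incremental now
    rem instead of waiting for the next scheduled window:
    rem dsmc incremental -optfile=c:\tsm\dsm.opt

That at least keeps the non-journaled backups on the cluster node from
being blocked while waiting for a real fix.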

For some reason, a normal (non-clustered) Windows 2000 file server with
the same tsmjbbd.ini parameters (2,200,000 objects, 750,000 to back up)
has been running unattended for the past few months.

So, if anyone out there could come up with a working config for this
cluster:
OS: Win2003
TSM server: 5.3.x
TSM client: 5.3.x

4 virtual nodes, each containing 10,000,000+ objects, spread over 2
cluster controllers, where only one virtual node needs journaling at the
moment. If that's stable, the others will follow, each with its own
journal pipe options.
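For reference, the direction I'm experimenting with for that one virtual
node is something like the tsmjbbd.ini sketch below. The drive letter,
pipe name, log path, and sizes are illustrative only (assume Q: is the
journaled cluster drive), and the cluster-related settings
(DeferFsMonStart, DeferRetryInterval, PreserveDbOnExit) are my reading
of what's needed for drives that can move between nodes, so treat it as
a starting point, not a tested config:

    [JournalSettings]
    ; Each cluster group gets its own journal service and named pipe;
    ; the matching virtual node's dsm.opt must point at the same pipe.
    JournalPipe=\\.\pipe\jnlGroup1
    Errorlog=c:\tsm\jbbGroup1.log

    [JournalExcludeList]
    ; Keep churn such as temp files out of the journal database.
    *.tmp

    [JournaledFileSystemSettings]
    ; Journal only the one virtual node's drive for now.
    JournaledFileSystems=Q:
    ; Cap the journal database instead of letting it grow unbounded
    ; (0 means unlimited, which is what got us to 2-4 GB).
    JournalDbSize=0x20000000
    ; Start the daemon even when the cluster drive is not yet online
    ; on this node, and keep retrying until it is.
    DeferFsMonStart=1
    DeferRetryInterval=2
    ; Keep the journal database valid across daemon restarts.
    PreserveDbOnExit=1

If that one virtual node stays stable, the other three would each get
their own service, pipe name, and ini file.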

Regards,

Karel

PS. Short version: journal-based backups can help reduce backup windows,
but they almost always create extra work.




-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
John Monahan
Sent: Wednesday, June 14, 2006 21:32
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: Journal Based Backups

"ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU> wrote on 06/13/2006
09:42:38 PM:

> On Tue, 13 Jun 2006, TSM_User might have said:
>
> > Many Windows 2003 servers nowadays can scan one to two million
> > files per hour. We don't use journaling until we get over five
> > million files. I've seen that the deeper the directory structure,
> > the longer it takes to scan. Basically, if all one million files
> > are at the root of a drive, it will scan much faster than if one
> > million files were spread over hundreds of subdirectories.
>
> I don't think the time issues relate only to the number of files.
> The amount of traffic between client and server about just which
> files to back up is my concern. That's why I have this other program.
>
> Mike

I'm not sure there is all that much network traffic during the exchange
of metadata; at least, I haven't noticed it. If that were the case, then
remote-site backups over slow connections would take several hours even
if only .00001% of the files changed. I've done some slow-link remote-site
backups before, and the actual exchange of metadata seemed rather small
and quite efficient; I don't recall the filesystem scans taking a whole
lot longer than on their local-LAN counterparts.
Then again, that was a few years ago and my memory could be failing me.

If anyone on the list is doing LAN-free backups with servers that have a
somewhat large number of files, it would be interesting to see your
LAN-free vs. LAN bytes-sent summary at the end of a backup. That would
be a true test of how much data is sent over the network during the
process of deciding whether or not to back up each file.


______________________________
John Monahan
Consultant Infrastructure Solutions
Computech Resources, Inc.
Office: 952-833-0930 ext 109
Cell: 952-221-6938
http://www.computechresources.com

