Subject: Re: [Networker] Parallelism Question
From: "Goslin, Paul" <pgoslin AT CINCOM DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Tue, 5 May 2009 09:28:35 -0400
Good question as to which parallelism setting has the highest priority... I
don't know the answer to that. It would be nice if EMC provided a chart or
something similar showing the hierarchy...

It appears to me that 'something' is limiting the parallelism on your
server. Given the clients and save sets you have configured for this group,
I would expect at least 12 concurrent save sessions...
The first thing I would try is increasing the overall server parallelism
setting to at least 32 and seeing whether that improves things. If it
doesn't, then try increasing the target sessions on the drives. 12 GB in 35
minutes is not too bad, though it is probably a fraction of what your
server is capable of.
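
For what it's worth, 12 GB in ~35 minutes works out to roughly 12288 MB /
2100 s, or about 6 MB/s aggregate, which is well below what even one drive
can stream, so there should be plenty of headroom. If you want to check or
bump those settings from the command line rather than NMC, something along
these lines should work in nsradmin (a sketch from memory against our 7.x
server; <your_nw_server> is a placeholder, and exact attribute names can
vary by version, so verify with 'print' before updating):

    nsradmin -s <your_nw_server>
    . type: NSR
    show parallelism
    print
    update parallelism: 32

    . type: NSR device; name: \\.\Tape0
    show name; target sessions
    print
    update target sessions: 4

The first block inspects and raises the server-wide parallelism cap; the
second does the same for 'target sessions' on one drive (repeat per device,
or drop the 'name:' clause to update them all at once).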

This 35 minutes could probably be cut down a bit if the library did not
have to take the time to load the tapes first. I always set the library to
keep tapes in the drives by setting the 'idle device timeout' to zero, so
the drives are ready to go when a group starts and needs tapes. I've always
felt a tape library should be ready to receive data at any time and begin
writing as soon as possible. Physical tape handling is one of the slowest
things a tape library does, IMHO, so it should have appendable tapes in the
drives, ready to go, whenever possible. The only time it does not is right
after a restore has completed, when an older full tape that was needed for
the restore is usually still in a drive. I usually eject that tape and
mount an appendable tape in its place so the drive is ready for the next
session that needs it...
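
In case it helps, this is roughly how I flip that from the command line
(again a sketch from memory; on our 7.x server 'idle device timeout' shows
up on the device resource once hidden attributes are displayed, but where
it lives can vary by version, so check with 'print' first):

    nsradmin -s <your_nw_server>
    option hidden
    . type: NSR device; name: \\.\Tape0
    show name; idle device timeout
    print
    update idle device timeout: 0

A value of 0 means the drive never unloads on idle, so an appendable tape
stays mounted and the next group can start writing without waiting on the
robot.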

I'm not sure what I would recommend for multiple groups; every site is
unique. We have multiple groups ourselves (we run 14 groups each night for
about 100 clients). We just stagger the start times and let NetWorker
prioritize them and run them as it sees fit, and it seems to work OK for
us. We had severe throughput problems until we disabled anti-virus scanning
on the backup server; that increased our throughput dramatically.
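
Just to illustrate the staggering (the group names and times below are
hypothetical, not our real schedule), it only amounts to giving each NSR
group resource its own 'start time', either in NMC or with nsradmin:

    nsradmin -s <your_nw_server>
    . type: NSR group; name: Daily_Filesystems
    update start time: "21:00"
    . type: NSR group; name: Daily_Databases
    update start time: "22:30"
    . type: NSR group; name: Daily_Exchange
    update start time: "00:30"

We space ours so the big filesystem groups get a head start on the drives
before the smaller ones pile in; anything beyond the server and device
parallelism just queues until a session frees up.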

> -----Original Message-----
> From: EMC NetWorker discussion 
> [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On Behalf Of psoni
> Sent: Tuesday, May 05, 2009 8:37 AM
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: [Networker] Parallelism Question
> 
> Paul,
> 
> I used Parallelism = Number of Devices * Target Sessions 
> from the performance tuning guide to set the server 
> parallelism setting.
> 
> I was just trying to understand how parallelism works and 
> used 12 instead of 16 ( # devices =4 , target sessions =4 )
> 
> It was the first diff backup that ran on Monday after weekly 
> full and took ~ 35 minutes for 12 GB.
> 
> There are no restrictions on the drives for the media pools.
> 
> I have also enabled "recycle to" & "recycle from" in the 
> pools but that particular media pool already has 3 
> (appendable) volumes and so I believe NW didn't try to get 
> any recycled tape from the other media pool during the backup.
> 
> Here is the output from daemon.log
> 
> 05/04/09 03:00:05 nsrd: savegroup info: starting 
> <BACKUP_GROUP> (with 3 client(s))
> 05/04/09 03:04:56 nsrd: Operation 166 started: Load volume `1'.
> 05/04/09 03:04:56 nsrd: media waiting event: Waiting for 1 
> writable volume to backup pool 'DailyTape' disk(s) or tape(s) 
> on NW Server.
> 05/04/09 03:05:05 nsrmmd #30: Start nsrmmd #30, with PID 5976,
> 05/04/09 03:05:48 nsrd: \\.\Tape0 1: Verify label operation 
> in progress
> 05/04/09 03:05:52 nsrd: \\.\Tape0 1: Mount operation in progress
> 05/04/09 03:06:17 nsrd: Operation 167 started: Load volume `2'.
> 05/04/09 03:06:17 nsrd: Operation 168 started: Load volume `3'.
> 05/04/09 03:06:17 nsrd: media event cleared: Waiting for 1 
> writable volume to backup pool 'DailyTape' disk(s) or tape(s) 
> on NW Server
> 05/04/09 03:06:17 nsrd: media waiting event: Waiting for 2 
> writable volumes to backup pool 'DailyTape' disk(s) or 
> tape(s) on NW Server
> 05/04/09 03:06:30 nsrd: [Jukebox `[email protected]', operation # 
> 166]. Finished with status: succeeded
> 05/04/09 03:06:31 nsrmmd #31: Start nsrmmd #31, with PID 824, 
> at HOST NW Server
> 05/04/09 03:06:31 nsrmmd #32: Start nsrmmd #32, with PID 
> 5676, at HOST NW Server
> 05/04/09 03:06:40 nsrd: \\.\Tape3 2: Verify label operation 
> in progress
> 05/04/09 03:06:41 nsrd: client 3 <SAVESET # 1> saving to pool 
> 'DailyTape' (volume 1)
> 05/04/09 03:06:43 nsrd: \\.\Tape3 2: Mount operation in progress
> 05/04/09 03:06:44 nsrd: client 3: <SAVESET # 1> done saving 
> to pool 'DailyTape' (volume 1)
> 05/04/09 03:06:44 nsrd: client 3: <SAVESET # 2> saving to 
> pool 'DailyTape' (volume 1)
> 05/04/09 03:06:44 nsrd: media waiting event: Waiting for 1 
> writable volumes to backup pool 'DailyTape' disk(s) or 
> tape(s) on NW Server
> 05/04/09 03:06:45 nsrd: client 3: <SAVESET # 2> done saving 
> to pool 'DailyTape' (volume 1)
> 05/04/09 03:06:46 nsrd: client 2 :< SAVESET #1> saving to 
> pool 'DailyTape' (volume 1)
> 05/04/09 03:06:46 nsrd: client 2<SAVESET # 1> done saving to 
> pool 'DailyTape' (volume 1)
> 05/04/09 03:06:48 nsrd: client 3 <SAVESET # 3> saving to pool 
> 'DailyTape' (volume 1)
> 05/04/09 03:06:52 nsrd: client 3: <SAVESET # 3> done saving 
> to pool 'DailyTape' (volume 1)
> 05/04/09 03:06:56 nsrd: \\.\Tape2 3: Verify label operation 
> in progress
> 05/04/09 03:06:59 nsrd: \\.\Tape2 3: Mount operation in 
> progress pools supported: DailyTape;
> 05/04/09 03:07:16 nsrd: [Jukebox `[email protected]', operation # 
> 167]. Finished with status: succeeded
> 05/04/09 03:07:36 nsrd: write completion notice: Writing to 
> volume 1 complete
> 05/04/09 03:07:36 nsrd: client 3: <SAVESET # 4> saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:07:39 nsrd: client 2: <SAVESET # 2> saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:07:39 nsrd: client 1 <SAVESET # 1> saving to pool 
> 'DailyTape' (volume 2)
> 05/04/09 03:07:44 nsrd: client 2: <SAVESET # 3> saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:07:44 nsrd: client 3 <SAVESET # 4> done saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:07:48 nsrd: client 2 :< SAVESET # 2> done saving 
> to pool 'DailyTape' (volume 2)
> 05/04/09 03:07:56 nsrd: client 2 :< SAVESET# 3> done saving 
> to pool 'DailyTape' (volume 2)
> 05/04/09 03:08:17 nsrd: client 1 :< SAVESET #1 > done saving 
> to pool 'DailyTape' (volume 2)
> 05/04/09 03:08:24 nsrd: media event cleared: Waiting for 1 
> writable volume to backup pool 'DailyTape' disk(s) or tape(s) 
> on NW Server
> 05/04/09 03:08:34 nsrd: [Jukebox `[email protected]', operation # 
> 168]. Finished with status: succeeded
> 05/04/09 03:08:42 nsrd: client 2: <SAVESET # 4> saving to 
> pool 'DailyTape' (volume 3)
> 05/04/09 03:08:42 nsrd: client 2: <SAVESET # 4> done saving 
> to pool 'DailyTape' (volume 3)
> 05/04/09 03:08:45 nsrd: NW Server: index: client 1 saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:08:49 nsrd: NW Server: index: client 1 done 
> saving to pool 'DailyTape' (volume 2)
> 05/04/09 03:08:57 nsrd: client 2: <SAVESET # 5> saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:09:05 nsrd: client 2 <SAVESET #5> done saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:09:05 nsrd: client 3 <SAVESET # 5>   saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:09:18 nsrd: write completion notice: Writing to 
> volume 3 complete
> 05/04/09 03:10:31 nsrd: client 2 <SAVESET # 6> saving to pool 
> 'DailyTape' (volume 2)
> 05/04/09 03:10:39 nsrd: client 2<SAVESET # 6> done saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:10:42 nsrd: client 2: <SAVESET # 7> saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:11:08 nsrd: client 2: <SAVESET # 8> saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:11:21 nsrd: client 2 :<SAVESET # 7 > done saving 
> to pool 'DailyTape' (volume 2)
> 05/04/09 03:11:30 nsrd: client 2 < SAVESET # 8 > done saving 
> to pool 'DailyTape' (volume 2)
> 05/04/09 03:11:30 nsrd: client 3:< SAVESET # 5 > done saving 
> to pool 'DailyTape' (volume 2)
> 05/04/09 03:11:50 nsrd: client 2: <SAVESET # 9> saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:12:25 nsrd: NW Server: index: client 3 saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:12:28 nsrd: NW Server: index: client 3 done 
> saving to pool 'DailyTape' (volume 2)
> 05/04/09 03:12:44 nsrd: client 2:< SAVESET # 9 > done saving 
> to pool 'DailyTape' (volume 2)
> 05/04/09 03:13:26 nsrd: client 2: <SAVESET # 10> saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:13:55 nsrd: client 2<SAVESET # 11> saving to pool 
> 'DailyTape' (volume 2)
> 05/04/09 03:14:06 nsrd: client 2:<SAVESET # 10> done saving 
> to pool 'DailyTape' (volume 2)
> 05/04/09 03:14:06 nsrd: client 2<SAVESET # 11>done saving to 
> pool 'DailyTape' (volume 2)             
> 05/04/09 03:14:21 nsrd: client 2: <SAVESET # 12> saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:14:27 nsrd: client 2 :<SAVESET # 12> saving to 
> pool 'DailyTape' (volume 2)
> 05/04/09 03:15:05 nsrd: write completion notice: Writing to 
> volume 2 complete
> ...
> 
> Remaining savesets ( # 13 to # 17 ) from client 2 and its 
> index were backed up to volume # 1 
> 
> What is the best option if I have multiple groups (with a few 
> clients) running in such a way that they overlap?
> 
> Which parallelism has the highest priority?
> 
> Thanks
> 

To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the body of the email. Please write to 
networker-request AT listserv.temple DOT edu if you have any problems with this 
list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER