
2009-05-05 11:22:22
Subject: [Networker] Parallelism Question
From: psoni <networker-forum AT BACKUPCENTRAL DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Tue, 5 May 2009 11:15:12 -0400
Thierry FAIDHERBE wrote:
> Well, normally the logic is as follows:
> 
> 1° Datazone parallelism = the maximum number of concurrent backup
> sessions, but depending on the backup server edition it is computed
> as
>       32 (BS) + (32 * SN)
> 
> In other words, you can only have 32 concurrent running sessions
> on the backup server itself even if datazone parallelism is set to
> 64 (1 backup server + 1 full storage node).
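> 
> As a quick illustration of that computation (a toy Python sketch; the
> 32-session counts are the figures above, everything else is made up):
> 
>     # Datazone session ceiling: 32 for the backup server (BS)
>     # plus 32 per full storage node (SN).
>     def datazone_ceiling(storage_nodes):
>         return 32 + 32 * storage_nodes
> 
>     print(datazone_ceiling(0))  # BS alone       -> 32
>     print(datazone_ceiling(1))  # BS + 1 full SN -> 64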
> 
> ----------------------------
> 
> 2° Device target sessions is not a hard limit, it is just load
> balancing. As soon as a device's target sessions value is reached,
> another device matching the pool criteria is searched for and media
> is requested. Use the max sessions setting on the device to limit
> the maximum number of sessions being written.
> 
> If sessions can still be created (group worklist not completed and
> BS/SN parallelism limit not reached), then, as long as the device
> max sessions value is not reached (512 sessions by default), sessions
> will be distributed across the writing devices, overriding the
> device target sessions value.
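> 
> A rough sketch of that distribution logic (my own pseudo-code, not
> actual NetWorker internals):
> 
>     # Prefer a device still below its target sessions; once every
>     # matching device is at target, overflow onto any device still
>     # below max sessions (512 by default).
>     def pick_device(devices, target=4, max_sessions=512):
>         for d in devices:
>             if d["sessions"] < target:
>                 return d        # normal load balancing
>         for d in devices:
>             if d["sessions"] < max_sessions:
>                 return d        # target overridden, up to max
>         return None             # everything saturated
> 
>     drives = [{"name": "Tape%d" % i, "sessions": 0} for i in range(3)]
>     print(pick_device(drives)["name"])  # -> Tape0
> 
> With 3 drives and target sessions of 4, this fills each drive to 4
> sessions before stacking any more onto an already-busy one.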
> 
> -----------------------------
> 
> 3° Savegroup parallelism. Turned off by default; once the group is
> started, all of the possible backup sessions are established until
> the datazone parallelism and/or the BS/SN parallelism limit is
> reached. Backup sessions from clients are started depending on
> client priority.
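> 
> In the same toy style (my reading of the behaviour; in NetWorker a
> lower priority value is served first, and this simplifies to one
> session per client):
> 
>     # Order the group's clients by priority, then establish sessions
>     # until the parallelism limit is hit.
>     def start_order(clients, limit):
>         ordered = sorted(clients, key=lambda c: c["priority"])
>         return ordered[:limit]
> 
>     print(start_order([{"name": "c1", "priority": 500},
>                        {"name": "c2", "priority": 100}], limit=12))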
> 
> -----------------------------
> 
> 4° Client parallelism: at the client level, you can also control how
> many save sessions a client will handle at the same time; 4 by
> default.
> 
> 
> 
> In your case, the logic is matched:
> 
> NetWorker starts the group with a maximum of 12 parallel sessions on
> the backup server, assigns 4 saves to a drive (device target
> sessions), then 4 to a second and 4 to a third. When sessions
> finish, new ones will be assigned to the same devices on which
> sessions just finished.
> 
Well, it didn't send 4 saves to the third volume after using volumes 1
and 2 with four sessions each. Volume 3 has the lowest utilization
among the three appendable volumes available in that pool.
> 
> 
> In your example, if you set all 3 of your clients with client
> parallelism set to 2, only 6 save sessions will be established even
> if datazone parallelism is 12.
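> 
> Or, as a toy computation (client names made up):
> 
>     # Client parallelism caps the total even below the datazone limit.
>     datazone_parallelism = 12
>     client_parallelism = {"c1": 2, "c2": 2, "c3": 2}
>     print(min(datazone_parallelism,
>               sum(client_parallelism.values())))  # -> 6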
> 
> 
> 
> 
> Your only option is to slow down/speed up your groups by
> reducing/increasing savegroup parallelism (a separate setting for
> each group) and customizing client priority within each group to
> control the client order.
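> 
> For overlapping groups the same toy arithmetic applies (a hedged
> sketch, assuming each group is capped by its own savegroup
> parallelism and the total by the datazone parallelism):
> 
>     def concurrent_sessions(datazone, group_parallelisms):
>         return min(datazone, sum(group_parallelisms))
> 
>     print(concurrent_sessions(12, [4, 4, 8]))  # -> 12: server limit wins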
> 
> 
> Hope that helps
> 
> Th
> 
> 
> 
> 
> > -----Original Message-----
> > From: EMC NetWorker discussion 
> > [mailto:NETWORKER < at > LISTSERV.TEMPLE.EDU] On Behalf Of psoni
> > Sent: Tuesday, May 05, 2009 8:37 AM
> > To: NETWORKER < at > LISTSERV.TEMPLE.EDU
> > Subject: [Networker] Parallelism Question
> > 
> > Paul,
> > 
> > I used Parallelism = Number of Devices * Target Sessions
> > from the performance tuning guide to set the server
> > parallelism setting.
> > 
> > I was just trying to understand how parallelism works and
> > used 12 instead of 16 (# devices = 4, target sessions = 4).
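> > 
> > As a quick check of that formula:
> > 
> >     devices, target_sessions = 4, 4
> >     print(devices * target_sessions)  # -> 16 (I set 12 instead)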
> > 
> > It was the first differential backup that ran on Monday after the
> > weekly full, and it took ~35 minutes for 12 GB.
> > 
> > There are no restrictions on the drives for the media pools.
> > 
> > I have also enabled "recycle to" & "recycle from" in the
> > pools, but that particular media pool already has 3
> > appendable volumes, so I believe NW didn't try to get
> > any recycled tape from the other media pool during the backup.
> > 
> > Here is the output from daemon.log
> > 
> > 05/04/09 03:00:05 nsrd: savegroup info: starting 
> > <BACKUP_GROUP> (with 3 client(s))
> > 05/04/09 03:04:56 nsrd: Operation 166 started: Load volume `1'.
> > 05/04/09 03:04:56 nsrd: media waiting event: Waiting for 1 
> > writable volume to backup pool 'DailyTape' disk(s) or tape(s) 
> > on NW Server.
> > 05/04/09 03:05:05 nsrmmd #30: Start nsrmmd #30, with PID 5976,
> > 05/04/09 03:05:48 nsrd: \\.\Tape0 1: Verify label operation 
> > in progress
> > 05/04/09 03:05:52 nsrd: \\.\Tape0 1: Mount operation in progress
> > 05/04/09 03:06:17 nsrd: Operation 167 started: Load volume `2'.
> > 05/04/09 03:06:17 nsrd: Operation 168 started: Load volume `3'.
> > 05/04/09 03:06:17 nsrd: media event cleared: Waiting for 1 
> > writable volume to backup pool 'DailyTape' disk(s) or tape(s) 
> > on NW Server
> > 05/04/09 03:06:17 nsrd: media waiting event: Waiting for 2 
> > writable volumes to backup pool 'DailyTape' disk(s) or 
> > tape(s) on NW Server
> > 05/04/09 03:06:30 nsrd: [Jukebox `ADIC < at > 4.2.1', operation # 
> > 166]. Finished with status: succeeded
> > 05/04/09 03:06:31 nsrmmd #31: Start nsrmmd #31, with PID 824, 
> > at HOST NW Server
> > 05/04/09 03:06:31 nsrmmd #32: Start nsrmmd #32, with PID 
> > 5676, at HOST NW Server
> > 05/04/09 03:06:40 nsrd: \\.\Tape3 2: Verify label operation 
> > in progress
> > 05/04/09 03:06:41 nsrd: client 3 <SAVESET # 1> saving to pool 
> > 'DailyTape' (volume 1)
> > 05/04/09 03:06:43 nsrd: \\.\Tape3 2: Mount operation in progress
> > 05/04/09 03:06:44 nsrd: client 3: <SAVESET # 1> done saving 
> > to pool 'DailyTape' (volume 1)
> > 05/04/09 03:06:44 nsrd: client 3: <SAVESET # 2> saving to 
> > pool 'DailyTape' (volume 1)
> > 05/04/09 03:06:44 nsrd: media waiting event: Waiting for 1 
> > writable volumes to backup pool 'DailyTape' disk(s) or 
> > tape(s) on NW Server
> > 05/04/09 03:06:45 nsrd: client 3: <SAVESET # 2> done saving 
> > to pool 'DailyTape' (volume 1)
> > 05/04/09 03:06:46 nsrd: client 2 :< SAVESET #1> saving to 
> > pool 'DailyTape' (volume 1)
> > 05/04/09 03:06:46 nsrd: client 2<SAVESET # 1> done saving to 
> > pool 'DailyTape' (volume 1)
> > 05/04/09 03:06:48 nsrd: client 3 <SAVESET # 3> saving to pool 
> > 'DailyTape' (volume 1)
> > 05/04/09 03:06:52 nsrd: client 3: <SAVESET # 3> done saving 
> > to pool 'DailyTape' (volume 1)
> > 05/04/09 03:06:56 nsrd: \\.\Tape2 3: Verify label operation 
> > in progress
> > 05/04/09 03:06:59 nsrd: \\.\Tape2 3: Mount operation in 
> > progress pools supported: DailyTape;
> > 05/04/09 03:07:16 nsrd: [Jukebox `ADIC < at > 4.2.1', operation # 
> > 167]. Finished with status: succeeded
> > 05/04/09 03:07:36 nsrd: write completion notice: Writing to 
> > volume 1 complete
> > 05/04/09 03:07:36 nsrd: client 3: <SAVESET # 4> saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:07:39 nsrd: client 2: <SAVESET # 2> saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:07:39 nsrd: client 1 <SAVESET # 1> saving to pool 
> > 'DailyTape' (volume 2)
> > 05/04/09 03:07:44 nsrd: client 2: <SAVESET # 3> saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:07:44 nsrd: client 3 <SAVESET # 4> done saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:07:48 nsrd: client 2 :< SAVESET # 2> done saving 
> > to pool 'DailyTape' (volume 2)
> > 05/04/09 03:07:56 nsrd: client 2 :< SAVESET# 3> done saving 
> > to pool 'DailyTape' (volume 2)
> > 05/04/09 03:08:17 nsrd: client 1 :< SAVESET #1 > done saving 
> > to pool 'DailyTape' (volume 2)
> > 05/04/09 03:08:24 nsrd: media event cleared: Waiting for 1 
> > writable volume to backup pool 'DailyTape' disk(s) or tape(s) 
> > on NW Server
> > 05/04/09 03:08:34 nsrd: [Jukebox `ADIC < at > 4.2.1', operation # 
> > 168]. Finished with status: succeeded
> > 05/04/09 03:08:42 nsrd: client 2: <SAVESET # 4> saving to 
> > pool 'DailyTape' (volume 3)
> > 05/04/09 03:08:42 nsrd: client 2: <SAVESET # 4> done saving 
> > to pool 'DailyTape' (volume 3)
> > 05/04/09 03:08:45 nsrd: NW Server: index: client 1 saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:08:49 nsrd: NW Server: index: client 1 done 
> > saving to pool 'DailyTape' (volume 2)
> > 05/04/09 03:08:57 nsrd: client 2: <SAVESET # 5> saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:09:05 nsrd: client 2 <SAVESET #5> done saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:09:05 nsrd: client 3 <SAVESET # 5>   saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:09:18 nsrd: write completion notice: Writing to 
> > volume 3 complete
> > 05/04/09 03:10:31 nsrd: client 2 <SAVESET # 6> saving to pool 
> > 'DailyTape' (volume 2)
> > 05/04/09 03:10:39 nsrd: client 2<SAVESET # 6> done saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:10:42 nsrd: client 2: <SAVESET # 7> saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:11:08 nsrd: client 2: <SAVESET # 8> saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:11:21 nsrd: client 2 :<SAVESET # 7 > done saving 
> > to pool 'DailyTape' (volume 2)
> > 05/04/09 03:11:30 nsrd: client 2 < SAVESET # 8 > done saving 
> > to pool 'DailyTape' (volume 2)
> > 05/04/09 03:11:30 nsrd: client 3:< SAVESET # 5 > done saving 
> > to pool 'DailyTape' (volume 2)
> > 05/04/09 03:11:50 nsrd: client 2: <SAVESET # 9> saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:12:25 nsrd: NW Server: index: client 3 saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:12:28 nsrd: NW Server: index: client 3 done 
> > saving to pool 'DailyTape' (volume 2)
> > 05/04/09 03:12:44 nsrd: client 2:< SAVESET # 9 > done saving 
> > to pool 'DailyTape' (volume 2)
> > 05/04/09 03:13:26 nsrd: client 2: <SAVESET # 10> saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:13:55 nsrd: client 2<SAVESET # 11> saving to pool 
> > 'DailyTape' (volume 2)
> > 05/04/09 03:14:06 nsrd: client 2:<SAVESET # 10> done saving 
> > to pool 'DailyTape' (volume 2)
> > 05/04/09 03:14:06 nsrd: client 2<SAVESET # 11>done saving to 
> > pool 'DailyTape' (volume 2)             
> > 05/04/09 03:14:21 nsrd: client 2: <SAVESET # 12> saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:14:27 nsrd: client 2 :<SAVESET # 12> saving to 
> > pool 'DailyTape' (volume 2)
> > 05/04/09 03:15:05 nsrd: write completion notice: Writing to 
> > volume 2 complete
> > [...]
> > 
> > The remaining savesets (# 13 to # 17) from client 2 and its
> > index were backed up to volume 1.
> > 
> > What is the best option if I have multiple groups (with a few
> > clients each) running in such a way that they overlap?
> > 
> > Which parallelism has the highest priority?
> > 
> > Thanks
> > 


+----------------------------------------------------------------------
|This was sent by soni.parth AT gmail DOT com via Backup Central.
|Forward SPAM to abuse AT backupcentral DOT com.
+----------------------------------------------------------------------

To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the body of the email. Please write to 
networker-request AT listserv.temple DOT edu if you have any problems with this 
list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER