Networker

Re: [Networker] Virtual Tape Library - saveset multiplexing slows migration

2007-01-25 11:08:49
Subject: Re: [Networker] Virtual Tape Library - saveset multiplexing slows migration
From: "Landwehr, Jerome" <jlandweh AT HARRIS DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Thu, 25 Jan 2007 10:58:08 -0500
To eliminate multiplexing, set the group parallelism (if only one group
runs at a time) to the number of drives and the drive target sessions to
one; or else set the server parallelism to the number of drives and the
drive target sessions to one. The downside of the second approach is that
if you have other types of non-VTL backups, the number of streams will
still be limited.
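
In nsradmin that would look something like this - the group and device
names below are placeholders, so substitute your own:

    nsradmin> . type: NSR group; name: YourGroup
    nsradmin> update parallelism: 16
    nsradmin> . type: NSR device; name: rd=yourserver:/dev/vtl0
    nsradmin> update target sessions: 1

For the server-wide variant, update the parallelism attribute on the
type: NSR resource instead of on the group.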

I believe you will find that each individual client will complete faster
and you will still make your backup window.

Only by trying it will you know for sure, though - maybe on a Friday
night???

Jerry

-----Original Message-----
From: Mark Davis [mailto:davism AT uwo DOT ca] 
Sent: Thursday, January 25, 2007 10:37 AM
To: EMC NetWorker discussion
Cc: Landwehr, Jerome; Mark Davis
Subject: Re: [Networker] Virtual Tape Library - saveset multiplexing
slows migration

The reason we multiplex is that we have to. We are backing up over 400
clients per night in a 12-hour window. Also, as I mentioned in a previous
post, we are limited in the number of devices we can use with NetWorker
"Network Edition". We have room for 16 virtual drives, and without
multiplexing we would never complete our nightly schedule.

Also, when you say turn off multiplexing, do you mean setting the number
of Target Sessions on the virtual drive to 1? If so, it is my
understanding that even with Target Sessions set to 1, if NetWorker sees
the need for another device, and none are available in the pool, it will
automatically start multiplexing to the virtual drives. Target Sessions
is not an absolute limit.
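
For what it is worth, you can at least confirm what a drive is currently
set to with nsradmin (the device name here is a placeholder):

    nsradmin> print type: NSR device; name: rd=yourserver:/dev/vtl0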

Thanks,

Mark

Landwehr, Jerome wrote:
> Indeed - I concur with Curtis - this is what we did to maximize cloning
> speed from VTL to PTL
> 
> The only reason multiplexing exists is that on a PTL the tape drives
> individually write much faster than a single backup client can supply
> the data - this limitation is removed on a VTL, so turn off
> multiplexing!
> 
> Jerry 
> 
> -----Original Message-----
> From: EMC NetWorker discussion
> [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On Behalf Of Curtis Preston
> Sent: Wednesday, January 24, 2007 8:03 PM
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: Re: [Networker] Virtual Tape Library - saveset multiplexing
> slows migration
> 
> The question is: why are you multiplexing to your VTL?  Instead of
> sending 40 jobs to 10 virtual tape drives, why not just create 40
> virtual tape drives and turn off multiplexing?  
> 
> ---
> W. Curtis Preston
> Author of O'Reilly's Backup & Recovery and Using SANs and NAS
> VP Data Protection
> GlassHouse Technologies
> 
> 
> -----Original Message-----
> From: networker-bounces AT backupcentral DOT com
> [mailto:networker-bounces AT backupcentral DOT com] On Behalf Of Mark Davis
> Sent: Wednesday, January 24, 2007 1:47 PM
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: [Networker] Virtual Tape Library - saveset multiplexing
> slows migration
> 
> Hello,
> 
> We have a FalconStor VTL in our backup environment. We use this as a
> disk to disk to tape configuration, where the client data is initially
> backed up to virtual tape, then after a few days we migrate/stage the
> data to "real tape" (LTO3) to free up disk space.
> 
> The problem we are having is the multiplexing of savesets on the
> virtual tape. When we want to migrate a saveset from virtual tape to
> real tape using nsrstage, the throughput can be quite slow if the
> saveset was from a slow writing client mixed in with savesets from
> other much faster writing clients on the same virtual tape. It appears
> that the entire virtual tape is read to pick up the pieces of the
> saveset from the slow writing client.
> 
> Has anyone run into this? Any suggestions as to how we can improve the
> performance when migrating our data off the virtual tapes to our LTO3
> drives?
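> 
> For reference, the staging itself is an nsrstage call per saveset,
> something along these lines (the pool name and saveset ID are
> placeholders):
> 
>     nsrstage -b YourTapePool -m -S ssid/cloneid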
> 
> Thanks,
> 
> Mark Davis
> University of Western Ontario
> London, Ontario
> Canada
> 


-- 
Mark Davis
Legato NetWorker Support - I.T.S
University of Western Ontario
519-661-2111 x85504
email: davism AT uwo DOT ca

To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the body of the email. Please write to 
networker-request AT listserv.temple DOT edu if you have any problems with this 
list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER