
2007-01-25 02:36:22
Subject: Re: [Networker] Virtual Tape Library - saveset multiplexing slows migration
From: Stefan Kapitza <k AT PITZA DOT DE>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Thu, 25 Jan 2007 02:24:20 -0500
Hello,

maybe you could "separate" the slow client.

I would either create a dedicated pool for this client on the VTL, or,
if the tape granularity is too coarse (meaning you have only a few big
tapes), create a group for this client and start its backup when
the throughput to the VTL is low.
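Roughly, that could look like the following. This is only a sketch against NetWorker's nsradmin/mminfo/nsrstage CLIs; the pool name "SlowClients", client "slowhost", volume "VT0001", and destination pool "LTO3 Pool" are all made-up placeholders, and the exact NSR pool attribute list varies by NetWorker release, so check with nsradmin first:

```shell
# Hypothetical sketch: give the slow client its own VTL pool so its
# savesets are not multiplexed with the fast clients' data.
# All resource names below are placeholders.

# Create a dedicated backup pool (fed to nsradmin on stdin here;
# verify the attribute names for your release with
# "print type: NSR pool" inside nsradmin before relying on this):
nsradmin <<'EOF'
create type: NSR pool; name: SlowClients; pool type: Backup; clients: slowhost
EOF

# Later, list that client's savesets on a given virtual tape ...
mminfo -q "client=slowhost,volume=VT0001" -r "ssid,client,name,totalsize"

# ... and stage them to the physical-tape pool one saveset at a time
# (-m migrates, i.e. removes the source copy after a successful clone;
# the ssid comes from the mminfo output above):
nsrstage -b "LTO3 Pool" -m -S 1234567890
```

With the slow client isolated in its own pool, its virtual tapes contain only its own savesets, so staging no longer has to read past other clients' interleaved data.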

Regards,

Stefan Kapitza


On Wed, 24 Jan 2007 16:46:51 -0500, Mark Davis <davism AT UWO DOT CA> wrote:

>Hello,
>
>We have a FalconStor VTL in our backup environment. We use this as a disk to
>disk to tape configuration, where the client data is initially backed up to
>virtual tape, then after a few days we migrate/stage the data to "real tape"
>(LTO3) to free up disk space.
>
>The problem we are having is the multiplexing of savesets on the virtual
>tape. When we want to migrate a saveset from virtual tape to real tape using
>nsrstage, the throughput can be quite slow if the saveset was from a slow
>writing client mixed in with savesets from other much faster writing clients
>on the same virtual tape. It appears that the entire virtual tape is read to
>pick up the pieces of the saveset from the slow writing client.
>
>Has anyone run into this? Any suggestions as to how we can improve the
>performance when migrating our data off the virtual tapes to our LTO3 drives?
>
>Thanks,
>
>Mark Davis
>University of Western Ontario
>London, Ontario
>Canada
>
>To sign off this list, send email to listserv AT listserv.temple DOT edu and
>type "signoff networker" in the body of the email. Please write to
>networker-request AT listserv.temple DOT edu if you have any problems with this
>list. You can access the archives at
>http://listserv.temple.edu/archives/networker.html or
>via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
>=========================================================================

