Re: [Networker] Query in Staging from adv_file
2008-08-18 14:19:32
Really, at this point I think the solution lies in either upgrading your drives
to LTO3/LTO4, which may or may not be possible, or analyzing which adv_file
device each save set is sent to and then staging immediately. Based on the
architecture you described, the drives are by far your biggest bottleneck,
unless the SAN disk presented to your NetWorker server for adv_file is some
cheap ATA or some such thing (you didn't mention any read/write MB/sec stats
for the adv_file disks). But I doubt that, since you are able to write backups
over the LAN at ~80 MB/sec.
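If you want hard numbers for the adv_file disks, a rough sequential write test plus iostat will tell you quickly. A sketch (the target directory here is a temp-dir stand-in; point it at your actual adv_file mount point):

```shell
# Rough sequential write test against the adv_file filesystem.
# TARGET is a stand-in -- substitute your adv_file mount point.
TARGET=$(mktemp -d)
# Write 64 MiB of zeros and let dd report the elapsed time/size.
dd if=/dev/zero of="$TARGET/throughput.test" bs=1048576 count=64
ls -l "$TARGET/throughput.test"
# While a backup or stage is running, watch per-disk MB/sec and
# service times with:
#   iostat -xn 5
rm -r "$TARGET"
```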
You mentioned multiplexing in a previous post, and I think I know where you
are going with that, but as Peter pointed out, this isn't possible with
staging.
Maybe someone else on the list has an idea for you. Also check the list
archives; there are a lot of really great staging threads which may or may not
help you.
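For the "stage immediately" approach, something along these lines would do it. The volume and pool names below are placeholders, and <ssid> is an elided save set ID taken from the mminfo output:

```shell
# List save sets still sitting on a given adv_file volume.
# "adv.001" is a placeholder volume name.
mminfo -q "volume=adv.001" -r "ssid,name,totalsize"

# Stage a specific save set to tape right away, instead of waiting
# for the staging policy to fire. The destination pool name is a
# placeholder; -m means migrate (delete the disk copy after staging).
nsrstage -b "Default Clone" -m -S <ssid>
```

Scripted around mminfo, this lets you pick which adv_file gets drained to which drive instead of letting the policy decide.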
--Shawn
-----Original Message-----
From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On
Behalf Of anandhg
Sent: Monday, August 18, 2008 12:46 PM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: [Networker] Query in Staging from adv_file
The backup I am taking is from a database using NMO; I will have approximately
20 save sets. Below are the answers to your questions.
How many LTO2 drives do you have access to from the Networker server?
The adv_file device is given to a storage node, and this SN has visibility of
5 tape drives.
How are the LTO2 drives connected to the Networker server? SCSI, Fibre, etc.
FC
What is the Network architecture between the db server and the networker
server? 80MB/sec sounds like 2-3 teamed GBit NICs.
From the client I have IPMP configured, and on the storage node I
have Solaris trunking enabled.
What is the adv_file disk architecture? What kind of local disk access metrics
do you see on it? Read/Write MB/sec?
I have around 10 TB of storage space taken from a SAN box, and on
the OS side Solaris ZFS is used. I can see that the load is equally
distributed among all the disks.
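One way to confirm the load really is spread evenly is to watch per-vdev throughput on the pool backing the adv_file devices. The pool name below is a placeholder:

```shell
# Per-vdev read/write bandwidth every 5 seconds during a backup
# or stage; "advpool" is a placeholder pool name.
zpool iostat -v advpool 5
```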
I have created a single 10 TB file system, created a few directories under it,
and made each of them an adv_file device, with parallelism configured at 4 per
adv_file. In our environment every RMAN backup is configured with 4 channels,
so each client will use one adv_file device. Also, during staging each
adv_file will take one physical drive.
I am still looking for a better solution.
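To double-check how those per-device session limits are actually set, the device resources can be queried on the NetWorker server. A sketch, assuming the standard nsradmin query interface (attribute names as in NetWorker 7.x):

```shell
# Print the session limits of every configured device;
# run on the NetWorker server.
nsradmin <<'EOF'
show name; target sessions; max sessions
print type: NSR device
EOF
```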
+----------------------------------------------------------------------
|This was sent by anandhg AT gmail DOT com via Backup Central.
|Forward SPAM to abuse AT backupcentral DOT com.
+----------------------------------------------------------------------
To sign off this list, send email to listserv AT listserv.temple DOT edu and
type "signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems with this
list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER