Subject: Re: [ADSM-L] RMAN direct to NFS
From: "Schneider, Jim" <jschneider AT USSCO DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Wed, 11 Jul 2012 14:27:06 -0400
One Data Domain Fun Fact for you:  When you upgrade the DD OS you will
have to remount all of the NFS shares on the servers that write to it
directly.
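
If you script the remount, it's a plain unmount/remount on each server
once the upgrade finishes.  A sketch for AIX; the hostname, export
path, and mount options are examples, not DD-blessed values:

    umount /ddnfs/tsm1
    mount -o proto=tcp,vers=3,rsize=65536,wsize=65536 \
      ddhost:/data/col1/tsm1 /ddnfs/tsm1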

Jim Schneider

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Richard Rhodes
Sent: Wednesday, July 11, 2012 8:07 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: [ADSM-L] RMAN direct to NFS

>I don't know if we'd have gone with VTLs if we were architecting this
>from scratch, but as we went from tape-based to virtual technology, the
>VTL interfaces made the transition logically simpler, and it appeased
>the one team member who has an irrational hatred of NFS. We're now
>under pressure to adopt a new reference architecture that is NFS based,
>not VTL based. I'm skeptical about whether that will work, but because
>we're changing everything except the fact that we're still a TSM shop,
>if it doesn't go well, everyone will have a chance to blame someone
>else for any problems.

Compared to what you guys are describing, we are small.
We run 10 main TSM servers, 50 TB/night, 3,000 nodes, 2 x 3584 libs with
50 drives each, and now we've added 4 DataDomains.  We replicate between
the DDs.

For the first two, we decided to use the NFS interface.  Our experience
is that we are now ONLY interested in the NFS file-based interface.  For
the TSM instances we've moved onto the DD, it has GREATLY simplified our
TSM instances and processing.

The Good:
- No tape (zoning, paths, stuck tapes, SCSI reservation errors,
      rmt/smc devices, Atape, etc.)
- No copy pool.  (We use DD replication.  This cuts the I/O load in
      half.)
- Quick migration.  (We migrate the disk pool to the DD at 10%.
      Migration runs all night, so the morning migration is minimal.)
- Protect the disk pool with a lower max file size.  (We pass any file
      over 5 GB directly to the DD pool.  See the config sketch after
      this list.)
- Simpler batch processing.  No copy pool!!!  We let reclamation run
      automatically whenever a volume needs it.  We are using 30 GB
      volumes, so many need little reclamation.
- We collocate the DD pool by node.
      I'm working on a script to see the DD compression per node.
- NFS has been very reliable.  Our TSM servers are in LPARs on several
      chassis.  We're using VIO to share a 10 Gb adapter per chassis.
      I'm seeing 150-250 MB/s per TSM instance during migrations.
      (Jumbo frames are a MUST.)
- DR is simpler.  The DB and recplan get backed up to the DD along with
      other "stuff", which all gets replicated to the DR site.
      When the DR server comes up, we have the primary pool.
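
For anyone who wants to see it concretely, the pool settings above boil
down to something like this on the TSM server (run through dsmadmc or an
admin macro).  A sketch only: the ddfile/ddpool/diskpool names, the
directory, and the limits are placeholders, not our exact config.

    define devclass ddfile devtype=file maxcapacity=30G -
      directory=/ddnfs/tsm1 mountlimit=40
    define stgpool ddpool ddfile maxscratch=5000 collocate=node reclaim=60
    update stgpool diskpool nextstgpool=ddpool maxsize=5G highmig=10 lowmig=0

maxsize=5G is what sends big files straight to the DD pool, highmig=10
is the 10% migration trigger, and reclaim=60 lets reclamation run on its
own once a volume's reclaimable space passes 60%.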

The Not-so-good:
- Yes, it's NFS.  AIX can be tied in a knot if the NFS server (the DD
      in this case) has a problem.  Since the DD is a non-redundant
      architecture (not a cluster), I DO expect problems if the DD
      dies.  The one change I've made that DD doesn't recommend is that
      I mount the shares "soft".
- No way to take the DD file pool offline.  You can mark it
      "unavailable", but that only affects client sessions, not
      reclamation or other internal processes.
- When you take the DD down for some reason, you have to kill the
      sessions/processes using it, mark the pool unavailable, then
      umount the share on all servers (sketched after this list).
- As mentioned above, the architecture of the DD is non-redundant.
      That was a kind of comfort with all the tape pieces/parts: an
      individual piece/part could break, but it affected only that one
      part.  With the DD, if it crashes, all users have problems.
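
The take-down dance from the list above, roughly.  The pool name,
session/process numbers, host, and mount point are all invented; the
real numbers come from "query session" and "query process":

    # 1. Cancel sessions/processes using the pool.
    dsmadmc -id=admin -password=secret "cancel process 42"
    dsmadmc -id=admin -password=secret "cancel session 1234"
    # 2. Mark the pool unavailable to clients.
    dsmadmc -id=admin -password=secret "update stgpool ddpool access=unavailable"
    # 3. Unmount the share on every TSM server.
    umount /ddnfs/tsm1
    # 4. When the DD is back, remount ("soft" is my tweak, against DD's
    #    recommendation).
    mount -o soft,intr,proto=tcp,vers=3 ddhost:/data/col1/tsm1 /ddnfs/tsm1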

For us, this has been a major step forward.  It's not often that a
product truly simplifies what we do, but the DD with its NFS interface
is one that stands out.

Rick

From:   Nick Laflamme <dplaflamme AT GMAIL DOT COM>
To:     ADSM-L AT VM.MARIST DOT EDU
Date:   07/10/2012 08:11 PM
Subject:        Re: RMAN direct to NFS
Sent by:        "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>



This is more about VTLs than TSM, but I have a couple of questions,
influenced by my shop's experience with VTLs.

1) When you say "40 VTLs," I presume that's on far fewer frames, that
you're using several virtual libraries on each ProtecTier or whatever
you're using?
2) I see that as 128 tape drives per library. Do you ever use that many,
or is this just a "because we could" situation? (We use 48 per library,
and that may be overkill, but we're on EMC hardware, not IBM, so the
performance curves may be different.)
3) Do I read 1) and 4) to mean that you're sharing VTLs among TSM
servers?
Why, man, why? Can't you give each TSM server its own VTL and be done
with it? Or are you counting storage agents as TSM instances?

I don't know if we'd have gone with VTLs if we were architecting this
from scratch, but as we went from tape-based to virtual technology, the
VTL interfaces made the transition logically simpler, and it appeased
the one team member who has an irrational hatred of NFS. We're now under
pressure to adopt a new reference architecture that is NFS based, not
VTL based.
I'm skeptical about whether that will work, but because we're changing
everything except the fact that we're still a TSM shop, if it doesn't go
well, everyone will have a chance to blame someone else for any
problems.

Now that I think about it, I have no idea how many paths we have defined
to all of our VTLs on all of our DataDomains. It might be 10,000 paths
ultimately, but when you define them a few hundred at a time, or fewer,
it's not so overwhelming!
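
For anyone facing the same wall of definitions: each path is a single
DEFINE PATH command, so generating a macro is how "a few hundred at a
time" stays manageable.  A ksh sketch with invented server, library,
drive, and device names (and it assumes the drives themselves are
already defined):

    i=1
    while [ $i -le 128 ]
    do
      echo "define path tsm01 vdrive$i srctype=server desttype=drive" \
           "library=vtl01 device=/dev/rmt$i"
      i=$((i + 1))
    done > defpaths.mac
    dsmadmc -id=admin -password=secret macro defpaths.mac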

Nick


On Jul 10, 2012, at 12:29 PM, Hart, Charles A wrote:

> The IBM one, the reason I said overhead and complexity
>
> 1) We have 40 VTLs
> 2) 5,120 configured vtape drives
> 3) More than 10,000 TSM tape drive paths
> 4) 100 TSM instances that share all of the above
>
> It would "seem" that if we used a VTL that has NFS, we would still
> have 40 devices but not the 15K objects to manage (tape drives and
> paths).
>
> Regards,
>
> Charles




