Subject: Re: [ADSM-L] TSM Dedup stgpool target
From: Paul Zarnowski <psz1 AT CORNELL DOT EDU>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Mon, 18 Nov 2013 15:54:06 -0500
Bill,

Are your virtual volumes purely on tape on the target server, or are they
fronted by some sort of disk storage pool?  I am trying to understand whether a 
small volume size for the ingest dedup file pool will cause a lot of tape 
mounts on the copy storage pool during a backup storage pool process, or 
whether TSM is smart enough to optimize output tape volume mounts.  If your 
virtual volumes are fronted by some sort of disk, or if you have a plethora of 
tape drives, you might not notice this even if TSM were dumb in this regard.  Do
you use collocation on your copy storage pool?  If not, that could be another
reason why you wouldn't notice it.
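
For what it's worth, collocation on a sequential copy pool is a per-pool
setting, and mount activity can be watched while the BACKUP STGPOOL runs.  A
minimal sketch, with a hypothetical pool name rather than your real one:

   /* COPY_TAPE is a made-up copy pool name */
   update stgpool COPY_TAPE collocate=node
   query stgpool COPY_TAPE format=detailed
   query mount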

One other question, if I may:  Why do you have both a BKP_1A and a BKP_1B storage
pool?  They seem to have the same attributes, and both funnel into BKP_2.
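
A quick way to compare them side by side, if it helps:

   query stgpool BKP_1A format=detailed
   query stgpool BKP_1B format=detailed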

I'm sure you've put a lot of thought into this, but I'm not sure I follow
everything you did, and why.

..Paul



At 10:24 AM 11/18/2013, Colwell, William F. wrote:
>Paul,
>
>I described my copypool setup in a previous reply, last Friday.
>If you lost it somehow, it is on adsm.org.
>
>But quickly, they are on virtual volumes.  I have never seen any issues
>related to the primary pool volume size.
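>
>For anyone unfamiliar with virtual volumes: the copy pool writes to a SERVER
>device class, which stores its volumes as archive objects on a second TSM
>server.  A minimal sketch, with made-up names, password, and address rather
>than the real config:
>
>   /* server name, password, address, and pool names are all hypothetical */
>   define server TGTSRV serverpassword=secret hladdress=tgt.example.edu lladdress=1500
>   define devclass VV_CLASS devtype=server servername=TGTSRV maxcapacity=50G mountlimit=2
>   define stgpool COPY_VV VV_CLASS pooltype=copy maxscratch=1000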
>
>- bill
>
>-----Original Message-----
>From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf 
>Of Paul Zarnowski
>Sent: Monday, November 18, 2013 9:35 AM
>To: ADSM-L AT VM.MARIST DOT EDU
>Subject: Re: TSM Dedup stgpool target
>
>One other question, if you don't mind, Bill:  Do you have copy storage pools?
>If so, are they on tape or file?  If tape, is the small volume size on the
>primary pool an issue?  I.e., does TSM optimize output tape mounts?
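>
>(If it doesn't, the exposure can at least be capped, since each BACKUP STGPOOL
>process holds its own output volume; e.g. the hypothetical
>
>   backup stgpool BKP_POOL COPY_POOL maxprocess=4
>
>caps it at four concurrent output mounts.)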
>
>Thanks.
>..Paul
>
>At 05:48 PM 11/14/2013, Colwell, William F. wrote:
>>Paul,
>>
>>I am using 4 GB volumes on the 15k disks (aka ingest pool).  Since each disk 
>>is ~576 GiB
>>and there are 16 disks assigned to this server, that's a lot of volumes!
>>
>>On the SATA-based pools I am using 50 GiB volumes.
>>
>>All volumes are scratch allocated, not pre-allocated.
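>>
>>For the mechanics: the 4 GB volume size comes from the FILE device class, and
>>scratch allocation is just MAXSCRATCH on the pool.  Roughly, with invented
>>names and paths:
>>
>>   /* device class name, pool name, and directories are hypothetical */
>>   define devclass INGEST_FC devtype=file maxcapacity=4G mountlimit=64 directory=/tsm/fs01,/tsm/fs02
>>   define stgpool INGEST_POOL INGEST_FC maxscratch=4000 deduplicate=yes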
>>
>>I know scratch volumes are supposed to perform less well, but I haven't heard
>>how much less, though I did ask.
>>I couldn't run the way I do and still manage pre-allocation.  There are 2 very
>>big and very busy instances on the processor, and both share all the
>>filesystems.  And each instance has multiple storage hierarchies, so mapping
>>out pre-allocation would be a nightmare.
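>>
>>(Pre-allocation would mean defining and formatting every volume explicitly,
>>along the lines of the hypothetical
>>
>>   define volume INGEST_POOL /tsm/fs01/vol0001.dsm formatsize=4096
>>
>>repeated per volume, per filesystem, per instance; that is the nightmare
>>part.)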
>>
>>thanks,
>>
>>- bill
>>
>>-----Original Message-----
>>From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf 
>>Of Paul Zarnowski
>>Sent: Thursday, November 14, 2013 2:33 PM
>>To: ADSM-L AT VM.MARIST DOT EDU
>>Subject: Re: TSM Dedup stgpool target
>>
>>Hi Bill,
>>
>>Can I ask what size volumes you use for the ingest pool (on 15k disks) and 
>>also on your 4TB SATA pool?  I assume you are pre-allocating volumes and not
>>using scratch?
>>
>>Thanks.
>>..Paul
>>
>>At 02:13 PM 11/14/2013, Colwell, William F. wrote:
>>>Hi Sergio,
>>>
>>>I faced the same questions 3 years ago and settled on the products from 
>>>Nexsan (now owned by Imation) for
>>>massive bulk storage.
>>>
>>>You can get a 4U 60-drive head unit with 4TB SATA disks (the E60 model), and
>>>later attach two 60-drive expansion units to it (the E60X model).
>>>
>>>I have 3 head units now, though not in the configuration above, because they
>>>are older.
>>>
>>>One unit is direct-attached with fibre and the other 2 are SAN-attached.  I
>>>am planning to convert the direct-attached unit to SAN attachment to
>>>facilitate a processor upgrade.
>>>
>>>There are 2 server instances on the processor sharing the filesystems.  The
>>>OS is RHEL 5 Linux.
>>>
>>>All volumes are scratch allocated.
>>>
>>>The backups first land on non-RAID 15k 600 GB disks in an Infortrend device.
>>>The copy pooling is done from there, and also the identify processing.  Then
>>>the data is migrated to the Nexsan-based storage pools.
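>>>
>>>For readers new to TSM dedup: "identify processing" is the server's
>>>duplicate-identification pass, and the move downstream is ordinary
>>>migration.  A rough sketch, with invented pool names:
>>>
>>>   /* INGEST_POOL and SATA_POOL are hypothetical names */
>>>   identify duplicates INGEST_POOL numprocess=4
>>>   update stgpool INGEST_POOL nextstgpool=SATA_POOL highmig=70 lowmig=30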
>>>
>>>There is also a tape library.  Really big files are excluded from dedup via
>>>the stgpool MAXSIZE parameter and land on a separate pool on the Nexsan
>>>storage, which then migrates to tape.
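>>>
>>>The MAXSIZE mechanism, for anyone unfamiliar: at store time, a file larger
>>>than a pool's MAXSIZE skips that pool and falls through to its NEXTSTGPOOL,
>>>so a chain like the following hypothetical one keeps big files out of the
>>>dedup pools entirely:
>>>
>>>   /* all pool names and the 50G cutoff are made up for illustration */
>>>   update stgpool INGEST_POOL maxsize=50G nextstgpool=SATA_POOL
>>>   update stgpool SATA_POOL maxsize=50G nextstgpool=BIGFILE_POOL
>>>   update stgpool BIGFILE_POOL nextstgpool=TAPE_POOL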
>>>
>>>Hope this helps,
>>>
>>>Bill Colwell
>>>Draper Lab
>>>
>>>-----Original Message-----
>>>From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf 
>>>Of Sergio O. Fuentes
>>>Sent: Wednesday, November 13, 2013 10:32 AM
>>>To: ADSM-L AT VM.MARIST DOT EDU
>>>Subject: TSM Dedup stgpool target
>>>
>>>In an earlier thread, I polled this group on whether people recommend going
>>>with an array-based dedup solution or a TSM dedup solution.  Well, the
>>>answers came back mixed, obviously with an 'it depends'-type caveat.
>>>
>>>So, moving on... assuming that I'm using TSM dedup, what sort of target
>>>arrays are people putting behind their TSM servers?  Assume here, also, that
>>>you'll have multiple TSM servers, another backup product (*cough* Veeam), and
>>>potentially have to do backup stgpools on the dedup stgpools.  I ask because
>>>I've been barking up the mid-tier storage array market as our potential
>>>disk-based backup target, simply because of the combination of cost,
>>>performance, and scalability.  I'd prefer something that is dense, i.e. more
>>>capacity in less footprint, and can scale up to 400TB.  It seems like vendors
>>>get disappointed when you ask for a 400TB array with just SATA disk simply
>>>for backup targets.  None of that fancy array intelligence like auto-tiering,
>>>large caches, replication, dedup, etc. is required.
>>>
>>>Is there another storage market I should be looking at, i.e. really dumb
>>>RAID arrays, direct-attached, NAS, etc.?
>>>
>>>Any feedback is appreciated, even the 'it depends'-type.
>>>
>>>Thanks!
>>>Sergio
>>
>>
>>--
>>Paul Zarnowski                            Ph: 607-255-4757
>>Manager of Storage Services               Fx: 607-255-8521
>>IT at Cornell / Infrastructure            Em: psz1 AT cornell DOT edu
>>719 Rhodes Hall, Ithaca, NY 14853-3801
>
>
>--
>Paul Zarnowski                            Ph: 607-255-4757
>Manager of Storage Services               Fx: 607-255-8521
>IT at Cornell / Infrastructure            Em: psz1 AT cornell DOT edu
>719 Rhodes Hall, Ithaca, NY 14853-3801


--
Paul Zarnowski                            Ph: 607-255-4757
Manager of Storage Services               Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: psz1 AT cornell DOT edu
719 Rhodes Hall, Ithaca, NY 14853-3801
