
what are your largest TSM servers (in terms of storage pool data)?

Discussion in 'Capacity Planning' started by gtevis, Sep 19, 2012.

  1. gtevis

    gtevis New Member

    Joined:
    Aug 25, 2009
    Messages:
    1
    Likes Received:
    0
    Occupation:
    Tivoli Storage Technical Strategist
    Location:
    Tucson, AZ
    This is just an informal survey on how big (in terms of total storage pool data) TSM servers are getting out there. I would appreciate any feedback, including whether you're using TSM deduplication on those big servers. Thanks.
     
  3. evilution

    evilution New Member

    Joined:
    May 24, 2011
    Messages:
    81
    Likes Received:
    2
    Occupation:
    Storage Engineer
    Location:
    Madison, WI
    We are rolling out 6.2 servers. We like to keep our database below 200 GB just because we can. We are backing up about 500 run-of-the-mill Windows servers to each instance. Now.... we have a 300 GB disk pool on tier 1 VMAX storage. Then we have primary and copy pools built on SATA disk as FILE device classes. Our volumes are 25 GB in size, and total storage capacity is set around 85 TB per pool.
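    For anyone curious, a layout like that boils down to a FILE device class plus primary and copy pools on it. Roughly (all names and paths here are made up, and parameters like MOUNTLIMIT depend on your site; 85 TB at 25 GB per volume works out to 3400 scratch volumes):

    ```
    /* Hypothetical device class: 25 GB FILE volumes on the SATA mount */
    define devclass SATA_FILE devtype=file maxcapacity=25G mountlimit=64 directory=/tsm/filepool

    /* 85 TB / 25 GB per volume = up to 3400 scratch volumes per pool */
    define stgpool SATA_PRIMARY SATA_FILE maxscratch=3400
    define stgpool SATA_COPY SATA_FILE pooltype=copy maxscratch=3400
    ```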

    We are testing deduplication on C: drives and system state backups, and we did a little testing on TDP database backups. We still haven't signed off on rolling out dedupe for all server instances because the savings isn't that much better than compression alone. It does seem a little better, but at the cost of database growth, increased complexity, and increased recovery time. Honestly, the only reason we are entertaining dedupe is that TSM licensing costs thousands of dollars per TB. For example, we have a few LTO4 tape servers that handle all of our big backup jobs. Tape is cheap, but the licensing is getting downright silly. As a result we are now looking to pre-compress data sent to tape as well as reducing, or "right-sizing," the retention policies we have in place.
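    For reference, server-side dedupe in 6.x is just a couple of parameters on a FILE pool, and you can eyeball the savings afterward from the detailed pool query (the pool name here is hypothetical, and process counts should be tuned to your hardware):

    ```
    /* Enable dedup on an existing FILE pool and run 2 identify processes */
    update stgpool SATA_PRIMARY deduplicate=yes identifyprocess=2

    /* "Duplicate Data Not Stored" in the detailed output shows the savings */
    query stgpool SATA_PRIMARY format=detailed
    ```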

    I'm not sure dedupe is all it's cracked up to be, especially client-side.... the jobs take longer to run and longer to restore, AND it eats up expensive database storage. We really need to tune retention to the needs of the data. We are not going to treat test/dev like prod. We aren't going to keep 30 versions of file server or low-priority data. We may go as far as not backing up test systems.
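    As a sketch of what that retention tuning looks like, you can dial back versions and retention in the backup copy group for a low-priority domain and then re-activate the policy set (domain and policy set names below are placeholders; the values are just an example, not a recommendation):

    ```
    /* Keep fewer versions and shorter retention for test/dev data */
    update copygroup TESTDEV_DOMAIN STANDARD STANDARD standard type=backup verexists=5 verdeleted=1 retextra=30 retonly=60

    /* Changes only take effect once the policy set is re-activated */
    validate policyset TESTDEV_DOMAIN STANDARD
    activate policyset TESTDEV_DOMAIN STANDARD
    ```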
     
  4. ja954

    ja954 New Member

    Joined:
    Oct 18, 2002
    Messages:
    52
    Likes Received:
    0
    Occupation:
    TSM Administrator
    Location:
    Boston
    We currently have 11 TSM 6.3.3 servers running on AIX 6.1 TL7, with a total of 23 TSM instances. Each instance moves 13-15 TB per night. There are 4,000 nodes in total: mostly Windows, but also Sun, Linux, Mac, and AS/400 (using the ABC Client). We do no dedupe within TSM. All dedupe is done on our Data Domain 890s running as VTLs. The entire setup is currently managing 22 PB of data.
     