Subject: Re: [ADSM-L] Nodes per TSM server
From: "Vandeventer, Harold [BS]" <Harold.Vandeventer AT KS DOT GOV>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Thu, 11 Oct 2012 14:32:36 +0000
Excellent advice, Zoltan... we, too, are licensed by storage occupancy.

My consolidation effort is partly driven by the fact that the TSM 5.5 servers need 
to be replaced due to "age" issues, and purchasing a pile of new hardware 
doesn't fit the budget very well.  Thus the push from management to "consolidate."

Thanks again to everyone for the info.

------------------------------------------------
Harold Vandeventer
Systems Programmer
State of Kansas - Office of Information Technology Services
Harold.Vandeventer AT ks DOT gov
(785) 296-0631


-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of 
Zoltan Forray
Sent: Thursday, October 11, 2012 7:48 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: [ADSM-L] Nodes per TSM server

You can also license TSM by storage occupancy, which is what we have done for 
99.9% of our nodes. Our department's "bean counter" determined it came out 
cheaper.

We run 7 TSM servers (Red Hat Linux) for 600 nodes, with a total occupancy (as of 
this email) of *1.1 PB* and *1.4 billion objects*.
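
For reference, per-server totals like those can be pulled with a select against 
the OCCUPANCY table from a dsmadmc administrative session.  This is a rough 
sketch only, with placeholder admin credentials (QUERY AUDITOCCUPANCY after an 
AUDIT LICENSE is the usual source for the per-node license-occupancy figures):

  # placeholder credentials; sums stored MB and object counts across all nodes
  dsmadmc -id=admin -password=xxxxx -dataonly=yes \
    "select sum(logical_mb) as total_mb, sum(num_files) as total_objects from occupancy"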

I define/shift nodes between servers based on factors like:

1.  The machine's load/TSM version (we still have one 5.5 and one 6.1 server;
    V6 servers handle nodes with millions of objects better than V5 - see the
    sample per-node query after this list)
2.  Purpose/work mix (two are considered critical in a DR recovery scenario
    since they back up the financial/HR/BlackBoard systems)
3.  University department/purpose (e.g. research vs. administrative systems)
4.  Age/capacity of the TSM server hardware (e.g. an 8-way, 32 GB RAM Dell T710
    vs. a 4-way, 16 GB Dell 2950)
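
One way to get the per-node numbers behind factor 1 is a similar select grouped 
by node; again just a sketch with placeholder credentials, run from a Linux 
shell:

  # placeholder credentials; lists object count and stored MB per node
  dsmadmc -id=admin -password=xxxxx -dataonly=yes \
    "select node_name, sum(num_files) as objects, sum(logical_mb) as mb from occupancy group by node_name order by node_name"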

On Thu, Oct 11, 2012 at 7:37 AM, Steven Harris <steve AT stevenharris DOT 
info> wrote:

> Harold
>
> Given that TSM is licensed by client and not by server, is there really 
> any need to consolidate?  You won't be running as many boxes, but in 
> the greater scheme of things that won't make a lot of difference.
>
> Sure, when you replace your hardware you can consolidate. Use Butterfly 
> to do the consolidation or roll your own, but until then run what you have.
>
> Regards
>
> Steve
>
> Steven Harris
> TSM Admin
> Canberra Australia
>
>
> On 9/10/2012 7:01 AM, Vandeventer, Harold [BS] wrote:
>
>> There are all kinds of measures involved in setting up a TSM server: 
>> processor, RAM, disk I/O, storage pool design, reclamation, migration, 
>> all the bits and pieces.
>>
>> But I'm curious: about how many nodes do some of you have on your TSM 
>> servers?
>>
>> I'm in a Windows environment, and have been tasked with "consolidating".
>>
>> Also, about how much memory is on those systems?
>>
>> Thanks.
>>
>> ------------------------------------------------
>> Harold Vandeventer
>> Systems Programmer
>> State of Kansas - Office of Information Technology Services 
>> Harold.Vandeventer AT ks DOT gov
>> (785) 296-0631
>>
>>


--
*Zoltan Forray*
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zforray AT vcu DOT edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never 
use email to request that you reply with your password, social security number 
or confidential personal information. For more details visit 
http://infosecurity.vcu.edu/phishing.html
