IMHO,
It's not about the number of nodes; it's about the number of managed objects and
the I/O throughput in TB per day (and the size of the DB, if you are still on V5).
I've had two different customers where ONE node (an imaging app, naturally)
occupied > 50% of the TSM DB. Really.
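If you want to see which nodes dominate your own server before consolidating, one way is to rank nodes by stored object count and space via TSM's administrative SQL interface. This is a rough sketch, not site-specific advice: the admin ID, password, and exact column availability are assumptions you'd adjust for your environment (NODE_NAME, NUM_FILES, and PHYSICAL_MB are standard columns in the OCCUPANCY table).

```shell
# Sketch: rank nodes by managed objects and physical space consumed.
# Replace ADMIN/PASSWORD with real credentials for your TSM server.
dsmadmc -id=ADMIN -password=PASSWORD -dataonly=yes \
  "select node_name, sum(num_files) as objects, sum(physical_mb) as mb \
   from occupancy group by node_name order by 3 desc"
```

The top one or two rows of that output will tell you quickly whether you have an "imaging app" situation of your own.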
I usually don't see more than about 200-300 nodes per server, but those are
older sites that started out on V5 or earlier and so were limited by the size
of the DB. I'd be interested in hearing from folks who have consolidated on V6
and done merges with larger numbers of clients.
Use the IBM recommendations for RAM: 16 GB minimum for any V6 server, 24 GB is
better, and 32 GB if you're using dedup.
And I think a Windows V6 server should run Windows Server 2008 64-bit; I/O
throughput appears to be much better than on Windows Server 2003 or 32-bit.
W
-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Vandeventer, Harold [BS]
Sent: Monday, October 08, 2012 4:01 PM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: [ADSM-L] Nodes per TSM server
There are all kinds of measures involved in setting up a TSM server: processor,
RAM, disk I/O, storage pool design, reclamation, migration, all the bits and pieces.
But, I'm curious about how many nodes some of you have on your TSM servers?
I'm in a Windows environment, and have been tasked with "consolidating".
Also, about how much memory is on those systems.
Thanks.
------------------------------------------------
Harold Vandeventer
Systems Programmer
State of Kansas - Office of Information Technology Services Harold.Vandeventer
AT ks DOT gov
(785) 296-0631