Networker

Re: [Networker] Cool Threads SPARC servers

2009-06-16 11:28:36
Subject: Re: [Networker] Cool Threads SPARC servers
From: Yaron Zabary <yaron AT ARISTO.TAU.AC DOT IL>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Tue, 16 Jun 2009 18:23:45 +0300
Teresa Biehler wrote:
Is anyone using the Cool Threads SPARC servers as their NetWorker server
and/or storage nodes?  What has been your experience with performance
(CPU usage, etc.)?

I am using a T1000 with six 1 GHz cores, which was the cheapest of this line (it has the old Niagara-1 processor). It works OK without CPU bottlenecks. With 300 clients, four LTO-3 drives and ~6TB of disk backup, I could see as much as 30% total CPU utilization. Obviously, this doesn't mean that there wasn't a single thread that was starved for CPU. Each core/thread can drive an LTO-3 drive at native speed OK, but I doubt it would be able to keep up with a compressed LTO-3 stream. LTO-4 is also beyond its capacity. That said, I expect you will buy a machine with a newer processor, so a single core should be able to drive LTO-4 drives easily. With the new processors you also get a floating-point unit with each core (but you shouldn't care, as NetWorker is not FP intensive).
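For what it's worth, here is a rough back-of-envelope sketch of that reasoning, comparing the nominal streaming rates of the drives against an assumed per-thread throughput budget for a 1 GHz Niagara-1 core. The per-thread figure is a guess for illustration only, not a measurement; test on your own hardware.

    # Hedged sketch: tape drive ingest rates vs. an assumed per-thread budget.
    # Drive rates are the published nominal figures; the per-thread number
    # below is an assumption, not a measured value.

    LTO3_NATIVE_MBS = 80        # LTO-3 native streaming rate, MB/s
    LTO3_COMPRESSED_MBS = 160   # LTO-3 at 2:1 compression, MB/s
    LTO4_NATIVE_MBS = 120       # LTO-4 native streaming rate, MB/s

    # Assumed sustained rate one Niagara-1 hardware thread can push through
    # the backup data path (copying, checksumming, device I/O). Hypothetical.
    ASSUMED_PER_THREAD_MBS = 90

    for label, rate in [("LTO-3 native", LTO3_NATIVE_MBS),
                        ("LTO-3 2:1 compressed", LTO3_COMPRESSED_MBS),
                        ("LTO-4 native", LTO4_NATIVE_MBS)]:
        verdict = "keeps up" if ASSUMED_PER_THREAD_MBS >= rate else "falls behind"
        print(f"{label}: drive wants {rate} MB/s, "
              f"one thread {verdict} (assumed {ASSUMED_PER_THREAD_MBS} MB/s)")

Under those assumed numbers a single thread streams native LTO-3 fine but falls short of compressed LTO-3 and LTO-4, which is why a newer, faster core per stream matters more than total core count here.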


Thanks.

Teresa



--

-- Yaron.

To sign off this list, send email to listserv AT listserv.temple DOT edu and type 
"signoff networker" in the body of the email. Please write to networker-request 
AT listserv.temple DOT edu if you have any problems with this list. You can access the 
archives at http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
