Just a thought.
Before putting the problem to the HW devices (tapes), SW backup programs
(TSM or whatever), and looking for a technology solution, I would ask
myself exactly *why* I have a 300+ GB file. One file, 300 GB? Is there
anything I can do to bring it to an acceptable size? How am I producing it?
Anything I can change in my methodology?
This is just philosophy maybe, but before asking for more power I'd ask
myself if I'm using what I have efficiently.
That said, tape speeds are coming up. I can write to LTO2 drives at
50-60 MB/s with TSM in controlled lab conditions. You might think of coming
to the wonderful world of Open Systems: get a Unix box and an LTO2 robot
library. I stand by Zlatko's reasoning; nothing to add to that. Whatever you
do will cost you money.
Reducing the size of the file won't cost anything, if it can be done
through reasoning (and I hope no IBM sales reps are reading this...).
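For scale, a quick back-of-the-envelope estimate (my own arithmetic, not from the thread) of how long a 300 GB file would take to stream to tape at the LTO2 rates mentioned above:

```python
# Rough backup-window estimate for one large file (illustrative numbers).
file_gb = 300            # size of the problem file, in GB
rates_mb_s = (50, 60)    # LTO2 write rates observed with TSM, in MB/s

file_mb = file_gb * 1024
for rate in rates_mb_s:
    minutes = file_mb / rate / 60
    print(f"{rate} MB/s -> about {minutes:.0f} minutes")
```

Roughly 85-102 minutes of sustained streaming, so even at full LTO2 speed the daily backup window for that single file is substantial.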
Cordiali saluti
Gianluca Mariani
Tivoli TSM Global Response Team, Roma
Via Sciangai 53, Roma
phones : +39(0)659664598
+393351270554 (mobile)
gianluca_mariani AT it.ibm DOT com
----------------------------------------------------------------------------------------------------
The Hitch Hiker's Guide to the Galaxy says of the Sirius Cybernetics
Corporation product that "it is very easy to be blinded to the essential
uselessness of them by the sense of achievement you get from getting them
to work at all. In other words - and this is the rock solid principle on
which the whole of the Corporation's Galaxy-wide success is founded -
their fundamental design flaws are completely hidden by their
superficial design flaws"...
Salak Juraj
<[email protected] To: ADSM-L AT VM.MARIST
DOT EDU
T> cc:
Sent by: "ADSM: Subject: AW: bACKING UP A
SERVER WITH 500gb
Dist Stor
Manager"
<[email protected]
.EDU>
08/08/2003 10.56
Please respond to
"ADSM: Dist Stor
Manager"
----------------------------
A long-term idea:
Sub-file backup is currently not an option, since it would not support files
of that size.
Your file changes daily.
*If* the portion changed is not big,
*and* your server would be capable of computing a diff
(both CPU and available free disk space would suffice),
you could place an official requirement to IBM to enlarge the sub-file
limitations.
There was a mail exchange about a similar problem earlier on this forum,
and one TSM programmer responded that opening the limits would not be
a very big deal <usual disclaimers apply>.
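To make the diff idea concrete, here is a minimal sketch of block-level change detection (my own illustration, not TSM's actual sub-file algorithm): hash fixed-size blocks of the file and compare against the hashes recorded at the previous backup, so only changed blocks need to travel to the server.

```python
import hashlib

BLOCK_SIZE = 1 << 20  # 1 MiB per block; an arbitrary choice for illustration


def block_hashes(path):
    """Return one SHA-1 digest per fixed-size block of the file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha1(block).hexdigest())
    return hashes


def changed_blocks(old_hashes, new_hashes):
    """Indices of blocks that are new or differ from the previous backup."""
    return [i for i, h in enumerate(new_hashes)
            if i >= len(old_hashes) or old_hashes[i] != h]
```

With a scheme like this, the daily transfer is driven by the size of the changed portion, not by the 300 GB total; the costs are one full read of the file per run plus storage for the hash list.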
----------------------------
Otherwise, see Zlatko's mail: tape single-media size is of little concern,
speed is.
----------------------------
regards
Juraj Salak
-----Original Message-----
From: Tim Brown [mailto:tbrown AT CENHUD DOT COM]
Sent: Thursday, 07 August 2003 20:43
To: ADSM-L AT VM.MARIST DOT EDU
Subject: bACKING UP A SERVER WITH 500gb
I am using TSM with primary disk storage pools (z/OS 3390's) and
secondary storage pools (3590's).
A new server I installed has a file that is over 300 GB and changes daily.
I have had to exclude the folder that contains the file until I can find
a storage medium that will have enough capacity.
What are other folks doing to back up this amount of data?
Tim Brown
Systems Specialist
Central Hudson Gas & Electric
284 South Ave
Poughkeepsie, NY 12601
Phone: 845-486-5643
Fax: 845-486-5921