Phillip,
as your data is already compressed, try
of=/dev/rmt/1
(no "c" in the device name, so hardware compression is off; already-compressed data won't shrink further).
I usually use a very large block size, 512k or 1024k.
I hope the data is local and not being fed over NFS; if it is, get it local first with ftp (binary), then use dd to get it to tape.
Note the block size used on the tape, PLEASE, so you can restore with the same dd command in reverse:
dd if=/dev/rmt/1 bs=512k of=<filename>
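A runnable sketch of the write/restore pair above. A scratch file stands in for the tape device here (an assumption, so the example runs without a drive); on the real box you'd point it at /dev/rmt/1 instead. The point is that the restore uses the same bs as the write:

```shell
# Sketch: write with dd at a large block size, then restore with the
# SAME block size. /tmp/fake_tape is a stand-in for /dev/rmt/1 so this
# example is runnable without a tape drive.
set -e
TAPE=/tmp/fake_tape            # in practice: /dev/rmt/1

# make a small stand-in for the real filename.tgz
dd if=/dev/urandom of=/tmp/filename.tgz bs=1024 count=64 2>/dev/null

# write to "tape" with a large block size
dd if=/tmp/filename.tgz of="$TAPE" bs=512k 2>/dev/null

# restore: same dd command in reverse, same bs
dd if="$TAPE" of=/tmp/restored.tgz bs=512k 2>/dev/null

# verify the round trip
cmp -s /tmp/filename.tgz /tmp/restored.tgz && echo "restore matches original"
```

If the block size on restore doesn't match what was written, reads from a real tape can fail or truncate records, which is why noting the bs at write time matters.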
maarten
At 23:30 12-3-2004, you wrote:
All,
My knowledge of 'dd' is a functional one at best, so I'm open to being
educated here.
Environment:
Solaris 8 (SPARC)
a 12 GB database dump, called "filename.tgz"
a DLT8000 drive through SCSI diff. card
command: dd if=/dir/filename.tgz of=/dev/rmt/1cbn bs=[several different tries: 1024, 96k, 192k, etc.]
I have tried several different block sizes, but none seems to make any
difference in improving the throughput beyond about 200 KB/sec.
Any suggestions or comments?
TIA
-ty
Phillip T. ("Ty") Young, DMA
Backup/Recovery Systems Mgr.
Network Services Group
i2 Technologies, Inc.
--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listmail.temple DOT edu or visit the list's Web site at
http://listmail.temple.edu/archives/networker.html where you can
also view and post messages to the list.
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=