[Veritas-bu] SIZE_DATA_BUFFERS on windows

Subject: [Veritas-bu] SIZE_DATA_BUFFERS on windows
From: spe08 AT co.henrico.va DOT us (Spearman, David)
Date: Fri, 25 Mar 2005 06:52:42 -0500
Charles,

If you use the 256 KB buffer size (262144 bytes) on Windows you will run
into problems of all sorts, from outright failures to slowdowns. Even
though w2k and up supports buffers larger than 64 KB, it still operates
best at 64 KB. You also have to take into consideration the transfer
size your disk controller is laying down, typically 64 KB; you want to
try to match everything up. UNIX clients are very forgiving on buffer
size, so knock those down to 64 KB too. Another thing to keep in mind is
how many data buffers you have (NUMBER_DATA_BUFFERS); that has to be
tweaked in as well. There is no hard answer, just a lot of tweaking.
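
For what it's worth, here is a rough Python sketch of laying those touch
files down on the Windows side. The install path is just the assumed
default VERITAS location, and 16 buffers is only a starting point, so
treat both as values to verify and tune on your own server:

from pathlib import Path

# Assumed default NetBackup install path on Windows; adjust for your
# host. (On UNIX media servers the same touch files live under
# /usr/openv/netbackup/db/config.)
config_dir = Path(r"C:\Program Files\VERITAS\NetBackup\db\config")

# 64 KB data buffers (65536 bytes), matching the typical controller
# transfer size discussed above.
(config_dir / "SIZE_DATA_BUFFERS").write_text("65536\n")

# Number of data buffers; workload-dependent, so tune from here.
(config_dir / "NUMBER_DATA_BUFFERS").write_text("16\n")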

David Spearman
County of Henrico, VA.


-----Original Message-----
From: veritas-bu-admin AT mailman.eng.auburn DOT edu
[mailto:veritas-bu-admin AT mailman.eng.auburn DOT edu] On Behalf Of Charles
Ballowe
Sent: Thursday, March 24, 2005 5:31 PM
To: veritas-bu AT mailman.eng.auburn DOT edu
Subject: [Veritas-bu] SIZE_DATA_BUFFERS on windows


I recently tried to create the SIZE_DATA_BUFFERS file on a Windows media
server with a value that matched what I use on UNIX servers, and after
doing so, backups on that media server failed with some form of media
error; the logs indicated "invalid parameter" or something similar.
Unfortunately, I didn't write down the messages; I just put things back
to the defaults (deleted the file). Now I have a new Windows media
server coming live in the next couple of days and would like to get this
right from the start (and to test it before it goes to production).

Is there some difference in how the values in these files are handled
between UNIX and Windows? On the UNIX systems, I use 262144.

-Charlie
_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu

