Hehe.... Playing devil's advocate :)
> Sounds like a bad idea to me. Netbackup is nothing BUT files. It
> uses a DB which consists basically of files being read and written to
> thru standard disk I/O. If it were a real DB it would have stuff in
> memory, and be optimized, etc, but this relies on the filesystem for
> all of that, IMHO.
The system does have a buffer cache. In fact, on Solaris all of the spare
memory will be used to cache file I/O. It will be slower in some cases,
but if the I/O is localized and doesn't modify much data, it won't be
too bad.
> Would you run a busy Oracle server off of a Filer? I wouldn't unless
> you have some kind of quad-port NIC crossover, and even then, why add
> the complexity to the Master. If this Master is a Media Server too
> then you are really asking for trouble b/c the backup streams will be
> in direct competition with internal Netbackup file reads/writes.
Quite a few sites segregate backup and public network traffic onto 2
separate networks. For Oracle over NFS it is *highly* recommended that the
network between the host and the NetApp be private.
I myself wouldn't want to do it this way, but I can see where it would be
useful. In some cases, a large amount of reasonably well-protected storage
may already be available via NFS. Rather than purchasing extra disk and
such, why *not* use the NFS server to hold the DBs?
I don't know of any real reason why you *can't* do it; whether you
*should* is a matter of personal preference. There are several
advantages, one of them being a remote DR site that he wouldn't normally
have. BUT for better performance and a simpler configuration, keeping
this stuff local is probably a good idea.