I had success with this model using the following steps.
Create a bpstart_notify.cmd script on a sizable Windows host (one with
plenty of resources).
The script needs to map a drive to the filer, using an account with
high privileges.
I wish I still had a copy of that script, but you can arrange the
"net use" logic so that it connects the drive if it isn't already
mapped, or politely exits if it is. This is important when you get
multiple streams going, since I recall the script runs for each stream.
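A minimal sketch of what that bpstart_notify.cmd might have looked
like. I'm using an explicit existence check rather than trying to
recall an exact "net use" flag; the drive letter, share, and account
are all placeholders:

```
@echo off
rem bpstart_notify.cmd -- map the filer share before the backup starts.
rem Z:, \\filer\users, and DOMAIN\backupadmin are placeholders.

rem If another stream already mapped the drive, exit quietly with
rem success so this stream's backup proceeds.
if exist Z:\ exit /b 0

rem Map the share with a privileged account. /persistent:no keeps the
rem mapping out of the profile so it doesn't survive a reboot.
net use Z: \\filer\users /user:DOMAIN\backupadmin /persistent:no
exit /b %ERRORLEVEL%
```

(Note that "net use" will prompt for a password when run like this; a
non-interactive script has to supply it somehow, which is one more
reason to lock down access to the script itself.)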
Once the script runs, the drive letter will be available for the account
that the NB client runs, so if you check the box in the policy for
network drives, you can get the data just fine.
I can't remember whether a bpend_notify.cmd script to disconnect the
drive was required. If bpstart_notify runs for each stream, bpend_notify
probably runs after each stream too, and could try to disconnect the
drive while another stream is still using it.
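If you do use a bpend_notify.cmd, it would need to tolerate that
multi-stream case, e.g. by not failing when another stream has already
dropped (or still holds) the mapping. A sketch, with the same
placeholder drive letter:

```
@echo off
rem bpend_notify.cmd -- drop the mapping after the backup.
rem /y answers the "open files" prompt; always exit 0 so a failed
rem disconnect doesn't fail the backup job itself.
net use Z: /delete /y
exit /b 0
```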
I used this on two 600 GB NetApp filers, and was able to get a decent
stream of data from the NetApp, through the Windows host, and then back
to the media server. Having gigabit made all the difference, and
breaking the backup into at least 8 streams helped too. I created a
policy that went after the user directory, making each user a separate
stream, and limiting the number of concurrent jobs in the policy to 4.
I had two filers and two policies with the same type of stream
configuration, and multiplexing set to 8, so it would keep one tape
drive nicely busy for most of the evening when doing a full backup.
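For what it's worth, that per-user streaming can be spelled out in the
policy's backup selections list with NEW_STREAM directives (with
"Allow multiple data streams" enabled in the policy attributes). The
drive letter and user names below are placeholders:

```
NEW_STREAM
Z:\users\alice
NEW_STREAM
Z:\users\bob
NEW_STREAM
Z:\users\carol
```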
If a user created a directory or file and removed access for the account
I used to map the drive, those files were not backed up. If the
directory structure exceeded the maximum path length, the files were not
backed up either.
When I went to do a recovery, I had to remember that the Windows server
with the drive-mapping script and policy was the client that held all
the data, not the filer itself.
Performance when doing it this way wasn't anything spectacular, and the
missed files due to permissions were very annoying and hard to find.
Fortunately it didn't happen very often and was usually an innocent
mistake on the user's part.
NDMP bypasses all these complications, and I have observed a very nice
stream of data going to the media server without making adjustments to
the SIZE_DATA_BUFFERS_NDMP file. It does come with a price, but I think
it is worth the cost if it increases backup reliability. Missing files
due to permissions will make you look really bad when it comes time to
restore and they just aren't there.
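For reference, had tuning been needed, SIZE_DATA_BUFFERS_NDMP is just a
touch file on the media server containing a buffer size in bytes (path
per the 5.x install layout; 65536 is an example value, not a
recommendation):

```
echo 65536 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_NDMP
```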
-Jon
>
>
> I have come across a problem that I am hoping someone here can assist
> me with.
>
> I am running a NetBackup environment with one master server and two
> media servers, all running v5.1MP6 on Solaris 9. These are backing
> up 250+ clients which are a mixture of Windows, Linux and Solaris,
> plus some MSSQL clients; again, the clients are running 5.1MP6.
>
> I also have a NetApp filer supplying NAS and San storage to some of
> the aforementioned clients.
>
> So, my question is how do I set up the NetBackup agent to capture the
> data that is held within a CIFS share that is being presented to a
> Windows client? Is this possible with a standard client license?
>
> Backing up the Windows client using the ALL_LOCAL_DRIVES directive
> and selecting the "Backup network drives" box under attributes does
> not appear to capture the data.
>
> Thanks in advance,
>
> Steve.
>
_______________________________________________
Veritas-bu maillist - Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu