Subject: Re: [Networker] MSCS cluster backups
From: Darren Dunham <ddunham AT TAOS DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Fri, 3 Feb 2006 10:08:00 -0800
> I think you need to list the drives for the physical nodes as well.  We
> have done it that way, as advised by Sun.  A Sun Solaris cluster is
> different: there you can assign the slices to the different hosts
> (physical and virtual), and it will only back up, for each client, what
> that client owns when you use All as the save set.
>
> Microsoft is not that far advanced :-P, just kidding (wouldn't wanna
> step on any toes :-P)

I'm not a Windows person at all, but in some ways I like a bit of what
it does here better than the UNIX side.

The UNIX client just looks at /etc/fstab (or the local equivalent) to
enumerate the available filesystems.  Virtual/physical configurations
have no effect on the filesystem probe, only on the ownership.  So a
virtual client *always* has to have its shared storage explicitly
mentioned.
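
As a rough illustration (just a sketch of the idea, not the actual
NetWorker probe code), an fstab-based probe can only ever report what
the local file lists, so anything the cluster framework mounts outside
of fstab has to be named in the save set by hand:

    # Sketch only, assuming a Linux-style /etc/fstab; Solaris uses
    # /etc/vfstab with different columns, and the real client logic
    # is more involved than this.
    def local_filesystems(fstab_path="/etc/fstab"):
        """Enumerate mount points listed in fstab, skipping comments and swap."""
        mounts = []
        with open(fstab_path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 3 and fields[2] not in ("swap", "none"):
                    mounts.append(fields[1])  # second field is the mount point
        return mounts

    # A shared filesystem mounted by the cluster framework rather than
    # by fstab never shows up here -- hence the explicit save set.
    print(local_filesystems())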

Windows doesn't have an fstab, so I don't know what the client does to
obtain the list of available mounts, and I don't know why that list
differs from one client to another in some situations.  It's potentially
"better" for what I want to do, but it's certainly more confusing at the
moment.
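
For what it's worth, a Windows program can at least enumerate the drive
letters it can see, e.g. via the Win32 GetLogicalDrives call (sketch
below; I'm not claiming this is what the NetWorker client actually
does).  Since MSCS shared disks are only online on the node that
currently owns them, that alone could make such a probe return a
different list on different nodes:

    # Sketch only (Windows-specific): list the drive letters visible on
    # this node from the GetLogicalDrives bitmask (bit 0 = A:, bit 1 = B:, ...).
    import ctypes
    import string

    def visible_drive_letters():
        bitmask = ctypes.windll.kernel32.GetLogicalDrives()
        return [letter + ":" for i, letter in enumerate(string.ascii_uppercase)
                if bitmask & (1 << i)]

    print(visible_drive_letters())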

-- 
Darren Dunham                                           ddunham AT taos DOT com
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
         < This line left intentionally blank to confuse you. >

