Re: [Networker] /tmp Solaris

Subject: Re: [Networker] /tmp Solaris
From: Marcelo Bartsch <mbartsch AT UNIX911.ATH DOT CX>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Tue, 20 Jun 2006 23:43:23 -0400
Any directives applied to the client?
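
If you want to double-check, the client resource on the server shows which directive set (if any) is assigned, and on the box itself you can confirm that /tmp is tmpfs. Something along these lines should do it ("backupserver" and "clientname" are placeholders, not your actual names):

    # On the NetWorker server: print the client resource and look at
    # its "directive" attribute.
    nsradmin -s backupserver
    nsradmin> print type: NSR client; name: clientname

    # On the Solaris client: confirm /tmp is swap-backed tmpfs, so the
    # files would not have survived a reboot in any case.
    df -k /tmp
    mount | grep "^/tmp "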

The UNIX default directives, if I'm not wrong, skip /tmp.
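
From memory, the preconfigured "Unix standard directives" include an entry along these lines for /tmp (a sketch only; print the NSR directive resource on your server to see the exact wording):

    << /tmp >>
    +skip: .?* *

The "+" makes the skip apply to subdirectories as well, and ".?* *" matches hidden and regular files alike, so nothing under /tmp gets saved even though the save set is ALL.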


On Wed, 2006-06-21 at 13:20 +1000, Jeff Allison wrote:
> Hi all, I've just had a request to restore a file from /tmp on one of our
> Solaris (2.8) boxes. I've checked and I do not appear to have any
> backups of this filesystem.
> 
> I have ALL selected in the backup set
> The /tmp filesystem is of type tmpfs
> I backup Full every night.
> 
> Is this standard behaviour? I can see its point ("if it's not actually
> on physical disk, why put it on physical tape?"), but I could do with some
> confirmation, as a developer left some code there and isn't happy.
> 
> TIA
> 
> Jeff Allison
> 
> BDM Network Manager
> NSW Registry of Births Deaths & Marriages
> GPO Box 30
> Sydney NSW 2001
> 
> 35 Regent Street
> Chippendale
> NSW 2008
> 

To sign off this list, send email to listserv AT listserv.temple DOT edu and
type "signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems
with this list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or via RSS at
http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
