Subject: Re: [Networker] Pools ?
From: Denis <denis.mail.list AT FREE DOT FR>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Thu, 16 Feb 2012 09:42:38 +0100
Hello,

We have about 180 clients (AIX, Linux, Solaris, Windows) and LTO3 drives.

We do not mix device types.
We use pools based on the data retention we keep for clients (all clients with 
the same retention go to the same pool), regardless of backup level.
We also have 3 specific pools for storing data 'offsite'.
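
For illustration, here is a minimal Python sketch of that retention-to-pool 
mapping (clients with the same retention share a pool, regardless of backup 
level). The client names, retention periods, and pool names are hypothetical 
examples, not the actual configuration.

    # Sketch: group clients into pools by retention period, ignoring backup level.
    from collections import defaultdict

    # Hypothetical client -> retention (in months) assignments.
    client_retention = {
        "aix01": 12,
        "linux01": 12,
        "solaris01": 36,
        "win01": 60,
    }

    def pool_for_retention(months: int) -> str:
        """All clients with the same retention go to the same pool."""
        return f"Retention_{months}m"

    pools = defaultdict(list)
    for client, months in client_retention.items():
        pools[pool_for_retention(months)].append(client)

    for pool, members in sorted(pools.items()):
        print(pool, "->", ", ".join(members))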

Denis

----- Original Message -----
From: "Skip Hanson" <skip.hanson AT EMC DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Sent: Tuesday, 14 February 2012 21:39:53
Subject: [Networker] Pools ?

All,
  Hello. I am curious how all of you are using pools today? As we know, pools 
are very flexible and, as a result, can be very confusing. So I am curious…

  1. Do you mix device types within the same pool? (Disk and tape devices?)

  2. Do you select multiple backup levels for one pool? (Incrementals and 
fulls, etc.)

  Please feel free to post your opinions on pools or send me a brief 
description of how you use them.

  networker_usability AT emc DOT com

Cheers,
Skip

To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the body of the email. Please write to 
networker-request AT listserv.temple DOT edu if you have any problems with this 
list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
