BackupPC-users

Re: [BackupPC-users] Need advice on XFS and RAID parameters for new BackupPC server with 25 SAS disks

From: Arnold Krille <arnold AT arnoldarts DOT de>
To: backuppc-users AT lists.sourceforge DOT net
Date: Fri, 13 Sep 2013 21:43:23 +0200
On Fri, 13 Sep 2013 12:56:30 -0400 Carl Wilhelm Soderstrom
<chrome AT real-time DOT com> wrote:
> On 09/13 02:12 , Marcel Meckel wrote:
> > Debian Wheezy will be running from SD card inside the server,
> My company tried using CF cards as OS storage devices for a while. Our
> experience is that (anecdotally) they aren't any more reliable than
> spinny disk. They still fail sometimes.
> I don't know if SD cards will be any different, or if you might have a
> different way of mounting them which will be better.

When the host is simple (i.e. single-purpose) and perhaps even configured
automatically with chef/puppet/whatever, the most you lose from a
failing CF card or SD card is the time it takes to reinstall.

> >    2. I always use LVM but it might not be useful in this case.
> >       Would you recommend using LVM when the whole 12 TiB is used as
> >       one big filesystem only? It might be useful if i have to add
> >       another shelf of 25 disks to the system in the future to be
> >       able to resize the DATADIR FS spanning then 2 enclosures.
> I wouldn't bother. I've done it both ways (with, and without the
> LVM). If you *know* that you'll be adding more disks in the future,
> it's a good idea. My experience is that planned expansions usually
> don't happen. ;) Also, if you're going to add more disks for more
> capacity, you're much better off adding a whole new machine. A second
> machine will increase your overall backup throughput as well as
> increasing your disk space. You won't get the benefit of pooling, but
> you will get more hosts backed up in a shorter amount of time.

My advice would actually be a bit different: the main problem with
BackupPC isn't necessarily disk access but memory consumption, as
BackupPC (especially with the rsync method) has to keep a rather big
file tree in memory.
So maybe set up a minimal hardware host and run two or even three
virtual machines for BackupPC, then distribute your hosts-to-backup
across these. That way the file tree per BackupPC instance stays
smaller, at the "cost" of a bit less deduplication. In my experience,
the big deduplication wins come from files in /etc and similar
system shares with many small files. If you have duplicates in big
user-data files, you are either backing up one NAS resource through
several clients, or your users are copying data to places it
shouldn't be.
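To make the idea concrete, here is a rough sketch (not BackupPC code; the host names and file counts are made-up examples) of how you might split your clients across several BackupPC instances so that the in-memory file tree each instance handles stays small:

```python
def distribute_hosts(hosts, n_instances):
    """Greedy balancing: assign each host (largest first) to the
    instance with the smallest estimated file tree so far."""
    instances = [{"hosts": [], "files": 0} for _ in range(n_instances)]
    for name, nfiles in sorted(hosts.items(), key=lambda kv: -kv[1]):
        target = min(instances, key=lambda inst: inst["files"])
        target["hosts"].append(name)
        target["files"] += nfiles
    return instances

# Assumed per-client file counts, just for illustration.
hosts = {
    "fileserver": 5_000_000,
    "mailserver": 2_000_000,
    "web01": 300_000,
    "web02": 300_000,
    "desktop01": 150_000,
    "desktop02": 150_000,
}

for i, inst in enumerate(distribute_hosts(hosts, 3), 1):
    print(f"instance {i}: {inst['hosts']} (~{inst['files']:,} files)")
```

With these example numbers the big file server ends up alone on one instance, the mail server on another, and the small clients share the third, which is roughly the split you would do by hand anyway.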

Hope that is understandable; at the end of the week, writing in a
foreign language isn't the best way of expressing one's thoughts.

Have a nice weekend,

Arnold

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/