Re: [BackupPC-users] Backing up little by little or throttling the backup?
2011-04-12 22:16:36
"Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org>
wrote on 04/12/2011 07:41:15 PM:
> Jake Wilson wrote at about 17:06:55 -0600 on Tuesday, April 12, 2011:
> > We have a production server on the network with several terabytes of data that needs to be backed up onto our BackupPC server. The problem is that the production server is in use most of the day. There is a lot of normal network traffic going in and out.
> >
> > I'm wondering what options there are for backing up the production server in a way that will hinder the performance and network access as little as possible. I'm not too worried about the incremental backups because those won't take that long and will happen at night. But the first, initial big full backup is going to take quite a while and I don't want the production server borderline-unresponsive during the backup process.
>
> I would find it hard to believe that BackupPC would throttle a production server... Linux usually does a pretty good job of sharing CPU, disk access, and network access. And if one process throttles your production server, then you probably have more fundamental issues you need to deal with...
I pretty much agree with Jeffrey (if doing the full backup makes your server "borderline-unresponsive", how on *earth* does it actually do its job as a file server?!?). But I will say this: I have found that some (sensitive or very heavy) users can tell when I'm running a full backup on a server. However, it's usually relatively subtle. It's not "borderline-unresponsive"; it's more like, "Hmm, that's taking a little longer than I'm used to..."
> > Here are some options I've been thinking about:
> >
> > - Backing up / but have most of the large directories and subdirectories excluded and slowly "unexclude" them one by one in-between full backups.
>
> Too much hassle, and it will still "throttle" things during each smaller piece.
> > - rsync bitrate limit throttling?
>
> Given that network bandwidth is probably rate-limiting, this should help if your network is getting slammed.
I would say that you first have to determine A) if it will be a problem (I think it won't), and B) *what* is causing the problem: not enough RAM, not enough I/O throughput, or not enough network bandwidth. Then you can tackle the problem from there.
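If it does turn out to be network bandwidth, Jeffrey's --bwlimit suggestion is easy to sanity-check before committing to it. rsync's --bwlimit takes KBytes per second (in BackupPC you would append the flag to the rsync arguments in config.pl, e.g. $Conf{RsyncArgs} in the 3.x series). Here is a throwaway sh back-of-envelope with entirely made-up numbers; substitute your own:

```shell
# Back-of-envelope: how long would the initial full take at a throttled
# rate?  All figures are hypothetical -- plug in your own.
DATA_MB=3072000                  # ~3 TB to move on the initial full
BWLIMIT_KB=20480                 # rsync --bwlimit=20480  => 20 MB/s

RATE_MB=$((BWLIMIT_KB / 1024))   # throttled rate in MB/s
DUR_S=$((DATA_MB / RATE_MB))     # seconds for the whole transfer
echo "at ${RATE_MB} MB/s the full takes about $((DUR_S / 3600)) hours"
```

At 20 MB/s, roughly two days for the first full; halve the limit and you double that. That tells you quickly whether a throttled first full even fits your patience, never mind your backup window.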
Of course, using the exclude trick (exclude, say, every root directory but one, then add them back one at a time or some such) will certainly work, though it is quite possibly unnecessary and will cause your first backup to take many (extra) days to complete. And if a backup truly does cause your server to become "borderline-unresponsive", you may not have much other choice but to keep the deltas small enough to be completed during your backup window.
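For what it's worth, the schedule that the exclude trick implies can be sketched in a few lines of sh. The directory names below are invented; in BackupPC itself the shrinking list would live in the host's $Conf{BackupFilesExclude}, edited between fulls:

```shell
# Sketch of the "unexclude one tree per full" staging schedule.
# The three directory names are hypothetical, for illustration only.
set -- /srv/big1 /srv/big2 /srv/big3    # large trees excluded at first

n=1
while [ $# -gt 0 ]; do
    shift                                # let one more tree into the backup
    echo "full #$n still excludes: $*"
    n=$((n + 1))
done
```

Each full backup then pools a manageable chunk of new data, at the cost of several extra full passes before you have complete coverage.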
But give it a try first: unless that production server is a 600MHz machine with 512MB RAM and a single SATA spindle, you will most likely be fine (and if you *are* running like that, well, you have other problems! :) ). (Actually, I have one client with servers that are dual-processor 600MHz with 1GB RAM that I back up during the day, and the users at that location almost *never* notice.)
Remember that, while the BackupPC process moves a *lot* of data, it's just a single thread. It's no more resource-intensive per second than any *other* I/O user on the system. Its I/O pattern is also relatively linear (modulo filesystem fragmentation), which should help, too.

Unless you're talking billions of tiny files. Then you're on your own! :) Gobs and gobs of RAM (to cache both the file list and the dentries and inodes) will help in that case.
>
> > - Instead of backing up /, specify specific big directories one at a time, adding more and more in-between full backups.
>
> Too much hassle, and it will still "throttle" things during each smaller piece.

This is a significantly worse variation of #1 (using excludes): it's going to be a problem with pooling and buys you nothing over #1, so ignore it.
>
> > Anyone have any ideas or direction for this? Or are there any built-in config options for throttling the backup process that I'm unaware of?
>
> Have you tried it and run into bottlenecks, or are you just worrying in advance? :P

To quote Michael Abrash: Profile before you optimize.
Timothy J. Massey
_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/