Re: [BackupPC-users] On/off again Internal Server Error 500
2011-11-02 20:54:14
Les Mikesell <lesmikesell AT gmail DOT com> wrote on 11/02/2011 11:25:26 AM:
> > I thought 1 GB would be enough? Or do I just need a larger swap
> > file/partition?
>
> I'm sure there are systems running with less, but I like to use 4GB or
> more because the unused portion becomes filesystem cache and greatly
> reduces the disk seeks you need.
Les and I argue frequently about this. I run a dozen backup servers with
512MB RAM, and they do *zero* swapping. So if you're running out of RAM,
it's not BackupPC's fault. And I do not think that more RAM will help
performance with human-sized backup servers. I've even previously posted
to the list the results of going from 512MB of RAM to 2GB of RAM: my
backups still took the exact same amount of time to complete.
Caching the filesystem is *vitally* important for performance, but that
can be done in a very small amount of RAM. Assuming a single file entry
requires 100 bytes (which seems very high to me), 300MB of RAM used for
caching (which is what my backup servers usually average) will hold
3 *MILLION* files.
Now, if you're dealing with a pool in that neighborhood, then *yes*, have
at least 1GB of RAM. But for the rest of us, even 512MB of RAM is plenty.
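The back-of-the-envelope arithmetic above works out as follows (a minimal sketch; the 100-bytes-per-entry figure is just the generous estimate from this post, not a measured value):

```python
# How many cached file entries fit in a given amount of RAM?
bytes_per_entry = 100           # generous per-file estimate from the post
cache_bytes = 300 * 1024 ** 2   # ~300 MB of RAM acting as filesystem cache

entries = cache_bytes // bytes_per_entry
print(entries)                  # 3145728 -- roughly 3 million files
```

So a pool of a few million files can, in principle, have its metadata fully cached in a few hundred MB.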
Having said all of that, my use of 512MB of RAM dates back to hardware
limitations of the embedded-style motherboards I use for my backup
servers. If you're using a motherboard that accepts multiple DIMMs of
reasonable density, spend the $50, get 2 x 2GB DIMMs, and eliminate that
as a problem! :)
(And another reason to have more RAM: fsck of a large disk requires a
large amount of RAM. One of my 512MB backup servers with a 1.5TB or so
pool on a 2TB ext3 partition needed to run fsck; it would crash before
completing until I upgraded to 2GB of RAM. So don't be stubborn like me:
add more RAM! :) )
> Swap might keep the process from
> failing, but if you use it regularly it will slow the system down
> drastically.
*Drastically*. As in unusably drastically. Not to start a religious war,
but for the most part the days of swap are over. I don't care how much
RAM you have: if you have a 1GB swap file residing on a single SATA
spindle and you're actually using all of it, your system will be unusable
*anyway*, so who cares if it crashes a little sooner? (The only exception
would be very long-running processes with a very slow memory leak: swap
might keep your system up and running a little longer. But that's
certainly a case of papering over a bug, nothing more.)
> One thing that does consume a lot of memory is using
> rsync backups of targets with a very large number of files, because
> the complete directory listing is sent first and held in memory as the
> files are checked. You might want to set $Conf{MaxBackups} to 1 if
> it isn't already to limit concurrent runs.
While we're on that subject, how many files are on the system that you're
trying to back up?
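(For reference, the setting Les mentions lives in BackupPC's main config file, which is a Perl fragment; the exact path varies by distribution, /etc/BackupPC/config.pl is common:)

```perl
# In config.pl: allow only one backup to run at a time,
# so only one rsync directory listing is held in memory at once.
$Conf{MaxBackups} = 1;
```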
Timothy J. Massey
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/