
Re: [BackupPC-users] No more free inodes

Subject: Re: [BackupPC-users] No more free inodes
From: Frederic MASSOT <frederic AT juliana-multimedia DOT com>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Sun, 14 Oct 2012 18:30:59 +0200
On 14/10/2012 18:01, Frederic MASSOT wrote:
> On 08/10/2012 13:47, Carl Wilhelm Soderstrom wrote:
>  > On 10/08 11:07 , Frédéric Massot wrote:
>  >> After moving the BackupPC data on the new logical volume and thus the
>  >> new file system, the old logical volume will no longer be used. I could
>  >> delete it but how I could use this free space?
>  >
>  > Expand your new volume and filesystem to use it.
>  > Are you using LVM, or just plain partitions?
>
> Yes, I use LVM on MD.
>
> I thought I would simply increase the size of the new file system, but my
> concern is running into the same lack of inodes again in a few years.
>
>  From what I've read, if I choose XFS instead of ext4, I will not have
> this inode problem.
>
>  >> With XFS, does the number of inodes increase as the file system
> grows?
>  >
>  > XFS doesn't really have a problem with inodes.
>  >
>  >> Some people use XFS on Debian without problem?
>  >
>  > I've used it for some years. If you do use XFS, make sure you have
> enough
>  > space in RAM+swap to accommodate the xfs_check tool, which is notoriously
>  > memory-hungry. My suggested filesystem layout is something like:

Hi,

I replaced the four 500 GB hard disks one by one with 1 TB hard disks, 
then increased the size of the partitions, the RAID10 array, the 
physical volume, and finally the volume group.
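For reference, the grow sequence after swapping the disks looks roughly like this (a sketch with hypothetical device and volume names; adjust /dev/md0 and the volume group to your setup):

```shell
# After each 1 TB disk has been partitioned and re-added to the array:
mdadm --grow /dev/md0 --size=max   # let the RAID10 array use the larger disks
pvresize /dev/md0                  # grow the LVM physical volume to match
vgs                                # the volume group should now show the new free space
```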

I created a new logical volume, formatted it with XFS, and mounted it on 
a temporary directory, "/mnt/backuppc-new".

I started copying the BackupPC data from "/var/lib/backuppc" (ext4) to 
the temporary directory "/mnt/backuppc-new" (XFS) with "cp -a".

The first 200 GB were copied in about 12 hours, so I estimated that 
copying the full 616 GB would take 36 to 40 hours.

But the copy has now become very slow: the rate has dropped to roughly 
1 GB per 12 hours, or worse. About 200 GB remain to be copied; I cannot 
wait 100 days!

The "cpool" and "log" directories have been copied; the slowness comes 
from copying "pc", almost certainly because of the hardlinks.
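The slowdown is consistent with how cp handles hardlinks: it must remember every multiply-linked inode it has seen in order to recreate the links, and BackupPC's "pc" tree is almost nothing but links into the pool. A quick sketch of the effect, self-contained in a scratch directory (GNU find assumed):

```shell
# Three directory entries, but only one inode behind them:
tmp=$(mktemp -d)
echo data > "$tmp/a"
ln "$tmp/a" "$tmp/b"
ln "$tmp/a" "$tmp/c"
find "$tmp" -type f | wc -l                           # 3 directory entries
find "$tmp" -type f -printf '%i\n' | sort -u | wc -l  # 1 unique inode
rm -rf "$tmp"
```

Running the same two find commands against the real "pc" directory shows how many entries the copy has to keep track of.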


- Will the rest of the copy stay this slow, or will it get worse and worse?

- Is rsync faster than cp for copying data with hardlinks?
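For what it's worth, rsync keeps a similar in-memory inode table, so it is unlikely to be dramatically faster on the hardlinks themselves, but it is restartable, which matters for a multi-day copy. Note that -a does not imply -H; without -H every link would become a separate full copy. A minimal sketch, using the paths from this thread:

```shell
# -a: archive mode (recursion, permissions, times, ownership)
# -H: preserve hardlinks -- NOT included in -a
# If interrupted, re-running skips files already transferred.
rsync -aH /var/lib/backuppc/ /mnt/backuppc-new/
```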


Regards.
-- 
==============================================
|              FRÉDÉRIC MASSOT               |
|     http://www.juliana-multimedia.com      |
|   mailto:frederic AT juliana-multimedia DOT com   |
| +33.(0)2.97.54.77.94  +33.(0)6.67.19.95.69 |
===========================Debian=GNU/Linux===

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
