Subject: Re: [BackupPC-users] No more free inodes
From: Frédéric Massot <frederic AT juliana-multimedia DOT com>
To: backuppc-users AT lists.sourceforge DOT net
Date: Mon, 08 Oct 2012 11:07:59 +0200
On 07/10/2012 23:56, Michael Stowe wrote:
>
>> The file system has the "resize_inode" option; can that help to
>> increase the size or the number of inodes?
>
> resize_inode is a flag you can set when you first create the filesystem,
> that makes it easier to expand the file system later.  Again, "when you
> first create..." so ... no.

The file system with zero free inodes was created with the 
"resize_inode" flag.

It makes it easier to increase the size of the file system, but it does 
not increase the number of inodes, is that correct?
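
For reference, "df -i" shows how close a file system is to inode 
exhaustion; a quick check ("/" below is only a stand-in for the pool's 
actual mount point):

```shell
# The IFree and IUse% columns show inode exhaustion even when
# "df -h" still reports plenty of free blocks; "/" stands in for
# the BackupPC pool mount point here.
df -i /
```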


>> So I must create a second logical volume, format it with a smaller
>> inode_ratio, such as 4096, and copy the BackupPC data. Afterwards, I
>> could reformat the old logical volume for re-use.
>
> I'm not following you, but to be clear:  you're not getting any more
> inodes in that filesystem.  You need a new filesystem.

OK, to create a new file system I will replace the 4 hard drives one 
by one.
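
If the new file system stays on ext4, the inode ratio is fixed at mkfs 
time, so it has to be chosen up front. A sketch using a scratch image 
file (on the real system you would point mkfs.ext4 at the new logical 
volume; the 1G size and the path are only for the demonstration):

```shell
# Create a sparse scratch image; a real setup would use the LV device.
truncate -s 1G /tmp/pool-demo.img

# -i 4096: one inode per 4096 bytes of space (the ext4 default is
# 16384), i.e. roughly four times as many inodes for the same size.
mkfs.ext4 -q -F -i 4096 /tmp/pool-demo.img

# Verify the resulting fixed inode count (about 1 GiB / 4096 bytes).
tune2fs -l /tmp/pool-demo.img | grep 'Inode count'
```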


>> So, I'll replace the disks one by one with 1 TB disks.
>>
>> For RAID10 with MD I have two choices: I can increase the size of the
>> partitions on each disk, and therefore of the array, or create a second
>> partition on each disk and a second RAID 10 array.
>>
>> Can I increase the size of a RAID 10 array? The manpage only talks
>> about RAID 1/4/5/6.
>
> There's a good reason for that.  It's RAID0+1, so it behaves like 0 on top
> of 1 (or 1E, in some cases.)  At any rate, you should be able to expand it
> using pairs of identical drives, as I recall.  YMMV.

In the doc, to increase the size of an array, there is a difference 
between adding a new partition (limited to RAID 1/4/5/6) and increasing 
the size of the existing partitions, which seems to work for all RAID 
levels.

https://raid.wiki.kernel.org/index.php/Growing#Expanding_existing_partitions

From what I understand, replacing the drives one by one should work. :o)
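
Assuming that route, the usual sequence looks roughly like this (device 
names are examples, and the array must finish resyncing after each swap 
before the next disk is touched):

```shell
# Repeat for each of the four members in turn (names are examples):
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# ...physically swap in the 1 TB disk, create a larger partition, then:
mdadm /dev/md0 --add /dev/sdb1
# Watch the rebuild and wait for it to complete:
cat /proc/mdstat

# Once all four members sit on the larger partitions, grow the array
# to use the new component size:
mdadm --grow /dev/md0 --size=max
```

This only grows the md device; the file system on top still has to be 
resized separately, and, as discussed above, resizing ext4 will not add 
inodes.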


> I'm not even going to ask why you're going with RAID10 and ext4, but
> neither of these would be high on my list.

I always prefer RAID 10 to RAID 5 for safety reasons. For the first 
BackupPC server I installed, I used RAID 5 to get more space than a 
RAID 10 would give. I was soon disappointed by the performance. Since 
then I have only used RAID 10.

For the file system, I use the most standard ones on Linux. But since 
this problem with the number of inodes, I should perhaps reconsider my 
choice and look at other file systems.


>> BackupPC uses hard links, which are restricted to a single file system.
>> I do not know which directories the hard links are between.
>>
>> Can I mount the "cpool" directory on one logical volume and the "pc"
>> directory on another?
>
> No.
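
For what it's worth, the restriction is easy to demonstrate: a hard 
link can never cross a file system boundary, which is why cpool and pc 
must live on the same one. A small self-contained illustration (the 
directory names only echo BackupPC's layout):

```shell
# Hard links share one inode, and an inode belongs to exactly one
# filesystem; splitting cpool and pc across two filesystems would
# make the links impossible.
dir=$(mktemp -d)
cd "$dir"
mkdir cpool pc
echo data > cpool/file
ln cpool/file pc/file            # same filesystem: works
stat -c '%i' cpool/file pc/file  # prints the same inode number twice
```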

After moving the BackupPC data to the new logical volume, and thus the 
new file system, the old logical volume will no longer be used. I could 
delete it, but how could I use this free space?

With XFS, does the number of inodes increase as the file system grows?

XFS may be a better solution for me, and would let me use the free space.

Does anyone use XFS on Debian without problems?
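
Unlike ext4, XFS allocates inodes dynamically as files are created, so 
there is no fixed inode count to run out of, and growing the file 
system also grows the space available for inodes. A sketch of the 
commands involved (device name, mount point, and sizes are examples, 
and these need root on a real system):

```shell
# Format the logical volume with XFS (the defaults are usually fine):
mkfs.xfs /dev/vg0/backuppc

# After extending the LV later, XFS can be grown while mounted:
lvextend -L +200G /dev/vg0/backuppc
xfs_growfs /var/lib/backuppc
```

Note that XFS can only be grown, never shrunk, so the sizing of the LV 
is worth planning ahead.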


Regards.
-- 
==============================================
|              FRÉDÉRIC MASSOT               |
|     http://www.juliana-multimedia.com      |
|   mailto:frederic AT juliana-multimedia DOT com   |
| +33.(0)2.97.54.77.94  +33.(0)6.67.19.95.69 |
===========================Debian=GNU/Linux===

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/