Hi,
>> has there been any thoughts on utilizing posix_fadvise* in BackupPC?
>> But here comes /proc/sys/vm/vfs_cache_pressure to the rescue
> Please tell us what results you get - if this approach works then that
> would avoid the need for application-level hints.
Server and disk array specs:
http://sourceforge.net/mailarchive/message.php?msg_id=31399081
/dev/sda is a RAID 6 over 24 SAS disks of 600 GB each, plus a hot spare; RAID
config details as follows:
Accelerator Ratio: 10% Read / 90% Write
Drive Write Cache: Disabled
Total Cache Size: 1024 MB
Total Cache Memory Available: 912 MB
No-Battery Write Cache: Disabled
Cache Backup Power Source: Capacitors
Logical Drive: 1
Size: 12.0 TB
Fault Tolerance: RAID 6 (ADG)
Heads: 255
Sectors Per Track: 32
Cylinders: 65535
Strip Size: 128 KB
Full Stripe Size: 2816 KB
Status: OK
MultiDomain Status: OK
Array Accelerator: Enabled
Parity Initialization Status: Initialization Completed
Disk Name: /dev/sda
Mount Points: /var/lib/backuppc 12.0 TB
The machine is backing up approx. 100 servers and has now been running
in production for 7 days (BackupPC version 3.2.1).
Pool is 1037.96GB comprising 6278735 files and 4369 directories (as of 10/28
14:50),
Pool hashing gives 963 repeated files with longest chain 13,
Nightly cleanup removed 10328 files of size 4.53GB (around 10/28 14:50),
Pool file system was recently at 9% (10/28 15:00), today's max is 9% (10/28
14:49) and
yesterday's max was 10%.
% df -i /var/lib/backuppc/
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda 2578525632 15105172 2563420460 1% /var/lib/backuppc
% df -hT /var/lib/backuppc/
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda xfs 13T 1.1T 11T 9% /var/lib/backuppc
% xfs_info /dev/sda
meta-data=/dev/sda isize=256 agcount=13, agsize=268435424 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=3223157110, imaxpct=5
= sunit=32 swidth=704 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=521728, version=2
= sectsz=512 sunit=32 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
% grep backuppc /etc/fstab
LABEL=Backuppc /var/lib/backuppc xfs
noatime,nodiratime,nobarrier,inode64,logbufs=8,logbsize=256k 0 2
% cat /proc/sys/vm/vfs_cache_pressure
10
==> the default value is 100; values below 100 tell the kernel to
prefer keeping dentries and inodes over page cache
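For reference, a persistent version of this tuning (the file name and the value 10 are just what I use here; any sysctl.d-style setup works) could look like:

```shell
# /etc/sysctl.d/90-backuppc.conf (hypothetical file name)
# Prefer keeping dentry/inode caches over page cache; kernel default is 100.
vm.vfs_cache_pressure = 10
```

It can be applied without a reboot via `sysctl -p /etc/sysctl.d/90-backuppc.conf` or `sysctl -w vm.vfs_cache_pressure=10`.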
% echo 3 > /proc/sys/vm/drop_caches
==> drops page cache, dentries and inodes
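One detail worth noting from the kernel documentation: drop_caches only frees clean caches, so running sync first writes out dirty pages and makes the flush more complete (root required):

```shell
# Flush dirty pages to disk first, then drop page cache, dentries and inodes.
sync
echo 3 > /proc/sys/vm/drop_caches
```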
% time find /var/lib/backuppc/ -mtime -1 > /dev/null
real 48m57.667s
user 0m50.619s
sys 5m44.654s
% slabtop -s c -o | head -n 30
Active / Total Objects (% used) : 58157331 / 58185521 (100.0%)
Active / Total Slabs (% used) : 5668137 / 5668137 (100.0%)
Active / Total Caches (% used) : 88 / 178 (49.4%)
Active / Total Size (% used) : 21272550.20K / 21278749.55K (100.0%)
Minimum / Average / Maximum Object : 0.02K / 0.37K / 4096.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
15105176 15105172 99% 0.94K 3776294 4 15105176K xfs_inode
30130360 30130338 99% 0.19K 1506518 20 6026072K dentry
9022221 9022178 99% 0.06K 152919 59 611676K size-64
1142640 1142599 99% 0.31K 95220 12 380880K xfs_buf
1886640 1886612 99% 0.12K 62888 30 251552K size-128
322574 322573 99% 0.55K 46082 7 184328K radix_tree_node
2063 2063 100% 16.00K 2063 1 33008K size-16384
303400 302902 99% 0.10K 8200 37 32800K buffer_head
125840 125773 99% 0.19K 6292 20 25168K size-192
48186 38656 80% 0.21K 2677 18 10708K xfs_ili
4704 4472 95% 2.00K 2352 2 9408K size-2048
6145 5221 84% 0.73K 1229 5 4916K ext2_inode_cache
22653 22366 98% 0.14K 839 27 3356K sysfs_dir_cache
5360 5336 99% 0.50K 670 8 2680K size-512
2268 2211 97% 1.00K 567 4 2268K size-1024
3661 3613 98% 0.54K 523 7 2092K inode_cache
% free -m
total used free shared buffers cached
Mem: 96872 32741 64131 0 0 3733
-/+ buffers/cache: 29007 67865
Swap: 0 0 0
==> The 3733 MB of page cache are mostly from files BackupPC accessed
during the 49 minutes `find` ran (some backups were running).
The 15.1 GB xfs_inode and 6 GB dentry caches are part of the
32 GB shown as "used".
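That is because the inode and dentry caches live in kernel slab memory, which free counts under "used" rather than "cached"; on Linux the slab totals (including the reclaimable part) can be read directly:

```shell
# Slab memory holds the xfs_inode and dentry caches shown by slabtop;
# SReclaimable is the portion the VM can shrink under memory pressure.
grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo
```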
Now that almost all inodes are cached in RAM, traversing the whole
directory tree under /var/lib/backuppc causes almost no disk activity:
% time find /var/lib/backuppc/ -mtime -1 > /dev/null
real 1m14.922s
user 0m17.217s
sys 0m57.492s
Note how short the time span is in which the BackupPC_nightly jobs run:
2013-10-28 14:49:12 Running 8 BackupPC_nightly jobs from 0..15 (out of 0..15)
2013-10-28 14:49:12 Running BackupPC_nightly -m 0 31 (pid=22855)
2013-10-28 14:49:12 Running BackupPC_nightly 32 63 (pid=22856)
2013-10-28 14:49:12 Running BackupPC_nightly 64 95 (pid=22857)
2013-10-28 14:49:12 Running BackupPC_nightly 96 127 (pid=22858)
2013-10-28 14:49:12 Running BackupPC_nightly 128 159 (pid=22859)
2013-10-28 14:49:12 Running BackupPC_nightly 160 191 (pid=22860)
2013-10-28 14:49:12 Running BackupPC_nightly 192 223 (pid=22861)
2013-10-28 14:49:12 Running BackupPC_nightly 224 255 (pid=22862)
2013-10-28 14:49:12 Next wakeup is 2013-10-28 15:00:00
2013-10-28 14:50:00 Finished admin6 (BackupPC_nightly 192 223)
2013-10-28 14:50:00 Finished admin7 (BackupPC_nightly 224 255)
2013-10-28 14:50:00 Finished admin3 (BackupPC_nightly 96 127)
2013-10-28 14:50:00 Finished admin2 (BackupPC_nightly 64 95)
2013-10-28 14:50:00 BackupPC_nightly now running BackupPC_sendEmail
2013-10-28 14:50:00 Finished admin5 (BackupPC_nightly 160 191)
2013-10-28 14:50:00 Finished admin1 (BackupPC_nightly 32 63)
2013-10-28 14:50:01 Finished admin4 (BackupPC_nightly 128 159)
2013-10-28 14:51:12 Finished admin (BackupPC_nightly -m 0 31)
2013-10-28 14:51:12 Pool nightly clean removed 0 files of size 0.00GB
2013-10-28 14:51:12 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 0 max
links), 1 directories
2013-10-28 14:51:12 Cpool nightly clean removed 10328 files of size 4.53GB
2013-10-28 14:51:12 Cpool is 1037.96GB, 6278735 files (963 repeated, 13 max
chain, 31999 max links), 4369 directories
With /proc/sys/vm/vfs_cache_pressure set to 20 I still had a fair
number of inodes purged from the cache, because the files BackupPC
reads and writes were thrashing it.
Now, with this option set to 10, I hope to keep even more
inodes and dentries in RAM.
Findings so far:
1. more RAM = better
2. tune /proc/sys/vm/vfs_cache_pressure to your liking
to avoid random reads by keeping inodes+dentries in RAM
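To watch those object counts over time, a small helper along these lines might do (hypothetical, not part of BackupPC; it reads slabinfo-format text on stdin so it works on /proc/slabinfo or a saved copy):

```shell
# Print active object counts for the two caches that matter here.
# /proc/slabinfo columns start with: name active_objs num_objs objsize ...
slab_counts() {
    awk '$1 == "xfs_inode" || $1 == "dentry" { print $1, $2 }'
}

# Usage (reading /proc/slabinfo typically needs root):
# sudo cat /proc/slabinfo | slab_counts
```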
The new machine made a big impact as can be seen here:
http://test.thermoman.de/images/backuppc_cacti.png
I'm going to add graphs for the object counts of xfs_inode
and dentry and will get back to you in a week or two.
Regards
Marcel
_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/