Subject: Re: [Veritas-bu] Speaking of NTFS:
From: Bryan Bahnmiller <bryan.bahnmiller AT managedmail DOT com>
To: VERITAS-BU AT mailman.eng.auburn DOT edu
Date: Fri, 15 Feb 2008 11:26:35 -0600
Adam, all,

  If your NTFS volume is over 80% full, performance starts to 
degrade. I've tested this and verified that it does happen. I didn't do 
enough testing with controls to truly characterize it, but it can be 
demonstrated. At 85% full you will notice a significant performance 
decrease, and from 85% to 90% it will drop by half! It seems to get 
geometrically worse once you hit 85%.
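
   If you want a quick way to keep an eye on that, here's a minimal 
sketch in Python - the drive letters are placeholders, and the 80/85% 
thresholds are just the ones from above:

import shutil

VOLUMES = ["C:\\", "D:\\"]   # adjust to your drive letters

for vol in VOLUMES:
    usage = shutil.disk_usage(vol)
    pct = 100.0 * usage.used / usage.total
    note = ""
    if pct >= 85:
        note = "  <-- over 85%, expect the big drop"
    elif pct >= 80:
        note = "  <-- over 80%, degradation starts"
    print(f"{vol}  {pct:.1f}% full{note}")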

   Defragmenting will help NTFS filesystem performance. Be aware that 
the NTFS defrag likes to have 25% free space; if you get up to 85% 
full, the defrag may not even run. You can now set up scheduled NTFS 
defrags with Win2003 - before that it wasn't possible without a 
3rd-party product.
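
   If you'd rather script it than click through the GUI, something like 
this runs the analysis pass - just a sketch, and note the defrag.exe 
switches shown are the Win2003 ones (newer Windows versions use 
different ones):

import subprocess

VOLUME = "D:"   # placeholder volume

# "-a" asks defrag.exe for analysis only; "-v" is verbose output.
# Drop "-a" (and add "-f" if free space is tight) to actually defragment.
result = subprocess.run(["defrag", VOLUME, "-a", "-v"],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)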

   Don't let the Windows guys use disk compression. Backup performance 
will go straight to h***. And, guess what happens if you do a large 
restore on a volume that has compression turned on? That's really fun.

   Many, many small files will kill performance. So will directory 
depth. I once had a 500 GB NTFS filesystem that was taking 3 days to 
back up - and incrementals would actually take longer. I laid out the 
steps we needed to run through to get it backed up. First of all, it 
was over 90% full. I told them they needed to use 75% full as their 
goal, including growth. When we migrated the data, we defragged it too. 
If I recall correctly we could then run a backup in about 18 hours or 
so. Then I set up FlashBackup using VSS. After all was said and done, 
the FlashBackup would run in about 3 - 4 hours. I considered 3 days 
down to 3 - 4 hours a fairly decent performance increase. It operates 
very similarly to a FlashBackup of VxFS, if you've ever done that. And 
if you do defrag with FlashBackup, only defrag prior to the full 
backup.
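
   A throwaway sketch if you want to gauge how bad the small-file / 
directory-depth problem is on one of your volumes (the root path and 
the 64 KB "small" cutoff are just placeholders):

import os

ROOT = r"E:\shares"      # hypothetical file-server root
SMALL = 64 * 1024        # arbitrary cutoff for a "small" file

files = small = max_depth = 0
for dirpath, dirnames, filenames in os.walk(ROOT):
    depth = dirpath[len(ROOT):].count(os.sep)
    max_depth = max(max_depth, depth)
    for name in filenames:
        files += 1
        try:
            if os.path.getsize(os.path.join(dirpath, name)) < SMALL:
                small += 1
        except OSError:
            pass   # skip files we can't stat

print(f"{files} files, {small} under 64 KB, max directory depth {max_depth}")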

   If you turn on multi-streaming with Windows and do All Local Drives, 
it creates one stream per drive - C:, D:, etc. If your drives are 
separate disks, separate LUNs, that's OK. However, say the local disk 
space comes off a locally attached SCSI array where the disks are set 
up in RAID 0+1 or RAID 5, and that RAID set is split up to present 
different disks to the server. All multi-streaming will do for you in 
that case is increase disk contention. If your disk comes off a large 
array, like a DMX, CLARiiON, EVA or such, this is not as much of an 
issue, although it can be if your various LUNs come off the same set 
of spindles.
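
   You can see the contention for yourself with a rough test like this: 
read from two of the "separate" drives one at a time, then both at 
once. The paths are placeholders; use files bigger than RAM (or 
different files per run) so the cache doesn't hide the effect.

import threading
import time

TEST_FILES = [r"D:\test\big1.bin", r"E:\test\big2.bin"]   # placeholder paths
CHUNK = 1024 * 1024

def read_all(path):
    with open(path, "rb") as f:
        while f.read(CHUNK):
            pass

def timed(label, fn):
    start = time.time()
    fn()
    print(f"{label}: {time.time() - start:.1f} s")

# Baseline: one drive at a time.
timed("one at a time", lambda: [read_all(p) for p in TEST_FILES])

# Both at once, like two backup streams. If the total time barely improves
# (or gets worse), the "separate" drives are fighting for the same spindles.
def both_at_once():
    threads = [threading.Thread(target=read_all, args=(p,)) for p in TEST_FILES]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

timed("both at once", both_at_once)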

   Large Windows file servers rarely get good disk I/O performance. It 
has been steadily improving, but I have usually seen the network I/O 
exceed the disk I/O. DB servers are the exceptions to this. Large SQL or 
Oracle servers can usually generate a much faster I/O stream, everything 
else being equal.

   SAN media servers? High cost that _may_ give you a performance 
increase. Make sure you can read from your disk faster than your network 
throughput. With tuning, a decent Windows server should be able to send 
out in excess of 60 MB/s over GigE. Make sure you can read from your 
disk(s) that fast before you spend the money on the SAN backup solution.
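
   An easy check before you buy: time a big sequential read and see 
what MB/s you actually get. The path is a placeholder; again, use a 
file bigger than RAM so you're measuring the disk and not the cache.

import time

PATH = r"F:\staging\sample.img"   # placeholder: any large file on the volume
CHUNK = 8 * 1024 * 1024

total = 0
start = time.time()
with open(PATH, "rb") as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)

elapsed = time.time() - start
print(f"read {total / 1e6:.0f} MB in {elapsed:.1f} s = "
      f"{total / 1e6 / elapsed:.1f} MB/s")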

     Bryan


> My backup systems are Solaris, so I have the "luxury" of vxfs
> filesystems for my staging & database areas.
>  
> I do however back up Windows file servers. Are there any guidelines for
> NTFS volumes that people would recommend?
>  
> I'm thinking along the lines of:
> 
>       Defragmenting,
>       Number of streams,
>       LUN virtualization tech,
>       Volume Sizes,
>       Maintaining free space,
>       Snapshot methods,
>       impact of ohh sooo many small files
>        
>       Performance improvements with Advanced client / Flashbackup,
>       SAN Media server,
>       (For the adventurous) SAN client ?
> 
> For example, I currently have pain with about a dozen Windows clients.
> From what I can tell:
> 
>       we do not do defragmentation
>       their LUNs live on HP EVAs, sharing spindles with other hosts
>       Free space is minimal (~7%)
>       Volumes are only ~500GB
>       We back up with multiple streams (we exceed the weekend and daily
> backup windows if we don't - the Windows servers are large)
>       
>  
> Currently, backing up the Windows data servers is a pain point for me.
> I am interested in hearing people's learnings / golden rules when it
> comes to backing up large (over 500 GB) NTFS volumes.
>  
> Adam Mellor
> Senior Unix Support Analyst
> CF IT TECHNOLOGY SERVICES
> Woodside Energy Ltd.
> 
> 
> ________________________________
> 
> From: Ed Wilts [mailto:ewilts AT ewilts DOT org] 
> Sent: Thursday, 14 February 2008 1:17 PM
> To: Mellor, Adam A.
> Cc: VERITAS-BU AT mailman.eng.auburn DOT edu
> Subject: Re: [Veritas-bu] Defrag DSU?
> 
> 
> On Feb 13, 2008 6:22 PM, Mellor, Adam A. <Adam.Mellor AT woodside.com DOT au>
> wrote:
> 
> 
>       Although I am not currently defragmenting my current DSU volumes,
>       I previously had ~4TB in a single DSU under NBU 5.1. This volume
>       was running vxfs
> 
> 
> vxfs says it all, you lucky guy.  NTFS just sucks...  try a 4TB DSSU on
> Windows and see how much fun you have.
> 
> I do like your idea of dropping the threshold to a low value to empty it
> out more frequently though.
> 
> 
>    .../Ed
> 
_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu