Subject: On the Inversion of the Conventional Relationship between File and Volume Sizes.
From: "Allen S. Rout" <asr AT UFL DOT EDU>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Fri, 12 May 2006 08:17:47 -0400
Greetings, all.


I'm musing about strategies to deal with backups of "files" which are
larger than some of my volumes.  The DB2 API client stores full
backups as single files, so I've got 150, 180, 210 GB files rattling
around in my stgpools.
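For reference, this is the flavor of command producing those objects;
a single-session DB2 backup to TSM lands as one large object (the
database name and session counts here are hypothetical):

    db2 backup database PRODDB online use tsm open 1 sessions

If memory serves, opening multiple sessions splits the image into one
object per session, which would shrink the largest single "file",
though I'd want to verify the restore-side session requirements
before leaning on that.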

Now, I'm moving towards virtual volumes as my copy stgpools, and have
so far set the virtual volume size at 20G.  Rationale there is that I
want the files which represent the volumes to be sanely sized for
remote management: reclaiming a 2TB tape when your file size is 400G
sounds irritating.
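For the curious, this is roughly how that's set up.  Server and pool
names are invented, and the parameters are from memory rather than a
cut and paste:

    /* 20G virtual volumes hosted on the target server */
    define devclass OFFSITE-VV devtype=server servername=DRSERVER maxcapacity=20G mountlimit=10

    /* copy pool that writes to those virtual volumes */
    define stgpool OFFSITE-COPY OFFSITE-VV pooltype=copy maxscratch=500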

So this means that, when one of my 20G volumes that contains a snippet
of a DB full comes up for reclamation, I have to move around all
180 GB.   Eugh.

I understand why they reconstruct the fragmented file on reclamation
in the general case; but in this case:

+ I'm going to start with 7 or 8 full volumes, and a head and tail
  segment;

+ I'm going to finish with 7 or 8 full volumes, and a head and tail
  segment.

So reclamation moves roughly 180 GB to free a single 20 GB volume, and
the file ends up just as fragmented as it started.


So I see three major divisions of response, and I'm wondering which of
them have been popular, and what other responses I've missed.


1) Deal with it, whiny-boy.

2) Put your huge files elsewhere (a separate copy pool) so they are
   sanely managed independently (see the sketch below)

3) Use bigger volumes, so the mismatch between the big-file workload
   and the normal workload is less severe (if I have 400-500 GB
   volumes, then moving 210 GB to reclaim one is less odd; again, see
   the sketch below)


I'd prefer to avoid option 1. ;)
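To make 2) and 3) concrete, here is roughly what I have in mind; all
names are invented and the parameters are illustrative, not tested:

    /* 2) route the DB2 fulls to their own primary pool, and give */
    /* that pool its own copy pool of big virtual volumes         */
    define devclass DB2-VV devtype=server servername=DRSERVER maxcapacity=250G
    define stgpool DB2-COPY DB2-VV pooltype=copy maxscratch=100
    backup stgpool DB2-PRIMARY DB2-COPY

    /* 3) just raise the virtual volume size on the existing class */
    update devclass OFFSITE-VV maxcapacity=500G

For 2), binding the DB2 objects to DB2-PRIMARY would also take a
management class whose backup copy group has DESTINATION=DB2-PRIMARY,
selected via an INCLUDE in the API client's options.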


Notions?


- Allen S. Rout
