Why does DB2 go straight to sequential storage?!!

batavus

ADSM.ORG Member
Ok - I'm at a loss here. When backing up DB2 databases, the data for some reason goes straight to tape instead of first landing in BACKUPPOOL, as the policy directs. What causes this?



I have 200GB in the disk storage pool BACKUPPOOL, made up entirely of 2.5GB volumes.



The DB2 session ends up opening a sequential volume directly and writing to it instead of going to the disk storage pool first - wtf?!



Sess Number: 21,266
Comm. Method: Tcp/Ip
Sess State: RecvW
Wait Time: 0 S
Bytes Sent: 6.3 K
Bytes Recvd: 2.2 G
Sess Type: Node
Platform: DB2/AIX64
Client Name: RFPRODDB
Media Access Status: Current output volume(s): TSM013,(258 Seconds)
User Name: navrpt
Date/Time First Data Sent: 04/19/05 21:29:56
Proxy By Storage Agent:





Why would it write to sequential (LTO2) first instead of writing to the disk storage pool, which has adequate space available?
 
It's probably treating the databases as single objects, and if an object is over a certain size it's written straight to tape. This is a TSM default thing. Check the MAXSIZE attribute on the storage pool; if it is set to "nolimit", then everything should go straight into the pool, but if there is a number (usually it is GB), then anything bigger gets shunted straight to tape.
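For example, checking and clearing the limit looks like this (a sketch using the standard TSM administrative commands, with the BACKUPPOOL name from this thread):

tsm> query stgpool BACKUPPOOL f=d
     (check the "Maximum Size Threshold" field in the output)
tsm> update stgpool BACKUPPOOL maxsize=nolimit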
 
Actually here's some further info from a _brilliant_ webpage at

http://people.bu.edu/rbs/ADSM.QuickFacts



---------------------

Backups go directly to tape, not disk



Some shops have their backups first go as intended to a disk storage pool, with migration to tape. But they may find backups going directly to tape. Possible causes:

- The file exceeds the STGpool MAXSize.

- The file exceeds the physical storage pool size.

- The backup occurred choosing a management class which goes to tape.

- Maybe only some of the data is going directly to tape: the directories. Remember that *SM by default stores directories under the Management Class with the longest retention, modifiable via DIRMc.

- Your storage pool hierarchy was changed by someone.

- See also "ANS1329S" discussion about COMPRESSAlways effects.

- Your client (perhaps DB2 backup) may be overestimating the size of the object being backed up. :grin:

- Um, the stgpool Access mode is Read/Write, yes?

A good thing to check: Do a short Select * From Backups... to examine some of those files, and see what they are actually using for a Management Class.
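A sketch of such a select, using the node name from the session output earlier in this thread (BACKUPS is the server's SQL table; CLASS_NAME is the management class each object is bound to):

tsm> select node_name, filespace_name, hl_name, ll_name, class_name from backups where node_name='RFPRODDB'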
 
Hi Batavus -



I had the same problem as you - and I never resolved it. I was able to force the mgmt class to go to the right *tape* pool - which wasn't hard to do - but was never able to send anything to disk as I had specified.



I dropped this issue before as I had other pressing things - but I will pick this back up and work on it, sharing any info I may find. Will look at all of 'toofarnorths' ideas as well...

What server OS are you using, anyway? I had some thoughts about the large-file limits that come with some operating systems - I had ours specifically changed so that I could carve out a large chunk for this... but with our setup it didn't help. Would any of your backups be larger than one of your 2.5 GB slices? DB2 likes to single-stream everything...
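(For what it's worth, DB2 can be told to open several parallel TSM sessions instead of single-streaming; a sketch, where RFPROD is a hypothetical database alias:)

db2 backup db RFPROD use tsm open 4 sessions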



Will get back to you - thanks for sharing your problem!

-Chris
 
Quote:
It's probably treating the databases as single objects, and if an object is over a certain size it's written straight to tape. This is a TSM default thing. Check the MAXSIZE attribute on the storage pool; if it is set to "nolimit", then everything should go straight into the pool, but if there is a number (usually it is GB), then anything bigger gets shunted straight to tape.



The max size is unlimited. Indeed, the objects from DB2 backups come into TSM as single objects - so a full backup of one of their databases ends up as a single 200GB object in TSM. The disk storage pool is built with 2.5GB volumes for a total of 250GB. Since their smaller databases (one of which is just 10GB) also go directly to tape, I'm thinking that TSM will not split a single object that is over 2.5GB across more than one disk storage pool volume. Can anyone confirm that hunch?



I'm thinking it is true - another client of mine backs up Tandems using TSM, and they end up with huge single objects within TSM as well. But in their case, the 1TB disk storage pool is made up of 250GB volumes, and the average object size coming from the Tandem clients is smaller than 250GB but larger than 100GB.



Thanks!
 
Quote (Chris):
I had the same problem as you - and I never resolved it. ... What server OS are you using, anyway? ... Would any of your backups be larger than one of your 2.5 GB slices? DB2 likes to single-stream everything...



The server OS and client OS are both AIX (5.2 ML3). See previous post - I'm thinking it has to do with the small disk volume size and the sizes of the single objects that are coming in.



I'm going to do some experimentation and see if it is indeed the case.... I'll post the results here.
 
I still haven't had a chance to experiment just yet.... Bear with me! :grin:
 
Oh man - I had issues finding the post again! But I didn't forget...



I have not had a chance to try this again either - just wanted to follow up and see if you had any luck yet.



-Chris
 
I doubt this will help you guys since our configs are so different, but since we back up DB2 I thought I'd chime in. Our DB2 backups go to disk first without issue, but our DBs are much smaller than yours. My disk pool consists of seven 2.3 GB MVS volumes (we run TSM on our mainframe), for a total pool size of roughly 16 GB, and we have no problem with DB2 chunks larger than 2.3 GB - some of them are 10 GB - still being written to the disk pool. I'm guessing the MVS version of TSM handles this stuff much differently than the AIX one? I'd find that weird considering TSM, MVS, and AIX are all IBM products.



Bart
 
From what I am reading, it actually sounds like two separate issues. First, I would ask whether your DBA team is pointing their DB backup routine at a disk device (i.e. /dev/dsk/...) or sending it to a tape device (/dev/rmt/...). I do not expect the separate volumes, or the DB2 database streaming across them, to be the issue. Second, take a look at your migration settings - the high and low thresholds (see the sketch below). If the thresholds are low enough, migration to the next storage pool will be triggered, and that appears to be your tape pool. A migration may have been forced some time back and the values never returned to their previous settings; that may be what is happening in your case. The MAXSIZE parameter makes sense too. To compensate for these large single objects, we have similar issues and created a separate domain and management class for the larger objects - specifically databases and Exchange. We have figured out it's just as fast to send the large data to tape as it is to disk.
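For reference, checking and raising the thresholds looks like this (a sketch using standard administrative commands; BACKUPPOOL is the pool from this thread):

tsm> query stgpool BACKUPPOOL
     (check the "High Mig Pct" and "Low Mig Pct" columns)
tsm> update stgpool BACKUPPOOL highmig=90 lowmig=70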

Third, regarding your disks: how are they configured? RAID what? If they are JBOD, then sending the data straight to tape may make sense anyway. Consider buying a SAN device.



If you need more help and you are able to send me your TSM config settings, feel free - I'll compare them to our environment, recommend changes, and provide additional answers.

Steven
 
Forum,
I am having the issue described in this post. DB2 is doing an incremental backup which transfers just 3-5GB per session. MAXSIZE on the disk pool is 25GB, but the backup goes straight to the sequential pool (the next storage pool). There was enough space for the complete backup (39GB), and migration was not running.

I think DB2 incorrectly reports the size of the backup. Unfortunately this article is not helpful: http://www-01.ibm.com/support/docvi...ze&uid=swg21110122&loc=en_US&cs=utf-8&lang=en

Keep you posted if the mystery is resolved.

Edit: I just reread my post, and it seems that DB2 sends the complete backup size (39GB) as an estimate before doing it. But this cannot be right, because this is a DB2 cluster in which different nodes hold the partitions, so to TSM they are different clients. MAXSIZE should be honored, but it is not. :(

Rudy
 
DISK volume

TSM will never split a transaction to fit into more than one DISK volume, so if you want your 10GB databases to fit into your disk pool, you'll have to make sure that the volumes are at least 10GB in size.
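Building bigger volumes is straightforward; a sketch for a TSM 5.x server on AIX, where the path is hypothetical (dsmfmt preformats the file, then it is defined to the pool):

dsmfmt -m -data /tsm/stg/bkup_vol01.dsm 10240
tsm> define volume BACKUPPOOL /tsm/stg/bkup_vol01.dsm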
 
Quote:
TSM will never split a transaction to fit into more than one DISK volume, so if you want your 10GB databases to fit into your disk pool, you'll have to make sure that the volumes are at least 10GB in size.

:confused:

The disk volumes are 20GB in size. The transactions, as I said, are approx. 4GB.

Rudy
 
Folks,
IBM has just confirmed what I suspected:

DB2 provides the wrong size to the TSM API for incremental backups:
http://www-01.ibm.com/support/docvi...up&uid=swg1IY75845&loc=en_US&cs=utf-8&lang=en

So it tries to reserve a huge amount of space (the whole database size) in the disk pool. The disk pool cannot accommodate it, so the backup goes to tape.

I was thinking about a possible solution for this: using the include.compression statement. That way, I think, the API will reserve the real final size of the compressed incremental backup. IBM didn't provide an official statement on whether this is supported. I am a bit concerned about the restore as well.
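For reference, the statement would go in the client options (dsm.sys on AIX) for the DB2 node; a sketch, where the filespace pattern is an assumption:

* dsm.sys stanza for the DB2 node (hypothetical pattern)
COMPRESSION          YES
INCLUDE.COMPRESSION  /RFPROD/.../*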

Someday I will test this.

Rudy
 
I think, from what I can see ("Proxy By Storage Agent"), that you are doing LAN-free backup. That can't go directly to disk; the storage agent sees that there is tape in the storage hierarchy, so it goes directly to tape.
What do you people think about that? Try disabling LAN-free and check whether anything changes.
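Disabling it on the client side is a one-line change (a sketch; the option goes in the node's dsm.sys stanza on AIX):

ENABLELANFREE NO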
 
Hi,
You are right, but: in batavus's post, 'Proxy By Storage Agent' is empty. Thus, no LAN-free.

In my particular case, no LAN-free either.

Good try, but not the case.

Rudy
 