Subject: Re: storage pool copy tapes not being filled
From: "Cook, Dwight E" <cookde AT BP DOT COM>
Date: Thu, 27 Jul 2000 13:05:04 -0500
Well, in a "def stg xxxx po=co" the default collocation is "no", but it kinda
sounds like you might have collocation=yes on your copy pool... with
collocation on, each node's data goes to its own tape, so a few nodes would
give you a few partially filled volumes.
How many nodes are in the primary pool?
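For example, something along these lines from an admin command line would
show the setting and, if need be, change it (pool name per your post):

   query stgpool vru_copy f=d            (check the "Collocate?" field)
   update stgpool vru_copy collocate=no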
I take it there are no ANR????E messages around the time of the backup stg?
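Something like this would pull the activity log for the backup window (the
times here are just placeholders); then scan it for ANRnnnnE messages:

   query actlog begindate=today begintime=12:00 endtime=14:00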
I'm also guessing the tapes are still "readw", are still checked in as
libvols, and show "filling"...
Have you tried a "move data oneofthosevolsinthecopypool stg=vru_copy" to see
if it puts the data on another tape or tacks it onto the end of one of the
other already-used, filling volumes?
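I.e. something like this, with VOL001 standing in for one of your copy pool
volumes:

   move data VOL001 stgpool=vru_copy
   query process                        (to watch it run)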

just some guesses
Dwight

> ----------
> From:         Tim Brown[SMTP:tbrown AT CENHUD DOT COM]
> Reply To:     ADSM: Dist Stor Manager
> Sent:         Thursday, July 27, 2000 12:34 PM
> To:   ADSM-L AT VM.MARIST DOT EDU
> Subject:      storage pool copy tapes not being filled
>
> I have had a problem with a copy storage pool.
>
> The devtype info is:
>
>    Device Type: 3590
>    Unit Name: 3590-1
>    Maximum Capacity (MB):
>    Estimated Capacity (MB): 9,216.0
>
> I have recently created a new storage pool called vru_tape with a copy
> storage pool called vru_copy.
>
> The first primary storage pool tape volume used only about 24% of the
> volume.
>
> I performed the initial copy of the storage pool with "backup stg vru_tape
> vru_copy".
>
> The end result in the vru_copy storage pool was 3 storage volumes:
>
> vol1 4.6% used
> vol2 9.4% used
> vol3 10.0% used
>
> Am I missing something? Shouldn't the data on these 3 volumes fit on just
> 1 volume?
> I have deleted and recreated the storage pool volumes and the primary and
> copy storage pools, but the copy storage pool still goes to multiple tapes
> the first time. Other copy storage pools that I have do not act this way.
> I have reviewed everything that I can find and cannot figure out the
> problem...
>
> Tim Brown
>