Richard,
Here is the Q OCC result.
The Q OCC shows all the data for FSID 86 is in VMCTLMC (seq disk), SLOWDEDUP
(seq disk) or the COPYPOOL (tape COPY pool).
But when the customer tries a restore, the tapes getting mounted are from
primary pool TAPEPOOL2.
How can a restore be calling for tapes in a pool where the filespace has no
data according to Q OCC?
I suggested they open a sev 1 with Tivoli. This can't be right.
tsm: HCG-TSM-SERVER>q occ dc1 86 nametype=fsid
Node Name Type Filespace FSID Storage Number of
Physical Logical
Name Pool Name Files
Space Space
Occupied Occupied
(MB) (MB)
---------- ---- ---------- ----- ---------- ---------
--------- ---------
DC1 Bkup \VMFULL-H- 86 COPYPOOL 8,928
99,057.72 99,062.73
WDDMZOCU-
LARX1
DC1 Bkup \VMFULL-H- 86 SLOWDEDUP 4,464
- 98,733.95
WDDMZOCU-
LARX1
DC1 Bkup \VMFULL-H- 86 VMCTLMC 4,464
315.14 315.14
WDDMZOCU-
LARX1
-----Original Message-----
From: Prather, Wanda
Sent: Wednesday, June 19, 2013 11:03 AM
To: Richard Cowen
Subject: RE: Another VE mystery - restoring from tape - but shouldn't be
>>Did the MOVE NODEDATA result in a zillion new volumes (.BFS's) ?
No, we don't use scratch volumes in the seq disk pool
>>Can you get the activity log for the time the process(es) ran ?
Any chance you have query occupancy node=dcname filespace=victim1
stgpool=<tape,disk> before and after move?
If not, does the query for tape pool now show zero?
Don't have them and right now I don't have access, but those are great ideas,
will ask the customer for them.
Thanks!
Wanda
-----Original Message-----
From: Prather, Wanda [mailto:Wanda.Prather AT icfi DOT com]
Sent: Tuesday, June 18, 2013 11:35 AM
To: Richard Cowen
Subject: RE: Another VE mystery - restoring from tape - but shouldn't be
Hi Richard,
>>Did you get a "zillion" tape mounts during the MOVE NODEDATA?
Yes
>>Do you know it finished without errors?
Yes. And we ran another MOVE NODEDATA to verify there was no more data to move
for that filespace.
>>How, exactly, did the data go from sequential fast disk -> sequential slow
>>disk - > tape ?
Ordinary migration, at different times, as the pools hit migration thresholds.
>>I didn't think TSM would "migrate" more than one level, so maybe the last
>>step was using a MOVE command?
Yes, your migration hierarchy can have as many levels as you want, as long as
you don't try to go from a sequential pool back to a disk(random) pool.
>>What does a QUERY NODEDATA show for primary pools and copy pools?
Would not be informative, as we only moved some of the filespaces, not all of
them.
>>Are you running aggressive reclamation on the tape pool?
No, and it's not collocated. Which is why we decided to move the filespace
back to the seq disk pool.
I don't think it's odd the data was too spread out to make the restore from
tape not work well.
What is odd is that all the data from one VM is supposedly in one filespace.
We moved that filespace back to disk with move nodedata fsid= But still getting
tape mounts on the restore.
Thanks for your interest!
Wanda
-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Prather, Wanda
Sent: Monday, June 17, 2013 9:45 PM
To: ADSM-L AT VM.MARIST DOT EDU<mailto:ADSM-L AT VM.MARIST DOT EDU>
Subject: [ADSM-L] Another VE mystery - restoring from tape - but shouldn't be
TSM 6.2.3
Backup done with TSM VE 6.4.0.0 and TSM client 6.4.0.0.
VMCTLMC points to a dedicated sequential pool on fast disk. That pool has no
NEXTSTGPOOL defined.
VMMC points backup data to a deduplicated sequential pool on fast disk storage.
After the server dedups it there, it migrates to a slower NAS-based
deduplicated storage pool on disk.
When that slower pool fills, data migrates out to tape.
DEDUPREQUIRESBACKUP set to YES.
(No client-side dedup used.)
We have done full VM restores through the plug-in and file-level restores
through the recovery agent in the past, including testing restores from tape,
with no problems.
Now one of the customer's VM datastores has met with an unfortunate accident in
a dark alley.
We need to restore 7 full VM's.
>From the dsmc command line, restore vm victim1 datastore=newhealthyone starts
>up OK, but was calling for a zillion tape mounts, and therefore was restoring
>at the rate of about 4GB per 24 hours.
So, we did MOVE NODEDATA DCNAME FILESPACE=victim1 to bring the data from the
tape back to the sequential disk pool.
Cranked up again, same result - requesting a zillion tape mounts.
So riddle me this:
If the control information is on disk, and the filespace is back on disk, what
are the tape mounts going after?
(FWIW, tried upgrading VE to 6.4.0.1 and data mover to 6.4.0.4, no difference.)
Signed, Confused by VE. Again.
Wanda Prather | Senior Technical Specialist | Wanda.Prather AT icfi DOT
com<mailto:Wanda.Prather AT icfi DOT com> | www.icfi.com<http://www.icfi.com>
ICF International | 401 E. Pratt St, Suite 2214, Baltimore, MD 21202 |
410.539.1135 (o)
|