Subject: Re: [ADSM-L] Restore and mounts
From: Steven Langdale <steven.langdale AT GMAIL DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Wed, 17 Dec 2014 17:06:03 +0000
The point of that technote is that you do not need DIRMC anymore.
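
For anyone who still wants one: a minimal DIRMC setup is just a small
random-access disk pool plus a management class pointing at it. Roughly
(pool, domain and policy set names below are illustrative, not from this
thread):

    def stgpool dirpool disk
    def volume dirpool /tsm/dirpool/vol01 formatsize=2048
    def mgmtclass standard standard dirmc
    def copygroup standard standard dirmc type=backup destination=dirpool
    validate policyset standard standard
    activate policyset standard standard

and on the client, in dsm.sys (or dsm.opt on Windows):

    dirmc dirmc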

On Wed, 17 Dec 2014 16:17 Hans Christian Riksheim <bullhcr AT gmail DOT com> 
wrote:

> Thanks, Nick.
>
> Of course this was the one TSM server where I forgot to create the DIRMC
> diskpool and that explains the restore behavior.
>
> Regards,
>
> Hans Chr.
>
> On Wed, Dec 17, 2014 at 2:52 PM, Nick Marouf <marouf AT gmail DOT com> wrote:
> >
> > This could be normal if TSM is recreating all the directory
> > structures. It creates these first, before restoring the actual data.
> >
> >
> >
> > With newer versions of TSM, a directory management class (DIRMC)
> > shouldn't be necessary, since ACL information is applied at a later
> > point in time. That said, I've seen file servers with millions of
> > directories spread across many tapes, or even a single tape.
> >
> >
> >
> > You may want to open a ticket with support for confirmation, but the
> > symptoms you are reporting are similar to a problem I had a while back.
> >
> >
> >
> > See this technote for a bit more background.
> >
> >
> >
> > http://www-01.ibm.com/support/docview.wss?uid=swg21669468
> >
> >
> >
> > On Wed, Dec 17, 2014 at 3:37 AM, Hans Christian Riksheim <
> > bullhcr AT gmail DOT com>
> > wrote:
> > >
> > > I am doing a file system restore. The data for this node spans 35
> > > volumes and is collocated by filespace.
> > >
> > > In the last 24 hours there have been 700 tape mounts for this
> > > restore session. One volume has been mounted 346 times. The total
> > > amount restored is about 200 GB.
> > >
> > > q ses f=d tells me that this is a NoQueryRestore.
> > >
> > >
> > > Is this to be expected?
> > >
> > >
> > > Regards
> > >
> > > Hans Chr.
> > >
> >
>
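
Following up on the mount counts above: while the restore is running you
can quantify the thrash from the server side, e.g.

    q mount
    q actlog begindate=today-1 search=mounted

(the actlog search string is just a coarse filter). 346 mounts of a
single volume is consistent with directory entries being chased across
the tape one at a time, which is the behaviour the technote describes.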
