Subject: Re: [ADSM-L] Lots and lots of FILLING volumes on Replication Target server
From: Zoltan Forray <zforray AT VCU DOT EDU>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Wed, 26 Apr 2017 09:35:48 -0400
Interesting methodology you use.  I would never have thought to lower the
reclaim threshold to <50% reclaimable, since I felt it would cause lots of
thrashing/constant reclamation given the amount of data constantly coming
in.  With tape, I always kept the threshold at >=66% reclaimable (volumes no
more than about 34% utilized) so consolidation would be roughly 3->1.
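For reference, the threshold itself is just a storage pool setting; a quick
macro-style sketch of both approaches from dsmadmc, with a made-up pool name:

    /* only volumes >=66% reclaimable (<=34% utilized) are eligible, roughly 3->1 */
    update stgpool REPLTGT_FILE reclaim=66
    /* the more aggressive 30-40% range suggested for disk */
    update stgpool REPLTGT_FILE reclaim=40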

What do you have your Maximum Capacity set to for your devclass?  I was at
256GB (this is a 500TB NFS storage area) but reduced it to 128GB.
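In case it helps anyone reading along, that change is a single device class
update, something like this (device class name is hypothetical):

    update devclass REPLFILE maxcapacity=128G
    query devclass REPLFILE format=detailed

Note that the new maximum only applies to volumes created after the change;
existing volumes keep the size they were created with.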

My issue is not only that I have >700 volumes in FILLING state (plus over
1,700 FULL), but that some of them haven't been accessed/written to in 3
months even though replication runs every day.  Why is it constantly creating
new volumes when all these FILLING volumes, most of them <0.5% used, are
sitting there?
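For anyone wanting to check the same thing on their own server, a SELECT
along these lines (pool name is hypothetical) should list the stale FILLING
volumes, oldest writes first:

    select volume_name, pct_utilized, last_write_date from volumes where status='FILLING' and stgpool_name='REPLTGT_FILE' order by last_write_date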

Per the other recommendation, I tried switching to NODE collocation, but that
created hundreds more volumes as soon as replication kicked off, so I went
back to no collocation and used 'move data' to get rid of them (now down to
<500 filling).
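For anyone in the same spot, the commands involved are roughly these (pool
and volume names are placeholders); each MOVE DATA pushes the contents of a
FILLING volume onto existing volumes so the emptied file volume goes back to
scratch:

    update stgpool REPLTGT_FILE collocate=no
    move data /tsmfile/00001234.BFS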

On Wed, Apr 26, 2017 at 8:16 AM, Stefan Folkerts <stefan.folkerts AT gmail DOT com> wrote:

> I don't really understand what the problem is, maybe I am missing
> something. :-)
> Just use the maxscratch value on the storage pool and calculate it so that
> you have enough space free for reclaims and DB backups.
> You can keep the reclaim value at 30-40%, or at least run it like that for a
> few hours every day (that is what I did when I still used file pools), to
> make sure you don't waste too much space on full volumes that never get
> reclaimed. I think reclaiming at 65% is way too high for disk.
>
> What I did was create a device class with a 20GB maximum size and set the
> maxscratch on the pool to cover the total size minus something like 25%.
> That should leave enough space for DB backups, and if the storage pool does
> hit 100% full you can add storage; until you have that sorted out you can
> increase the maxscratch value. It's sort of like a soft quota.
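Working that suggestion through with round numbers, purely as an
illustration: 500 TB of usable space with 25% held back and 20 GB volumes
gives 500,000 GB x 0.75 / 20 GB, roughly 18,750 scratch volumes. The device
class, directory, and pool names here are made up:

    define devclass FILE20G devtype=file maxcapacity=20G mountlimit=128 directory=/tsmfile
    update stgpool REPLTGT_FILE maxscratch=18750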
>
> Every replication session needs its own volume to replicate to, I think, so
> when you do many-to-one and have many sessions, things add up quickly.
> Filling volumes are nothing to worry about, in my opinion.
>
>
>
>
> On Tue, Apr 25, 2017 at 5:02 PM, Zoltan Forray <zforray AT vcu DOT edu> wrote:
>
> > I do not think collocation works for a replication target server.  After
> > spending many hours removing over 300 filling volumes by hand, as soon as
> > replication started from the two source servers, over 100 new filling
> > volumes appeared!
> >
> > On Mon, Apr 24, 2017 at 2:29 PM, Sasa Drnjevic <Sasa.Drnjevic AT srce DOT hr> wrote:
> >
> > > On 2017-04-24 19:24, Zoltan Forray wrote:
> > > > Collocation is also not a good choice.  Since this is the replication
> > > > target and there are over 700 nodes, that would cause 700 filling
> > > > volumes at all times.
> > >
> > >
> > > Not if you collocate by group.  If, for example, you have 8 nodes in a
> > > group, they all share a single filling volume.
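A minimal sketch of what that looks like on the command line, with made-up
group, node, and pool names:

    define collocgroup REPLGRP01
    define collocmember REPLGRP01 NODE_A,NODE_B,NODE_C
    update stgpool REPLTGT_FILE collocate=group

Nodes that are not assigned to any group are still collocated individually,
so ungrouped nodes behave as if COLLOCATE=NODE were set.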
> > >
> > > But, of course - it all depends on the size of the nodes, size of the
> > > volumes, retention period, total capacity, etc
> > >
> > > And maybe you should consider converting your file pools to directory
> > > container pools, since you are using dedupe... But you'd better upgrade
> > > all servers to v7.1.7.x or v8.1 first...
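If it ever gets to that point, the conversion itself is only a few commands;
a rough sketch with invented pool and directory names (CONVERT STGPOOL needs
the 7.1.7+ level mentioned above):

    define stgpool REPLTGT_CONT stgtype=directory
    define stgpooldirectory REPLTGT_CONT /tsmcont/dir01
    convert stgpool REPLTGT_FILE REPLTGT_CONT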
> > >
> > >
> > > Regards.
> > >
> > > --
> > > Sasa Drnjevic
> > > www.srce.unizg.hr
> > >
> > >
> > >
> > >
> > > >
> > > > On Mon, Apr 24, 2017 at 9:41 AM, Sasa Drnjevic <Sasa.Drnjevic AT srce DOT hr> wrote:
> > > >
> > > >> On 24.4.2017. 15:29, Zoltan Forray wrote:
> > > >>> On Mon, Apr 24, 2017 at 9:02 AM, Sasa Drnjevic <Sasa.Drnjevic AT srce DOT hr> wrote:
> > > >>>
> > > >>>> -are those volumes R/W ?  if not, check ACTLOG
> > > >>>>
> > > >>>> -check MOUNTLimit for affected devclass(es)
> > > >>>>
> > > >>>> -check MAXSIze for affected stg pool(s)
> > > >>>>
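For anyone wanting to run the same three checks, the standard queries from
dsmadmc would be:

    query volume * status=filling access=readwrite
    query devclass format=detailed
    query stgpool format=detailed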
> > > >>>
> > > >>> Hi Sasa,
> > > >>>
> > > >>> Thank you for the hints.
> > > >>>
> > > >>> Yes, all are R/W.
> > > >>>
> > > >>> How do MOUNTLimit and MAXSize (set to NOLIMIT) affect the filling
> > > >>> volumes?
> > > >>
> > > >>
> > > >> In the case of disk that migrates to tape: if a disk volume is too
> > > >> small to hold a big file, the data store process will mount and
> > > >> directly use tape instead of disk...
> > > >>
> > > >> Not sure what happens when only sequential disk is used...
> > > >>
> > > >> MOUNTLimit could cause trouble if it's set too low, but that doesn't
> > > >> seem to be the case here...
> > > >>
> > > >> The question is why it is not reusing the 0.5%-full filling volumes...
> > > >> Can you try collocation on a small group of nodes?
> > > >>
> > > >> Regards.
> > > >>
> > > >> --
> > > >> Sasa Drnjevic
> > > >> www.srce.unizg.hr
> > > >>
> > > >>
> > > >>
> > >
> >
> >
> >
> >
>



--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
Xymon Monitor Administrator
VMware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zforray AT vcu DOT edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html