Subject: [Networker] SV: [Networker] Staging: Looks like the wrong save sets have been staged
From: Tony Albers <Tony.Albers AT PROACT DOT DK>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Tue, 7 May 2013 14:09:34 +0000
OK, it can be quite confusing. But just remember that what NetWorker sees is 
the usage reported by the filesystem. So if you have 3 file-type devices on 
one physical partition/LUN, their utilization levels will move in parallel. 
(English is not my primary language, so bear with me here.)

I'm not that familiar with the internal workings of the staging process when it 
comes to multiple file-type devices (ftds), so I can't say whether it treats 
save sets from different volumes/ftds differently, but I would not use the same 
staging policy for several ftds/volumes. I'd create one for each. The 
automation I'm talking about is when a device reaches the high-water mark and 
staging of save sets to tape starts automatically. The fact that you can only 
choose a certain percentage, and not a specific time or date, opens the door to 
processes fighting over the tape drive(s) if anything other than staging is 
using them (which you have, because sometimes you'll have to do a restore, I 
imagine).

I would lower the thresholds; 95% is a pretty high utilization rate for almost 
any type of filesystem. Actually, anything above 85% is risky, and when you 
hit the 90% mark, performance starts to degrade rapidly.
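As a rough illustration of keeping an eye on utilization outside NetWorker, here is a minimal shell sketch; the mount point /nsr/ftd1 and the 85% threshold are assumptions for the example, not NetWorker settings:

```shell
#!/bin/sh
# Warn when a file-type device's filesystem passes a high-water mark.
# /nsr/ftd1 is a hypothetical mount point; substitute your own.
FS=${1:-/nsr/ftd1}
HWM=85   # suggested ceiling from the discussion above

# df -P gives one POSIX-format line per filesystem; field 5 is "Use%".
USED=$(df -P "$FS" | awk 'NR==2 { gsub("%", "", $5); print $5 }')

if [ "$USED" -ge "$HWM" ]; then
    echo "WARNING: $FS is ${USED}% full (high-water mark ${HWM}%)"
else
    echo "OK: $FS is ${USED}% full"
fi
```

Run from cron, this gives you the "look for some patterns" data before trusting NetWorker's automatic watermark-driven staging.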

When I say I use scheduled cloning, I mean the "Clones" category in NMC, under 
Configuration. Here (in the first tab) I decide browse and retention times for 
the save sets in the DESTINATION pool (which is a tape pool), WHAT pool to 
clone to (the same tape pool), how many copies I want in my destination pool 
(usually 1), and WHEN to clone (start time and how often). In the second tab I 
decide where to clone FROM (a pool on disk), and how far back I want to "look" 
for save sets. So if you schedule cloning for every Friday at 1 AM, you should 
"look" back at least 14 days to make sure that everything is picked up in case 
a cloning run fails because of a faulty tape drive or anything like that. Save 
sets already cloned to the destination pool will be ignored and not cloned 
again.
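The same operation can be driven from the command line with nsrclone; a sketch only, with placeholder pool, volume, and save-set names:

```shell
# Clone everything currently on a disk volume into the tape pool.
# "TapeClonePool" and "DiskVol.001" are placeholders for your own names.
nsrclone -b "TapeClonePool" -V DiskVol.001

# Or clone a single save set by its ID:
nsrclone -b "TapeClonePool" -S ssid
```

Like the NMC schedule, nsrclone skips save sets that already have a copy in the destination pool.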

At the same time, I set retention and browse for every client to 2-3 weeks. 
This means that the save sets on disk will only be there for that long; after 
that period of time they will only be available on tape, which is pretty much 
what you normally want when using staging.

I suggest you create a test disk and tape pool and try it out, there's no harm 
in that.

/tony

-----Original Message-----
From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On 
behalf of tammclaughlin
Sent: 7 May 2013 15:33
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: [Networker] Staging: Looks like the wrong save sets have been staged

I think I understand you now.

On the one filesystem, I have 3 devices/volumes (unix, Linux, notes) and the 
one staging policy. This staging policy is set to stage from all 3 devices.

So if the staging policy just "sees" save sets that exist somewhere under the 
filesystem then it should work as I expect.
However, if it "sees" and treats save sets differently under each volume then I 
can see a problem.

I'm not sure about automating the staging. The thresholds are tight, so I would 
need to look for some patterns while doing this manually before I looked at 
automating it.

When you say that you do not use staging but cloning, do you use nsrstage -S 
ssid/cloneid, or do you actually clone and then delete the save sets?
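For reference, the two approaches differ in one step; a sketch, with the pool name and ssid/cloneid as placeholders:

```shell
# Staging: a single operation that writes the save set to the tape pool
# and then removes the source copy from the disk device.
nsrstage -b "TapePool" -m -S ssid/cloneid

# Clone-then-expire: nsrclone copies the save set to tape but leaves the
# disk copy in place; it only disappears from disk when its browse and
# retention periods end.
nsrclone -b "TapePool" -S ssid
```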

+----------------------------------------------------------------------
|This was sent by tam.mclaughlin AT gmail DOT com via Backup Central.
|Forward SPAM to abuse AT backupcentral DOT com.
+----------------------------------------------------------------------