Partial incremental backup?

ldmwndletsm

ADSM.ORG Senior Member
Joined
Oct 30, 2019
Messages
232
Reaction score
5
Points
0
PREDATAR Control23

I've read nearly everything I can find on this until I'm blue in the face, including searching through this site and the help pages in dsmc on the client. This is for Linux. I'm still a little fuzzy on what initiates a partial incremental backup and how it differs from a full incremental backup. In this case, I'm excluding incremental-by-date from the discussion, since that makes more sense to me.

Can someone kindly explain to me then what exactly happens (or doesn't happen; maybe that's more important) during a partial incremental backup? Other than the fact that you're not hitting all the data in the file system, what does it not do that a full incremental does?

[ This is what I've gathered ]
When I run dsmc on the client (interactively or passing arguments to it from the command line) and I specify 'incr', or if a scheduled backup is running and the Action parameter is set to Incremental, then a full incremental will be performed as long as the specified path is a file system, e.g. /home, and not a subdirectory, e.g. /home/username. If I specify no file system, but I have statements in the stanza like 'domain filesystem1', 'domain filesystem2', etc., then likewise a full incremental will be performed.

Is this correct?
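To make the distinction concrete, here is a sketch of the two invocation styles. The server name, stanza contents, and paths are all hypothetical, but the command forms follow the dsmc client:

```
* dsm.sys stanza (hypothetical server and filesystem names):
SErvername  tsmserver1
   domain /home
   domain /data

# Full incremental: the spec is a whole filesystem, or there is no
# spec at all (then every 'domain' entry above is processed):
dsmc incremental /home
dsmc incremental

# Partial incremental: the spec is a subdirectory or file pattern,
# so only matching objects are scanned:
dsmc incremental /home/username/ -subdir=yes
dsmc incremental '/home/username/*.c'
```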

[ This is where I'm confused ]
A partial incremental might not expire files? So if the file is deleted from disk then it won't be marked as inactive? Or it will, but values defined in the copygroup, like verdeleted, verexists, etc. might not be updated? Both? Also it won't rebind files if a management class changes? That right?

As near as I can tell, I'm not doing any partial incrementals. All backups are full incrementals. But if I elected to run a partial incremental then what is the down side? What do I need to be aware of? What is the gotcha there?

Thanks.
 

Other than the fact that you're not hitting all the data in the file system, what does it not do that a full incremental does?
You hit the nail on the head:
- full incremental scans the whole filesystem and updates the last backup date on the filespace if successful
- partial incremental only scans the directory(ies) you are backing up; it doesn't update the last backup date on the filespace, regardless of whether the backup succeeds or fails.

There's nothing more to it than that.

A partial incremental might not expire files?
Where did you read that? Say you backed up /filesystem/directory/* yesterday and it contained file1, and file1 was deleted after the backup. When you back up /filesystem/directory/* today, file1 will be expired. That's fairly easy to test, too: create a test directory, create a test file, back up the test directory, delete the test file, then back up the test directory again.
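That test procedure can be sketched as a short script. The dsmc invocations are left as comments, since they assume a configured Spectrum Protect client; only the local filesystem steps actually run here:

```shell
#!/bin/sh
# Sketch of the expiration test described above.
TESTDIR=$(mktemp -d)

touch "$TESTDIR/file1"
# dsmc incremental "$TESTDIR/*"              # run 1: file1 is backed up (Active)

rm "$TESTDIR/file1"
# dsmc incremental "$TESTDIR/*"              # run 2: file1 is expired (marked Inactive)
# dsmc query backup "$TESTDIR/*" -inactive   # file1 should now show as Inactive

if [ ! -e "$TESTDIR/file1" ]; then
    echo "file1 deleted locally"
fi
rmdir "$TESTDIR"
```

After the second incremental, querying with -inactive should show file1 as an inactive version subject to the copy group's VERDELETED/RETONLY rules.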

As near as I can tell, I'm not doing any partial incrementals. All backups are full incrementals. But if I elected to run a partial incremental then what is the down side? What do I need to be aware of? What is the gotcha there?
What's your requirement for not backing up the entire filesystem? Is it because there's data you don't need backed up? If so, I'd exclude it instead. If it's a subset of data you need backed up separately for some reason, you could use a VIRTUALMOUNTPOINT and Spectrum Protect will treat the directory as a filesystem with its own filespace.

One advantage of backing up everything at the filesystem/filespace level is that the At-Risk indicator in the Operations Center will be accurate, and the last backup date of the filespace can also be used to determine the last successful backup.
 

You hit the nail on the head:
- full incremental scans the whole filesystem and updates the last backup date on the filespace if successful
- partial incremental only scans the directory(ies) you are backing up; it doesn't update the last backup date on the filespace, regardless of whether the backup succeeds or fails.

There's nothing more to it than that.

Okay. I think I understand. So if you went, say, a week without running a full incremental then when you queried the file space, you'd only see the date of the previous full incremental, and that would be a week old so you'd have no reliable or quick and obvious way to know what may have transpired in the interim? Something like that? Might not be explaining correctly there.


Where did you read that? Say you backed up /filesystem/directory/* yesterday and it contained file1, and file1 was deleted after the backup. When you back up /filesystem/directory/* today, file1 will be expired. That's fairly easy to test, too: create a test directory, create a test file, back up the test directory, delete the test file, then back up the test directory again.

Yes, I concur with what you're saying. Not debating the logic, and what you say is corroborated here under "If the file specification does not match all files in a path: ":

https://www.ibm.com/support/knowledgecenter/es/SSGSG7_7.1.4/client/c_bac_fullpart-2.html

However, here are two links where they indicate the opposite:

https://www.ibm.com/support/pages/node/5000613 (Closed as documentation error, so I guess this was wrong?)

Third paragraph under this link (French version):
https://www.ibm.com/support/knowled...m.itsm.srv.doc/c_mplmntpol_incrmntlbckup.html

Maybe where this comes into play is with an incremental by date backup. See the first bulleted item under the following link:

https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.2/client/c_bac_fullvsincr.html

Anyway, I think I must have seen this in sundry links yesterday, but I missed the point (and this is key) that only the files matching the specification would be subject to expiration. Clearly, in your example, anything under /filesystem/some_other_directory that was deleted on the client would not be expired, since it doesn't match, but that's obvious. Maybe I was confusing this, or simply overlooked it?


What's your requirement for not backing up the entire filesystem? Is it because there's data you don't need backed up? If so, I'd exclude it instead. If it's a subset of data you need backed up separately for some reason, you could use a VIRTUALMOUNTPOINT and Spectrum Protect will treat the directory as a filesystem with its own filespace.

In most cases, we are backing up entire file systems, and nothing is being skipped. However, there's a collection of data (a number of file systems) whose mount point pathnames (the filesystem names themselves) contain directory names that might not be around long term (I didn't set this up; I just have to deal with it) and could be reorganized in the future. So I will be using bind mounts (I've been told those names are static and will not change) that start one level beneath the lost+found directory of each file system, which means I will need to back up the bind mounts and the lost+found directories for the associated file systems separately.

As discussed in my other post (Full incremental backup for bind mount?), this could be solved perhaps by adding another bind mount for the lost+found directory itself, or a more arduous workaround with an exclude statement (something I was hoping to avoid), but I was not aware of the VIRTUALMOUNTPOINT option. I think that would fix things in a jiff. I will try that. Thanks!
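For what it's worth, VIRTUALMOUNTPOINT is a client option set in dsm.sys on Linux. A sketch, with the paths made up to match the bind-mount scenario above:

```
* dsm.sys — hypothetical paths. Each VIRTUALMOUNTPOINT makes the client
* treat that directory as a filesystem with its own filespace, so an
* 'incr' against it is a full incremental for that filespace.
SErvername  tsmserver1
   VIRTUALMountpoint /filesystem/bindmount
   VIRTUALMountpoint /filesystem/lost+found
   domain /filesystem/bindmount
   domain /filesystem/lost+found

# Then, on the client:
#   dsmc incremental /filesystem/bindmount
```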


One advantage of backing everything at the filesystem/filespace level is the the At-Risk in the Operation Center will be accurate and the last backup date of the filespace can also be used to determine the last successful backup.

Right.
 