NDMP Backup fills up active log

Trident
TSM/Storge dude · ADSM.ORG Moderator
Joined: Apr 2, 2007 · Messages: 612
Location: Oslo, Norway · Website: www.basefarm.no
Hi,
I am trying to back up a 6 TB NAS with the BACKUP NODE command. It starts and sends data, but it does not empty the active log.

The server is AIX running 8.1.8.100, and there is a PMR open with IBM.

We have tried restarting the job a few times, but that did not help. We have disabled reorg to see if that could impact the job (no change). We have doubled the active log from 128 GB to 256 GB. That did not help either.

We are now trying to run the job without a TOC. The other jobs had TOC=PREFERRED, but without a management class to write it to, it was simply skipped.

Has anyone seen this behaviour before?
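This is what I have been running to confirm that the log is pinned rather than simply full: query log shows the active log usage, show logpinned reports the transaction holding the log, and query session shows the NDMP session's state. Note that show logpinned is an undocumented diagnostic command, so whether it behaves the same on 8.1.8.100 is an assumption:

```
query log format=detail
show logpinned
query session format=detail
```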
 
It's not that the active log is filling (well, it is), but what's really happening is that the active log is pinned. Active log files are kept until all the transactions in the oldest active log file are committed, and only then is that file archived.

So if a client starts a transaction in the first log file and that transaction is long-running and still not committed, you could have millions of transactions that occur afterwards in more recent log files, all committed and ready to be archived. However, they can't be archived until that first transaction is committed, so that the first log file can be archived, followed by the others.

You're on the right path: testing without the TOC will isolate whether it is causing the long-running transaction.

You usually have limited options with this:
- try to eliminate the long-running transaction or make it run faster
- run the long transaction when there's less activity, so fewer committed transactions accumulate in the active log once it's pinned
- increase the active log size to reduce the risk of running out of space. The maximum is 512 GB, and if the previous two suggestions don't pan out, this will be your next logical course of action.
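On that third option: the active log size is set with the ACTIVELOGSIZE server option in dsmserv.opt (value in MB) and requires a server restart to take effect. A sketch assuming the 512 GB maximum; the archive log path is a placeholder:

```
* dsmserv.opt (value in MB; restart the server instance afterwards)
ACTIVELOGSIZE 524288
* the archive log must have room to absorb the backlog once the log unpins
ARCHLOGDIRECTORY /tsm/archlog
```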

Not sure if you are using container pools or not, but this may apply regardless:
https://www.ibm.com/support/pages/ndmp-ingestion-directory-container-storage-pool-appears-hang
 
Hi,


Thanks for the answer,

I know the log is pinned, but I keep wondering when it will be released (committed). The SP server does not hold a lot of data, but this NAS is on a 100 Mbit link and has about 6 TB of data. At 100 Mbit, that is about 5 days of transfer. There is only one share on it. Not sure if I can use some other means of mapping to create more directories to back up.

I use devclass FILE for this backup.
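For context, the FILE device class behind the TOD pool would look roughly like this; the mount limit is a placeholder, while the 16 GB volume size and the directories match the q vol output further down:

```
define devclass tod devtype=file maxcapacity=16G mountlimit=20 directory=/tsm/tod/05,/tsm/tod/06,/tsm/tod/09,/tsm/tod/10
```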

Even if I max out the log space, it will not be enough to cope with 6 TB of data over a 100 Mbit link.

I left it running for a few hours, and the active log rose from 2 GB to 46 GB as I am writing. It has about the same rate of increase as with a TOC (actually, it was preferred). So, that did not help.

Looking at q vol:

Volume Name              Storage    Device      Estimated  Pct   Volume
                         Pool Name  Class Name  Capacity   Util  Status
------------------------ ---------- ----------- ---------- ----- --------
/tsm/tod/05/00000227.BFS TOD        TOD         16,0 G     0,0   Empty
/tsm/tod/06/00000228.BFS TOD        TOD         16,0 G     0,0   Empty
/tsm/tod/09/00000229.BFS TOD        TOD         16,0 G     0,0   Empty
/tsm/tod/10/0000022A.BFS TOD        TOD         16,0 G     0,0   Empty

I guess that the status will not be updated until the backup has finished.

From the PMR, I was told to increase the log to 350 GB. I will max it out when the other backups are done.
 
Just to follow up here, ISP 8.1.10 has fragmentation support if you back up NDMP over LAN to an ISP storage pool, which brings it into parity with other backup types.

As a result, the Db2 transaction commits and restarts regularly, which prevents the log from pinning.

The corresponding limitations document has been refreshed: https://www.ibm.com/support/pages/node/744203
 
Hi,
Thanks for the update. I will look into this.

Trident
 
Hi,

Could you please share the procedure to configure NDMP backups to container pools?
 
NetApp NAS system
SP server 8.1.19.000

I figured out the steps below, please correct me:

Define a container storage pool.
Update the existing copy group intended for the NAS destination to point to the container pool, and set the TOC destination to the same pool.

REGISTER NODE TYPE=NAS
DEFINE DATAMOVER TYPE=NAS DATAFORMAT=NETAPPDUMP

Then run BACKUP NODE for the NAS node.
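Put together as a hedged macro sketch of those steps (the pool, domain, node names, addresses, password, and the NAS file system path are placeholders, not confirmed values from this thread):

```
/* 1. Directory-container pool as data and TOC destination */
define stgpool nascont stgtype=directory
define stgpooldirectory nascont /tsm/nascont
/* 2. Point the NAS domain's backup copy group and TOC at it */
update copygroup nasdom standard standard type=backup destination=nascont tocdestination=nascont
activate policyset nasdom standard
/* 3. NAS node and data mover */
register node nasnode secretpw type=nas domain=nasdom
define datamover nasnode type=nas hladdress=filer.example.com lladdress=10000 userid=root password=secret dataformat=netappdump
/* 4. Run the backup (toc=preferred needs the tocdestination above) */
backup node nasnode /vol/vol1 toc=preferred
```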
 