Oracle TDP: Same file backed up twice

Chandra
ADSM.ORG Member, joined Apr 19, 2016
Hi,
I'm taking Oracle RMAN backups to tape using TDP. Everything is configured and backups are running normally.
In the tdpo.opt file I haven't specified any other management class for a duplex backup, so it uses the default management class.
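For reference, duplexed copies are normally bound to their own management classes in tdpo.opt. The fragment below is only a sketch: the option name follows the Data Protection for Oracle documentation as I recall it, and the node and class names are invented for illustration, not taken from this setup.

```
* tdpo.opt sketch (illustrative only; node and class names are invented)
TDPO_NODE          exa_ora
* the second copy of a duplexed backup binds to its own management class:
TDPO_MGMT_CLASS_2  DUPLEX_MC
```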

While checking the contents, I noticed that the same RMAN backup piece appears to be backed up twice.

For example: the total RMAN backup size is 100 GB, but on tape it shows as 200 GB.

What could be the issue?

Thanks in advance :)
 
What commands did you use to come to that conclusion?

Are you sure it's the same backup stored twice? Or can it be yesterday's backup + today's backup?
 
Hi,

I used the following query to check:

select FILE_SIZE,FILE_NAME from contents where volume_name in (VOLUME_NAME)

Eg:

21,475,885,056 //lesu7ltm_4_2
21,475,885,056 //lesu7ltm_4_2

The file name and size are the same. And no, it's all today's backup.
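As a quick sanity check (a sketch, not a TSM tool; the rows are copied from the query output above), you can count how many rows each backup-piece name has. A repeated name by itself doesn't yet tell you whether it's two stored copies or two rows for one object:

```python
# Count rows per file name from the CONTENTS query output.
# A name appearing twice may mean two stored copies, or one object
# reported once per aggregate fragment.
from collections import Counter

rows = [
    (21475885056, "//lesu7ltm_4_2"),
    (21475885056, "//lesu7ltm_4_2"),
]

counts = Counter(name for _, name in rows)
for name, n in counts.items():
    print(f"{name}: {n} rows")
```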
 
Is the object_id the same or different?
Are they both on the same volume or different?
 
The object ID is the same, and both are on the same volume (disk pool).
 
If the object ID is the same, those are two records for the same object, not the same object stored twice. Likely multiple aggregates.

You can see by doing:
show invo {bitfile_id}
 
Following is the output for the same object with two different bitfile IDs:

File_name=//lnsuaa9n_1_2
File_Size=21475885056
OBJECT_ID = 2188923901 & 2188923901
bitfile_id = 2188923901 & 2188923941

What should be checked in these inventory objects, and why does it come out like this?

Inventory object 2188923901 of copy type Backup has attributes:
NodeName: EXA_ORA, Filespace(3): /adsmorc,
ObjName: //lnsuaa9n_1_2.
hlID: EBBFFB7D7EA5362A22BFA1BAB0BFDEB1617CD610
llID: F8B480CD6650E84DEE119FD1B341769CF333767C
Type: 2 (File) MC: 1 (DEFAULT) CG: 1 Size: 21475885056 HeaderSize: 446
Active, Inserted 03/21/18 18:47:33 (UTC 03/21/18 13:17:33)

GroupMap 00000000, bypassRecogToken NULL, flags 0010
Bitfile Object: 2188923901
**Super-Bitfile 2188923901 is a Super Aggregate with 2 fragments
**Fragment 0, bitfile 2188923901, pendingId: -1
Active
**Disk Bitfile Entry
Bitfile Type: PRIMARY Storage Format: 22
Logical Size: 11011439280 Physical Size: 11013459968 Number of Segments: 1, Deleted: False
Storage Pool ID: 8 Volume ID: 110341 Volume Name: VOL01
**Fragment 1, bitfile 2188923941, pendingId: -1

**Disk Bitfile Entry
Bitfile Type: PRIMARY Storage Format: 22
Logical Size: 10457892528 Physical Size: 10459811840 Number of Segments: 1, Deleted: False
Storage Pool ID: 8 Volume ID: 110341 Volume Name: VOL01

show invo 2188923941

Object 2188923941 NOT FOUND.
Bitfile Object: 2188923941
**Super-Bitfile 2188923941 is a fragment in Super Aggregate 2188923901

**Disk Bitfile Entry
Bitfile Type: PRIMARY Storage Format: 22
Logical Size: 10457892528 Physical Size: 10459811840 Number of Segments: 1, Deleted: False
Storage Pool ID: 8 Volume ID: 110341 Volume Name: VOL01
 
What should be checked in these inventory objects?
**Super-Bitfile 2188923901 is a Super Aggregate with 2 fragments

So it's a single object that's stored in two fragments. Q CONTENT doesn't know the size of each fragment; it only knows the size of the whole object, so that's what it shows for every fragment.
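A minimal sketch of the accounting effect, using the sizes from this thread: each fragment row repeats the full object size, so a naive sum over CONTENTS doubles the number, while de-duplicating by file name gives the real stored size.

```python
# Two CONTENTS rows for one aggregated object: each fragment row
# reports the size of the whole object, not the fragment's own size.
rows = [
    (21475885056, "//lnsuaa9n_1_2"),  # row for fragment 0
    (21475885056, "//lnsuaa9n_1_2"),  # row for fragment 1
]

naive_total = sum(size for size, _ in rows)                     # double-counted
real_total = sum({name: size for size, name in rows}.values())  # de-duplicated

print(naive_total)  # 42951770112 -- looks like twice the backup
print(real_total)   # 21475885056 -- actual object size
```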
 
Is there a way to avoid a single object being stored in two fragments? Because it creates confusion when checking the contents.
I hope this won't cause any issue during file restoration.
 
That's how it works. Sometimes it's how the client sends the data; sometimes it's the server splitting it. Nothing you can do about it. The data is good.
 