Data Domain replication issue

ShivaReddy · Newcomer · Joined Mar 16, 2016 · Messages: 1 · Reaction score: 0 · Points: 0
I did a fastcopy to the replication source folder with 8 TB of data, but I only needed to replicate 3 TB of it, so I deleted the data I didn't need and replication started. To my surprise, replicating the 3 TB data set to the destination folder took almost the same time as 8 TB would have, which suggests replication verified 8 TB of data instead of 3 TB even though I deleted the 5 TB. Is this the nature of Data Domain, or is this an issue?
 
Replication time is affected by many factors, with data size being a major one.

However, replication can take the same time for data sets of different sizes when the mix of small and large files differs. An 8 TiB chunk containing 2,000 larger files may replicate faster than, or in the same time as, a 3 TiB chunk containing 4,000 smaller files.
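To make the file-count effect concrete, here is a toy model (my own illustration, not Data Domain's actual scheduling): if each file carries a fixed per-file overhead on top of the raw transfer time, a smaller data set with more files can take longer overall. The bandwidth and overhead figures below are made-up example values.

```python
TiB = 2 ** 40
GiB = 2 ** 30

def replication_time(total_bytes, file_count,
                     bandwidth_bps=10 * GiB, per_file_overhead_s=0.5):
    """Toy estimate: per-file metadata overhead plus raw transfer time.
    Both parameter defaults are illustrative, not measured values."""
    return file_count * per_file_overhead_s + total_bytes / bandwidth_bps

big_few = replication_time(8 * TiB, 2000)     # 8 TiB in 2,000 large files
small_many = replication_time(3 * TiB, 4000)  # 3 TiB in 4,000 small files
print(f"8 TiB / 2,000 files: {big_few:.0f} s")
print(f"3 TiB / 4,000 files: {small_many:.0f} s")
```

With these example numbers the 3 TiB set actually takes longer, because the doubled file count outweighs the smaller payload.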

There is also data uniqueness, which also determines replication time. Because replication is deduplicated, the more unique the data, the more of it must actually be sent, and the longer replication takes.
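Uniqueness matters because Data Domain replication is deduplicated: segments whose fingerprints the replica already holds are not re-sent. Here is a rough sketch of that idea (fixed-size chunks and SHA-256 fingerprints are my simplification, not the appliance's actual variable-length segmenting):

```python
import hashlib
import os

def bytes_to_send(data: bytes, replica_index: set, chunk_size: int = 4096) -> int:
    """Count the bytes that must cross the wire: only chunks whose
    fingerprint is not already in the replica's index are transferred."""
    sent = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in replica_index:
            replica_index.add(fp)
            sent += len(chunk)
    return sent

repetitive = b"A" * 4096 * 100   # low uniqueness: dedupes to a single chunk
unique = os.urandom(4096 * 100)  # high uniqueness: almost nothing dedupes
print(bytes_to_send(repetitive, set()))  # → 4096
print(bytes_to_send(unique, set()))      # → 409600
```

Same logical size, wildly different bytes on the wire, which is why two data sets of equal size can replicate in very different times.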

On another note:

Was replication of the 8 TiB chunk already in progress when you stopped it, deleted the non-essential files, and restarted it? If so, the system will run through the files on the replica side to determine which were deleted, resulting in longer replication times.
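That reconciliation pass can be pictured as a namespace diff (the file names below are hypothetical, and the real appliance works at the segment level; this only illustrates why the replica side must still be walked even though the source shrank):

```python
def deleted_on_source(source_files: set, replica_files: set) -> set:
    """Files the replica still holds but the source no longer does;
    finding them requires enumerating the replica's namespace."""
    return replica_files - source_files

replica = {f"backup_{i:02d}.img" for i in range(8)}  # stand-in for the 8 TB copy
source = {f"backup_{i:02d}.img" for i in range(3)}   # after deleting ~5 TB worth
print(sorted(deleted_on_source(source, replica)))
# → ['backup_03.img', 'backup_04.img', 'backup_05.img', 'backup_06.img', 'backup_07.img']
```

The walk scales with what the replica already holds (the original 8 TB), not with what remains on the source, which matches the behavior described in the question.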

If replication had not yet started (nothing had been transferred to the replica side), then it is file size or uniqueness that determines the replication time.

Are you using MTree or directory replication? The former generally runs faster regardless of file sizes or uniqueness.
 