Data Domain replication issue

ShivaReddy

Newcomer
Joined
Mar 16, 2016
Messages
1
Reaction score
0
Points
0
I did a fastcopy into the replication source folder with 8 TB of data, but I only needed to replicate 3 TB of it, so I deleted the data I didn't need and replication started. To my surprise, replicating the 3 TB data set to the destination folder took almost the same time as 8 TB would have, which suggests replication verified 8 TB of data instead of 3 TB even though I had deleted the 5 TB. Is this the nature of Data Domain, or is this an issue?
 

moon-buddy

ADSM.ORG Moderator
Joined
Aug 24, 2005
Messages
7,032
Reaction score
402
Points
0
Location
Somewhere in the US
Replication time is affected by many factors, with data size being a major one.

However, there are times when replication takes about the same time for different data sizes, because the mix of small and large files also matters. An 8 TiB chunk made up of 2,000 larger files may replicate as fast as, or faster than, a 3 TiB chunk made up of 4,000 smaller files.

There is also file uniqueness. Because only unique (non-deduplicated) segments have to be sent to the replica, uniqueness strongly influences replication time: the more unique the data, the longer it takes.
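To illustrate the idea, here is a minimal Python sketch of segment-level deduplicated replication in general, not DD OS internals; the segment size and fingerprinting scheme are made up for the example:

    import hashlib
    import os

    SEGMENT_SIZE = 8 * 1024  # fixed-size segments for illustration; real systems use variable-length segments

    def segment_fingerprints(data: bytes):
        """Split the data into segments and fingerprint each one."""
        for i in range(0, len(data), SEGMENT_SIZE):
            yield hashlib.sha1(data[i:i + SEGMENT_SIZE]).hexdigest()

    def bytes_sent(source: bytes, replica_index: set) -> int:
        """Only segments the replica does not already hold cross the wire."""
        missing = {fp for fp in segment_fingerprints(source) if fp not in replica_index}
        return len(missing) * SEGMENT_SIZE

    redundant = b"A" * (1024 * 1024)   # dedupes almost entirely against the replica
    unique = os.urandom(1024 * 1024)   # nothing dedupes, every segment must be sent

    replica_index = set(segment_fingerprints(redundant))
    print(bytes_sent(redundant, replica_index))  # ~0 bytes: low uniqueness, fast replication
    print(bytes_sent(unique, replica_index))     # ~1 MiB: high uniqueness, slow replication

The same logic is why re-replicating mostly unchanged data finishes quickly: most segments already exist on the destination.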

On another note:

Was replication of the 8 TiB chunk already in progress when you stopped it, deleted the non-essential files, and restarted it? If so, the system has to run through the files on the replica side to determine which ones were deleted, resulting in longer replication times.
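Roughly speaking (a toy Python illustration with hypothetical paths, not DD OS internals), that reconciliation pass has to visit every entry that was already copied, not just the files that survived the cleanup:

    # What the source still has (the 3 TB you kept) ...
    source_files = {"/backup/src/db1.dmp", "/backup/src/db2.dmp"}
    # ... versus what was already fastcopied/replicated (the full 8 TB).
    replica_files = {"/backup/src/db1.dmp", "/backup/src/db2.dmp",
                     "/backup/src/tmp1.dmp", "/backup/src/tmp2.dmp"}

    # Finding the stale entries means walking the whole replica namespace once.
    stale = {path for path in replica_files if path not in source_files}
    print(sorted(stale))  # ['/backup/src/tmp1.dmp', '/backup/src/tmp2.dmp']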

If replication was not already in progress (nothing had been transferred to the replica side yet), then it comes down to file size and uniqueness determining the replication time.

Are you using MTree or directory replication? The former runs faster regardless of file sizes or uniqueness.
 
