Hi all,
I am currently trying to devise a way to have all my incoming data de-duplicated before it reaches its final resting place. Unfortunately I cannot move to container-based storage (with inline de-dup) because I still have some tapes involved.
My current idea is to have some initial storage pools where incoming data is held, with identify-duplicates processes running against them. After a certain amount of time the data should automatically migrate to its final storage pools (the movement of the data causing the de-dup to take effect).
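If this is IBM Spectrum Protect / TSM, the landing-pool idea could be sketched roughly like the following, assuming a FILE device class is already defined; all of the pool and device-class names here (LANDING_POOL, TAPE_POOL, DEDUPFILE) are hypothetical placeholders:

```
/* Hypothetical names; assumes an existing FILE device class DEDUPFILE */
/* and a tape pool TAPE_POOL as the final destination.                 */
/* DEDUPLICATE=YES + IDENTIFYPROCESS keep duplicate identification     */
/* running in the background; MIGDELAY holds data in the landing pool  */
/* for N days before migration makes the de-dup take effect.           */
DEFINE STGPOOL LANDING_POOL DEDUPFILE MAXSCRATCH=200 DEDUPLICATE=YES IDENTIFYPROCESS=4 NEXTSTGPOOL=TAPE_POOL HIGHMIG=90 LOWMIG=70 MIGDELAY=7
```

With something like this in place, migration to the next pool runs hands-off once the thresholds and delay are met, rather than needing a scheduled MIGRATE STGPOOL command; the exact parameter values are guesses to be tuned to your environment.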
I was just curious whether anyone else has a similar scenario and what they came up with as a hands-off method of managing de-dup of incoming files.
Thanks,