Hello,
I am with product management at Syncsort and want to reply to a few things. I
realize these forums are not a place for vendor commercials, but there are some
factual errors and points of confusion that need to be corrected. I will stick
to those.
"Do your clients have the horsepower to handle source side deduplication?"
Syncsort NSB does not do source-side deduplication. We use a change journal to
track changed blocks. While the end result is similar to dedupe (less data sent
over the network), the process is substantially different. The NSB agent does
not need to scan the file system and hash data the way dedupe does. It simply
tracks block changes while they occur, which creates a near-zero impact on the
host. When a backup is scheduled to run, the agent already knows the blocks
that are updated and sends them to the target, which is NetApp storage.
The process is similar to what VMware does with Changed Block Tracking (CBT),
except that we do it for both physical and virtual servers (you can still use
CBT for agentless VMware backups if you want).
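The difference between scan-and-hash dedupe and journal-based change tracking
can be sketched in a few lines of Python. This is a toy illustration, not
actual NSB code; the class and function names are mine:

```python
import hashlib

# Scan-and-hash (how source-side dedupe finds changes): every backup must
# read and hash EVERY block to discover what is new -- a heavy scan.
def scan_and_hash(blocks):
    return {i: hashlib.sha256(b).hexdigest() for i, b in enumerate(blocks)}

# Journal-based tracking (the change-journal/CBT idea, simplified): block
# numbers are recorded as writes happen, so backup time is just a lookup.
class ChangeJournal:
    def __init__(self):
        self.dirty = set()           # block numbers written since last backup

    def on_write(self, block_no):    # called in the write path; near-zero cost
        self.dirty.add(block_no)

    def blocks_to_send(self):        # the backup reads only these blocks
        changed, self.dirty = self.dirty, set()
        return sorted(changed)

journal = ChangeJournal()
journal.on_write(7)
journal.on_write(7)                  # rewriting a block adds no extra work
journal.on_write(42)
print(journal.blocks_to_send())      # [7, 42]
```

The point of the sketch: with a journal, the cost of finding changed data is
paid a tiny bit at a time during normal writes, not all at once at backup time.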
At the target, NSB uses NetApp Snapshot capabilities. So we do not do a
synthetic full. There is no post-backup process that needs to be run, nor do
you need to do periodic fulls the way you do with a lot of synthetic processes.
An NSB backup is usable right after it completes, and you can access any
incremental backup as a full backup image because of how we leverage the NetApp
snapshot pointers. With this, any size data image can be mounted directly back
to a server in about two minutes.
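The idea of every incremental being readable as a full image, because
unchanged blocks are reached through pointers to earlier snapshots, can be
modeled in a few lines. Again, a hypothetical sketch, not NetApp's actual
on-disk format:

```python
# Toy model of pointer-based snapshots: each snapshot stores only the blocks
# that changed in that backup, plus a reference to its parent, yet any
# snapshot reads like a complete full image.
class Snapshot:
    def __init__(self, changed_blocks, parent=None):
        self.blocks = changed_blocks   # {block_no: data} for this backup only
        self.parent = parent

    def read(self, block_no):
        # Walk the pointer chain until some snapshot owns the block.
        snap = self
        while snap is not None:
            if block_no in snap.blocks:
                return snap.blocks[block_no]
            snap = snap.parent
        raise KeyError(block_no)

full = Snapshot({0: b"base0", 1: b"base1", 2: b"base2"})
incr1 = Snapshot({1: b"new1"}, parent=full)
incr2 = Snapshot({2: b"new2"}, parent=incr1)

# "Mounting" incr2 presents a complete image; no synthetic full was built.
print([incr2.read(n) for n in range(3)])   # [b'base0', b'new1', b'new2']
```

Because the full image exists implicitly through the pointers, there is
nothing to reassemble at restore time, which is why mount times stay flat
regardless of data size.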
"For a Data Domain solution the dedup ratio is much better"
This is a misleading statement in some ways. The reason you get very high
numbers from a target dedupe solution -- like 95% dedupe ratios or whatever it
is -- is because you are always feeding it duplicate data. Naturally, if I am
doing incremental file backups plus a weekly full that has 90% redundant data,
I will get a high dedupe ratio. But consider the big picture. You are asking
your server environment to process all that data: scanning the file system,
pulling the data off disk, and sending it over the network. In effect, you
make your servers move a huge amount of data just so it can be deleted when it
reaches the target. With NSB, you don't move that data in the first place, so
naturally the target dedupe ratios are going to be much smaller. The bottom
line is how much storage you have to use, and in that respect the two
solutions are similar.
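To put rough numbers on that argument (these figures are illustrative and
made up, not benchmarks of either product):

```python
# Illustrative scenario: a 10 TB server where 2% of blocks change per day,
# protected with one weekly full plus six daily incrementals.
TB = 1.0
primary = 10 * TB
daily_change = 0.02 * primary                     # 0.2 TB of changed blocks/day

# Target-side dedupe: the servers push the full plus incrementals every week,
# and the appliance throws the duplicates away on arrival.
moved_target_dedupe = primary + 6 * daily_change  # 11.2 TB pushed per week

# Block-change tracking: the servers push only the changed blocks.
moved_block_tracking = 7 * daily_change           # 1.4 TB pushed per week

# Either way, what actually lands on disk is roughly the unique data:
# the baseline plus a week of changes (~11.4 TB here).
print(f"moved (target dedupe):  {moved_target_dedupe:.1f} TB")
print(f"moved (block tracking): {moved_block_tracking:.1f} TB")
print(f"redundant data fed to the dedupe target: "
      f"{moved_target_dedupe / moved_block_tracking:.0f}x")
```

The impressive ratio the dedupe appliance reports largely measures redundant
data it was fed, while the storage consumed ends up in the same ballpark.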
"One of my main arguments is that getting rid of media servers and letting the
clients do all the work in our environment is a really scary thought."
Again, I want to repeat that the clients will do very little work because NSB
does NOT do source-side data scanning or hashing. However, you do eliminate
media servers. The NSB agent sends data directly to the NetApp storage, so that
eliminates a layer of your backup architecture: fewer servers, fewer switch
ports, and one less point of failure and management to worry about.
I hope that clears up any misconceptions about how NSB works. Feel free to
contact me directly if you have any further questions, or post them here. Good
luck in your search for a data protection solution. It's important to consider
what will work best in your environment and what will meet your backup window
goals and recovery SLAs.
Regards,
Peter Eicher
Syncsort
peicher AT syncsort DOT com
+----------------------------------------------------------------------
|This was sent by peicher AT syncsort DOT com via Backup Central.
|Forward SPAM to abuse AT backupcentral DOT com.
+----------------------------------------------------------------------
_______________________________________________
Veritas-bu maillist - Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu