Issue about IBM Spectrum Protect server support for NFS

xyzegg

Hi,

Our company has acquired a Dell EMC Data Domain DD6800 system and we're planning to integrate it into our TSM environment soon. While looking for best-practices documents, we found one from IBM about NFS support:

IBM Spectrum Protect server support for NFS
http://www-01.ibm.com/support/docview.wss?uid=swg21470193

One of the restrictions says:

Do not use storage pools of devtype=DISK with NFS storage.

So we have a problem, because that's exactly how we planned to use NFS.

We planned to define a primary storage pool of devtype=DISK and create its volumes on previously mounted NFS exports.
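
Roughly, that plan looks like this in TSM administrative commands (the pool name, mount point, and volume size are just placeholders for illustration):

```text
/* Primary random-access pool on the built-in DISK device class */
define stgpool nfsdiskpool disk

/* Pre-allocated volume placed on an NFS export mounted at /mnt/dd */
define volume nfsdiskpool /mnt/dd/vol001.dsm formatsize=10240
```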

Am I forgetting something?
 
Devtype DISK is random access, while devtype FILE streams data sequentially.

It should work with DISK, but the performance may not be what you are looking for.
 
I work with a few customers that use devtype=FILE successfully.

DO NOT USE devtype=DISK. It's very inefficient to write data to a volume non-sequentially when the server doesn't have direct access to the disk, as it would with locally attached or SAN-attached storage.

With devtype=FILE, the data is written sequentially so that works fine.
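
A minimal sketch of that setup, assuming the NFS export is already mounted at /mnt/dd on the TSM server (the device-class name, mount limit, volume size, and pool name are illustrative, not from this thread):

```text
/* Sequential-access device class whose FILE volumes live on the NFS mount */
define devclass ddfile devtype=file mountlimit=32 maxcapacity=50G directory=/mnt/dd

/* Primary sequential pool that creates scratch FILE volumes in that directory */
define stgpool ddpool ddfile maxscratch=200
```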
 
Thanks for the reply, Trident and Marclant.

This sounds weird to me, because the product documentation states that it supports some random I/O optimizations (see attached file). Anyway, we'll run some tests and ask Dell for an explanation.

Sorry, I couldn't attach the file.

Read this:

The random I/O optimizations included in DD OS provide improved performance for applications and use cases that generate larger amounts of random read and write operations than sequential read and write operations.

DD OS is optimized to handle workloads that consist of random read and write operations, such as virtual machine instant access and instant restore, and incremental-forever backups generated by applications such as Avamar. These optimizations:

- Improve random read and random write latencies.
- Improve user IOPS with smaller read sizes.
- Support concurrent I/O operations within a single stream.
- Provide peak read and write throughput with smaller streams.

Note: The maximum random I/O stream count is limited to the maximum restore stream count of a Data Domain system.

The random I/O enhancements allow the Data Domain system to support instant access/instant restore functionality for backup applications such as Avamar and NetWorker.
 
It's possible that this wasn't tested yet which is why the recommendation is still to use FILE.
 
I use Data Domain - in fact I have 8 pairs of them.

As mentioned, do not use devtype=DISK, as Data Domain does not support direct Fibre Channel attachment to its file system. devtype=FILE over NFS is the way to go. There is no issue with using the Data Domain as a primary disk pool with devtype=FILE (which I do). I also do NOT have a copy pool; I just replicate to the secondary data center for backup/recovery.
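
For completeness, the NFS mount on the TSM server might look like this in /etc/fstab. The Data Domain hostname and MTree name here are assumptions, not from this thread (exports conventionally sit under /data/col1/), and a hard mount with large rsize/wsize is the usual recommendation:

```text
# /etc/fstab entry on the TSM server (hostname and MTree name are placeholders)
dd6800:/data/col1/tsm  /mnt/dd  nfs  hard,nfsvers=3,rsize=1048576,wsize=1048576  0 0
```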

As for the optimized I/O, this refers to internal operations and not to the way the client is connected to it.
 