Re: [ADSM-L] Multiple NFS mounts to same DataDomain

Subject: Re: [ADSM-L] Multiple NFS mounts to same DataDomain
From: Rick Adamson <RickAdamson AT SEGROCERS DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Tue, 14 Feb 2017 14:18:27 +0000
Rick,
I can't comment on NFS, but I have used Data Domain as my primary storage for 
over six years now.
My servers run on Windows and mount the DD using CIFS.

My systems are configured like you describe below: on the back end the DD is one 
large file system, but each TSM instance (server) has multiple dedicated 
directories defined as individual device classes.

For example, for TSM server 1 the Data Domain file system is laid out:
/backups/tsm/s1/aix
/backups/tsm/s1/win
/backups/tsm/s1/sql
/backups/tsm/s1/db2

Then on TSM server 2:
/backups/tsm/s2/aix
/backups/tsm/s2/win
/backups/tsm/s2/sql
/backups/tsm/s2/db2

Where s1, s2, etc. represents a particular TSM server instance.

Individual file device classes and storage pools are defined on each TSM server 
for each directory, even though in reality there is only one DD file system.
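
To illustrate (the device class and pool names, sizes and limits here are made 
up for the example, not my actual ones), the definitions for the s1 directories 
look roughly like:

   define devclass s1_aix_file devtype=file maxcapacity=50G mountlimit=40 dir=/backups/tsm/s1/aix
   define stgpool s1_aix_pool s1_aix_file maxscratch=500
   define devclass s1_win_file devtype=file maxcapacity=50G mountlimit=40 dir=/backups/tsm/s1/win
   define stgpool s1_win_pool s1_win_file maxscratch=500

and likewise for the sql and db2 directories.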

This has worked well for me, even when running many hundreds of mount points 
from backups, reclaims, migrations, etc.
The only thing I had to be careful of was flat-lining the CPU/memory limits of 
the DD system itself.

Hope this helps....

-Rick Adamson



-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of 
Rhodes, Richard L.
Sent: Tuesday, February 14, 2017 8:35 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: [ADSM-L] Multiple NFS mounts to same DataDomain

Arnaud's discussion on another thread is SO interesting (Availability for 
Spectrum Protect 8.1 server software for Linux on power system).

It got me thinking about our problems . . .

> NFS, whose performance is not that good on AIX systems

Agreed!!!  After getting a DataDomain system and using NFS, we were/are VERY 
unhappy with the NFS performance.

Our Unix admins worked with IBM/AIX support, and finally got an admission that 
the problem is AIX/NFS using a single TCP socket for all writes.  The 
workaround was to use multiple mount points to the same NFS share and spread 
writes (somehow) across them.  They did this and got higher throughput.
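
As a sketch of that workaround (the host name dd01 is invented for the 
example), the same DD export would simply be mounted at several points:

   mount dd01:/data/col1/tsm1 /DD/tsm1/mnt1
   mount dd01:/data/col1/tsm1 /DD/tsm1/mnt2
   mount dd01:/data/col1/tsm1 /DD/tsm1/mnt3

the idea being that each mount gets its own TCP connection for writes.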

So now I'm wondering if we could use multiple NFS mounts to the same DD for our 
file device pools.

  aix:  /DD/tsm1/mnt1        dd: /data/col1/tsm1/mnt1
        /DD/tsm1/mnt2            /data/col1/tsm1/mnt2
        /DD/tsm1/mnt3            /data/col1/tsm1/mnt3

Then use multiple dirs for the file device devclass:
   define devclass DDFILEDEV devtype=file dir=/DD/tsm1/mnt1,/DD/tsm1/mnt2,/DD/tsm1/mnt3

According to the dsmISI link again, TSM will roughly balance across the 
multiple mount points, hopefully giving better write throughput.  I've been 
VERY reluctant to try this, since it appears that once you add a dir to a file 
device devclass, it's there forever!
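
If anyone does experiment with this, the standard admin query at least shows 
what directories a file devclass currently points at:

   query devclass DDFILEDEV format=detailed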


I'm curious if anyone is doing this.

Rick



     

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of 
PAC Brion Arnaud
Sent: Tuesday, February 14, 2017 5:57 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: *EXTERNAL* Re: Availability for Spectrum Protect 8.1 server software 
for Linux on power system

Hi Zoltan,

Many reasons for it, which I'll try to summarize briefly:

1) Isilon makes use of NFS, whose performance is not that good on AIX systems. 
We are an AIX shop, and are forced to move to Power Linux machines to get 
sufficient performance to cover our backup needs.
Our first experience with Power Linux machines revealed serious gaps in the 
functionality we are accustomed to: no easy HMC setup, and thus no call-home 
capability so far. In addition, we needed to set up a Red Hat Satellite server 
to allow installation on remote servers, and so far we are unable to boot our 
machines from it ... This will probably work sooner or later, but it requires a 
lot of involvement and time from our sysadmins.

2) Isilon does not fit perfectly into a big TSM environment. To get decent 
performance, a third-party tool named dsmISI is needed. See the following: 
http://stefanradtke.blogspot.ch/2015/06/how-to-optimize-tsm-operations-with.html
  This means another layer of complexity in the setup, and another vendor to 
talk to if facing performance issues. I have had more than my share of 
"ping-pong" games during my career as a TSM administrator, with IBM and other 
vendors rejecting responsibility onto each other in case of issues. Having 
three parties involved in our setup will make such games even more frequent ...

3) The user base for such a combination in Switzerland is nonexistent, at 
least at anything like our scale. EMC has not been able to provide any customer 
reference in this country whom we could talk to about their setup. There must 
be a good reason for that ...

4) Compatibility issues: this was more of a gut feeling I had, but as usual it 
turned out to be true: see the problems I will now be facing with 
little/big-endian versions of TSM/Spectrum Protect (not to mention that 
Spectrum Protect 8.1 is not even available for Power Linux so far). I'm 
currently facing another one: the Isilon we got is running OneFS 8, and there 
is so far no official statement that it is supported, whether by TSM or by 
dsmISI. A downgrade to OneFS 7.x turned out to be impossible, because the 8 TB 
disks installed in the machine do not support it ...

5) Support from EMC: it has turned out to be less effective than what IBM 
offers. Since EMC merged with Dell, it has become even worse (for a non-native 
English speaker, having a call with support based in India is a nightmare). 
Also, our storage administrator tells me that in his experience, upgrade 
procedures on EMC devices are much more complicated than the ones for IBM 
hardware, and almost always require the vendor's intervention to be conducted 
properly (lots of dependencies on microcode, switch versions and so on ...)

6) Costs ...

Of course, your mileage may vary, but in our case I'm pretty sure management 
made the wrong choice (I would have gone with IBM Spectrum Scale in conjunction 
with an AIX-based server: one vendor, blueprints provided, and guaranteed 
performance).

Cheers.

Arnaud


-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of 
Zoltan Forray
Sent: Monday, February 13, 2017 4:38 PM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: Availability for Spectrum Protect 8.1 server software for Linux on 
power system

Arnaud,

On Fri, Feb 10, 2017 at 9:21 AM, PAC Brion Arnaud < Arnaud.Brion AT panalpina 
DOT com> wrote:

> because our management (despite my warnings not to do so) decided that 
> the target storage for backups would be an EMC Isilon, that connects 
> to the TSM server using NFS mounts.


A little off topic for this thread, but why do you feel it is a "bad idea"
to use EMC Isilon as TSM target storage for backups?  We are leaning in this 
direction and in fact have such a configuration for our offsite replication 
target server. We are aggressively moving away from expensive VNX storage to 
Isilon.  So I am curious why you feel the way you do.




--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
Xymon Monitor Administrator
VMware Administrator (in training)
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zforray AT vcu DOT edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never 
use email to request that you reply with your password, social security number 
or confidential personal information. For more details visit 
http://infosecurity.vcu.edu/phishing.html
 


-----------------------------------------
The information contained in this message is intended only for the personal and 
confidential use of the recipient(s) named above. If the reader of this message 
is not the intended recipient or an agent responsible for delivering it to the 
intended recipient, you are hereby notified that you have received this 
document in error and that any review, dissemination, distribution, or copying 
of this message is strictly prohibited. If you have received this communication 
in error, please notify us immediately, and delete the original message.
