Subject: Re: [ADSM-L] TSM troubles
From: Matthew McGeary <Matthew.McGeary AT POTASHCORP DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Thu, 10 Dec 2015 07:37:38 -0600
Hello Stef,

Since your active log is filling, do you have the SAN capacity to increase it?  We use the maximum active log size of 512GB, which is overkill for our intake but might be exactly what you need.  I'd also think that with that much intake, a few more cores wouldn't hurt.  There's a lot of processing involved with large dedup workloads and we typically use all 10 of our allocated Power 8 cores just doing server-side dedupe.
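
In case it helps, a sketch of the relevant dsmserv.opt entries (the directory path below is only a placeholder; ACTIVELOGSIZE is given in MB, the 512 GB ceiling applies at the 7.1 level, and the server has to be restarted for the change to take effect):

    * dsmserv.opt -- raise the active log to the 7.1 maximum of 512 GB (value in MB)
    ACTIVELOGSIZE       524288
    * the active log directory must have enough free space to hold the larger log
    ACTIVELOGDIRECTORY  /tsm/activelog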

Hope that helps,
__________________________

Matthew McGeary
Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com





From:        Stef Coene <stef.coene AT DOCUM DOT ORG>
To:        ADSM-L AT VM.MARIST DOT EDU
Date:        12/10/2015 04:05 AM
Subject:        [ADSM-L] TSM troubles
Sent by:        "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>





Hi,

Some time ago I mailed, in frustration, that using DB2 as the TSM backend was a
bad idea.

Well, here I am again with the same frustration.

This time I just want to know: who is using deduplication successfully?
How much data do you process daily? Client-side, server-side, or mixed dedup?

We are trying to process a daily intake of between 10 and 40 TB, almost all
file-level backups. The TSM server is running on AIX, 6 x Power7, 128 GB RAM.
Disk is on SVC with a FlashSystem 840. The disk pool is 250 TB on 2 x V7000
with 1 TB NL-SAS disks, SAN attached. We are trying to do client-side dedup.

The problem is that the active log (128 GB) fills up in a few hours, and this
happens up to two times per day! DB2 recovery then takes 4 hours because we
have to do a 'db2stop force' :(
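
For anyone chasing a similar fill-up, the utilization of the active log (and of the database) can be watched from an administrative session with the standard query commands; a minimal example, prompt shown only for illustration:

    tsm: SERVER1> query log format=detailed
    tsm: SERVER1> query db format=detailed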


Stef
