TDP for Mail (Windows Platform)

NJ88210

ADSM.ORG Member
I'm testing TDP for Mail on a Windows platform and I'm trying to figure out how to perform the full DB backup. The incremental is no problem; however, the full is proving to be.
 
Take a look at "domdsmc Selective"

Or are you getting errors while trying to run a full backup? If so, post the relevant parts of your logs.
 
No, I was trying to set up the full and could not find any documentation.
 
Full mail backup

Something like this should do?

"tdpexcc backup * full /tsmoptfile=dsm.opt /logfile=excsch.log >> excfull.log"


Edit: sorry, I read 'mail' but thought 'Exchange' for some reason. Still, my question doesn't need to change to help you with this:

How exactly are you trying/failing to setup/run the backup? i.e. at what point do you get stuck?
 
@Manofmilk I have not even tried to run the full as of yet. I'm in the process of doing incrementals. I know the best practice is to run a full once a week.
 
Incrementals without an initial full...

I have not even tried to run the full as of yet. I'm in the process of doing incrementals.

Unless I'm missing something different with Lotus, surely you must do at least one full backup before you can do successful incrementals or differentials?


For all our TDPs (SQL and Exchange) we do full backups on Sunday with differentials the rest of the week (faster restore than incrementals). We also run hourly log backups on our most critical SQL servers.
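
For reference, the TDP for SQL command-line calls behind that rotation look roughly like this - the option-file and log names are just examples from my own setup, so treat them as placeholders:

tdpsqlc backup * full /tsmoptfile=dsm.opt /logfile=sqlfull.log >> sqlfull.out
tdpsqlc backup * diff /tsmoptfile=dsm.opt /logfile=sqldiff.log >> sqldiff.out
tdpsqlc backup * log /tsmoptfile=dsm.opt /logfile=sqllog.log >> sqllog.out

The full runs Sunday, the diff runs the other nights, and the log backup is what we schedule hourly on the critical servers.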
 
As you know, the first TDP incremental backup is a full; all subsequent backups are incremental. Like you said, for faster restores I need to incorporate a full backup on a weekly basis.

BTW, for the full backup "tdpexcc backup * full /tsmoptfile=dsm.opt /logfile=excsch.log >> excfull.log",

I don't see tdpexcc as an executable in the TDP install path.
 
See my previous post - that's the basic Domino command to run a full (selective) backup of every database on the server.
 
@cjhood
These are the archive and incremental commands that I run.
Archival:

domdsmc archivelog /adsmoptfile=dsm2.opt /logfile=domasch.log >> domarc.log

Incremental:

domdsmc incremental * /subdir=yes /adsmoptfile=dsm.opt /logfile=domisch.log >> dominc.log
 
Ok, so based on those two commands:

domdsmc selective * /subdir=yes /adsmoptfile=dsm.opt /logfile=domssch.log >> domsel.log


Give that a shot and let us know how it goes
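
If you want to turn that into the weekly full you mentioned, one way (just a sketch - the domain, schedule, and node names and the path are placeholders for your environment) is to drop that selective command into a .cmd file and drive it from a TSM client schedule on the server:

define schedule STANDARD DOM_WEEKLY_FULL action=command objects="c:\tsm\domsel.cmd" starttime=20:00 dayofweek=sunday
define association STANDARD DOM_WEEKLY_FULL NJ01NM04

That assumes the TSM scheduler (or CAD) service is running on the Domino box, and domsel.cmd just contains the domdsmc selective line above.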
 
Chris
Thank you so much for your help, you are a life saver.

On a different note: we are in the planning stages of purchasing a FalconStor VTL with the dedup function. We are looking to have SAS disk for the dedup and SATA disk for the back-end storage. My plan is to try to eliminate primary disk pools and write directly to the VTL. We have 4 TSM instances running on a p570, about 800 (Unix & Windows) clients, and several TDP for Oracle clients. We back up approx 10-15 TB nightly. With that said, do you foresee any performance issues? How many concurrent writes could a VTL handle?
 
No worries.

Ok, that next question is a tough one to answer on here without knowing the exact specs of the FalconStor device, and it really depends on the configuration of your environment, change rates, etc. If you don't get the spec of the device correct when you put it in, you will get performance problems.

In my experience with Quantum VTL's, you need to know a few things to get started at least:

What type of dedup is it (on ingest or post-processing)? This will affect your speeds during backups. Post-processing will essentially dump the data into a holding area and then de-dup at a specified time. This method needs a lot more disk space, so your total usable capacity goes down. On-ingest is slower, but you don't have the processing overhead afterwards, or the capacity restrictions.
What's the max ingest rate for the VTL?
What's your nightly change rate GB/TB and %(for sizing dedup and total space)
Will you be replicating to a DR unit?


You really, really, really need to get the vendor to do an analysis of your environment and spec the device accordingly. My units weren't specced properly when they were put in (changing requirements etc) and consequently the ingest rate for 350+ nodes nightly was not even close to what was needed. Testing will also show potential bottlenecks in your environment.

As for simply using the devices as primary storage, they work well. Different issues and challenges as opposed to disk pools, but you'll figure those out in testing. Just be careful with available space, as reclamation doesn't work quite as expected on VTL units. See the 5.5.2 option RELABELSCRATCH for why...
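
For what it's worth, on 5.5.2+ that option goes on the library definition itself - something like this from an admin command line, with the library name being whatever your VTL library is called:

update library VTL_LIB relabelscratch=yes

With that set, the server relabels virtual volumes when they return to scratch, so the VTL can actually free the space instead of holding on to the old data.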

Sorry there's no answer in there, but there are so many factors to consider that it's impossible to give you a good one :)
 
Chris
FalconStor hasn't specced the box out as of yet, so I can't answer that. There was talk about making them RAID 6.

The dedup will be post-processing because we do not want anything to slow down the backup process. The holding area will be faster SAS disk, and the back end will be slower SATA disk. The VTL box, from what I understand, is rated at 1200 MB/sec; currently we have a FalconStor box that is rated at 800 MB/sec and we're getting over 900 MB/sec. Our nightly change rate varies between 6-10 TB. The dedup ratio according to FalconStor will be something like 6-8:1. Also, down the road we're looking at replicating the dedup data to our offsite location.

Also, is there more up-to-date documentation than this: http://www.redbooks.ibm.com/redbooks/pdfs/sg245247.pdf


The Notes group wanted to know: if a DBIID changes before the next full backup, does the incremental pick up the new ID, or do we have to perform a full each time that ID changes?
 
Chris,
Have you seen this type of error? We attempted to kick off an archive log backup, but the error indicates one is already in progress, even though it's not.

09/18/09 11:48:46 ANE4991I (Session: 144039, Node: NJ01NM04_ARC) TDP Domino ACD5210 Data Protection for Domino: Starting transaction log archive from server NJ01NM04. (SESSION: 144039)
09/18/09 11:48:46 ANE4993E (Session: 144039, Node: NJ01NM04_ARC) TDP Domino ACD5241 Data Protection for Domino error: Archiving of transaction logs already in progress. (SESSION: 144039)
09/18/09 11:48:46 ANE4991I (Session: 144039, Node: NJ01NM04_ARC) TDP Domino ACD5211 Transaction log archive from server NJ01NM04 complete. Total transaction log files archived: 0  Total bytes transferred: 0  Elapsed processing time: 0.00 Secs  Throughput rate: 0.00 Kb/Sec (SESSION: 144039)
 