ADSM-L

Re: Building a *sm Solaris Server

2000-03-08 11:42:44
Subject: Re: Building a *sm Solaris Server
From: "Hagan, Patrick L" <haganpl AT BP DOT COM>
Date: Wed, 8 Mar 2000 10:42:44 -0600
Thanks for replying.

Does the following sound like a reasonable way to set up the TSM server?

For the Sun E450's 18GB internal disks (all will be TSM mirrored):
        1 disk for the log file
        3 disks for the db files (about 26GB in total). I was going to use 3 disks to get more spindles for db performance. Is more better? Is one db volume per disk OK? (A rough sketch of the commands I had in mind follows this list.)
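
Roughly what I had in mind for the db and log volumes, using TSM mirroring for all of them (the paths and sizes below are just placeholders to show the commands, not the values we'd actually use):

        # format the primary and mirror volumes with dsmfmt (sizes in MB)
        dsmfmt -m -log /tsm/log/log1.dsm 2000
        dsmfmt -m -log /tsm/logmirr/log1.dsm 2000
        dsmfmt -m -db /tsm/db/db1.dsm 9000
        dsmfmt -m -db /tsm/dbmirr/db1.dsm 9000
        # (repeat the db pair for the other two db disks)

        # then, from an admin session, define each volume and its TSM mirror copy
        define logvolume /tsm/log/log1.dsm
        define logcopy /tsm/log/log1.dsm /tsm/logmirr/log1.dsm
        define dbvolume /tsm/db/db1.dsm
        define dbcopy /tsm/db/db1.dsm /tsm/dbmirr/db1.dsm
        extend log 2000
        extend db 9000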

1TB Fibre RAID box
        This is going to be used for staging pool(s) only. I was thinking of 3 stage pools (see the sketch after this list):
        One for the copypool
        One for archives - these will go to different tapes than the backups and copypools
        One for backups
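
Something like the following is what I was picturing for the pool definitions (the pool names, migration thresholds, and the tape pools they point at are only placeholders):

        # one primary disk pool per workload, each migrating to its own tape pool
        define stgpool backuppool disk nextstgpool=backuptape highmig=70 lowmig=30
        define stgpool archivepool disk nextstgpool=archivetape highmig=70 lowmig=30

        # format and define one stgpool volume per disk/LUN in the RAID box (size in MB)
        dsmfmt -m -data /raid/backup/vol01.dsm 50000
        define volume backuppool /raid/backup/vol01.dsm

        # (the copy pool itself is defined against the tape device class, e.g.
        #  define stgpool copypool <tapedevclass> pooltype=copy)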

Thank you again for your help.

Patrick Hagan
BPAmoco


> ----------
> From:         Steven P Roder[SMTP:tkssteve AT REXX.ACSU.BUFFALO DOT EDU]
> Reply To:     ADSM: Dist Stor Manager
> Sent:         Wednesday, March 08, 2000 5:17 AM
> To:   ADSM-L AT VM.MARIST DOT EDU
> Subject:      Re: Building a *sm Solaris Server
>
> > - It looks like I can run Solaris 7 and TSM 3.7 with no problem?
> > - I assume I should put the db and log files on the internal disks
> > (mirrored)  in the E450 and not on the raid box.
> > - Does this system seem big enough to handle 600gb per night?
> > - Should I make one big stage pool or a few smaller ones?
>
> When you mirror the db and log, use ADSM mirroring.  Also, on your last
> question, are you asking about using volume manager to glue the disks
> together?  If so, don't.  Just create an ADSM stgpool volume per disk.
> If you are asking about having multiple disk storage pools, I would not do
> that unless I had to have multiple policy domains.
>
> Steve Roder, University at Buffalo
> VM Systems Programmer
> UNIX Systems Administrator (Solaris and AIX)
> ADSM Administrator
> (tkssteve AT buffalo DOT edu | (716)645-3564 |
> http://ubvm.cc.buffalo.edu/~tkssteve)
>