ADSM-L

Re: MIRRORING DILEMMA

Subject: Re: MIRRORING DILEMMA
From: William Dias <wdias AT US.BNSMC DOT COM>
Date: Wed, 3 Nov 1999 12:06:16 -0500
Bill's responses below are marked with *>

Paul Zarnowski wrote:

> At 05:47 PM 11/2/1999 -0500, William Dias wrote:
> [...]
> >AIX Mirroring:
> >        To do proper AIX mirroring you should have three or more drives.  This
> >is because AIX requires 51% of the drives to be online to do a write.
>
> Clearly not true.  A volume group requires 51% of the drives to be online
> in order for the volume group to be online, unless it is a non-quorum
> volume group.  However, the volume group being online has nothing to do
> with mirroring.  You can have tons of disks in the volume group, yet still
> have a logical volume mirrored on only 2 disks.  The 51% rule has nothing
> to do with being able to do writes.
> *> I stand corrected.  It looks like this was changed in AIX 4.2.1.
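
*> For anyone following along, a sketch of the relevant AIX commands.  The
volume group and logical volume names here are just examples, and the quorum
change only takes effect at the next varyon:

```
# Disable quorum checking on a volume group, so it can stay online with
# fewer than 51% of its disks available.
chvg -Q n adsmvg
varyoffvg adsmvg && varyonvg adsmvg

# Mirror a single logical volume onto a second disk, regardless of how
# many other disks are in the volume group, then synchronize the copies.
mklvcopy adsmloglv 2 hdisk3
syncvg -l adsmloglv
```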

>
> >  For the ADSM recovery log we use only two drives.
>
> This depends on your configuration.  You can have lots of recovery logs,
> and ADSM can support up to 3 copies of each log volume.
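
*> For reference, ADSM's own mirroring is configured per volume.  A sketch of
defining a recovery log volume with two mirror copies might look like the
following (the paths are hypothetical, and the exact syntax should be checked
against your server's version):

```
define logvolume /adsm/log/log1.dsm
define logcopy /adsm/log/log1.dsm /adsm2/log/log1copy.dsm
define logcopy /adsm/log/log1.dsm /adsm3/log/log1copy2.dsm
```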
>
> >  If ADSM can not write to the recovery log it
> >will abort without changing the database.  AIX mirroring allows you to write
> >to one drive and at the same time read from the other.  This gives you the
> >best performance, but requires twice the disk space.
>
> Twice the disk space as what?  non-mirroring?  It's the same amount of disk
> space whether you mirror in AIX LVM or in ADSM.
> *> Sorry, about twice the space of RAID5 with 8 drives.

>
> >RAID5 Mirroring:
> >The ECC only adds about 20% to the total disk space required.
>
> Depends on your raid configuration.
> *>  True, I should have specified 8 or more drives.

>
> > AIX mirroring adds 100%.
>
> True, as does ADSM mirroring.
>
> >Because you are using all the disks together you can not read and write at
> >the same time.  Because the disk motors run at slightly different speeds the
> >read/write operations will start and end at different times for each disk.
> >This makes for slower disk operations.  RAID 5 is slow, but cheap.
>
> But you didn't mention write-cache, which offsets most of the performance
> penalty.
>

*> Cache provides a great performance improvement for most I/O operations.  It
does not help us here for two reasons:
        1. The data in RAID5 is written as stripes across all the drives.  The
data can not be recovered until all of it is on disk.  Write only two out of
eight stripes and you have lost the data.  If you watch the LEDs on the drives
you will see that RAID drives are all selected together.  Watch non-RAID drives
and the LEDs will flash at random.
        2. Reason two has nothing to do with the type of file system.  It has
to do with the fact that the file system is used as a database.  Databases,
like MCI customers, "want to know where their data is".  In C they do this by
issuing an fsync after every write.  This forces the data to disk, defeating
the advantage of a write-cache.  The cache gives priority, when it can, to
reads over writes, so some writes may be postponed for some time.  The
application (DB2...) does not get control back after an fsync until the data
is on disk.  This is how the recovery log is kept in sync with the real I/O
operation.
Bill

>
> ..Paul
>
> --
> Paul Zarnowski                         Ph: 607-255-4757
> 747 Rhodes Hall, Cornell University    Fx: 607-255-8521
> Ithaca, NY 14853-3801                  Em: psz1 AT cornell DOT edu