Subject: Re: GBs over WAN - ouch!
From: "Coats, Jack" <Jack.Coats AT BANKSTERLING DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Thu, 8 May 2003 07:13:48 -0500
I believe I understand the problem, and really sympathize.

IMHO, a 'better' solution would involve a decentralized backup
product (TSM is not currently it, but neither is anything else
I know of).  This is NOT how things work today.

The architecture would, in my dreams :), migrate a
user's configuration information to the place the user
last backed up, but would leave the data on the
'local' backup node.

I.e., given 4 clients (1, 2, 3, 4) and 2 central backup nodes (A, B):
if clients 1 and 2 back up to A normally, and 3 and 4
back up to B normally, life is good.  But if client 1 is a
laptop and migrates between the sites where nodes A and B
are local, well, (by checking ping times or the like)
client 1 determines which server is its 'local' one, and
backs up to it.
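
(To make that concrete, here is a rough Python sketch of how a client
might pick its 'local' server by measuring round-trip times; the server
names, ports, and the TCP-connect "ping" are made-up illustrations, not
anything TSM actually provides.)

    import socket
    import time

    # Candidate backup servers (hypothetical names and ports).
    SERVERS = {"A": ("tsm-a.example.com", 1500),
               "B": ("tsm-b.example.com", 1500)}

    def rtt(host, port, timeout=2.0):
        """Rough 'ping': time a TCP connect to the server's port."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return float("inf")   # unreachable -> never preferred

    def pick_local_server():
        """Return the name of the server with the lowest round-trip time."""
        return min(SERVERS, key=lambda name: rtt(*SERVERS[name]))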

Now suppose client 4 moves to a location where node A is
closest.  Using the same protocol, it starts backing
up to node A.

To reconcile permanent and semi-permanent moves, there would
probably need to be a timer-based mechanism for consolidating
data between the major nodes (A, B): if a client has not backed
up to a node for some period of time (a month?), the data from that
client starts a slow migration (not unlike how reclamation works today)
to the place where, via a configuration file, (1) the client last backed up,
(2) the client has the most data, or (3) the client is
assigned.  For a mostly static workforce with
large backup servers, (1) would work well; (2) would force the
data to 'co-locate' on one server; and (3) is good for places
where backups are not considered a central service but are
paid for via cost center.  And of course, allow setting this
affinity on a client-by-client (or client-group) basis (though
this policy should probably be server-based rather than set
on the client itself).
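
(A napkin sketch of how that server-side affinity policy might be
evaluated; the data structures, field names, and the 30-day threshold
are all assumptions made up for illustration.)

    from datetime import datetime, timedelta

    IDLE_LIMIT = timedelta(days=30)   # the "month?" threshold -- an assumption

    def consolidation_target(client, holding_node, policy, now=None):
        """Where data held for this client on holding_node should migrate.

        client is a dict like:
            {"last_server": "A",                        # where it last backed up
             "assigned_server": "B",                    # where it is assigned
             "bytes_by_server": {"A": 120e9, "B": 3e9},
             "last_backup_to": {"A": datetime(2003, 5, 7)}}
        policy is "last_backed_up" (1), "most_data" (2), or "assigned" (3).
        Returns a node name, or None if no migration is due yet.
        """
        now = now or datetime.now()
        last_here = client["last_backup_to"].get(holding_node, datetime.min)
        if now - last_here < IDLE_LIMIT:
            return None                       # client still backs up here
        if policy == "last_backed_up":
            return client["last_server"]
        if policy == "most_data":
            by_size = client["bytes_by_server"]
            return max(by_size, key=by_size.get)
        return client["assigned_server"]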

Yes, distributing backups over a large network this way means the
reconciliation would still move lots of data over the
network eventually, and restore times could be long, but
at least the backups do happen.

Another 'good thing' would be that if node A were down (power hit,
disaster of any kind), then node B would 'automatically'
take all the backups.  The more nodes, and the better dispersed
they are, the more evenly the backups could be distributed over the
network (kind of like a DFS -- a distributed file system).
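
(With the ping-based selection sketched earlier, that failover mostly
falls out for free: a down node gets an infinite round-trip time and is
never chosen.  Something like the following, reusing the hypothetical
SERVERS table and rtt() helper from above.)

    def backup_target():
        """Pick the best reachable server; give up if none respond."""
        times = {name: rtt(*SERVERS[name]) for name in SERVERS}
        best = min(times, key=times.get)
        if times[best] == float("inf"):
            raise RuntimeError("no backup server reachable")
        return best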

This entire scenario gets more interesting if DRM is brought
into the picture, and data is replicated across backup nodes
rather than using copy pools.  That way, if the TSM servers were
dispersed and a backup node went down, no problem: all the data
would be available to all the clients, just from other nodes.  Yes,
the database could get hairy.  For example, if a client is backed up on
a foreign node, then the 'DRM copy' of the data would be sent
to the client's 'primary' node, with a copy staying locally
on the node where it was backed up.  And if a client backs up to its
'primary' node, then the 'DRM copy' would be sent to one or more other
nodes, as set up in a configuration file. ... More thought needs to go
into this, but I think you see my point.
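
(That copy-placement rule boils down to a couple of lines; again, a
napkin sketch with invented names, not how DRM or copy pools actually
behave.)

    def drm_copy_targets(backup_node, primary_node, offsite_nodes):
        """Where the replica of a fresh backup should go, per the rule above.

        backup_node   -- node that just received the client's backup
        primary_node  -- node the client is normally assigned to
        offsite_nodes -- node(s) named in the configuration file
        """
        if backup_node != primary_node:
            # Backed up on a foreign node: send the 'DRM copy' home,
            # keep the original where it landed.
            return [primary_node]
        # Backed up at home: replicate to the configured node(s).
        return list(offsite_nodes)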

... Just musing ... So, TSMers, any thoughts?  Please show
me the error of my ways :D -- or hire me to muse with you :)
