Perhaps you are hitting a client option set defined on the TSM server?
QUERY CLOPTSET
On Solaris clients, you might also check that the TSM config files in
the /opt/tivoli/tsm/client/ba/bin directory are properly linked to the
files in /usr/bin. If they are unique files in each location, you can
see some strange behaviour.
This is how they should look:
lrwxrwxrwx   1 root  other  16 Aug 25  2004 dsm.sys -> /usr/bin/dsm.sys
lrwxrwxrwx   1 root  other  16 Aug 25  2004 dsm.opt -> /usr/bin/dsm.opt
If not, perhaps a copy of your dsm.sys, dsm.opt and include-exclude
file might help.
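A quick scripted version of that check (a sketch, assuming a POSIX shell; the demo below uses a throwaway directory rather than the live paths, so it can be run anywhere):

```shell
# Sketch only: a scratch demo of the symlink check, since real paths differ.
# On an actual client, substitute /opt/tivoli/tsm/client/ba/bin and /usr/bin.
demo=$(mktemp -d)
mkdir -p "$demo/opt" "$demo/usrbin"
echo "SErvername STANDARD" > "$demo/usrbin/dsm.sys"   # the "real" file
ln -s "$demo/usrbin/dsm.sys" "$demo/opt/dsm.sys"      # proper symlink
echo "stale copy" > "$demo/opt/dsm.opt"               # stray unique file

for f in dsm.sys dsm.opt; do
    if [ -L "$demo/opt/$f" ]; then
        echo "$f: OK, symlink to $(readlink "$demo/opt/$f")"
    else
        echo "$f: WARNING, unique file here - client may read the wrong copy"
    fi
done
rm -rf "$demo"
```

The `-L` test distinguishes a real link from a stray private copy, which is exactly the situation that causes the "strange things" above.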
Ben
-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Warren, Matthew (Retail)
Sent: Thursday, May 05, 2005 9:09 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Incremental skipping filesystems?
Hallo *SM'ers
Client O/S: Solaris (SunOS 5.9)
dsmc: 5.2.0.0
dsmserv: 5.2.2.3 on AIX 5.2.0.0
It appears the incremental backup is missing/skipping filesystems.
Firstly, I am assured there were definitely many directories and files
under /home/ last night when the backups ran...
The dsmc log for the backups says this:
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2,
Level 0.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights
Reserved.
Node Name: RHINE-E
Session established with server RUTLAND: AIX-RS/6000
Server Version 5, Release 2, Level 2.3
Server date/time: 05/04/05 20:01:24   Last access: 05/04/05 20:01:22
Incremental backup of volume '/'
Incremental backup of volume '/etc/mnttab'
Incremental backup of volume '/var'
Incremental backup of volume '/var/run'
Incremental backup of volume '/home'
Incremental backup of volume '/app/genrat'
Incremental backup of volume '/app/mine1'
... etc
/home is definitely listed, and there are no further mentions of /home in
the logfile. Approx 130,000 files were inspected and 2,500 backed up, so it
is quite possible nothing had changed in /home, hence no further mention
in the log. (Or /home was totally empty? But I am assured this is not the
case.)
Now, firstly: firing up the web client and browsing the files available
for restore, all files are listed under the single filesystem name '/',
so I assume /home is a directory of /, rather than a separate mountpoint.
But then, why would TSM list it as a domain for backup at the beginning
of the incremental?
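One quick way to settle that from the shell (a sketch; df output formats vary by platform, but the last field is the mount point everywhere, even when a long device name wraps onto its own line):

```shell
# If the "Mounted on" field df reports for /home differs from the one for /,
# /home is its own filesystem; if both report "/", it is just a directory.
root_mnt=$(df -k /     | awk 'NR>1 {m=$NF} END {print m}')
home_mnt=$(df -k /home | awk 'NR>1 {m=$NF} END {print m}')
if [ "$home_mnt" = "$root_mnt" ]; then
    echo "/home lives in the / filesystem"
else
    echo "/home is a separate filesystem mounted at $home_mnt"
fi
```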
Secondly, in the GUI there is no entry for /home, and query backup from
the command line shows exactly the same thing - nothing.
The following select statement on the TSM server produces the output
below:
tsm: RUTLAND>select * from backups where node_name='RHINE-E' and hl_name
like '%home%'
ANR2963W This SQL query may produce a very large result table, or may
require a significant amount of time to compute.
Do you wish to proceed? (Yes (Y)/No (N)) yes
NODE_NAME: RHINE-E
FILESPACE_NAME: /
FILESPACE_ID: 1
STATE: ACTIVE_VERSION
TYPE: FILE
HL_NAME:
/usr/dt/appconfig/netscape/lib/locale/C/nethelp/netscape/home/
LL_NAME: help.hpf
OBJECT_ID: 358982860
BACKUP_DATE: 2004-11-05 13:09:42.000000
DEACTIVATE_DATE:
OWNER: root
CLASS_NAME: MC_RMM_UNIX_PROD_OSA
NODE_NAME: RHINE-E
FILESPACE_NAME: /
FILESPACE_ID: 1
STATE: ACTIVE_VERSION
TYPE: FILE
HL_NAME:
/usr/dt/appconfig/netscape/lib/locale/C/nethelp/netscape/home/
LL_NAME: home.gif
OBJECT_ID: 358982861
BACKUP_DATE: 2004-11-05 13:09:42.000000
DEACTIVATE_DATE:
OWNER: root
CLASS_NAME: MC_RMM_UNIX_PROD_OSA
NODE_NAME: RHINE-E
FILESPACE_NAME: /
FILESPACE_ID: 1
STATE: ACTIVE_VERSION
TYPE: FILE
HL_NAME:
/usr/dt/appconfig/netscape/lib/locale/C/nethelp/netscape/home/
LL_NAME: home.htm
OBJECT_ID: 358982862
BACKUP_DATE: 2004-11-05 13:09:42.000000
DEACTIVATE_DATE:
OWNER: root
CLASS_NAME: MC_RMM_UNIX_PROD_OSA
NODE_NAME: RHINE-E
FILESPACE_NAME: /
FILESPACE_ID: 1
STATE: ACTIVE_VERSION
TYPE: FILE
HL_NAME:
/usr/dt/appconfig/netscape/lib/locale/C/nethelp/netscape/home/
LL_NAME: homeHdr.htm
OBJECT_ID: 358982863
BACKUP_DATE: 2004-11-05 13:09:42.000000
DEACTIVATE_DATE:
OWNER: root
CLASS_NAME: MC_RMM_UNIX_PROD_OSA
Querying the filespaces for the node gives:
tsm: RUTLAND>q fi rhine-e
Node Name       Filespace    FSID  Platform  Filespace  Is Files-  Capacity   Pct
                Name                         Type       pace           (MB)  Util
                                                        Unicode?
--------------- -----------  ----  --------  ---------  ---------  --------  ----
RHINE-E         /               1  SUN SOL-  UFS        No         22,090.4  23.3
                                   ARIS
RHINE-E         /etc/mnttab     2  SUN SOL-  Unknown    No              0.0   0.0
                                   ARIS
BUT a df on the box shows many, many more filesystems:
root@rhine-e# df -k
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/rootvol 22620522 5262858 17131459 24% /
/proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
fd 0 0 0 0% /dev/fd
/dev/vx/dsk/var 8258597 4273638 3902374 53% /var
swap 12068296 136 12068160 1% /var/run
swap 12146912 78752 12068160 1% /tmp
/dev/vx/dsk/dg_dataexec_gratmi01/vol_dataexec_app_genrat
1048576 20459 963911 3% /app/genrat
/dev/vx/dsk/dg_oradata_minepw02/vol_oradata_minepw02_01
56891392 14921812 39346485 28%
/data/oradata_minepw02_01
/dev/vx/dsk/dg_dataexec_minepw02/vol_dataexec_minepw02_01
3145728 17239 2932965 1% /app/mine1
/dev/vx/dsk/dg_oradata_minepw02/vol_redologs_minepw02_02
1013760 170344 790709 18%
/data/oradata_minepw02_redo2
/dev/vx/dsk/dg_oradata_minepw02/vol_redologs_minepw02_01
1013760 170344 790709 18%
/data/oradata_minepw02_redo1
/dev/vx/dsk/dg_dataexec_gbilmi01/vol_dataexec_app_geneva1
1048576 21820 962624 3% /app/geneva1
/dev/vx/dsk/dg_dataexec_minepw02/vol_dataexec_minepw02_02
3145728 17240 2932962 1% /app/mine2
/dev/vx/dsk/dg_apps_rhine-e/vol_oracle
36700160 10812056 24270127 31% /app/oracle
/dev/vx/dsk/dg_apps_rhine-e/vol_cmagent
245760 23968 209037 11%
/app/controlm/cmafir
/dev/vx/dsk/dg_apps_rhine-e/vol_cmsysout
4194304 24013 3912420 1%
/app/controlm/sysout
/dev/vx/dsk/dg_apps_loc_rhine-e/vol_patrol
2048000 178948 1752288 10% /app/bmc/patrol
sunphnxmnt:/prod/archlogs_shared
55296000 19064195 33967355 36%
/data/archlogs_shared/sunphnxmnt
sunshwdmnt:/prod/archlogs_shared
55296000 14958309 37816839 29%
/data/archlogs_shared/sunshwdmnt
/dev/vx/dsk/dg_oradata_cashpw01/vol_oradata_cashpw01_01
56889344 51913693 4664735 92%
/data/oradata_cashdw01_01
/dev/vx/dsk/dg_oradata_cashpw01/vol_oradata_cashpw01_02
56889344 18360 56426688 1%
/data/oradata_cashdw01_02
/dev/vx/dsk/dg_oradata_cashpw01_temp01/vol_oradata_cashpw01_temp01
14222880 17056 14094848 1%
/data/oradata_cashdw01_temp01
/dev/vx/dsk/dg_oradata_cashpw01/vol_redologs_cashpw01_01
254976 151176 97319 61%
/data/oradata_cashdw01_redo1
/dev/vx/dsk/dg_oradata_cashpw01/vol_redologs_cashpw01_02
254976 151176 97319 61%
/data/oradata_cashdw01_redo2
/dev/vx/dsk/dg_oradata_gbilpw02/vol_oradata_gbilpw02_01
56891392 55551047 1256632 98%
/data/oradata_gbildw02_01
/dev/vx/dsk/dg_oradata_gbilpw02/vol_oradata_gbilpw02_02
56891392 55839997 985747 99%
/data/oradata_gbildw02_02
/dev/vx/dsk/dg_oradata_gbilpw02/vol_oradata_gbilpw02_03
56891392 56374453 484694 100%
/data/oradata_gbildw02_03
/dev/vx/dsk/dg_oradata_gbilpw02/vol_oradata_gbilpw02_04
56891392 56376453 482820 100%
/data/oradata_gbildw02_04
/dev/vx/dsk/dg_oradata_gbilpw02/vol_oradata_gbilpw02_05
56891392 55933846 897763 99%
/data/oradata_gbildw02_05
/dev/vx/dsk/dg_oradata_gbilpw02/vol_oradata_gbilpw02_06
56891392 55217278 1569546 98%
/data/oradata_gbildw02_06
/dev/vx/dsk/dg_oradata_gbilpw02/vol_oradata_gbilpw02_07
56891392 16414844 37946769 31%
/data/oradata_gbildw02_07
/dev/vx/dsk/dg_oradata_gbilpw02/vol_oradata_gbilpw02_08
56891392 30419 53307170 1%
/data/oradata_gbildw02_08
/dev/vx/dsk/dg_oradata_gbilpw02/vol_oradata_gbilpw02_09
56891392 30419 53307170 1%
/data/oradata_gbildw02_09
/dev/vx/dsk/dg_oradata_gbilpw02_temp01/vol_oradata_gbilpw02_temp01_01
14222336 19962 13314733 1%
/data/oradata_gbildw02_temp01
/dev/vx/dsk/dg_oradata_gbilpw02/vol_redologs_gbilpw02_01
256000 151177 98278 61%
/data/oradata_gbildw02_redo1
/dev/vx/dsk/dg_oradata_gbilpw02/vol_redologs_gbilpw02_02
256000 151176 98279 61%
/data/oradata_gbildw02_redo2
/dev/vx/dsk/dg_oradata_gratsb01/vol_oradata_gratsb01_01
56623104 50676954 5574567 91%
/data/oradata_gratdw01_01
/dev/vx/dsk/dg_oradata_gratsb01/vol_oradata_gratsb01_02
54525952 46242607 7765691 86%
/data/oradata_gratdw01_02
/dev/vx/dsk/dg_oradata_gratsb01_temp01/vol_oradata_gratsb01_temp01
14222688 17056 14094664 1%
/data/oradata_gratdw01_temp01
/dev/vx/dsk/dg_oradata_gratsb01/vol_redologs_gratsb01_01
509952 201242 289422 42%
/data/oradata_gratdw01_redo1
/dev/vx/dsk/dg_oradata_gratsb01/vol_redologs_gratsb01_02
509952 201242 289422 42%
/data/oradata_gratdw01_redo2
/dev/vx/dsk/dg_dataexec_gratsb01/vol_dataexec_gratsb01_01
22020096 19793079 2115229 91%
/app/genrat_XP1024
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_01
56623104 55579331 978601 99%
/data/oradata_gbildw01_01
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_02
56623104 55859828 715636 99%
/data/oradata_gbildw01_02
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_03
56623104 55710396 855727 99%
/data/oradata_gbildw01_03
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_04
56623104 55040500 1483755 98%
/data/oradata_gbildw01_04
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_05
56623104 56429029 182009 100%
/data/oradata_gbildw01_05
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_06
56623104 56327484 277976 100%
/data/oradata_gbildw01_06
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_07
56623104 55408971 1138250 98%
/data/oradata_gbildw01_07
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_08
56623104 55040180 1483993 98%
/data/oradata_gbildw01_08
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_09
56623104 56105204 485532 100%
/data/oradata_gbildw01_09
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_10
56623104 47182140 8850969 85%
/data/oradata_gbildw01_10
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_11
56623104 55503236 1049940 99%
/data/oradata_gbildw01_11
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_12
56623104 55703956 861765 99%
/data/oradata_gbildw01_12
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_13
56623104 56154836 439065 100%
/data/oradata_gbildw01_13
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_14
56623104 56027861 558104 100%
/data/oradata_gbildw01_14
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_15
56623104 56212213 385274 100%
/data/oradata_gbildw01_15
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_16
56623680 55315888 1297576 98%
/data/oradata_gbildw01_16
/dev/vx/dsk/dg_oradata_gbilsb01/vol_oradata_gbilsb01_17
56889344 18360 56426688 1%
/data/oradata_gbildw01_17
/dev/vx/dsk/dg_oradata_gbilsb01_temp01/vol_oradata_gbilsb01_temp01
14222608 10502840 3690720 74%
/data/oradata_gbildw01_temp01
/dev/vx/dsk/dg_oradata_gbilsb01/vol_redologs_gbilsb01_01
509952 201242 289422 42%
/data/oradata_gbildw01_redo1
/dev/vx/dsk/dg_oradata_gbilsb01/vol_redologs_gbilsb01_02
509952 201242 289422 42%
/data/oradata_gbildw01_redo2
/dev/vx/dsk/dg_oradata_settsb01/vol_oradata_settsb01_01
55296000 55288072 7928 100%
/data/oradata_settmi01_01
/dev/vx/dsk/dg_oradata_settsb01/vol_oradata_settsb01_02
55296000 54866320 426392 100%
/data/oradata_settmi01_02
/dev/vx/dsk/dg_oradata_settsb01/vol_oradata_settsb01_03
55296000 54871392 421360 100%
/data/oradata_settmi01_03
/dev/vx/dsk/dg_oradata_settsb01/vol_oradata_settsb01_04
55296000 55124280 170448 100%
/data/oradata_settmi01_04
/dev/vx/dsk/dg_oradata_settsb01/vol_oradata_settsb01_05
55296000 54650224 640736 99%
/data/oradata_settmi01_05
/dev/vx/dsk/dg_oradata_settsb01/vol_oradata_settsb01_06
55296000 54763800 528112 100%
/data/oradata_settmi01_06
/dev/vx/dsk/dg_oradata_settsb01/vol_oradata_settsb01_07
55296000 54979880 313720 100%
/data/oradata_settmi01_07
/dev/vx/dsk/dg_oradata_settsb01/vol_oradata_settsb01_08
55296000 55052528 241576 100%
/data/oradata_settmi01_08
/dev/vx/dsk/dg_oradata_settsb01/vol_oradata_settsb01_09
56391520 54979976 1400584 98%
/data/oradata_settmi01_09
/dev/vx/dsk/dg_oradata_settsb01/vol_oradata_settsb01_10
57671680 22546528 34850744 40%
/data/oradata_settmi01_10
/dev/vx/dsk/dg_oradata_settsb01/vol_oradata_settsb01_temp
55296000 1589256 53287168 3%
/data/oradata_settmi01_temp1
/dev/vx/dsk/dg_oradata_settsb01/vol_redologs_settsb01_01
1024000 416688 602576 41%
/data/oradata_settmi01_redo1
/dev/vx/dsk/dg_oradata_settsb01/vol_redologs_settsb01_02
1024000 416688 602576 41%
/data/oradata_settmi01_redo2
/dev/vx/dsk/dg_archlogs_settsb01/vol_archlogs_settsb01_01
57375360 21603304 35492616 38%
/data/archlogs_settmi01/arch1
/dev/vx/dsk/dg_dataexec_settsb01/vol_dataexec_settsb01_01
53248000 40802272 12359992 77% /app/settmi01
/dev/vx/dsk/dg_apps_loc_rhine-e/vol_home
2048000 38559 1883885 3% /home
...so where, in the 'q fi' output, are all the filespaces mentioned at
the top of the incremental log???
The only entries in the include-exclude list for /home are:
Exclude /home/.../tmp/.../*
Exclude /home/.../.sh_hist*
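Neither pattern should catch the new file: TSM's `...` matches zero or more directory levels. A rough shell approximation of that matching (a sketch, not the client's real matcher; `*` in a case pattern crosses `/`) shows /home/test/testfile falling through both excludes:

```shell
# Rough stand-in for the two excludes. "..." in TSM matches any number of
# directory levels; the extra alternations cover the zero-level case.
is_excluded() {
    case "$1" in
        /home/*/tmp/*|/home/tmp/*)         echo excluded ;;  # /home/.../tmp/.../*
        /home/*/.sh_hist*|/home/.sh_hist*) echo excluded ;;  # /home/.../.sh_hist*
        *)                                 echo included ;;
    esac
}
is_excluded /home/test/testfile         # -> included
is_excluded /home/test/tmp/scratch.log  # -> excluded
is_excluded /home/matt/.sh_history      # -> excluded
```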
And finally: I created a file called testfile in a directory /home/test/
and then ran an incremental, with the following result:
root@rhine-e# ls -ltr
total 2
-rw-r--r-- 1 root other 748 May 5 15:54 testfile
root@rhine-e# cd /
root@rhine-e# dsmc incremental -servername=STANDARD_OSA
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2,
Level 0.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights
Reserved.
Node Name: RHINE-E
Session established with server RUTLAND: AIX-RS/6000
Server Version 5, Release 2, Level 2.3
Server date/time: 05/05/05 14:54:52   Last access: 05/05/05 14:52:44
Incremental backup of volume '/'
Incremental backup of volume '/etc/mnttab'
Incremental backup of volume '/var'
Incremental backup of volume '/var/run'
Incremental backup of volume '/app/genrat'
Incremental backup of volume '/app/mine1'
Incremental backup of volume '/app/geneva1'
Incremental backup of volume '/app/mine2'
Incremental backup of volume '/app/oracle'
Incremental backup of volume '/app/controlm/cmafir'
Incremental backup of volume '/app/controlm/sysout'
Incremental backup of volume '/app/bmc/patrol'
Incremental backup of volume '/app/genrat_XP1024'
Incremental backup of volume '/data/oradata_gbildw01_17'
Incremental backup of volume '/data/oradata_settmi01_01'
Incremental backup of volume '/data/oradata_settmi01_02'
Incremental backup of volume '/data/oradata_settmi01_03'
Incremental backup of volume '/data/oradata_settmi01_04'
Incremental backup of volume '/data/oradata_settmi01_05'
Incremental backup of volume '/data/oradata_settmi01_06'
Incremental backup of volume '/data/oradata_settmi01_07'
Incremental backup of volume '/data/oradata_settmi01_08'
Incremental backup of volume '/data/oradata_settmi01_09'
Incremental backup of volume '/data/oradata_settmi01_10'
Incremental backup of volume '/data/oradata_settmi01_temp1'
Incremental backup of volume '/data/oradata_settmi01_redo1'
Incremental backup of volume '/data/oradata_settmi01_redo2'
Incremental backup of volume '/app/settmi01'
Incremental backup of volume '/home'
ANS1898I ***** Processed 1,500 files *****
ANS1898I ***** Processed 3,000 files *****
ANS1898I ***** Processed 12,500 files *****
ANS1898I ***** Processed 21,000 files *****
ANS1898I ***** Processed 24,500 files *****
ANS1898I ***** Processed 28,500 files *****
ANS1898I ***** Processed 32,500 files *****
ANS1898I ***** Processed 35,500 files *****
ANS1898I ***** Processed 36,500 files *****
ANS1898I ***** Processed 41,000 files *****
ANS1898I ***** Processed 46,500 files *****
ANS1898I ***** Processed 48,000 files *****
ANS1898I ***** Processed 50,000 files *****
ANS1898I ***** Processed 52,000 files *****
ANS1898I ***** Processed 53,000 files *****
ANS1898I ***** Processed 55,000 files *****
ANS1898I ***** Processed 56,000 files *****
ANS1898I ***** Processed 58,000 files *****
ANS1898I ***** Processed 60,000 files *****
ANS1898I ***** Processed 61,500 files *****
Special File--> 0 /devices/pseudo/pts@0:2 [Sent]
Special File--> 0 /devices/pseudo/pts@0:6 [Sent]
Normal File--> 65 /etc/opt/SUNWsrshp/chkpt.001 [Sent]
Special File--> 0 /etc/saf/_sacpipe [Sent]
Special File--> 0 /etc/saf/zsmon/_pmpipe [Sent]
ANS1898I ***** Processed 63,000 files *****
ANS1898I ***** Processed 64,500 files *****
ANS1898I ***** Processed 65,500 files *****
ANS1898I ***** Processed 66,500 files *****
ANS1898I ***** Processed 67,500 files *****
ANS1898I ***** Processed 68,500 files *****
ANS1898I ***** Processed 70,000 files *****
ANS1898I ***** Processed 71,000 files *****
ANS1898I ***** Processed 72,500 files *****
ANS1898I ***** Processed 73,000 files *****
ANS1898I ***** Processed 75,500 files *****
ANS1898I ***** Processed 77,000 files *****
ANS1898I ***** Processed 78,500 files *****
ANS1898I ***** Processed 80,000 files *****
ANS1898I ***** Processed 81,500 files *****
ANS1898I ***** Processed 83,000 files *****
ANS1898I ***** Processed 84,000 files *****
ANS1898I ***** Processed 86,500 files *****
ANS1898I ***** Processed 88,000 files *****
ANS1898I ***** Processed 89,500 files *****
ANS1898I ***** Processed 91,000 files *****
ANS1898I ***** Processed 93,000 files *****
ANS1898I ***** Processed 94,500 files *****
ANS1898I ***** Processed 96,000 files *****
ANS1898I ***** Processed 97,000 files *****
ANS1898I ***** Processed 98,500 files *****
ANS1898I ***** Processed 99,500 files *****
ANS1898I ***** Processed 101,000 files *****
ANS1898I ***** Processed 102,500 files *****
ANS1898I ***** Processed 104,000 files *****
ANS1898I ***** Processed 105,500 files *****
ANS1898I ***** Processed 106,000 files *****
ANS1898I ***** Processed 108,500 files *****
ANS1898I ***** Processed 109,500 files *****
ANS1898I ***** Processed 111,000 files *****
ANS1898I ***** Processed 112,500 files *****
ANS1898I ***** Processed 113,500 files *****
ANS1898I ***** Processed 114,500 files *****
ANS1898I ***** Processed 116,000 files *****
ANS1898I ***** Processed 117,500 files *****
ANS1898I ***** Processed 119,000 files *****
ANS1898I ***** Processed 120,500 files *****
ANS1898I ***** Processed 122,000 files *****
ANS1898I ***** Processed 124,000 files *****
ANS1898I ***** Processed 125,500 files *****
ANS1898I ***** Processed 127,000 files *****
ANS1898I ***** Processed 128,500 files *****
ANS1898I ***** Processed 130,000 files *****
Total number of objects inspected: 130,042
Total number of objects backed up: 5
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects expired: 0
Total number of objects failed: 0
Total number of bytes transferred: 800.08 KB
Data transfer time: 0.31 sec
Network data transfer rate: 2,568.42 KB/sec
Aggregate data transfer rate: 8.32 KB/sec
Objects compressed by: 0%
Elapsed processing time: 00:01:36
Why did TSM not back up the testfile???
It's not excluded, and /home is listed as a filespace for backup by the
incremental process, then seemingly ignored. What gives? There are no
DOMAIN statements either.
Pulling my hair out!
Thanks,
Matt.