Of course, it works, but again:
- if you have a tablespace with a 100GB datafile that is 90% empty ( only 10GB of data ), backing up the datafile as you do now means spending a lot of time/resources copying 90GB of nothing. Does it make sense to back up garbage?
- In normal operation, Oracle uses the redo logfiles to store the changes made to data blocks; in backup mode, Oracle needs to store the ENTIRE block in the logfile, even if only one byte of that block was changed.
Read
http://www.speakeasy.org/~jwilton/oracle/hot-backup.html ( googled )
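For reference, that 'backup mode' is the classic user-managed hot backup, which looks roughly like this ( the tablespace name and the copy step are just examples ):

```sql
-- While the tablespace is in backup mode, Oracle writes whole
-- block images to the redo stream on the first change to a block.
ALTER TABLESPACE users BEGIN BACKUP;
-- ... copy the tablespace's datafiles at the OS level here ...
ALTER TABLESPACE users END BACKUP;
```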
So you are spending a lot of time and resources backing up data that really does not need to be backed up. And you are worried about the time/disk/tape needed to back up your DB, aren't you?
Every database, and how it is used, is different and needs a personalized study, but you can give rman a try, using your development environment.
Back up your development database using your method; afterwards, look at the time and the volume of data backed up ( keeping in mind the extra admin cost of maintaining the scripts / verification ) with your method.
Now, let's give a try with rman:
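Before starting, the environment setup could look like this ( the ORACLE_HOME path and the SID 'TEST' are just examples for a hypothetical development instance; adjust to your installation ):

```shell
# Example environment for a hypothetical development instance "TEST"
export ORACLE_HOME=/u01/app/oracle/product/9.2.0
export ORACLE_SID=TEST
export PATH=$ORACLE_HOME/bin:$PATH
```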
on your development host ( the database must be in archive log mode ), connected as the Oracle software owner and with the environment variables pointing to your desired ORACLE_HOME / instance, execute:
rman target / nocatalog
RMAN> run {
ALLOCATE CHANNEL D1 DEVICE TYPE DISK FORMAT '/backups/%d_TEST_%u_%s_%p'; # where /backups/ is a destination with space to store backup result, can be nfs
#if your Oracle release is 10g or later ( compressed backupsets ):
BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
#on older releases:
BACKUP DATABASE PLUS ARCHIVELOG;
#we make a backup of the control file; you can put commands here to back up the parameter file, too
COPY CURRENT CONTROLFILE TO '/backups/TEST_CONTROLFILE.BCK';
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
RELEASE CHANNEL D1;
}
After that, you can back up '/backups/' with Bacula.
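A minimal Bacula FileSet for that directory could look like this ( the resource name and options are just an example ):

```
FileSet {
  Name = "oracle-rman-backups"
  Include {
    Options {
      signature = MD5
    }
    File = /backups
  }
}
```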
As I said, we need more info ( is the database in archive log mode? Which Oracle release? How is the data stored? Are you using partitioning? etc... ), but I think your backup time could be reduced a lot with a proper study of your situation.
For example:
Your system seems to be highly transactional, but I suspect that a lot of the data, once inserted, is never touched again. You can make use of partitioning to move 'historical' data into dedicated 'historical' tablespaces, back them up once, and put them in read-only mode, focusing your daily backup effort on a subset of the data ( if you need to restore the entire database, you will have the 'historical' tablespaces available from a 'historical' backup, plus the last daily backup + archivelogs ).
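A sketch of the idea, with made-up table/tablespace names:

```sql
-- Move an old partition into a dedicated 'historical' tablespace
-- (names are examples), then freeze it:
ALTER TABLE orders MOVE PARTITION orders_2003 TABLESPACE hist_2003;
ALTER TABLESPACE hist_2003 READ ONLY;
```

Once the read-only tablespaces have been backed up one time, the daily RMAN run can leave them out with BACKUP DATABASE SKIP READONLY.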
As Dan pointed out ( and as I did in a previous mail, mentioning Oracle Data Guard ), it seems that you urgently need a contingency site. If your host crashes, your service will be down until you again have a host with the restored database up&running. How long could that take? Can you ( your customers ) wait for it?
What will happen if you are out of service for a while?
By configuring Oracle Data Guard, you have the guarantee of a remote site replicating the changes of your production database; if a big crash occurs, in a few minutes ( the time spent putting your standby database into 'production mode', and maybe pointing your DNS entries to the standby host ) you will have your service up&running again.
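With a physical standby, the failover itself is little more than ( a sketch; the exact steps depend on your release and configuration, and a forced activation discards any redo not yet applied ):

```sql
-- On the standby host, after the primary is lost:
ALTER DATABASE ACTIVATE STANDBY DATABASE;
-- then restart/open the database and repoint your DNS entries
```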
Data Guard concepts:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96653/partpage1.htm#436380
D.