07 and backups - do not trust backupexec

David Swain dataguru at polymath-bus-sys.com
Sun Jan 17 13:57:33 EST 2010

Hi Doug,

With MySQL it is basically the same. I agree with your basic premise,  
but would add a few more details:

Even snapshots take a finite amount of time to perform (even if it's  
only seconds), during which some logical corruption could creep in  
because of other operations being performed on the database. The  
remedy for this is to lock all the tables to be
backed up (in READ mode, if your system allows, which makes them  
still available for querying, just not for updating), make the  
snapshot and then unlock the tables to get them back into full  
service once the snapshot is complete. (Paranoia is an admirable  
quality in a database administrator... ;-) The disruption to service  
is minimal, but necessary because snapshots are still not instantaneous.
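As a concrete sketch of that lock/snapshot/unlock sequence in MySQL (assuming the data directory sits on an LVM volume; the volume name, size and credentials below are illustrative, not from the original post):

```shell
# Hold a global read lock while the snapshot is taken, then release it.
# FLUSH TABLES WITH READ LOCK is released the moment the client session
# ends, so the snapshot must be made from INSIDE the same session --
# here via the mysql client's \! shell-escape command.
mysql -u root -p <<'SQL'
FLUSH TABLES WITH READ LOCK;
\! lvcreate --snapshot --size 1G --name mysql_snap /dev/vg0/mysql
UNLOCK TABLES;
SQL
```

Tables stay readable for the few seconds the lock is held; only writes wait.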

The next step is to "realize" the snapshot and do a cold backup from  
there - and then test the backup to make sure it's viable. To me, a  
"backup" that has not been tested is only an *attempt* at a backup.  
Trust, but verify...
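In MySQL terms, "testing" can be as simple as restoring the backup into a scratch database and letting mysqlcheck look for damage (a sketch; the database and file names are made up for illustration):

```shell
# Restore the backup into a throwaway database...
mysql -u root -p -e "CREATE DATABASE verify_restore;"
mysql -u root -p verify_restore < /backup/nightly_dump.sql

# ...and check every restored table for corruption.
mysqlcheck -u root -p --check --databases verify_restore

# Clean up once the backup has proven itself.
mysql -u root -p -e "DROP DATABASE verify_restore;"
```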

My preference on systems that support replication is to use a  
replication slave as a means toward creating a backup rather than  
using snapshots. In MySQL (the system with which I am most familiar)  
this is easy to set up, although it does require an additional  
computer (or a second MySQL process on the same computer, but it is  
best if the slave saves its data to a different physical drive to  
minimize bottlenecks on an active system - and using a different  
computer removes any potential contention for RAM resources). The  
system of logs in the DBMS then provides a complete path back to the  
last operation performed on the Master machine (in MySQL, the  
"binary" log on the Master and the "relay" log on the Slave), which  
can be played back into a MySQL client after recovering to a backup  
for the purpose of incremental recovery. Shutting down the  
replication slave causes NO disruption of service on the Master.
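Sketched with the standard MySQL tools (the file name and log position below are illustrative):

```shell
# On the Slave: pause replication so the data files stop changing,
# take a clean dump, then resume. The Master is never interrupted.
mysql -u root -p -e "STOP SLAVE;"
mysqldump -u root -p --all-databases --master-data=2 > /backup/slave_dump.sql
mysql -u root -p -e "START SLAVE;"

# Incremental recovery: after restoring the dump, replay the Master's
# binary log from the position recorded in the dump into a mysql client.
mysqlbinlog --start-position=107 /var/log/mysql/mysql-bin.000042 \
    | mysql -u root -p
```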

But that's just my opinion... I'm sure there will be others... ;-)

On Jan 17, 2010, at 12:44 PM, Doug Easterbrook wrote:

> hi Wendy, and the world.
> I'll try to be plain and simple with this: do not trust backupexec  
> (or any other file-level backup tool) with a live database.
> there... it's off my chest.
> Why?   If any database:
> - is in use, be it omnis .df1, oracle, mysql, postgres (doesn't  
> matter)  AND
> - is being used by users AND
> - any backup tool backs it up while in use (even if it has the  
> ability to do open files)
> then your backup is hopelessly corrupt.   It will either be:
> a) physically corrupt OR
> b) logically corrupt OR
> c) both
> and definitely USELESS.
> The only recommended way of doing backups for any database is:
> a) make a snapshot of the database
> b) backup the snapshot
> c) make the backup program avoid the directory containing the live  
> database.
> For postgres, we do:
> - pg_dump to a file and back up the file
> With open base, we
> - ran the backup utility in openbase to get a snapshot, then saved  
> that
> when we did omnis df1, we recommended:
> - stopping omnis (we used Kelly's lights out to force people out of  
> the app).
> - or stopping the data bridge if you are using that - which makes it  
> nicer
> - using winzip or some task to ZIP the file(s) to a backup
>    (that way all the pieces of the df1, df2, df3, etc are in one file)
> - backup the zip file and avoid the data bridge or omnis directory.
> and why is a backup useless unless you use a tool designed for  
> snapshots?   If you start a backup on the df1, then by the time the  
> df1 is halfway saved to the backup, a user could change the df2, or  
> another portion of the df1, and the indexes get changed.   Those  
> changes don't make it to the backup.
> and if you try to restore, you get index errors, free block errors,  
> and records that can't be read (physical corruption), not to mention  
> that the backup has half the changes in it (logical corruption).
> it's why we like postgres and pg_dump.   The database design  
> snapshots the records that make up a logically correct backup and  
> starts saving those --- and you can continue to run - secure in the  
> knowledge that you have a logically correct backup - even though  
> changes are being made 24x7.
> I think oracle is the same way.
> Openbase forces the world to stop while the backup is being made
> Not sure what mysql does.
> anyway..  it doesn't really matter - you cannot trust any backup  
> tool (backupexec, retrospect, you name it) to give you a logically  
> correct database.
> now, I step down from my soapbox.    This is, by far, the biggest  
> issue I have faced in 35 years in the database world - because we  
> find customers don't think this way and just assume a copy of the  
> live database is good enough.   It hardly ever is and can hardly  
> ever be restored.   And I have to explain why, then pull the rabbit  
> out of the hat, and then explain why it's their fault and so they  
> have to pay for the hours it takes to recover their data.
> hope this helps.
> always make a snapshot, then backup the snapshot
> Doug Easterbrook
> Arts Management Systems Ltd.
> mailto:doug at artsman.com
> http://www.artsman.com
> Phone (403) 536-1205    Fax (403) 536-1210
> On Jan 17, 2010, at 10:00 AM, omnisdev-en-request at  
> lists.omnis-dev.com wrote:
>> Hi all
>> I am after picking brains again.
>> A customer has outsourced their IT support - this company says  
>> they use
>> 'Backupexec' to create 'snapshots' of the database.
>> Does any one know if this program would a) create a proper backup if
>> someone was still logged into the database and b) could it be  
>> possible for the
>> backup routine to corrupt the database if it could not create a  
>> proper
>> backup?
>> Many thanks for your help
> _____________________________________________________________
> Manage your list subscriptions at http://lists.omnis-dev.com
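For completeness, the pg_dump approach Doug describes can be sketched like this (the database and path names are illustrative):

```shell
# pg_dump takes a consistent snapshot of the database while it stays
# online; the backup tool then picks up the dump file, never the live
# data directory.
pg_dump --format=custom --file=/backup/mydb.dump mydb

# And, per the "trust, but verify" rule: restore into a scratch database.
createdb mydb_verify
pg_restore --dbname=mydb_verify /backup/mydb.dump
dropdb mydb_verify
```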

More information about the omnisdev-en mailing list