ODB 1.77 and Time Machine backups

Doug Easterbrook doug at artsman.com
Wed Mar 2 03:34:09 UTC 2022


hi Andrew:

I would not trust time machine on a DF1 at all.

now, you might get away with it if you:
- only have one database segment and
- time machine makes a quick copy and
- nobody is making changes to the data file

but if you have multiple segments and any sort of db updates transpiring, you are going to have record issues, and they are not going to be immediately apparent.


finally, when you say they ‘restored ok’ and ‘worked ok’:

I’m sure they restored ok, it’s just a file.  grabbing it from the time machine backup should always work.


data integrity is a different ball of wax.

to see if they worked ok, you’d need to run data utilities / data check on the entire database to make sure there were no freeblock errors, missing records, truncated data or other pointer issues.    It is truly difficult to see errors in a DF1 with multiple segment data files without a comprehensive tool to do so.


Back in the omnis 7 days, when we had to go to the ODB, and in the studio 3 days, when rewriting our app .. we had many many issues with the DF1.
It was much more noticeable when we were pushing upwards of 15 segments on two data files (30-ish segments total).   I can’t tell you how many days we stayed up most of the night rebuilding/copying data to a new data file and having to find missing data via a couple of indexes to make sure it all existed.

was omnis the cause of the issue?  not always.   Customers might dump a backup into the main db folder, or we started the ODB again before files were fully copied … any number of issues that caused conflicts between operating system file locks and what was being written.



I would really advise stopping the ODB, backing up the files while they are not in use, and then starting the ODB back up.     otherwise, you are playing with fire (or having unprotected S*x), whichever analogy you prefer.   it’s not a matter of if you will get burnt, it’s when.    Murphy’s Law says it will be at the worst possible time.
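as a minimal sketch of that stop / copy / restart / compress cycle — note that odb_stop and odb_start are placeholders for however your ODB installation is actually started and stopped (launchd job, wrapper script, etc.), the mktemp folders stand in for your real datafile and staging folders, and tar is used here in place of zip for the compression step:

```shell
#!/bin/sh
# Sketch of a stop/copy/restart/compress backup cycle.
# odb_stop and odb_start are PLACEHOLDERS -- substitute however your
# ODB is really controlled; the mktemp dirs stand in for real folders.
set -e

odb_stop()  { echo "ODB stopped (placeholder)"; }
odb_start() { echo "ODB started (placeholder)"; }

DATA_DIR=$(mktemp -d)    # stands in for the folder holding .df1/.df2
STAGE_DIR=$(mktemp -d)   # local staging area for the quick copy
BACKUP_DIR=$(mktemp -d)  # where the compressed backup lands
echo "demo segment" > "$DATA_DIR/mydata.df1"   # fake segment for demo

STAMP=$(date +%y%m%d_%H%M%S)
BACKUP="$BACKUP_DIR/database_backup_${STAMP}.tar.gz"

odb_stop                             # 1) shut down the databridge
cp -p "$DATA_DIR"/* "$STAGE_DIR"/    # 2) quick COPY of all segments
odb_start                            # 3) users can reconnect right away

tar czf "$BACKUP" -C "$STAGE_DIR" .  # 4) compress the copy (slow part)
rm -f "$STAGE_DIR"/*                 # 5) clean out the staging folder
```

the key point the script mirrors is that only steps 1–3 keep users out; the slow compression happens against the copy, after the database is already back online.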




Doug Easterbrook
Arts Management Systems Ltd.
mailto:doug at artsman.com
http://www.artsman.com
Phone (403) 650-1978

> On March 1, 2022, at 5:16 PM, Andrew McVeigh <surfway at bigpond.com> wrote:
> 
> We rely on Time Machine to back up clients’ ODB data all the time, so this worried me. I did some checks on a client’s backups from last week and they all restored ok and worked ok
> 
> It may depend on whether anyone was working in a record at the time, but 5 different restores all appeared ok
> 
> Will keep an eye on this though
> 
> Andrew McVeigh
> Surfway Real Solutions
> Phone 02 44412679 Mobile 0418428016
> www.surfway.com.au
> www.berrarabeach.com.au
>> On 1 Mar 2022, at 12:57 pm, Doug Easterbrook via omnisdev-en <omnisdev-en at lists.omnis-dev.com> wrote:
>> 
>> hi Michael:
>> 
>> to speak to time machine on a DF1 —  it can’t be done.  
>> 
>> in fact, I’d put an exclusion on the folder as follows
>> 
>> 
>> tmutil addexclusion /path/to/Folder/ContainingDatafiles
>> 
>> 
>> 
>> why can’t you?
>> 
>> time machine needs exclusive access to the file to get it backed up
>> 
>> If you managed to back up only one of the data files (eg DF1 but not DF2), then you have a worthless backup, because you must back up all of them at the same time to have data integrity.    You don’t know which segment data was inserted into or written to, and if your DF1 is auto-expanding and needs to expand .. the internal DB pointers will not be in a good place.
>> 
>> 
>> when we used the ODB, we had a script that would
>> 
>> 1) shut down the databridge at an appointed time
>> 2) COPY all files in the folder to another folder
>> 3) start up the data bridge
>> 4) ZIP the COPIED folder into a file like  database_backup_YYMMDD_HHMMSS.zip
>> 5) delete the files in the copied folder
>> 
>> 
>> then let time machine backup the zipped files.
>> 
>> 
>> why COPY and then ZIP .. to minimize down time of the database.   copy is much faster, zip is slow, so it’s best to zip the copy and let people back into the real database.
>> 
>> 
>> 
>> 
>> 
>> if you were using the postgres bridge in studio 10.2, and the database was in postgres, then you could back it up using a pg_dump command while it is in use, so you can have 24x7 access to the database
>> 
>> but even then, you never backup the raw database files —  you always run a dump, and then back that up.     Same with oracle.
>> 
>> 
>> 
>> the general gist of this is .. no, you can’t back up live databases — you have to use some tool to make a copy that becomes your backup.
>> 
>> 
>> 
>> and .. all the above is because I’ve been bitten by it.
>> 
>> 
>> Doug Easterbrook
>> Arts Management Systems Ltd.
>> mailto:doug at artsman.com
>> http://www.artsman.com
>> Phone (403) 650-1978
>> 
>>> On February 28, 2022, at 5:27 PM, Michael Houlberg <michael at houlbergdevelopment.com> wrote:
>>> 
>>> My client asks since ODB is running all the time, can he rely on backups to his .df1, .df2, and other segments as trustworthy?  The thinking is that maybe ODB needs to shut down before a backup can accurately be made?
>> 
>> _____________________________________________________________
>> Manage your list subscriptions at https://lists.omnis-dev.com
>> Start a new message -> mailto:omnisdev-en at lists.omnis-dev.com 
> 


