If you are using PowerOn, the original implementers may have set up your backup scheme to use ds_transfer in "hot" mode. Read the following article and consider whether your backups are in jeopardy.
Original Article follows....
Since there is a discussion about backups, and the ds_transfer mechanism was mentioned as a solution, I thought I would throw out a warning about that approach. Simply put, when using ds_transfer.new() in "hot" mode, you must ensure that no one is writing to the database while the ds_transfer is running.
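For context, here is roughly what the per-file hot transfer pattern looks like. This is only a minimal sketch: the argument list of ds_transfer.new() differs between Smallworld versions, so the source/destination paths and the :hot mode symbol shown here are assumptions meant to illustrate the shape of the call, not the documented signature.

    # Sketch of a per-file hot transfer. The exact arguments accepted
    # by ds_transfer.new() (and how "hot" mode is requested) vary by
    # Smallworld version; the paths and the :hot symbol below are
    # assumptions, not the documented API.
    _block
        _local xfer << ds_transfer.new("/live/ds_admin/gdb.ds",
                                       "/backup/ds_admin/gdb.ds",
                                       :hot)
        xfer.transfer()
    _endblock

Note that each call handles a single DS file, which is exactly where the trouble described below comes from.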
The typical ds_transfer scenario transfers one DS file at a time. Imagine that you ds_transfer gdb.ds and it takes a total of 15 minutes. Ten minutes in, a user writes a fiber(1234) RWO and its geometry to that partition, but the alternative being written to had already been processed at minute 2 of the 15-minute run, so the new geometry never makes it into the copy. When the 15 minutes are done, the ds_transfer moves on to rwo.ds, which by now contains the new fiber record, so that record is captured.
You will not see a problem in the live production dataset, but the "ds_transferred" dataset now has an inconsistency between rwo.ds and gdb.ds: a fiber record with no geometry behind it. This is a simple scenario; imagine changes at a larger scale, or alternatives being added and removed while a hot ds_transfer is in progress. These are all permitted actions, but they may produce unintended results in the "ds_transferred" files. If you are using ds_transfer.new() in "hot" mode, you should open a copy of your recently ds_transferred files and verify that they are consistent with each other.
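One way to do that spot check is sketched below, under some loud assumptions: the dataset name (:gis), the collection name (:fiber), and the geometry field (:location) are placeholders for whatever your data model actually uses, and you would run this against a copy of the transferred files opened in a scratch session, never against production.

    # Hypothetical consistency probe for a restored copy of the
    # transferred files: walk the fiber collection and flag any RWO
    # whose geometry cannot be dereferenced. :gis, :fiber and
    # :location are placeholders for your own dataset, collection
    # and geometry field names.
    _block
        _local view << gis_program_manager.cached_dataset(:gis)
        _local coll << view.collections[:fiber]
        _for rec _over coll.fast_elements()
        _loop
            _if rec.location _is _unset
            _then
                write("RWO with missing geometry: ", rec)
            _endif
        _endloop
    _endblock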
If you want to continue using ds_transfer to perform "hot" backups, look at ds_transfer.transfer_partition() and consider whether it would work for you. It snapshots all the DS files in a partition at the same time, so none of these consistency problems can arise between them.
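A sketch of that partition-level alternative follows. Again, the exact parameters of ds_transfer.transfer_partition() depend on your Smallworld version; the partition accessor and target path here are placeholders, intended only to show that the unit of work is the whole partition rather than a single DS file.

    # Hypothetical partition-level snapshot: every DS file in the
    # partition (gdb.ds, rwo.ds, ...) is captured at the same point
    # in time, so the copies stay mutually consistent. How the
    # partition handle is obtained, and the target argument, are
    # assumptions; check your version's documentation.
    _block
        _local part << my_dataset.partition(:ds_admin)  # placeholder accessor
        ds_transfer.transfer_partition(part, "/backup/ds_admin")
    _endblock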