DCOM Explained
by Rosemary Rock-Evans | Digital Press | ISBN: 1555582168 | Pub Date: 09/01/98
Commit and rollback
If a transaction can be completed in its entirety, it can be committed. Issuing a commit statement causes all the Resource managers to complete the commands that were held in temporary storage, waiting to be completed: messages are put on queues, updates are applied to files and DBMSs, records are written out to the spoolers of print managers, and so on.
If the transaction cannot be completed successfully, it is aborted and will be rolled back. Rolling back a transaction restores the system to the state it was in before any work started. In this case, all the updates and actions held in temporary storage, waiting to be completed, are removed as if they had never existed.
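The commit and rollback behavior described above can be sketched with a small example. This is a minimal illustration using Python's built-in sqlite3 module as a stand-in Resource manager (the book is not tied to any particular DBMS, and the table and values here are invented for the demonstration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('alice', 100)")
conn.commit()

# A transaction that completes in its entirety is committed:
# the update held in waiting becomes permanent.
conn.execute("UPDATE account SET balance = balance - 30 WHERE name = 'alice'")
conn.commit()

# A transaction that cannot complete is aborted and rolled back:
# the update in waiting is discarded and the prior state is restored.
conn.execute("UPDATE account SET balance = balance - 999 WHERE name = 'alice'")
conn.rollback()

balance = conn.execute(
    "SELECT balance FROM account WHERE name = 'alice'"
).fetchone()[0]
print(balance)  # 70: the committed update stands, the rolled-back one vanishes
```

The rolled-back update never reaches the database file itself, which is exactly the point made above: until the commit, the change exists only as a working image in temporary storage.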
As the user works through the transaction on a screen or monitor, it will appear as though the updates are being made, because the data on screen is up to date; in reality, the database and files behind the scenes hold only this working image, or log, of what is required. Rolling back a transaction thus does not involve any change to the database itself, but it does involve the removal of any data created by the transaction, which could otherwise confuse the transactions that follow, or any backups and restores. What do we mean by this?
Well, this temporary store of data in waiting is normally kept in the DBMS's log file. This same log file is also used to restore the DBMS when it fails entirely.
All good DBMSs use a combination of backup files and logs of transactions to enable the system to roll forward and get back to where it was before a crash or loss of data occurred. The log holds a record of the transactions made against the database, and the effect of rolling these transactions forward against the backup file is to redo all the updates that were lost, bringing the database back to the state it was in before the failure occurred. The backup file and log thus have to be synchronized. Some DBMSs perform this synchronization automatically; some need the database administrator to control the frequency of backups and the allocation of log files.
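The roll-forward idea can be sketched in plain Python, with no real DBMS involved. This is only a conceptual model: the dictionary stands in for the database, the list of entries for the log, and the `committed` flag for the commit records a real log would carry:

```python
# Snapshot of the database taken at backup time.
backup = {"stock": 50}

# Log entries written after the backup; only committed work is redone.
log = [
    {"txn": 1, "key": "stock", "new_value": 40, "committed": True},
    {"txn": 2, "key": "stock", "new_value": 35, "committed": True},
    {"txn": 3, "key": "stock", "new_value": 10, "committed": False},  # aborted
]

def roll_forward(backup, log):
    """Restore the backup, then redo every committed update in log order."""
    db = dict(backup)  # start from the backup snapshot
    for entry in log:
        if entry["committed"]:  # aborted transactions are skipped
            db[entry["key"]] = entry["new_value"]
    return db

recovered = roll_forward(backup, log)
print(recovered["stock"])  # 35: the state just before the failure
```

Note how the aborted transaction's entry is skipped during recovery; this is why, as the next paragraph says, a failed transaction's updates must be removed from (or marked in) the log so they cannot be confused with committed work.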
If a transaction fails, the updates in waiting need to be removed from the log file so they do not get confused with real updates that have been committed.
Locks
A lock is a preventative measure that stops other processes from updating, or optionally even using, the data while an update is being performed. It is there to ensure that the update is consistently applied. It is also there to ensure that users looking at the data don't make decisions or do calculations on data that is in the process of changing.
Where you have many concurrent users and you allow one user to update data that another is trying to update at the same time, you can get into a real mess without locking. Let's look at an example to demonstrate.
One user may be trying to enter an order at the same time as another. The first user checks the stock and all seems OK. He or she then starts to create the order and the delivery. In the meantime, however, the second user may have started to update an order of massive proportions, which completely removes all the goods in stock.
The first user may think he or she has enough stock to fulfill the order, so this first user creates the order. He or she then tries to create the delivery and can't, because there isn't enough stock to fulfill the order. Inconsistency!
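The two-user scenario above is a classic check-then-act race, and it can be simulated in a few lines of plain Python (no real DBMS; the quantities and item name are invented for the illustration):

```python
# Shared stock record, with no locking of any kind.
stock = {"widgets": 10}

def check_stock(qty):
    return stock["widgets"] >= qty

def take_stock(qty):
    stock["widgets"] -= qty

# User 1 checks the stock: 10 >= 8, all seems OK.
user1_ok = check_stock(8)

# In the meantime, user 2's order of massive proportions goes through,
# completely removing all the goods in stock.
user2_ok = check_stock(10)
take_stock(10)

# User 1 now acts on a stale check and creates an impossible delivery.
if user1_ok:
    take_stock(8)

print(stock["widgets"])  # -8: the database is now inconsistent
```

With a lock held from user 1's check through to user 1's update, user 2 would have been made to wait, and the second check would have failed cleanly instead of driving the stock negative.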
You can only get a consistent update if you apply a transaction from beginning to end without allowing any amendments to the data by other users in the meantime. This is what locks are for; they stop others from updating the data while you are doing it.
Resource managers apply locks differently, and this is just one of the ways in which you can see whether they are good products or not. Locks can be applied at the record level, the page level, the file level, or the item level. The lower and more detailed the level at which locks are applied by the Resource manager, the better, as performance is then less likely to be degraded.
I think you should be able to see that if your DBMS locked at the file level when you updated data, you could end up stopping the updates of hundreds of other users. If your DBMS allowed locks at the record level, however, you would be unlikely to lock out many other users at all, because the likelihood of two of you working on the same data at the same time is very remote.
Locks are usually applied when the command to commit is given. In other words, when the command to commit is finally given, the DBMS or Resource manager should also apply locks to the data being updated. This deserves some explanation for those of you designing conversational online updates.
Conversational styles of dialogue force a different style from that used in batch, where locks can be held over the duration of a transaction. A conversational style of update should normally not apply locks until the last exchange in the dialogue, once the user has committed. This is because during the conversation you would be locking every other user out of that data, and if your user sat there for hours trying to complete the transaction, he or she could bring the entire system to a standstill. Worse still, he or she may get bored and walk away, leaving the entire transaction hanging and unresolved with all locks in place.
So locks are normally applied at the end, on the commit, which means that before a commit is issued the validity of the update may be checked by revalidation, comparison of time stamps, comparison of version numbers, or other methods, depending on the capabilities of the DBMS.
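The version-number check mentioned above can be sketched with sqlite3. This is one common way of implementing the idea (often called optimistic locking; the table and column names here are invented): the row carries a version column, and the commit-time UPDATE succeeds only if the version is still the one the conversation originally read.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE stock (item TEXT PRIMARY KEY, qty INTEGER, version INTEGER)"
)
conn.execute("INSERT INTO stock VALUES ('widgets', 10, 1)")
conn.commit()

# During the conversation: read the data, remember its version, take no locks.
qty, version = conn.execute(
    "SELECT qty, version FROM stock WHERE item = 'widgets'"
).fetchone()

# Meanwhile another user commits an update, bumping the version number.
conn.execute("UPDATE stock SET qty = 0, version = version + 1 "
             "WHERE item = 'widgets'")
conn.commit()

# At commit time: revalidate by matching the remembered version.
cur = conn.execute(
    "UPDATE stock SET qty = ?, version = version + 1 "
    "WHERE item = 'widgets' AND version = ?",
    (qty - 3, version),
)
print(cur.rowcount)  # 0: the version changed, so the stale update is rejected
```

When the row count comes back as zero, the application knows the data changed under it and can re-read and retry, rather than silently overwriting another user's committed work.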