In the previous blog I described how to implement optimistic locking logic by using the ORA_HASH function.

Here I’ll explore another option that I frequently use – the last time the row was changed.

Let’s create a test table by using the DEPT table as a base, with one additional column – MOD_TIME.

For the MOD_TIME column I’ll choose the most precise data type – TIMESTAMP (or one of its variations, TIMESTAMP WITH TIME ZONE or TIMESTAMP WITH LOCAL TIME ZONE) – and add data from the original DEPT into our TTEST table.

The most important thing to know is that you can define the fractional-seconds precision of the TIMESTAMP data type.

It can be in a range from 0 to 9, where 6 is the default value.
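To illustrate, here is a minimal sketch of how the fractional-seconds precision is declared (the table name T_PRECISION is my own example, not part of the test case below):

```sql
-- TIMESTAMP(n) sets the fractional-seconds precision; n can be 0..9
create table t_precision (
  ts_default timestamp,     -- same as timestamp(6), the default
  ts_coarse  timestamp(0),  -- no fractional seconds
  ts_fine    timestamp(9)   -- maximum (nanosecond) precision
);
```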

After importing, the content of the TTEST table is the following:

create table ttest
tablespace users
as select * from dept where 1 = 2;

alter table ttest add constraint pk_deptno primary key (deptno);

alter table ttest add (mod_time timestamp default systimestamp not null);

insert into ttest (deptno, dname, loc)
select deptno, dname, loc from dept;


column deptno FORMAT 99
column dname format a15
column loc format a15
column mod_time format a30

select * from ttest;

DEPTNO DNAME           LOC             MOD_TIME
------ --------------- --------------- -----------------------------
    10 ACCOUNTING      NEW YORK        06.09.2018 12:20:41,988701000
    20 RESEARCH        DALLAS          06.09.2018 12:20:41,988701000
    30 SALES           CHICAGO         06.09.2018 12:20:41,988701000
    40 OPERATIONS      BOSTON          06.09.2018 12:20:41,988701000

In the described approach the whole logic is based on the last time a row was changed. For that reason I need to add maintenance logic for the MOD_TIME column.

There are two approaches that come to my mind:
– trigger
– application logic

Each of them has pros and cons.

The main benefit of the trigger approach is that the same maintenance logic is applied no matter whether one or many different applications perform DML against the TTEST table.

The only exception is the compound trigger type, which is not applicable here.

You can find more about compound triggers on the following page:
Compound triggers performance and use cases


The major downside is that you lose the benefit of set-based processing (the main reason for having SQL in today’s coding) – the ability to apply changes to many rows at once.

Instead, the trigger fires row by row, which is much slower (depending on how many rows are changed).
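For reference, a minimal sketch of such a maintenance trigger could look like this (the trigger name TRG_TTEST_MOD_TIME is my own choice, not from the test case):

```sql
-- Row-level trigger that stamps MOD_TIME on every insert or update.
-- It fires once per affected row, which is the performance cost discussed above.
create or replace trigger trg_ttest_mod_time
before insert or update on ttest
for each row
begin
  :new.mod_time := systimestamp;
end;
/
```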

The latter approach – using application logic – doesn’t suffer from the performance penalty, but you have to keep in mind that the same logic needs to be applied in every other (existing or new) application that manipulates data in the TTEST table.

I’ll describe the application logic approach. From that you can easily construct the trigger logic if you are confident that a trigger fits best for your particular case.

If I want to change/update only one row, here is the logic you can use:


declare
  l_deptno   number(2,0);
  l_dname    varchar2(14);
  l_loc      varchar2(13);
  l_mod_time timestamp(6);
begin
  select deptno, dname, loc, mod_time
    into l_deptno, l_dname, l_loc, l_mod_time
    from ttest
   where deptno = 30;

  dbms_output.put_line('------------------------------------------------------------------------------------');
  dbms_output.put_line('Original values');
  dbms_output.put_line('l_deptno   = ' || l_deptno);
  dbms_output.put_line('l_dname    = ' || l_dname);
  dbms_output.put_line('l_loc      = ' || l_loc);
  dbms_output.put_line('l_mod_time = ' || l_mod_time);
  dbms_output.put_line('------------------------------------------------------------------------------------');
end;
/


Original values
l_deptno = 30
l_dname = SALES
l_loc = CHICAGO
l_mod_time = 06.09.2018 12:20:41,988701

Let’s imagine that some other user made changes on the same row.

Execute the following command from the other session:

update ttest set loc = 'SAN FRANCISCO', mod_time = systimestamp where deptno = 30;
1 row updated.

commit;
Commit complete.

Now we can see that the last modification time (MOD_TIME column) has changed.

select mod_time from ttest where deptno = 30;

MOD_TIME
-----------------------------
06.09.2018 15:21:15,132

Now I try to complete my transaction and update the same row after someone else has already changed it:

update ttest set loc = 'NEW ORLEANS' where deptno = 30 and mod_time = to_timestamp('06.09.2018 12:20:41,988701', 'DD.MM.YYYY HH24:MI:SS,FF6');
0 rows updated.

As we can see, no rows have been updated, because the MOD_TIME value differs from the value read when the transaction started.

We can conclude that the logic works as expected: we don’t lock any row until the very last moment, and we can be confident that no data will be corrupted if someone else modifies the same row in the middle of my logic.
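Inside the PL/SQL block shown earlier, the same check can be expressed with the previously selected l_mod_time variable instead of a string literal – a sketch, assuming the block’s declarations are still in scope:

```sql
-- Optimistic update: succeeds only if nobody changed the row in the meantime
update ttest
   set loc      = 'NEW ORLEANS',
       mod_time = systimestamp
 where deptno   = l_deptno
   and mod_time = l_mod_time;

if sql%rowcount = 0 then
  -- Someone else modified the row: re-read and retry, or report a conflict
  raise_application_error(-20001, 'Row changed by another session');
end if;
```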


Some points I want to stress here:

  1. In case we need to update multiple rows, instead of regular variables I can use collections or temp tables. The rest of the logic stays the same.
  2. From the other session we need to maintain the MOD_TIME column value:
    update ttest set loc = 'SAN FRANCISCO', mod_time = systimestamp where deptno = 30;
    If we forget to update the MOD_TIME column, the whole logic will fail and data will be corrupted.
    That’s the main advantage of the trigger logic.
  3. In the application logic I’m updating the row of interest at the right moment:
    update ttest set loc = 'NEW ORLEANS' where deptno = 30 and mod_time = to_timestamp('06.09.2018 12:20:41,988701', 'DD.MM.YYYY HH24:MI:SS,FF6');
    In this case only 1 row will be affected.
    But what if I’m updating 100000 rows?
    The update statement stays the same, as I’m using SQL (set-based processing) to update all the rows at once.
    With the trigger approach, 100000 rows would be updated row by row, which is much slower compared with the application approach.
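Point 1 above can be sketched with a collection and FORALL – a minimal example, assuming we first read all rows of interest together with their MOD_TIME values:

```sql
declare
  -- Collection holding the rows we read, including their MOD_TIME stamps
  type t_rows is table of ttest%rowtype;
  l_rows t_rows;
begin
  -- Read the rows of interest together with their current MOD_TIME
  select * bulk collect into l_rows
    from ttest
   where deptno in (10, 30);

  -- ... the application works with the data here ...

  -- Optimistic bulk update: each row is changed only if its MOD_TIME is unchanged
  forall i in 1 .. l_rows.count
    update ttest
       set loc      = l_rows(i).loc,
           mod_time = systimestamp
     where deptno   = l_rows(i).deptno
       and mod_time = l_rows(i).mod_time;
end;
/
```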



The major downside of implementing optimistic locking based on the last modification time is that we need to maintain the MOD_TIME column, either with a database trigger or by implementing the logic in all applications that use the TTEST table.

As you can see, both approaches have their pros and cons.

In theory, logic based on the last modification time and the TIMESTAMP data type may also fail if two sessions update the same row within the same fractional-second tick, but that is really only a theoretical risk: a collision at that precision (6 decimal places by default, 9 at most) is unlikely even on the most active database system in the world.

I found the fewest downsides with the ORA_HASH approach described in the previous article, especially in combination with a virtual column, as I don’t need to maintain the hash column with triggers or application code. Everything is done automatically.

Read more about the ORA_HASH approach to optimistic locking at:

Performance tuning by using optimistic instead of pessimistic locking


Read more about virtual columns on the following page:

How to index only rows of interest

