The supplemental logging feature, together with Streams and CDC (Change Data Capture), has been available for a long time and is used mainly as an integration option, especially for loading data from transactional (OLTP) systems into Data Warehouse (DW) staging tables.
During my engagement on a project for a large retailer, I noticed significant performance penalties caused by the use of an inappropriate technology (supplemental logging) for that particular case.
The next picture shows the top wait event.
It’s very easy to enable the supplemental logging feature. You only need to execute the following command at the table level:
ALTER TABLE ttest ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
or at the database level:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA(ALL) COLUMNS;
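To check whether supplemental logging is already enabled, and to switch it off again, you can query the data dictionary. A minimal sketch, assuming the database-level command from above and a session with access to the DBA/V$ views:

```sql
-- Database-level status (YES/NO flags):
SELECT supplemental_log_data_min,
       supplemental_log_data_all
  FROM v$database;

-- Table-level log groups created by ALTER TABLE ... ADD SUPPLEMENTAL LOG DATA:
SELECT log_group_name, log_group_type
  FROM dba_log_groups
 WHERE table_name = 'TTEST';

-- Disabling the feature again:
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```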
What is not easy is knowing how that technology works under the hood, which option to choose and, most importantly, whether it is smart to use it for your particular case at all.
As there are many options available, you definitely need to grasp the theory around which kind of supplemental logging to use, as I won’t describe it here.
By executing the above commands, you put the ttest table (or the whole database) into supplemental logging mode, meaning that any change to the ttest table will generate redo that contains all columns of the table.
As a consequence, the online redo log files will be much larger compared with the case when you don’t use the feature at all.
Here I will provide an example that can be considered a worst case, where I’m updating only one column out of 75.
In the first case I’ll update one column while supplemental logging is disabled.
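The statistics in the listings below look like SQL*Plus autotrace output; assuming that is how they were gathered, the reporting can be switched on with:

```sql
-- Enable execution statistics reporting in SQL*Plus
-- (requires the PLUSTRACE role):
SET AUTOTRACE ON STATISTICS
```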
TEST@test> update ttest set pack_ind = 'Y';

730896 rows updated.

Statistics
----------------------------------------------------------
         99  recursive calls
     781272  db block gets
      56016  consistent gets
      27544  physical reads
  224032664  redo size
        832  bytes sent via SQL*Net to client
        788  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
     730896  rows processed
In the next case, I’ll roll back the previous transaction, enable supplemental logging and repeat the same operation.
TEST@test> rollback;

SYS@test> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA(ALL) COLUMNS;

TEST@test> update ttest set pack_ind = 'Y';

730896 rows updated.

Statistics
----------------------------------------------------------
       1059  recursive calls
     900472  db block gets
      28949  consistent gets
      27883  physical reads
  748128484  redo size
        829  bytes sent via SQL*Net to client
        787  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          7  sorts (memory)
          0  sorts (disk)
     730896  rows processed
As can be seen, without supplemental logging on all columns the redo size is 224,032,664 bytes, while after enabling it the redo size grows to 748,128,484 bytes, which is 3.34 times larger.
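The same comparison can also be made without autotrace, by reading the 'redo size' session statistic before and after each update. A sketch, assuming the session has access to the V$ views:

```sql
-- Redo generated by the current session so far, in bytes
-- (sample it before and after the update and subtract):
SELECT n.name, m.value
  FROM v$mystat   m
  JOIN v$statname n ON n.statistic# = m.statistic#
 WHERE n.name = 'redo size';

-- Ratio between the two measured redo sizes above:
SELECT ROUND(748128484 / 224032664, 2) AS redo_ratio FROM dual;  -- 3.34
```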
I always stress the importance of knowledge, especially in the design phase of a project, because it is much more difficult, time consuming and costly to correct failures and bad decisions later, when the solution is already in production and the client is facing poor performance and cannot scale with the increased load.
You need to know how a technology works, plus the context surrounding it, to know in advance whether it is appropriate for the problem you are trying to solve.
In this case you need to know the following facts:
– Oracle deprecated the Streams/CDC and supplemental logging technology starting from version 11g of the database. The main reason behind that decision is to make more money from GoldenGate, another Oracle product that is licensed separately (Streams/CDC and supplemental logging are “free”, provided you have already purchased Oracle Database Enterprise Edition).
– The volume of data that is expected to change, and its impact on the transactional system.
Every technology has its limitations and use cases. While supplemental logging is a good choice for integration at low to medium volume, it is not at high volume.
– The type of changes is extremely important.
The impact of supplemental logging depends on the type of operation (INSERT/DELETE/UPDATE), the volume (number of changed rows), the column types, the column lengths, the number of columns in the table under change, etc.
In one of my next posts I’ll describe a better alternative for this case.