March 11, 2008: On setting commit_write
Recently, a company called me because they had a performance problem. When I asked how that problem manifested itself, they told me that they needed to migrate an application with lots of data to a newer release and that
they used a «tool» that migrated that data. The estimates for this tool to complete the migration amounted to some 90 hours, which was too long, because the migration had to fit into a weekend. So, I went to see
them to investigate that problem.
I ran the tool with a sample set of the data and observed v$session_event. By far, most of the time was spent on the
log file sync event, an indication that the tool commits too often. In fact, by
tracing the offending session, it turned out that each insert statement was followed by a commit.
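The observation came from statements roughly along the following lines. This is only a sketch, not the exact statements I used; the sid and serial# (here 123 and 4567) stand for the tool's session:
-- wait events for the tool's session, worst first
select event, total_waits, time_waited
  from v$session_event
 where sid = 123
 order by time_waited desc;

-- trace that session, including wait events
exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 4567, waits => true)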
Needless to say, this commit behaviour is not only unnecessary, it also deteriorates performance. However, we had no influence over the tool; we had to accept it as is. Fortunately, there's the
commit_write parameter, which I changed to
batch,nowait. This greatly enhanced the performance.
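The parameter can be changed per session or instance wide. Since we could not change the tool itself, the instance-wide variant was the natural choice; something along these lines (a sketch, not the exact statement used at the customer):
alter system set commit_write = 'batch,nowait';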
Here is a test case to reproduce it:
First, I create two tables:
commit_write_from and commit_write_to ...
set feedback off
set serveroutput on size 1000000 format wrapped

create table commit_write_to (
  col_1   number,
  col_2   varchar2(500)
);

create table commit_write_from (
  col_1   number,
  col_2   varchar2(500)
);
... and fill 50 thousand records into
commit_write_from:
begin
  for r in 1 .. 50000 loop
    insert into commit_write_from values (
      r,
      dbms_random.string('a', 500)
    );
  end loop;
end;
/

commit;
The content of this table will later be copied into commit_write_to.
Then, I create two tables with the same structure as v$session_event:
create table event_pre  as select * from v$session_event where rownum = 0;
create table event_post as select * from v$session_event where rownum = 0;
These tables will hold a snapshot of v$session_event taken at the beginning and at the end of the insertion, making it possible
for me to compare the times spent on the most relevant wait events.
Finally, I create the procedure that iterates over each record in
commit_write_from and inserts that record into commit_write_to.
For convenience, the procedure creates the mentioned v$session_event snapshots and prints the result using dbms_output.
create procedure many_commits as
  start_time  number;
  end_time    number;
  my_sid      number;
  num_rows    number := 5;
begin
  -- determine the sid of the current session
  select sid into my_sid
    from sys.v_$session
   where audsid = sys_context('userenv', 'sessionid');

  delete event_pre;
  delete event_post;

  -- snapshot of v$session_event before the insertion
  insert into event_pre
  select * from v$session_event where sid = my_sid;

  start_time := dbms_utility.get_time;

  -- copy each record individually, committing after every insert
  for r in (select * from commit_write_from) loop
    insert into commit_write_to values (
      r.col_1,
      r.col_2
    );
    commit;
  end loop;

  end_time := dbms_utility.get_time;

  -- snapshot of v$session_event after the insertion
  insert into event_post
  select * from v$session_event where sid = my_sid;

  dbms_output.new_line;
  dbms_output.put_line('time: ' || (end_time - start_time) / 100 || ' seconds');

  -- print the four wait events with the largest increase in time_waited
  for r in (
    select post.event,
           (post.time_waited - nvl(pre.time_waited, 0)) / 100 time_waited,
            post.total_waits - nvl(pre.total_waits, 0)        total_waits
      from event_pre pre
           right join event_post post using (event_id)
     order by post.time_waited - nvl(pre.time_waited, 0) desc
  ) loop
    num_rows := num_rows - 1;
    exit when num_rows = 0;

    dbms_output.put_line(
      rpad(r.event, 40)                 ||
      to_char(r.time_waited, '9999.99') || ' ' ||
      to_char(r.total_waits, '9999999')
    );
  end loop;

end many_commits;
/
Now, I execute the procedure with the four different settings for the commit_write parameter.
First, I use the default for this parameter, which is immediate,wait:
-- default
alter session set commit_write=immediate,wait;

truncate table commit_write_to;

exec many_commits;
The procedure needs some 56 seconds to complete, of which roughly half is spent committing (log file sync).
time: 56.68 seconds

log file sync                              28.43    49998
db file scattered read                      4.42      179
log file switch completion                   .98        1
log file switch (checkpoint incomplete)      .04        2
Changing the parameter to batch,wait ...
alter session set commit_write=batch,wait;

truncate table commit_write_to;

exec many_commits;
... doesn't significantly change the times:
time: 58.36 seconds

log file sync                              29.02    49998
db file scattered read                      3.67      184
log file switch (checkpoint incomplete)     1.01        2
log file switch completion                   .98        1
However, changing the parameter to immediate,nowait ...
alter session set commit_write=immediate,nowait;

truncate table commit_write_to;

exec many_commits;
... reduces the overall time to 25 seconds (almost half of what it was before), and only about a third of the time is spent committing:
time: 25.22 seconds

db file scattered read                      9.11      188
log buffer space                             .64        1
db file sequential read                      .48       50
log file switch completion                   .14        3
It gets even better by setting the parameter to batch,nowait ...
alter session set commit_write=batch,nowait;

truncate table commit_write_to;

exec many_commits;
... less than 20 seconds to complete and a mere 20% of the time spent committing:
time: 19.87 seconds

db file scattered read                      3.23      183
log file switch (checkpoint incomplete)     1.10        3
log file switch completion                   .99        1
db file sequential read                      .24       53
Of course, this time improvement doesn't come for free: durability (the D in ACID) is sacrificed. That is, although the application (here: the PL/SQL procedure) commits a record, it cannot be sure that
this commit has indeed been written to the online redo logs.
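If the committing code can be changed, the same trade-off can also be made explicitly per commit rather than per session or instance: the COMMIT statement accepts a WRITE clause. A sketch:
-- affects only this one commit, equivalent to commit_write=batch,nowait
commit write batch nowait;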