Data can be corrupted in different ways: an I/O issue when reading data from storage, for example, or a memory issue when updating data. The db_block_checking parameter controls the block-checking behavior of an Oracle database.

This post presents this parameter along with some tests to measure its impact on the database.

According to the documentation, the db_block_checking parameter specifies whether Oracle performs block checking. Block checking can prevent data corruption in memory, but it does not protect against logical corruption caused by a user inserting or updating wrong data.

Settings

db_block_checking is a dynamic parameter; it can be set at session or system level: alter system/session set db_block_checking = Value
From Oracle 8.1.6 to 10g, only two values were possible: TRUE or FALSE. Since Oracle 10gR2, four values are possible: OFF, LOW, MEDIUM, and FULL.
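
For example, to change the value and verify it afterwards (MEDIUM and FULL below are only illustrative values):

-- instance-wide change
ALTER SYSTEM SET db_block_checking = MEDIUM;
-- session-only change
ALTER SESSION SET db_block_checking = FULL;
-- verify the current value (SQL*Plus)
show parameter db_block_checking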

Value       Effect
OFF/FALSE   No checks on blocks in user tablespaces. Semantic block checking is always enabled on the SYSTEM tablespace.
LOW         Basic block header checks after block changes in memory.
MEDIUM      Same as LOW, plus semantic block checking for all table blocks (except index-organized tables).
FULL/TRUE   Same as MEDIUM, plus semantic block checking on index blocks.


A semantic check consists of Oracle going through the data in the block to make sure it is logically self-consistent.

Because the checks are performed whenever a block is changed, the main benefit of this feature is that an internal error is raised as soon as a corrupted block is detected: corruption is caught at the moment of the change, before it can spread further.

The main disadvantage is the potential overhead on the system: from 1% up to 10% according to Oracle. In practice, it depends on the proportion of inserts and updates in the workload.

This parameter can be set explicitly by a statement, but it is also set implicitly through the db_ultra_safe initialization parameter: db_ultra_safe=DATA_ONLY sets db_block_checking to MEDIUM, and db_ultra_safe=DATA_AND_INDEX sets it to FULL.
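
A minimal sketch of the db_ultra_safe route (note that db_ultra_safe is a static parameter, so it must be set in the spfile and requires an instance restart):

ALTER SYSTEM SET db_ultra_safe = DATA_ONLY SCOPE=SPFILE;
-- after restarting the instance:
show parameter db_block_checking
-- expected: db_block_checking = MEDIUM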

Test protocol

To get an idea of the impact of enabling db_block_checking, I measured the time needed to insert data into a table. The number of rows was chosen so that the reference time, with the feature disabled and using a simple loop insert, exceeded 10 minutes.

I ran the tests on the table DEPT in the SCOTT schema with two different methods extracted from a small data generator I wrote for an internal event. The first method uses a simple loop, whereas the second uses a collection and the FORALL statement. This lets us see how the impact differs when data is bulk processed.

Here are the two procedures used to generate data (note that both versions use the same procedure name, so one replaces the other when compiled):

create or replace procedure generateDEPT(p_count IN number DEFAULT 50, p_start IN number DEFAULT 50, p_increment IN number DEFAULT 10)
IS
  depno number := p_start;
BEGIN
  IF p_count > 0 THEN
    -- Row-by-row method: one INSERT statement executed per department
    FOR x IN 1 .. p_count LOOP
      INSERT INTO DEPT VALUES (depno, 'DEPT_' || to_char(depno), 'CITY_' || to_char(depno));
      depno := depno + p_increment;
    END LOOP;
  END IF;
END;
/
 
create or replace procedure generateDEPT(p_count IN number DEFAULT 50, p_start IN number DEFAULT 50, p_increment IN number DEFAULT 10)
IS
  v_col_limit CONSTANT number := 1000;  -- maximum collection size per FORALL batch
  depno       NUMBER;
  v_dept_gen  PLS_INTEGER;              -- rows generated in the current batch
  v_dept_left PLS_INTEGER;              -- rows still to be generated
  v_dept_list dbms_sql.Number_Table;
BEGIN
  depno := p_start;
  v_dept_gen := 0;
  v_dept_left := p_count;
  v_dept_list.DELETE;
  WHILE v_dept_left > 0
  LOOP
    -- Size the next batch: at most v_col_limit rows
    IF v_dept_left <= v_col_limit THEN
      v_dept_gen := v_dept_left;
    ELSE
      v_dept_gen := v_col_limit;
    END IF;
    -- Fill the collection in memory
    FOR x IN 1 .. v_dept_gen LOOP
      v_dept_list(x) := depno;
      depno := depno + p_increment;
    END LOOP;
    -- Bulk insert the whole batch in a single SQL call
    FORALL i IN v_dept_list.FIRST .. v_dept_list.LAST
      INSERT INTO DEPT VALUES (v_dept_list(i), 'DEPT_' || to_char(v_dept_list(i)), 'CITY_' || to_char(v_dept_list(i)));
    v_dept_left := v_dept_left - v_dept_gen;
    v_dept_list.DELETE;
  END LOOP;
END;
/
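
The FORALL version binds the whole collection and executes the insert once per batch of up to 1000 rows, avoiding one PL/SQL-to-SQL context switch per row; this is what explains the large difference between the two methods in the results below.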


During the tests, the database was in noarchivelog mode and, to avoid any caching surprises, the environment was reset before each test: the SCOTT schema was dropped and recreated, db_block_checking was set to the desired value, and the shared pool and buffer cache were flushed. I used Steven Feuerstein's sf_timer package, based on DBMS_UTILITY.get_cpu_time, to measure the execution time of the procedure.
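
A sketch of the reset sequence, run as SYSDBA (the script used to recreate SCOTT is not shown here, and MEDIUM is just an example value):

DROP USER scott CASCADE;
-- recreate the SCOTT schema here (e.g. from the standard demo script)
ALTER SYSTEM SET db_block_checking = MEDIUM;
ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM FLUSH BUFFER_CACHE;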

Here is the core of the test:

set serveroutput on
begin
   sf_timer.start_timer;
   generateDEPT(8000000, 100, 1);
   sf_timer.show_elapsed_time ('Generation time for db_block_checking=Value');
END;
/

Here is an example of the output:

Generation time for db_block_checking=OFF - Elapsed CPU : 693.75 seconds.
PL/SQL procedure successfully completed.

Results

Time to insert 8,000,000 rows (in seconds):

                       OFF      MEDIUM   FULL
loop generator         693.75   914.17   1289.64
collection generator   41.09    49.18    62.13


With these results, we can make three observations:

  • The impact can be significant: about +86% for the loop insert with FULL block checking (1289.64 / 693.75 ≈ 1.86).
  • The step from MEDIUM to FULL costs more than the step from OFF to MEDIUM.
  • Bulk processing is roughly 17 to 21 times faster than loop inserts, so the impact of this feature depends on how the application processes data.

Conclusion

Before enabling this feature, the logical consistency of the database should be checked with RMAN or DBVERIFY to make sure existing data is not already corrupted. Also, because the impact on the application is hard to predict, this parameter should not be changed on a live production system without prior validation in a test/qualification environment.

I recommend setting db_block_checking to MEDIUM when creating a database. With this setting, index blocks are not checked by Oracle, so the overhead is lower, but the data can still be considered protected because indexes can be rebuilt from the tables.