By Franck Pachot

The Oracle In-Memory option is for reporting. Oracle has chosen a hybrid approach: maintain the In-Memory Column Store in addition to the buffer cache. Don't forget our In-Memory event, but for the moment, let's try the 1M row insert when the target table is IN MEMORY.
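(As a prerequisite, the column store must be sized with the static INMEMORY_SIZE parameter, which requires an instance restart. Here is a minimal sketch — the 400M value is just an assumption for a small test VM:)

SQL> -- size the In-Memory Column Store (static parameter: restart needed)
SQL> alter system set inmemory_size=400M scope=spfile;
SQL> -- after restart, verify:
SQL> show parameter inmemory_size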

12c

I've done the previous variations on 11g because I wanted to use the Developer Days VM, which also has TimesTen installed. But now I want to test some 12c features. So I ran the same test as the first variation: the one on a table with one index (the primary key), conventional inserts, committed only at the end. The one that takes 35 seconds on 11.2.0.2.
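For reference, here is the baseline table as a sketch — I assume the same structure as the in-memory one created later, just without the INMEMORY clause:

SQL> create table DEMO ("id" number primary key, "text" varchar2(15), "number" number);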

Here is the same on an equivalent VM with 12.1.0.2:

SQL> set timing on
SQL> declare
  2   type people_array is varray(12) of varchar(15);
  3   people people_array := people_array( 'Marc', 'Bill', 'George', 'Eliot', 'Matt', 'Trey', 'Tracy','Greg', 'Steve', 'Kristina', 'Katie', 'Jeff' );
  4   people_count number :=people.COUNT;
  5   n number;
  6  begin
  7   for i in 0..1e6 loop
  8    n:=trunc(dbms_random.value(0,10000));
  9    insert into DEMO values( i , people( dbms_random.value(1,people_count) ) , n );
 10    -- 
 11   end loop;
 12   commit;
 13  end;
 14  /

PL/SQL procedure successfully completed.

Elapsed: 00:01:17.03

Yes, it's 2 times longer and I don't know why. I tried different versions and for the moment it seems that the CPU consumption in 11.2.0.4 and 12c is doubled when compared to 11.2.0.3. I didn't investigate further and I'll probably come back to that after attending Luca Canali's session next week about modern Linux tools for Oracle troubleshooting.
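(If you want to measure the CPU part yourself, here is a sketch using DBMS_UTILITY.GET_CPU_TIME, which returns the session CPU time in hundredths of a second, wrapped around the PL/SQL block above:)

SQL> set serveroutput on
SQL> variable cpu0 number
SQL> exec :cpu0 := dbms_utility.get_cpu_time;
SQL> -- ... run the 1M rows insert block here ...
SQL> exec dbms_output.put_line( (dbms_utility.get_cpu_time - :cpu0)/100 || ' seconds of CPU' );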

Please don't draw any conclusions from it now. In real life, 12c is not slower than 11g. This is just one test case in one specific context and there is nothing to say about it before understanding what happens. This test is just there to set the baseline at 01:17 on that 12c database.

In-Memory

The question of the day is: how does this 1 million row insert behave on an IN MEMORY table? We know that there is an overhead to maintain both the buffer cache and the In-Memory Column Store. And we know that this is probably not a problem, because In-Memory often lets us get rid of a few indexes, and the gain in index maintenance compensates for the overhead.
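(As an illustration only — the index name here is hypothetical — that is the kind of statement you would run once the column store serves the reporting scans:)

SQL> -- hypothetical: a reporting index made redundant by in-memory scans,
SQL> -- whose maintenance cost we no longer pay on each insert
SQL> drop index DEMO_REPORTING_IDX;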

SQL> create table DEMO ("id" number primary key, "text" varchar2(15), "number" number)
  2  INMEMORY PRIORITY CRITICAL;
Table created.
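(We can check the in-memory attributes from the dictionary — a sketch using the USER_TABLES columns added in 12.1.0.2:)

SQL> select table_name, inmemory, inmemory_priority
  2  from user_tables where table_name='DEMO';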

I've created the table in-memory with PRIORITY CRITICAL, so population starts in the background without waiting for a scan. The load duration is not significantly higher than without in-memory:

PL/SQL procedure successfully completed.
Elapsed: 00:01:23.35	

However, the in-memory journal has been updated:
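(A sketch of the kind of query that shows these session statistics, joining v$mystat with v$statname:)

SQL> select name,value from v$mystat join v$statname using(statistic#)
  2  where name like 'IM space private journal%'
  3     or name like 'IM transactions%';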

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
IM space private journal bytes allocated                           25100288
IM space private journal bytes freed                               25100288
IM transactions rows journaled                                       394895

The 25MB matches the size of my 1M rows, but not all rows have been populated in memory yet:

SQL> select segment_name,inmemory_size,bytes,bytes_not_populated from v$im_segments;

SEGMENT_NA    INMEMORY_SIZE            BYTES      BYTES_NOT_POPULATED
---------- ---------------- ---------------- ------------------------
DEMO              1,179,648       23,068,672               11,354,112
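(Population may still be running in the background; the POPULATE_STATUS column of the same view tells us — a sketch:)

SQL> select segment_name, populate_status from v$im_segments;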

If I query the table, we still read some blocks from the buffer cache:

SQL> set autotrace on 
SQL> select count(*) from demo;

  COUNT(*)
----------
   1000002

Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

----------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |     1 |    28   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE             |      |     1 |            |          |
|   2 |   TABLE ACCESS INMEMORY FULL| DEMO |  1025K|    28   (0)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
         13  recursive calls
          1  db block gets
       4681  consistent gets
       1795  physical reads
     145188  redo size
        545  bytes sent via SQL*Net to client
        551  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
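(To see how much of that scan was served by the column store rather than the buffer cache, we can look at the 'IM scan' session statistics — a sketch:)

SQL> select name,value from v$mystat join v$statname using(statistic#)
  2  where name like 'IM scan rows%' and value>0;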

And after that scan, the whole table is now populated in memory:

SQL> select segment_name,inmemory_size,bytes,bytes_not_populated from v$im_segments;

SEGMENT_NA    INMEMORY_SIZE            BYTES      BYTES_NOT_POPULATED
---------- ---------------- ---------------- ------------------------
DEMO              8,585,216       23,068,672                        0

But even then, we need to read some blocks from the buffer cache:

SQL> set autotrace on
SQL> select count(*) from demo;

  COUNT(*)
----------
   1000002

Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

----------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |     1 |    28   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE             |      |     1 |            |          |
|   2 |   TABLE ACCESS INMEMORY FULL| DEMO |  1025K|    28   (0)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1381  consistent gets
          0  physical reads
          0  redo size
        545  bytes sent via SQL*Net to client
        551  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Conclusion

The update of In-Memory structures is not very well documented. From this simple test, it seems that updating the In-Memory Column Store has very limited overhead when storing the modifications into the in-memory transactional log. However, reading the rows just after the insert seems not to be optimal: not all rows had been journaled into the transaction log, and even when everything is populated, the buffer cache is still read. If you want to know a bit more, you can check Mahmoud Hatem's investigations about that, with deep tracing.

In this post there are more questions than answers, but the fact is that maintaining the In-Memory Column Store is not a big overhead, which makes it possible on our OLTP databases.