By Franck Pachot

In a ‘traditional’ row store, indexes are maintained at the same time as the rows are changed. It’s different with the In-Memory Column Store: changes are maintained by background processes. When rows are changed, the Snapshot Metadata Unit (SMU) logs the changes and the In-Memory Compression Units (IMCU) are repopulated asynchronously. Let’s see an example.

Test case

I create a one-million-row table with the INMEMORY attribute and ensure that it is populated in memory:


SQL> create table DEMO inmemory tablespace IOPS as select rownum num,mod(rownum,10) ten from xmltable('1 to 1000000');
Table created.
 
SQL> alter session set "_inmemory_populate_wait"=true;
Session altered.
 
SQL> set timing on
SQL> select count(*) from DEMO;
 
  COUNT(*)
----------
   1000000
 
Elapsed: 00:00:04.32

I’ve used the “_inmemory_populate_wait” underscore parameter to make the session wait for the initial population, so that I can see how long it takes. It is not set in the following sessions.
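Another way to check that the initial population is complete, without relying on the underscore parameter, is to query V$IM_SEGMENTS (a sketch; BYTES_NOT_POPULATED drops to 0 once population is done):

```sql
-- population status of the DEMO segment in the IM column store:
-- BYTES_NOT_POPULATED reaches 0 when the initial population is complete
select segment_name, populate_status, inmemory_size, bytes_not_populated
from v$im_segments
where segment_name = 'DEMO';
```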

Invalidation and repopulation statistics

For the moment, all rows are populated in memory:


SQL> select to_char(sysdate,'hh24:mi:ss') "now",load_scn,total_rows,invalid_rows,invalid_blocks from V$IM_SMU_HEAD;
 
now      LOAD_SCN         TOTAL_ROWS INVALID_ROWS INVALID_BLOCKS
-------- ---------------- ---------- ------------ --------------
16:03:19 940920160            491079            0              0
16:03:19 940920160            508921            0              0
 
SQL> select prepopulated,repopulated,trickle_repopulated,num_rows,num_blocks,time_to_populate,to_char(timestamp,'hh24:mi:ss') timestamp from V$IM_HEADER;
 
PREPOPULATED REPOPULATED TRICKLE_REPOPULATED   NUM_ROWS NUM_BLOCKS TIME_TO_POPULATE TIMESTAM
------------ ----------- ------------------- ---------- ---------- ---------------- --------
           0           0                   0     491079        874              534 16:03:16
           0           0                   0     508921        908              506 16:03:16

As far as I know, TIME_TO_POPULATE is in milliseconds. It’s the lowest value I’ve ever seen with In-Memory.

Consistent gets

Everything being in memory, a select should not read any block from the buffer cache:


Execution Plan
----------------------------------------------------------
Plan hash value: 4000794843
-----------------------------------------------------------------------------------
| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |   100K|   781K|    32   (4)| 00:00:01 |
|*  1 |  TABLE ACCESS INMEMORY FULL| DEMO |   100K|   781K|    32   (4)| 00:00:01 |
-----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - inmemory("TEN"<0)
       filter("TEN"<0)
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        307  bytes sent via SQL*Net to client
        489  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

The 3 consistent gets are (probably) from the dictionary.
The IM statistics show that all rows have been read from the IM column store:


NAME                                                                  VALUE
---------------------------------------------------------------- ----------
IM scan rows                                                        1000000
IM scan rows optimized                                              1000000
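The ‘IM scan’ figures above are session statistics; the following query (a sketch joining V$MYSTAT to V$STATNAME) is one way to get them:

```sql
-- In-Memory scan statistics for the current session
select n.name, s.value
from v$mystat s
join v$statname n on n.statistic# = s.statistic#
where n.name like 'IM scan%' and s.value > 0;
```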

Update

Let’s update 20000 rows (that is, 2% of the rows):


SQL> UPDATE DEMO set ten=ten+1 where rownum<=20000;
20000 rows updated.
 
SQL> commit;
Commit complete.

Invalid rows

The updated rows are now stale in the IM column store: they are marked as ‘invalid’ in the IMCS metadata:


SQL> select to_char(sysdate,'hh24:mi:ss') "now",load_scn,total_rows,invalid_rows,invalid_blocks from V$IM_SMU_HEAD;
 
now      LOAD_SCN         TOTAL_ROWS INVALID_ROWS INVALID_BLOCKS
-------- ---------------- ---------- ------------ --------------
16:03:30 940920160            491079        20000             35
16:03:30 940920160            508921            0              0
 
SQL> select prepopulated,repopulated,trickle_repopulated,num_rows,num_blocks,time_to_populate,to_char(timestamp,'hh24:mi:ss') timestamp from V$IM_HEADER;
 
PREPOPULATED REPOPULATED TRICKLE_REPOPULATED   NUM_ROWS NUM_BLOCKS TIME_TO_POPULATE TIMESTAM
------------ ----------- ------------------- ---------- ---------- ---------------- --------
           0           0                   0     491079        874              534 16:03:16
           0           0                   0     508921        908              506 16:03:16

No repopulation yet; the updated rows are only marked as invalid.
There are 20000 invalid rows, stored in 35 blocks in the row store.

Hybrid select

In that state, a select will have to read those 20000 rows from the buffer cache. Here are the statistics:


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         38  consistent gets
          0  physical reads
          0  redo size
        307  bytes sent via SQL*Net to client
        489  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed
 
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
IM scan rows                                                        1000000
IM scan rows optimized                                              1000000
IM scan rows cache                                                    20000
IM scan blocks cache                                                     35

35 additional consistent gets, which is the number of invalid blocks fetched from the buffer cache.

This is where In-Memory may not be very efficient for tables with concurrent updates. The TABLE ACCESS INMEMORY FULL plan is used, but it may have to get some blocks from the row store. Of course, if the changes are not yet committed, or were committed after the start of your query, some undo blocks have to be read as well.

Repopulation

Repopulation is asynchronous. Even after my select, which had to read the 20000 updated rows, nothing has changed in the IMCS:


SQL> select to_char(sysdate,'hh24:mi:ss') "now",load_scn,total_rows,invalid_rows,invalid_blocks from V$IM_SMU_HEAD;
 
now      LOAD_SCN         TOTAL_ROWS INVALID_ROWS INVALID_BLOCKS
-------- ---------------- ---------- ------------ --------------
16:03:35 940920160            491079        20000             35
16:03:35 940920160            508921            0              0
 
SQL> select prepopulated,repopulated,trickle_repopulated,num_rows,num_blocks,time_to_populate,to_char(timestamp,'hh24:mi:ss') timestamp from V$IM_HEADER;
 
PREPOPULATED REPOPULATED TRICKLE_REPOPULATED   NUM_ROWS NUM_BLOCKS TIME_TO_POPULATE TIMESTAM
------------ ----------- ------------------- ---------- ---------- ---------------- --------
           0           0                   0     491079        874              534 16:03:16
           0           0                   0     508921        908              506 16:03:16

The IMCO background process that coordinates repopulation wakes up every two minutes and checks whether any population tasks need to be completed, so we may suffer that suboptimal behaviour (reading invalid blocks from the buffer cache) for a few minutes.
Here is how I wait until there are no invalid rows:


SQL> declare n number; begin for i in 1..500 loop select sum(invalid_rows) into n from V$IM_SMU_HEAD; exit when n=0; dbms_lock.sleep(1); end loop; end;
  2  /
PL/SQL procedure successfully completed.

It took a bit less than 2 minutes here.
Let’s check now the statistics:


SQL> select to_char(sysdate,'hh24:mi:ss') "now",load_scn,total_rows,invalid_rows,invalid_blocks from V$IM_SMU_HEAD;
 
now      LOAD_SCN         TOTAL_ROWS INVALID_ROWS INVALID_BLOCKS
-------- ---------------- ---------- ------------ --------------
16:05:17 940920320            491079            0              0
16:05:17 940920160            508921            0              0
 
SQL> select prepopulated,repopulated,trickle_repopulated,num_rows,num_blocks,time_to_populate,to_char(timestamp,'hh24:mi:ss') timestamp from V$IM_HEADER;
 
PREPOPULATED REPOPULATED TRICKLE_REPOPULATED   NUM_ROWS NUM_BLOCKS TIME_TO_POPULATE TIMESTAM
------------ ----------- ------------------- ---------- ---------- ---------------- --------
           0           1                   1     491079        874              539 16:05:16
           0           0                   0     508921        908              506 16:03:16

No invalid rows anymore, and 1 repopulation.
It’s a ‘trickle’ repopulation here – the one that occurs when only a small percentage of the rows is invalid. When more rows are invalid, a full repopulation is done.

I don’t know the exact threshold for trickle repopulation. I’ve shown the 20000-row update here because it’s about the limit for trickle repopulation. When updating 3% of the rows I got a full repopulation. Here are the statistics after changing 30000 rows:


PREPOPULATED REPOPULATED TRICKLE_REPOPULATED   NUM_ROWS NUM_BLOCKS TIME_TO_POPULATE TIMESTAM
------------ ----------- ------------------- ---------- ---------- ---------------- --------
           0           1                   0     491079        874              577 16:51:08
           0           0                   0     508921        908              466 16:50:55

There is something to note here. It seems that the trickle repopulation took the same time (5 seconds) as the full repopulation. I have a small example here, and there may be some minimal time that is not proportional to the amount of data to repopulate.
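The trickle repopulation behaviour seems to be driven by undocumented underscore parameters. As a sketch (to be run as SYS; these X$ fixed tables and the parameter names are version dependent and unsupported), the trickle-related ones can be listed like this:

```sql
-- list undocumented parameters related to trickle repopulation (as SYS)
-- names and defaults are version dependent and unsupported
select i.ksppinm name, v.ksppstvl value
from x$ksppi i
join x$ksppcv v on i.indx = v.indx
where i.ksppinm like '%trickle%';
```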

So what?

For sure, if you want to implement the In-Memory option, you should do a Proof of Concept and measure the benefit in your context. Then you have to analyze which tables to put in memory, which compression level to use, how many repopulation servers to configure (and you need CPU for them), and the maximum percentage of them to use for trickle repopulation.
The In-Memory option is a very good way to improve analytics on OLTP databases, but it is not as straightforward as flipping a switch.
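For the population resources mentioned above, the documented parameters to look at are INMEMORY_MAX_POPULATE_SERVERS and INMEMORY_TRICKLE_REPOPULATE_SERVERS_PERCENT, for example:

```sql
-- number of background population servers, and the percentage of their
-- time that trickle repopulation is allowed to use
show parameter inmemory_max_populate_servers
show parameter inmemory_trickle_repopulate_servers_percent
```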