By Franck Pachot

I haven’t finished my investigation of the 12c multitenant dictionary. Here, I’m checking how metadata links are managed by the dictionary cache (aka rowcache).
As I did previously, I’ll create a metadata link function in my lab environment (this is not for production: for the moment, metadata/object links are supported only for Oracle-supplied objects).

SQL> connect / as sysdba
Connected.

SQL> create or replace function DEMO_MDL_FUNCTION sharing=metadata
  2    return varchar2 as
  3    begin return 'XXXXXXXXXX'; end;
  4  /
Function created.

SQL> alter session set "_oracle_script"=true container=PDB1;
Session altered.

SQL> create or replace function DEMO_MDL_FUNCTION sharing=metadata
  2    return varchar2 as
  3    begin return 'XXXXXXXXXX'; end;
  4  /
Function created.

It’s not difficult. You set _oracle_script to true and run the DDL in all containers (CDB$ROOT and the PDBs) with the undocumented sharing=metadata syntax.
I’m checking object_id in both containers:

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select object_id,con_id from cdb_objects where object_name='DEMO_MDL_FUNCTION';

 OBJECT_ID     CON_ID
---------- ----------
     92013          1
     92371          3

And I flush the shared pool in order to flush the rowcache component:

SQL> alter system flush shared_pool;
System altered.

SQL> select * from v$sgastat where name like 'row cache';

POOL         NAME                            BYTES     CON_ID
------------ -------------------------- ---------- ----------
shared pool  row cache                     8640160          1

Now I connect to the PDB and call the function:

SQL> select DEMO_MDL_FUNCTION from dual;

DEMO_MDL_FUNCTION
---------------------------------------
XXXXXXXXXX

Then, back in CDB$ROOT, I check what is in the rowcache:

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select cache_name,con_id,utl_raw.cast_to_varchar2(key) from v$rowcache_parent
  2  where utl_raw.cast_to_varchar2(key) like chr(37)||' DEMO_MDL_FUNCTION '||chr(37) order by 1,3,2;

CACHE_NAME     CON_ID UTL_RAW.CAST_TO_VARC
---------- ---------- --------------------
dc_objects          1       DEMO_MDL_FUNCT
dc_objects          1       DEMO_MDL_FUNCT
dc_objects          3       DEMO_MDL_FUNCT
dc_objects          3       DEMO_MDL_FUNCT

I have two entries for each container. I’ve used the object only from the PDB (con_id=3), but there is also an entry for CDB$ROOT (con_id=1). Is that a problem? I don’t know. The dictionary cache is not a large component of the shared pool, so the size overhead is probably not an issue. However, I’ll have to investigate the consequences for dictionary cache contention.

Something interesting is that the ‘row cache’ component of the shared pool is reported only for CDB$ROOT:

SQL> select * from v$sgastat where name like 'row cache';

POOL         NAME                            BYTES     CON_ID
------------ -------------------------- ---------- ----------
shared pool  row cache                     8640160          1

SQL> alter session set container=PDB1;
Session altered.

SQL> select * from v$sgastat where name like 'row cache';
no rows selected

So, do we have an entry for CDB$ROOT in addition to the PDB ones in that CDB$ROOT shared pool? Let’s dump it:

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> alter session set events 'immediate trace name row_cache level 8';
Session altered.

Then I get the trace file name:

SQL> column tracefile new_value tracefile
SQL> select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('USERENV','SID'));

TRACEFILE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ora_6125.trc

and grep for my function name with 4 lines before and 18 lines after:

SQL> host grep -B 4 -A 18 "name=DEMO_MDL_FUNCTION" &tracefile

The result is here:

BUCKET 26928:
  row cache parent object: addr=0x6cd8c4e0 cid=8(dc_objects) conid=1 conuid=1
  hash=102a692f typ=11 transaction=(nil) flags=00000002 inc=1, pdbinc=1
  objectno=92013 ownerid=0 nsp=1
  name=DEMO_MDL_FUNCTION
  own=0x6cd8c5b0[0x6cd8c5b0,0x6cd8c5b0] wat=0x6cd8c5c0[0x6cd8c5c0,0x6cd8c5c0] mode=N
  status=VALID/-/-/-/-/-/-/-/-
  set=0, complete=FALSE
  data=
  00000000 45440011 4d5f4f4d 465f4c44 54434e55 004e4f49 00000000 00000000
  00000000 00000001 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 0001676d 0000ffff ffffffff 02737808 08271508
  08027378 78063115 15080273 00010631 00510000 00000000 00000000 00000000
  00000000 00000006 00000000 00000000 00000000 00000000 7a7871ab b01c11e7
  61dbc633 23ade680 00000001 00000000 00000000 102a692f 6cd8c4e0 00000000
  7ec6d318 00000000 7ec6d318 00000000 00000001 9a2a6093 6cd8c4e0 00000000
  7ec64958 00000000 7ec64958 00000000
  BUCKET 26928 total object count=1
--
BUCKET 49724:
  row cache parent object: addr=0x61783968 cid=8(dc_objects) conid=3 conuid=2451138288
  hash=f5dac23b typ=11 transaction=(nil) flags=00000002 inc=1, pdbinc=1
  objectno=92371 ownerid=0 nsp=1
  name=DEMO_MDL_FUNCTION
  own=0x61783a38[0x61783a38,0x61783a38] wat=0x61783a48[0x61783a48,0x61783a48] mode=N
  status=VALID/-/-/-/-/-/-/-/-
  set=0, complete=FALSE
  data=
  00000000 45440011 4d5f4f4d 465f4c44 54434e55 004e4f49 00000000 00000000
  00000000 00000001 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 000168d3 0000ffff ffffffff 02737808 08271508
  08027378 78073115 15080273 00010731 00510000 00000000 00000000 00000000
  00000000 00000006 00000000 00000000 00000000 00000000 7a7871ab b01c11e7
  61dbc633 23ade680 00000001 00000000 00000000 f5dac23b 61783968 00000000
  7ecc63d8 00000000 7ecc63d8 00000000 00000001 f452019d 61783968 00000000
  7ec059f8 00000000 7ec059f8 00000000
  BUCKET 49724 total object count=1
--
BUCKET 414:
  row cache parent object: addr=0x61783968 cid=8(dc_objects) conid=3 conuid=2451138288
  hash=f5dac23b typ=11 transaction=(nil) flags=00000002 inc=1, pdbinc=1
  objectno=92371 ownerid=0 nsp=1
  name=DEMO_MDL_FUNCTION
  own=0x61783a38[0x61783a38,0x61783a38] wat=0x61783a48[0x61783a48,0x61783a48] mode=N
  status=VALID/-/-/-/-/-/-/-/-
  set=0, complete=FALSE
  data=
  00000000 45440011 4d5f4f4d 465f4c44 54434e55 004e4f49 00000000 00000000
  00000000 00000001 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 000168d3 0000ffff ffffffff 02737808 08271508
  08027378 78073115 15080273 00010731 00510000 00000000 00000000 00000000
  00000000 00000006 00000000 00000000 00000000 00000000 7a7871ab b01c11e7
  61dbc633 23ade680 00000001 00000000 00000000 f5dac23b 61783968 00000000
  7ecc63d8 00000000 7ecc63d8 00000000 00000001 f452019d 61783968 00000000
  7ec059f8 00000000 7ec059f8 00000000
  BUCKET 414 total object count=1
--
BUCKET 24724:
  row cache parent object: addr=0x6cd8c4e0 cid=8(dc_objects) conid=1 conuid=1
  hash=102a692f typ=11 transaction=(nil) flags=00000002 inc=1, pdbinc=1
  objectno=92013 ownerid=0 nsp=1
  name=DEMO_MDL_FUNCTION
  own=0x6cd8c5b0[0x6cd8c5b0,0x6cd8c5b0] wat=0x6cd8c5c0[0x6cd8c5c0,0x6cd8c5c0] mode=N
  status=VALID/-/-/-/-/-/-/-/-
  set=0, complete=FALSE
  data=
  00000000 45440011 4d5f4f4d 465f4c44 54434e55 004e4f49 00000000 00000000
  00000000 00000001 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  00000000 00000000 00000000 0001676d 0000ffff ffffffff 02737808 08271508
  08027378 78063115 15080273 00010631 00510000 00000000 00000000 00000000
  00000000 00000006 00000000 00000000 00000000 00000000 7a7871ab b01c11e7
  61dbc633 23ade680 00000001 00000000 00000000 102a692f 6cd8c4e0 00000000
  7ec6d318 00000000 7ec6d318 00000000 00000001 9a2a6093 6cd8c4e0 00000000
  7ec64958 00000000 7ec64958 00000000
  BUCKET 24724 total object count=1
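The dump format is regular enough to extract the interesting fields programmatically. Here is a minimal sketch of a parser for the "row cache parent object" entries; the field names (cid, conid, objectno, name) come from the trace above, but the regexes and the line-by-line state machine are my own assumptions about the layout, not an official format description.

```python
import re

# Sketch of a parser for a row_cache level-8 trace dump.
# It collects (cache name, conid, objectno, name) per parent object.
HEADER = re.compile(r"cid=\d+\((?P<cache>\w+)\)\s+conid=(?P<conid>\d+)")
OBJECTNO = re.compile(r"objectno=(?P<objectno>\d+)")

def parse_dump(text):
    entries, current = [], None
    for line in text.splitlines():
        m = HEADER.search(line)
        if m:
            # Start of a "row cache parent object" entry
            current = {"cache": m["cache"], "conid": int(m["conid"])}
        elif current is not None:
            m = OBJECTNO.search(line)
            if m:
                current["objectno"] = int(m["objectno"])
            elif line.strip().startswith("name="):
                current["name"] = line.split("=", 1)[1].strip()
                entries.append(current)
                current = None
    return entries

# Sample copied verbatim from the dump above (BUCKET 26928)
sample = """\
  row cache parent object: addr=0x6cd8c4e0 cid=8(dc_objects) conid=1 conuid=1
  hash=102a692f typ=11 transaction=(nil) flags=00000002 inc=1, pdbinc=1
  objectno=92013 ownerid=0 nsp=1
  name=DEMO_MDL_FUNCTION
"""
print(parse_dump(sample))
```

Run against the full trace file, this gives a quick per-container inventory of dc_objects entries without eyeballing each bucket.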

Do we learn something else from that? Not a lot, except that we have actually dumped distinct entries for the two containers, and the object_id matches.
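The data words themselves confirm it. Each 32-bit word in the dump is displayed byte-swapped, so reversing the bytes of each word recovers the memory content: a 2-byte name length followed by the object name, and further down the object number (0x1676D = 92013 in the CDB$ROOT entry, 0x168D3 = 92371 in the PDB one). The decoding below uses the words copied from the conid=1 dump; the length-prefix interpretation is my reading of the dump, not documented behavior.

```python
# Decode the byte-swapped 32-bit words of the row cache dump.
def words_to_bytes(words):
    """Reverse the 4 bytes of each dumped word to get memory order."""
    return b"".join(bytes.fromhex(w)[::-1] for w in words)

# First words of the data= section for the conid=1 entry (BUCKET 26928)
dump_words = ["00000000", "45440011", "4d5f4f4d", "465f4c44",
              "54434e55", "004e4f49"]
raw = words_to_bytes(dump_words)

# Assumption: a 2-byte little-endian name length precedes the name
name_len = int.from_bytes(raw[4:6], "little")   # 0x0011 = 17
name = raw[6:6 + name_len].decode("ascii")
print(name_len, name)   # 17 DEMO_MDL_FUNCTION

# The object_id is also visible as a plain word in the dump
print(int("0001676d", 16))  # 92013
print(int("000168d3", 16))  # 92371
```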

So it seems that sharing the dictionary does not share the dictionary cache. I’m not sure whether the overhead is significant, though. But if you want to play with those object/metadata links, then don’t forget to flush the shared pool when you see something weird.