
Beyond In-Memory, what's new in 12.1.0.2?

By Franck Pachot

It's just a patchset: the delivery that is there to stabilize a release with all the bug fixes. But it comes with a lot of new features as well, and not only the one that has been advertised as the future of the database. It's a huge release.

Let’s have a look at what’s new.

First, it seems that it will be the only patchset for 12.1.

Then there is the In-Memory option, awaited for a while. Larry Ellison has demoed it on Exadata and even on the Oracle SPARC M6. Of course, when you have 32 TB of memory, we can understand the need for In-Memory optimized storage. For a more real-life usage of that option, stay tuned on our blog: we investigate the features in the context of our customers' concerns, to fit their needs. For example, In-Memory addresses cases where some customers use Active Data Guard to offload reporting or real-time analytics to another server. Unfortunately, the In-Memory column store is not populated on a physical standby; we will probably have to wait for 12.2 for that.

In-Memory is an option, so it is available only in Enterprise Edition.
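As a sketch of how the option is enabled (the size and table name here are illustrative assumptions, not from this post):

```sql
-- Reserve part of the SGA for the In-Memory column store (needs an instance restart)
alter system set inmemory_size=1G scope=spfile;

-- Mark a table for population into the column store
alter table demo_sales inmemory priority high;
```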

There are other new features related to large memory. A part of the buffer cache can be dedicated to big tables (you just set the percentage) to be cached for In-Memory Parallel Query. And there is also a mode where the whole database stays in the buffer cache. Still about performance and Parallel Query, a new transformation has been introduced to optimize the GROUP BY operation when joining a fact table to dimensions.
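Those two caching features are controlled as follows (the percentage is an illustrative value, not a recommendation):

```sql
-- Dedicate 40% of the buffer cache to big-table scans
alter system set db_big_table_cache_percent_target=40;

-- Keep the whole database in the buffer cache
-- (issued while the database is in mount mode)
alter database force full database caching;
```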

The second new feature is the range-partitioned hash cluster. Oracle CLUSTER segments are a very old feature, but not widely used. A hash cluster is the fastest way to access a row, because the key can be transformed directly into a rowid. Unfortunately, maintenance is not easy, especially when the volume increases. Partitioning is the way to ease maintenance of growing tables but, until today, we couldn't partition a hash cluster. I mean, not in a supported way, because Oracle uses it on SPARC for the TPC benchmarks – applying a specific patch (10374168) for it.

Well, the good news is that we can finally partition hash clusters with the simple syntax:

create cluster democ1 (sample_time timestamp, sample_id number)
hashkeys 3600 hash is sample_id size 8192
partition by range (sample_time) (
  partition P12 values less than ( timestamp'2014-04-26 12:00:00' )
);
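A table stored in that partitioned cluster could then be created like this (the table name and the extra value column are hypothetical):

```sql
create table demot1 (
  sample_time timestamp,
  sample_id   number,
  value       number
) cluster democ1 (sample_time, sample_id);
```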

Another nice feature is Attribute Clustering. Lots of other RDBMS have the ability to arrange rows, but Oracle puts an inserted row anywhere in a heap table, depending only on where some free space is left. The alternative is an IOT, of course. But it can be good to try to cluster rows on one or several columns: it's better for index access, for cache efficiency, for storage indexes (or In-Memory min/max pruning), for ILM compression, etc. We can finally do it, and I'll blog soon about that.

Attribute Clustering is not an option, but it is available only in Enterprise Edition.
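A minimal sketch of the syntax (table and column names are made up for the example):

```sql
-- Rows are physically ordered on DT during direct-path loads
-- and data movement operations (not on conventional inserts)
create table demo_ac (
  id  number,
  dt  date,
  val number
) clustering by linear order (dt)
  yes on load yes on data movement;
```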

I think those two features are my favorite ones, because the best optimization we can do, without refactoring the application design, is to place data in the way it will be retrieved.

The trend today is to store unstructured data as JSON. XML was nice, but it's verbose. JSON is easier to read, and even PostgreSQL can store JSON in its latest version. So Oracle has it in 12.1.0.2: you can store and index it. Once again, stay tuned on this blog to see how it works.
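The basic idea is a check constraint on a regular column, plus SQL/JSON functions to query it (names here are illustrative):

```sql
create table demo_json (
  id  number primary key,
  doc varchar2(4000) constraint doc_is_json check (doc is json)
);

insert into demo_json values (1, '{"name":"dbi services","country":"CH"}');

-- Extract a scalar value from the document
select json_value(doc, '$.country') from demo_json where id = 1;
```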

Something important was missing in Oracle SQL: how do you grant access to a read-only user? You grant only the SELECT privilege? That's too much, because with the SELECT privilege we can lock a table (with LOCK TABLE or SELECT ... FOR UPDATE). So we now have a READ privilege to prevent that. That's my favorite new feature for developers.
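The grant itself is straightforward (schema, table, and grantee are hypothetical):

```sql
grant read on scott.emp to app_reader;

-- With READ (unlike SELECT), these now fail for app_reader:
--   lock table scott.emp in exclusive mode;
--   select * from scott.emp for update;
```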

Then there are a few improvements in multitenant, such as the possibility to save the state of a pluggable database so that it is automatically opened when the CDB starts up. We already addressed that in our Database Management Kit. An undocumented parameter, _multiple_char_set_cdb, lets us imagine that we will be able to have different character sets for the PDBs – probably in the future. Currently it is set to false.
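Saving the state looks like this (pdb1 is a placeholder name):

```sql
alter pluggable database pdb1 open;
alter pluggable database pdb1 save state;
-- pdb1 now opens automatically at the next CDB startup
```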

And once again, as beta testing partners, we put pressure to get a fix for what we consider a serious availability bug. The behaviour in beta was even worse regarding CDB availability, and I finally had a bug opened (Bug 19001390 – PDB SYSTEM TABLESPACE MEDIA FAILURE CAUSES THE WHOLE CDB TO CRASH) that should be fixed in 12.1.0.2.

About fixes, some restrictions are now gone: we can finally use ILM with multitenant, and we can have supplemental logging during an online partition move. And you can have Flashback Data Archive in multitenant as well.
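For reference, the operation concerned is the online partition move introduced in 12c (table and partition names are invented):

```sql
-- Online partition move, now compatible with supplemental logging
alter table sales move partition p_2014 online;
```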

All that is good news, but remember: even if it's only the 4th digit that is increased in the version number, it's a brand new version with lots of new features. So, when do you plan to upgrade? 11g is supported until January 2015. Extended support is free until January 2016, given that you are on the terminal patchset (11.2.0.4). So either you don't want to be on the latest release, and you will have to upgrade to 11.2.0.4 before the end of the year, waiting for 12.2 maybe in 2016; or you want those new features and will probably go to 12.1.0.2 in 2015.

Talking about upgrades, there's bad news. We thought that multitenancy could accelerate upgrade time: because the data dictionary is shared, you just have to plug a PDB into a CDB of a newer version and it's upgraded. We showed that in our 12c new features workshop by applying a PSU. But we have tested the upgrade to 12.1.0.2 the same way, and it's not that simple. Plugging is quick when you have only new patches that did not change the dictionary. It's still true for a PSU whose dictionary changes are limited to the root container. But when you upgrade to 12.1.0.2, you have to synchronize all the PDB dictionaries (all that magic behind object links and metadata links), and that takes the same time as upgrading a non-CDB. Conclusion: you don't save time when you do it by plug/unplug.
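The plug/unplug itself is a short sequence (pdb1 and the XML path are placeholders):

```sql
-- In the source CDB:
alter pluggable database pdb1 close immediate;
alter pluggable database pdb1 unplug into '/tmp/pdb1.xml';

-- In the target (newer) CDB:
create pluggable database pdb1 using '/tmp/pdb1.xml' nocopy;
alter pluggable database pdb1 open;  -- this is where the dictionary synchronization happens
```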

But I have good news as well, because I've tested a 1-minute downtime migration to 12.1.0.2. Dbvisit replicate, the affordable replication solution, supports multitenant in its latest version, both as source and target. If your application is compatible (which is easy to check with the 30-day trial), then it's a good way to migrate without stress and with minimal downtime. It's available for Standard Edition as well, but currently the download can install only an Enterprise Edition.


Franck Pachot

Principal Consultant / Database Evangelist