This was the first edition of Techdays, the Oracle community event in Belgium. The event took place in Antwerp over the past 2 days, and speakers came from around the world to talk about their experience on focused subjects. It was really amazing to hear such high-level talks.
And it was a great pleasure for me, because I worked in Belgium for several years before.
I’ll try to give you a glimpse of what I found most interesting among the sessions I chose.
Cloud: you cannot ignore it anymore!
Until now, I did not have much interest in the cloud, because my job is actually helping customers build on-premises (that is, not in the cloud) environments. But even if you can live without the cloud, you cannot ignore it anymore: it touches budget, infrastructure optimization, strategy, flexibility and scalability.
Cloud brings a modern pay-for-what-you-need-now model, compared to monolithic and costly infrastructures you’ll have to pay off over years. Cloud brings a service for a problem.
Cloud providers have understood that sooner or later, customers will move at least parts of their infrastructure into the cloud.
Going into the cloud is not a simple yes-or-no question. It’s a real project you’ll have to study, as it requires rethinking nearly everything. Migrating your current infrastructure to the cloud without any changes would probably be a mistake. Don’t consider the cloud as just putting your servers elsewhere.
I learned that Oracle datacenters are actually not dedicated datacenters: most of the time, their cloud machines are located in existing datacenters from different providers, sometimes making your connection to the cloud only meters away from your actual servers!
And for those who still don’t want to move their critical data somewhere outside, Oracle brings another solution named Cloud at Customer. It’s basically the same as the public cloud in terms of management, but Oracle delivers the servers into your own datacenter, keeping your data secured. At least for those who are thinking that way.
You probably know that EXADATA is the best database machine you can buy, and it’s true. But EXADATA is not the machine every customer needs (ODA is actually much more affordable and popular): only very demanding databases can benefit from this platform.
Gurmeet Goindi, the EXADATA product manager at Oracle, told us that EXADATA will keep widening the gap with classic technologies.
For example, I heard that EXADATA’s maximum number of PDBs will increase to 4’000 in one CDB. Even if you’ll probably never reach this limit, it’s quite an amazing number.
While I didn’t hear about new hardware coming shortly, major Exadata enhancements will come with the 19c software stack release.
19c is coming
Maria Colgan from Oracle introduced the 19c new features and enhancements.
We were quite used to previous Oracle release patterns, R1 and R2: R1 being the very first release with low customer adoption, and R2 being the mature release with longer-term support. But after the big gap between 12cR1 and 12cR2, Oracle changed the rules and moved to a more Microsoft-like versioning: the version number is now the year of product delivery. Is there still a long-term release like 11gR2 was? For sure, and 19c will be the one you were waiting for. You may know that 18c was some kind of 12.2.0.2; 19c will be the latest version of the 12.2 kernel, 12.2.0.3. If you plan to migrate your databases from previous releases this year, you should know that 19c will be available shortly, and that it could be worth the wait.
19c will bring a bunch of new features, like automatic indexing, which could be part of a new option. PDBs can now have separate encryption keys, instead of only one for the whole CDB.
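As a rough illustration, automatic indexing is expected to be driven through the DBMS_AUTO_INDEX PL/SQL package presented in the 19c previews; assuming that package ships as announced, enabling it could look like this sketch:

```sql
-- Sketch only: assumes the DBMS_AUTO_INDEX package announced for 19c.
-- Let the database create, validate and publish indexes on its own:
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');

-- More cautious mode: only report index candidates, without creating them:
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'REPORT ONLY');

-- Review what the feature decided during its last run:
SELECT DBMS_AUTO_INDEX.REPORT_LAST_ACTIVITY() FROM dual;
```

The interesting design point is the cautious workflow: candidate indexes are first created invisible, validated against real workload, and only then made visible.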
The InMemory option will receive enhancements and now supports storing objects in both column and row format. InMemory content can now differ between the primary and the active standby, making the distribution of read-only statements more efficient. And if your memory is not big enough to store all your InMemory data, it will soon be possible (on Exadata only) to put the columnar data into flash, to keep the benefit of columnar scans outside memory. It makes sense.
We also got a brief overview of the new “runaway queries” feature: there will be a kind of quarantine for statements that reach resource plan limits. The goal is to avoid the need for the DBA to connect and kill the session to free up system resources.
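To give an idea of how this could be used: the quarantine is expected to be exposed through a new DBMS_SQLQ package in 19c. Assuming that API, and with a purely hypothetical sql_id, a manual quarantine could look like this sketch:

```sql
-- Sketch only: assumes the DBMS_SQLQ package announced for 19c.
-- The sql_id below is hypothetical.
DECLARE
  qname VARCHAR2(128);
BEGIN
  -- Create a quarantine configuration for one problematic statement...
  qname := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(sql_id => '8vu7s907prbgr');
  -- ...and quarantine it once it burns more than 10 seconds of CPU:
  DBMS_SQLQ.ALTER_QUARANTINE(
    quarantine_name => qname,
    parameter_name  => 'CPU_TIME',
    parameter_value => '10');
END;
/
```

Once quarantined, the statement is rejected immediately on the next execution instead of consuming resources again until the limit is hit.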
The Autonomous Database will also be there in 19c, but at first limited to Cloud at Customer Exadatas. It will take some years for all databases to become autonomous.
Zero Downtime Migration tool
What you’ve been dreaming of for years is now nearly there: a new automatic migration tool without downtime. But don’t dream too much, because it seems to be limited to migrations to the cloud, and source and target should be in the same version (11g, 12c or 18c). I actually don’t know if it will support migrations from on-premises to on-premises.
Ricardo Gonzalez told us that with this new tool, you will be able to migrate to the cloud very easily. It’s a one-button approach, with a lot of intelligence inside the tool to maximize the safety of the operation. As described it looks great, and everything is considered: pre-checks, preparation, migration, post-migration, post-checks and so on. You’ll still have to do the final switchover yourself, and yes, it’s based on Data Guard, so you can trust the tool as it relies on something reliable. And if something goes wrong, you can still move back to the on-premises database.
Another great tool, presented by Mike Dietrich, is coming with 19c. It’s a Java-based tool able to plan and manage database upgrades with a single input file describing the environment. It seems very useful if you have a lot of databases to upgrade. Refer to MOS Note 2485457.1 if you’re interested.
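For illustration, such an input file is expected to be a simple key/value config describing each database to upgrade. The SIDs, paths and exact keywords below are hypothetical placeholders, not the tool’s confirmed syntax:

```
# Sketch of a possible config file; names and paths are made up.
global.autoupg_log_dir=/home/oracle/upgrade-logs

upg1.sid=DB112
upg1.source_home=/u01/app/oracle/product/11.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.0.0/dbhome_1
upg1.start_time=NOW
```

You would then point the Java tool at this file and let it run the pre-checks, the upgrade itself and the post-upgrade fixups for every database listed.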
So many interesting things to learn in these 2 days! Special thanks to Philippe Fierens and Pieter Van Puymbroeck for the organization.