I started the second day at UKOUG with an Exadata Management Deep Dive with Oracle Enterprise Manager Cloud 12c. Through a number of plugins, Oracle Enterprise Manager Cloud 12c offers an integrated view of hardware and software. In Enterprise Manager Cloud 12c you have access to the following menus:
Storage Cell Management:
Cell home page and performance page
Workload distribution by database, to interpret immediately what is happening
The performance workload distribution may help analyze performance problems
InfiniBand Network Management:
The InfiniBand network and switches are discovered as part of the Database Machine target
Other hardware components are checked as well, such as a power supply failure, a fan failure, or a temperature out of range.
OEM Cloud 12c can display the monitoring of each Exadata component. The common performance issues concern the network and the disks on the hardware side, and the SQL or the database on the software side.
Concerning the network, the Cloud 12c console is able to detect a bad port or a loose cable: ports with errors are marked in red, and details of the problem can be found in the metric error page. It is also possible to perform InfiniBand administration to disable a bad port or to enable ports.
Likewise, disk failures, over-utilized hard disks, and under-utilized flash disks are checked by the administration console. A notion of cell health is defined, with a description of where the failure is and what the failure is. The Exadata system health is computed using information collected from the network, the disks, and ASM; you can visualize the Exadata health in the database target performance page.
One interesting point is the visualization of hard disk over-utilization. The performance page of the Exadata Storage Server target provides real-time and historical utilization information. You can see at what time of day the I/O bandwidth is exhausted, identify the database causing the increased I/O usage, and determine what causes the I/O activity.
Another practical point is the possibility to guarantee I/O bandwidth for all database targets, and to be sure that no single database runs away with all the I/O bandwidth. The notion of an inter-database resource plan is useful when you need to manage I/O priorities across those databases: it allocates I/O resources across databases by means of resource plans configured on each storage cell.
This Oracle presentation was very interesting and gave a complete and amazing overview of what EM Cloud 12c can administer in an Exadata environment.
Even after having tested the Oracle 12c new features concerning RMAN myself, I decided to attend Andy Colvin's RMAN 12c new features session. I was reassured: the new features are the same!
There is a new administrative privilege named SYSBACKUP, which is the recommended way to connect with RMAN.
RMAN supports pluggable databases: it can back up a whole CDB, and it is possible to perform point-in-time recovery on a specific PDB.
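As a quick sketch of what this looks like in practice (the PDB name, time, and auxiliary destination are hypothetical):

```
RMAN> BACKUP DATABASE;                   # whole CDB, all PDBs included
RMAN> BACKUP PLUGGABLE DATABASE pdb1;    # a single PDB

# Point-in-time recovery of a single PDB:
RMAN> RUN {
  SET UNTIL TIME "to_date('2013-12-03 12:00','yyyy-mm-dd hh24:mi')";
  RESTORE PLUGGABLE DATABASE pdb1;
  RECOVER PLUGGABLE DATABASE pdb1 AUXILIARY DESTINATION '/u01/aux';
  ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
}
```

The rest of the CDB stays open while the PDB is restored, which is one of the main selling points of the multitenant architecture for backup and recovery.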
There is also the possibility to restore a single table, which is very useful when you cannot use flashback. The table can even be imported under another name if needed, using Data Pump syntax. As a side note, the restore table demo failed live, but the presenter finally managed to complete it!
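A minimal sketch of this table recovery with a remap (schema, table, time, and destination are hypothetical):

```
RMAN> RECOVER TABLE scott.emp
      UNTIL TIME "sysdate - 1/24"
      AUXILIARY DESTINATION '/u01/aux'
      REMAP TABLE scott.emp:emp_restored;   # import under another name
```

RMAN builds a temporary auxiliary instance in the given destination, recovers the tablespace there, and uses Data Pump behind the scenes to bring the table back, here under the new name emp_restored.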
The active duplicate feature can now use backup sets, and empty space is not copied, so data moves faster to the target database.
The next session was presented by Maria Colgan and Jonathan Lewis and concerned the 10 optimizer hints you can't miss. Mrs. Colgan and Mr. Lewis took turns talking about the optimizer, each with a different point of view.
For example, Mrs. Colgan told us about the AUTO_SAMPLE_SIZE setting when gathering statistics, showing us that it has been very efficient since Oracle 11g. On the other side, Mr. Lewis demonstrated that this behavior is not always true, in particular when the data distribution is skewed, because the optimizer can miss rare values in the histograms.
The problem of functions applied to columns was also discussed, because the optimizer has no idea how a function affects the values in a column. The main advice is to create a function-based index with the exact same syntax as in the SQL statement, or, better, to apply the opposite function on the right-hand side of the predicate (not on the table column).
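To illustrate the second piece of advice, here is a minimal sketch (table and column names are hypothetical): instead of wrapping the indexed column in a function, move the computation to the literal side so the bare column, its index, and its statistics remain usable:

```sql
-- Bad: the function on the column hides its statistics and
-- disables a plain index on order_date.
SELECT * FROM orders WHERE trunc(order_date) = DATE '2013-12-03';

-- Better: apply the "opposite" operation on the right-hand side;
-- the column is left bare, so a normal index can be used.
SELECT * FROM orders
WHERE  order_date >= DATE '2013-12-03'
AND    order_date <  DATE '2013-12-04';
```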
Some useful hints: GATHER_PLAN_STATISTICS, which allows displaying the estimated and the actual rows, and OPT_PARAM, which allows an init.ora parameter value to be changed for a specific statement, for example /*+ opt_param('_fast_full_scan_enabled' 'false') */.
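A minimal sketch of how GATHER_PLAN_STATISTICS is typically used (the table name is hypothetical): the hint collects row-source statistics during execution, which DBMS_XPLAN can then display side by side:

```sql
-- Execute the statement with row-source statistics enabled.
SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM orders;

-- Display the last plan with estimated (E-Rows) vs actual (A-Rows) rows.
SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```

A large gap between E-Rows and A-Rows is the usual sign that the statistics, not the optimizer, are the problem.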
In conclusion: a complete and representative set of statistics gets the best plan, and optimizer hints should be used with caution.
The last session of the day was also presented by Mrs. Colgan and concerned a future Oracle 12c feature: the In-Memory Database.
The principle is the following: the data is stored classically on disk, but a new format is now available to store a table in memory. The table is stored as columns instead of rows, in a new dedicated memory area in the SGA. You can decide to move the table data into memory with a simple command such as alter table T inmemory; or the opposite: alter table T no inmemory;
The data is loaded into memory for active tables or partitions, on startup or at first access. You can also exclude some columns of a table from being populated in memory.
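A hedged sketch of the syntax shown in the session (table and column names are hypothetical, and the feature was still unreleased at the time):

```sql
-- Populate the table in the new in-memory column store...
ALTER TABLE sales INMEMORY;

-- ...but exclude a wide column that is rarely queried.
ALTER TABLE sales INMEMORY NO INMEMORY (notes);

-- Remove the table from the column store entirely.
ALTER TABLE sales NO INMEMORY;
```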
New views will be available in a future 12c release, allowing us to list the objects in memory and to visualize the compression ratio.
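For example, a query against such a view could look like the sketch below (the view name v$im_segments is an assumption on my part, since the session predates the release):

```sql
-- List populated in-memory segments and compare on-disk size
-- with column-store size to get a compression ratio.
SELECT segment_name,
       bytes,                                    -- on-disk size
       inmemory_size,                            -- size in the column store
       ROUND(bytes / inmemory_size, 1) AS comp_ratio
FROM   v$im_segments;
```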
The live demo showed us the amazing efficiency of the In-Memory Database option: when running in the classical buffer cache way, the response time was 4 or 5 times slower than with the in-memory column store.
Unfortunately, we did not get any information about the licensing cost of this option; we will have to wait for next year.