By Franck Pachot

A thread on the OTN Forum about Exadata came down to the following question: “But how can I monitor if it is effectively used or not?”. This is a common question. There are three exclusive features coming with Exadata, and instance statistics can show their usage. Even better: two of them can be checked on your current (non-Exadata) system, which helps to foresee how Exadata could improve your workload.

Let’s see how to measure the efficiency of the following features:

  • Have reads eligible for SmartScan
  • Avoid I/O with Storage Index
  • Avoid transfer with offloading

Have reads eligible for SmartScan

First of all, SmartScan occurs only on direct-path reads. If you don’t see ‘cell smart table scan’ and ‘cell smart index scans’ in your top timed events, then SmartScan can do nothing for you. On a non-Exadata system, the equivalent shows up as the ‘direct path read’ wait event.

If those direct-path reads are not a significant part of your DB time, then you have something else to do before going to Exadata: you should leverage direct-path reads with full table scans, parallel query, etc.
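If you want to quantify that quickly, outside of an AWR report, here is a minimal sketch reading the cumulative wait events from v$system_event (values are cumulative since instance startup; an AWR report shows the same per interval):

-- On non-Exadata, only 'direct path read' will show up.
-- On Exadata, look for the 'cell smart ... scan' events.
select event,
       round(time_waited_micro / 1e6) as seconds_waited,
       total_waits
from   v$system_event
where  event = 'direct path read'
   or  event like 'cell smart%scan%'
order  by time_waited_micro desc;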

Then, when you are on Exadata and ‘cell smart table scan’ and ‘cell smart index scans’ show up, you can check the proportion of reads that actually use SmartScan.

The SmartScan input is ‘cell physical IO bytes eligible for predicate offload’. This is the volume of reads (in bytes) that goes through the SmartScan code. The total volume of reads is ‘physical read total bytes’, so you can compare the two to know which part of your reads is subject to SmartScan.

If ‘cell physical IO bytes eligible for predicate offload’ / ‘physical read total bytes’ is small, then you have something to tune here. You want to do direct-path reads and you want to see ‘TABLE ACCESS STORAGE’ in the execution plan.

Not yet on Exadata? The Performance Analyzer can simulate it. The statistic is ‘cell simulated physical IO bytes eligible for predicate offload’.
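Here is a minimal sketch computing that ratio from v$sysstat (cumulative values since instance startup; use AWR snapshot deltas for a specific time window). On a non-Exadata system where the simulation has been run, substitute ‘cell simulated physical IO bytes eligible for predicate offload’ as the numerator:

-- Part of the total read volume that is eligible for SmartScan (cumulative since startup).
select round(
         100 * max(decode(name, 'cell physical IO bytes eligible for predicate offload', value))
             / nullif(max(decode(name, 'physical read total bytes', value)), 0)
       , 1) as "% eligible for SmartScan"
from   v$sysstat
where  name in ('cell physical IO bytes eligible for predicate offload',
                'physical read total bytes');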

Avoid I/O with Storage Index

When you know that SmartScan is used, or can be used, on a significant part of your reads, the first thing you want to do is to avoid physical I/O. Among the ‘cell physical IO bytes eligible for predicate offload’, some reads will not require any disk I/O at all, thanks to Storage Indexes. That volume is in ‘cell physical IO bytes saved by storage index’. Just compare it with the eligible volume and you know how much disk reading Storage Indexes have saved. That is the most efficient optimization of SmartScan: you don’t have to read those blocks, you don’t have to uncompress them, you don’t have to filter them, you don’t have to transfer them…
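The same kind of sketch gives the part of the eligible volume that Storage Indexes allowed us to skip:

-- Part of the eligible read volume that needed no disk I/O thanks to storage indexes.
select round(
         100 * max(decode(name, 'cell physical IO bytes saved by storage index', value))
             / nullif(max(decode(name, 'cell physical IO bytes eligible for predicate offload', value)), 0)
       , 1) as "% saved by storage index"
from   v$sysstat
where  name in ('cell physical IO bytes saved by storage index',
                'cell physical IO bytes eligible for predicate offload');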

Avoid transfer with offloading

Then there is offloading proper. The previous feature (Storage Indexes) addresses I/O elimination, which is the key to performance. Offloading addresses the transfer from storage to database servers, which is the key to scalability.

In the last decade, we replaced a lot of direct-attached disks with SANs. That was not for performance reasons but for maintainability and scalability: a shared storage system makes it easy to allocate disk space when needed, get good performance by striping, and get high availability by mirroring. The only drawback is a transfer time that is higher than with direct-attached disks.

Exadata still has the scalable architecture of the SAN, but removes the transfer bottleneck with offloading (in addition to the fast interconnect, which is very efficient). Whatever can be filtered early on the storage cells does not have to be transferred: columns not in the select clause, rows excluded by the where (or join) clause predicates.

And you can measure it as well. When you measure it on non-Exadata with the Performance Analyzer, you compare the SmartScan output, which is ‘cell simulated physical IO bytes returned by predicate offload’, to the SmartScan input, ‘cell simulated physical IO bytes eligible for predicate offload’. And this is a good estimation of the efficiency you can expect when going to Exadata.
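On a non-Exadata system where the simulation has been run, a quick sketch is simply to list the two simulated statistics side by side and compare them:

-- Cumulative bytes since instance startup: compare output (returned) to input (eligible).
select name, value
from   v$sysstat
where  name in ('cell simulated physical IO bytes eligible for predicate offload',
                'cell simulated physical IO bytes returned by predicate offload');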

When you are on Exadata, it is a bit different. Compressed data has to be uncompressed in order to apply the predicates and projections at the storage cells. The predicate/projection offloading input is therefore ‘cell IO uncompressed bytes’, and you compare it to ‘cell physical IO interconnect bytes returned by smart scan’.
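And here is the corresponding sketch on Exadata, where the goal is a small percentage:

-- Part of the uncompressed volume that still had to be transferred to the database servers.
select round(
         100 * max(decode(name, 'cell physical IO interconnect bytes returned by smart scan', value))
             / nullif(max(decode(name, 'cell IO uncompressed bytes', value)), 0)
       , 1) as "% returned by smart scan"
from   v$sysstat
where  name in ('cell physical IO interconnect bytes returned by smart scan',
                'cell IO uncompressed bytes');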

Summary

If you want to see Exadata SmartScan efficiency, just check an AWR report and compare the following:

  • ‘cell physical IO bytes eligible for predicate offload’ / ‘physical read total bytes’ (goal: high %)
  • ‘cell physical IO bytes saved by storage index’ / ‘cell physical IO bytes eligible for predicate offload’ (goal: high %)
  • ‘cell physical IO interconnect bytes returned by smart scan’ / ‘cell IO uncompressed bytes’ (goal: small %)
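For convenience, here is a minimal sketch computing the three ratios in one pass over v$sysstat. These are cumulative values since instance startup; for an AWR interval, take the delta of each statistic between two snapshots (dba_hist_sysstat) instead:

-- The three SmartScan efficiency ratios from cumulative instance statistics.
select round(100 * eligible / nullif(total_reads, 0), 1)  as "% eligible for SmartScan",
       round(100 * si_saved / nullif(eligible, 0), 1)     as "% saved by storage index",
       round(100 * returned / nullif(uncompressed, 0), 1) as "% returned by smart scan"
from (
  select max(decode(name, 'cell physical IO bytes eligible for predicate offload', value))      as eligible,
         max(decode(name, 'physical read total bytes', value))                                  as total_reads,
         max(decode(name, 'cell physical IO bytes saved by storage index', value))              as si_saved,
         max(decode(name, 'cell physical IO interconnect bytes returned by smart scan', value)) as returned,
         max(decode(name, 'cell IO uncompressed bytes', value))                                 as uncompressed
  from   v$sysstat
);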

 

You probably wonder why I don’t use the ‘smart scan efficiency ratio’ that we find in various places. Those ratios are often wrong, for two reasons:

  • They compare ‘cell physical IO interconnect bytes returned by smart scan’ to ‘cell physical IO interconnect bytes’. But the latter includes writes as well, and because of ASM mirroring, writes are multiplied when measured at the interconnect level.
  • The ‘cell physical IO interconnect bytes returned by smart scan’ can’t be compared with ‘physical read total bytes’ because the former counts some data after decompression.

For those reasons, a single ratio cannot cover all the SmartScan features.

This is why I always check the three pairs above in order to get a relevant picture. And two of them are available in simulation mode (I’ll blog about it soon).