Infrastructure at your Service

Michael Wirz

What you should measure on your database storage and why

How can you do good capacity and performance management for your databases when you do not know what your storage system is capable of and what you need?
In this article, we discuss a way to test a storage system from an Oracle database point of view.

Before you read

This is a series of different blog posts:
First, we talk about “What you should measure on your database storage and why”, aka this post 😉
The second blog post will talk about “How to do a database storage performance benchmark with FIO”.
The third blog post will show “How good is the new HUAWEI Dorado 6000 V3 all-flash system for databases”, measured with the methods and tools from posts one and two.

The first two posts give you the theory to understand all the graphics and numbers I will show in the third blog post.

Often when I arrive at a customer, they show me their new storage system and point out how fast and powerful it is. When I ask for performance specs or performance baselines, I often hear that it is very fast because it is all-flash, but no facts and figures. A lot of all-flash systems provide suitable performance, but what is suitable for your business? And how do you know what you can build on when you do not know how solid your foundation is?

Oracle I/O types

The “normal” traffic an Oracle database creates has the same size as the database block size.
Most databases (I would say >95%) use an 8 KByte block size.

All multiblock I/O from Oracle (full table scans, backups, duplicates, etc.) is issued as 1024 KByte I/Os.
This is configurable via db_file_multiblock_read_count: yes, you can configure a multiblock I/O smaller than 1024 KByte, but on modern storage systems this normally does not make sense, and RMAN backup/restore and duplicate use a 1 MByte block size independent of the value of this parameter.

We should test 8 KByte and 1024 KByte and of course a mix of both, because a normal workload will always be a mixture of these two.
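As a first sketch, the two block sizes could be expressed as fio jobs roughly like this (a hypothetical job file; the device path /dev/mapper/testdisk1 and the job names are placeholders, and the full FIO setup follows in the next post):

```ini
; hypothetical fio job file sketch: 8 KByte vs. 1024 KByte I/O
; /dev/mapper/testdisk1 is a placeholder for one of your test devices
[global]
ioengine=libaio
direct=1
time_based=1
runtime=60

[db-blocks-8k]
filename=/dev/mapper/testdisk1
bs=8k
rw=randread

[multiblock-1m]
filename=/dev/mapper/testdisk1
bs=1024k
rw=read
```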

But this is just half the truth.
Some other questions a good storage performance test should answer are:
- How many devices should I bundle into a disk group for best performance?
- How many backups/duplicates can I run in parallel to my normal database workload without interfering with it?
- What is the best rebalance power I can use on my system?

What tool to use?

We should do the tests with at least 2 different tools. This ensures the correctness of the results.
Also, the performance view within the storage should show the same values as the test tool.
The last point sounds easy but often is not. E.g. in a virtualized environment with OVM before version 3.3, when you create 1000 IOPS @ 1 MByte, you would expect to see 1000 IOPS @ 1 MByte on the server and storage side (assuming your storage can handle that), BUT you will see approx. 24k IOPS @ 44 KByte block size, because the virtual disk handler breaks each I/O request into this smaller block size.

See the answer from Oracle Support about this behavior:
OVM blocksize

On OVM 3.4 this value changed to 128 KByte, so every 1 MByte I/O is split into 8 I/Os. Better than before, but still room for improvement.

Because (normally) one server cannot fully utilize a powerful all-flash storage system, we should run the tests in parallel from multiple servers.
This means we need a tool which can run multiple tests on multiple servers, ideally in just one run.
That tool also has to record not just the IOPS but also the service time (latency).
Because what good are some 100k IOPS when the latency is bad?

I have worked with iozone for the last few years, but it has two drawbacks:
- It does not measure the service time.
- There is no option to run a test on multiple servers (OK, manually, but that is not really accurate…)

The best tool for these test cases IMHO is FIO.
Some of my colleagues already talked about it in former blog posts:
FIO (Flexible I/O) – a benchmark tool for any operating system by Gregory Steulet

Simulating database-like I/O activity with Flexible I/O by Yann Neuhaus

Also interesting is this article from Pure Storage: IO Plumbing tests with fio

Test procedure

So we have our storage, we have the servers, and we know which block sizes we want to test and with which tool. But how do we test?

Let us start small and get bigger

There are two parameters we want to increase:
- Number of disks
- Number of parallel threads

Find the limits of your storage solution

We start with 1 disk and 1 thread, then we increase both step by step up to 10 disks and 10 threads.
Each of these combinations is run for 8 KByte and 1024 KByte and for the different load types:
8 KByte: random read, random write, sequential read, sequential write
1024 KByte: sequential read and sequential write

So a complete test set consists of 6 tests x 10 devices x 10 threads x 60 seconds each, which sums up to 36’000 seconds or 10 hours for a complete test run.
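The runtime estimate can be checked with a quick back-of-the-envelope loop (a sketch only; the actual fio invocation per load type is just indicated as a comment):

```shell
# Test-matrix runtime: 6 load types x (1..10 disks) x (1..10 threads),
# 60 seconds per individual run.
TYPES=6
RUNTIME=60
total=0
for disks in $(seq 1 10); do
  for threads in $(seq 1 10); do
    # here one fio run per load type would go, e.g.
    # fio --bs=8k --rw=randread --numjobs=$threads ... (placeholder)
    total=$((total + TYPES * RUNTIME))
  done
done
echo "total: ${total}s = $((total / 3600))h"   # total: 36000s = 10h
```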

Initially, we run this test on every server separately to check that all servers deliver the same performance.

Yes, you need time to measure a storage system. There is no shortcut. I often see system administrators just do a quick test with dd (mostly without oflag=direct) and think that replaces a complete performance test. Surprise: it does not!

Compressed and Uncompressed

When your storage system offers an option to compress the disks, you should do the tests once with and once without storage compression.
Even, or better: especially, when your storage vendor tells you that you will not notice the difference…

After the initial tests, we run the same tests in parallel from multiple servers, and if even then we have not reached the maximum possible performance of the storage system, we can start to increase the number of threads or disks.

But these are “just” max-out tests. They should give us a picture of what is possible with a single type of I/O. Normally we will never have just one type of I/O: a real workload is always a mixture of small blocks with sequential and random read/write and big blocks with mostly sequential reads and sometimes sequential writes.

To simulate this, we create a baseline of 8 KByte blocks with random read and random write. The number of IOPS you need to generate depends on your database load profile. If you do not know your current load profile, start with 40-50k IOPS.
Then add 1000 IOPS with a 1024 KByte block size, read and write, to simulate an Oracle duplicate from an active database over a 10 GBit/s link.

Why 1000 IOPS?
A 10 GBit/s link can theoretically transmit 1250 MByte/s (10’000 MBit/s / 8 = 1250 MByte/s), and after protocol overhead you normally do not reach even that. I have seen setups (even without MTU 9000) where we reached 1070-1100 MByte/s.
But for easy calculation and analysis of the data, we assume that a 10 GBit/s link can transmit 1000 MByte/s, which at a 1 MByte block size corresponds to 1000 IOPS.
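The arithmetic behind the 1000 IOPS figure in one small, checkable snippet (integer math only, using the rounded 1000 MByte/s assumption from above):

```shell
# Link-throughput arithmetic behind the "1000 IOPS" figure
LINK_MBIT=10000                      # 10 GBit/s
raw_mbyte=$((LINK_MBIT / 8))         # 1250 MByte/s theoretical maximum
usable_mbyte=1000                    # rounded down for easy analysis
io_size_mbyte=1                      # 1024 KByte multiblock I/O
iops=$((usable_mbyte / io_size_mbyte))
echo "raw=${raw_mbyte} MByte/s assumed=${usable_mbyte} MByte/s iops=${iops}"
```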

We start adding these duplicates until we see a drop in performance.
These tests will show us how many backups/duplicates/full table scans we can run in parallel before they have a performance impact on our normal workload.
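Such a mixed workload could be sketched as a fio job file like this (hypothetical: the device paths are placeholders, the rate caps use fio's rate_iops option, and you would add further duplicate jobs until the baseline latency degrades):

```ini
; hypothetical mixed-workload sketch: 8 KByte baseline plus one simulated duplicate
[global]
ioengine=libaio
direct=1
time_based=1
runtime=600

[baseline-8k-randrw]
filename=/dev/mapper/testdisk1
bs=8k
rw=randrw
rate_iops=50000

[duplicate-1m-read]
filename=/dev/mapper/testdisk2
bs=1024k
rw=read
rate_iops=1000

[duplicate-1m-write]
filename=/dev/mapper/testdisk3
bs=1024k
rw=write
rate_iops=1000
```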

Short Summary

- For Oracle databases, test your storage with 8 KByte and 1024 KByte block sizes.
- Run the tests from 1 disk / 1 thread up to 10 disks / 10 threads.
- Use asynchronous I/O.
- Use direct I/O.
- Test peak and mixed workloads.
- Read my other blog posts 😉

The next blog post will show you how to configure the open source tool FIO to run a test in the described way.

So long


Michael Wirz

Delivery Manager and Consultant