I can testify that, many times, IT professionals talk about performance without even trying the most simplistic tests to find out what hardware capabilities and limits they have.
One of the first things I usually do when facing an unknown environment is to run several tests to find out how much I can get from the hardware at my disposal.
In this article I’ll explain how to perform a quick HDD test on Linux/Unix systems.
In this case the word HDD means not only a physical hard drive, but a VG (volume group)/partition or RAID as well.
Although I’m aware of many sites that cover the same topic, one of the most popular Oracle-related sites – Oracle Base, which serves as an excellent reference when you need to find some commands quickly – provides a not quite correct description of how to perform a quick HDD test.
Anyway, at the following link:
you can find several ways to test HDD performance limits, and one of the first methods is to use the dd Linux utility.
When I tried the same command on my external HDD, this is what I got.
test@test>time sh -c "dd if=/dev/zero of=test_write.img bs=8k count=100k && sync"
102400+0 records in
102400+0 records out
838860800 bytes (839 MB, 800 MiB) copied, 0,393667 s, 2,1 GB/s

real    0m7,798s
user    0m0,018s
sys     0m0,429s
That’s unbelievably fast, but unfortunately it’s also a false value, because the *nix sync command does not affect the throughput reported by the previous command in any way: dd finishes as soon as the data reaches the page cache.
For that reason, even with a classic spinning HDD, I got more than 2 GB/s when writing to the disk.
The following command provides the correct value:
test@test>time dd if=/dev/zero of=test_write.img bs=8k count=100k oflag=dsync
102400+0 records in
102400+0 records out
838860800 bytes (839 MB, 800 MiB) copied, 65,6975 s, 12,8 MB/s

real    1m5,702s
user    0m0,174s
sys     0m5,271s
The only change I made is the “oflag=dsync” flag, which ensures synchronous writing to disk, without any kind of caching or buffering.
Keep in mind here that Oracle uses asynchronous IO wherever possible (controlled by the filesystemio_options parameter).
As you can see, the write speed is now between 12.7 and 12.8 MB/s, about 167 times slower than before, but the value is correct.
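With oflag=dsync, dd syncs after every single 8k block, which is the worst case. A gentler variant, sketched here (the file name and the smaller 100 MB size are illustrative, not from the tests above), is conv=fdatasync, which flushes the file data once at the end, so the flush time is still included in dd’s own timing – unlike a separate trailing sync command:

```shell
# conv=fdatasync: dd calls fdatasync() on the output file before exiting,
# so the reported throughput includes the final flush to disk.
# 12800 x 8k = 100 MB scratch file, removed afterwards.
dd if=/dev/zero of=test_fdatasync.img bs=8k count=12800 conv=fdatasync
rm -f test_fdatasync.img
```

The throughput this reports usually lands between the cached (false) value and the fully synchronous oflag=dsync value, because only one flush is paid instead of one per block.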
To test the read speed, you first need to execute the following command as the root user before each test, to clear the file system cache:
/sbin/sysctl -w vm.drop_caches=3
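Flushing dirty pages with sync before dropping the caches makes the drop complete; a minimal sequence (root required; the value 3 drops the page cache plus dentries and inodes):

```shell
# Flush dirty pages to disk first, then drop the page cache,
# dentries and inodes (vm.drop_caches=3). Must be run as root.
sync
/sbin/sysctl -w vm.drop_caches=3
```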
After that you can perform the read test by executing the following command:
--external HDD USB 3.0
test@test>time dd if=test_write.img of=/dev/null bs=8k
102400+0 records in
102400+0 records out
838860800 bytes (839 MB, 800 MiB) copied, 7,31331 s, 115 MB/s

real    0m7,985s
user    0m0,113s
sys     0m1,054s
Maximum read speed is 115 MB/s.
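The transfer size also affects the result, so it can be worth repeating the read test with several block sizes. A rough sketch (the throwaway file name and sizes are illustrative; for a real measurement read test_write.img instead and drop the cache, as root, before each pass):

```shell
# Create a 100 MB scratch file, then read it back with increasing
# block sizes; dd's last output line shows the throughput.
dd if=/dev/zero of=read_scratch.img bs=8k count=12800 conv=fdatasync 2>/dev/null
for bs in 8k 64k 1M; do
    echo "block size $bs:"
    dd if=read_scratch.img of=/dev/null bs="$bs" 2>&1 | tail -n 1
done
rm -f read_scratch.img
```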
The next step is to put the given results in the appropriate context.
What I mean by that is being able to compare those results with an SSD or NVMe drive, a USB stick, a RAID configuration, an enterprise storage cache etc.
Here are the results from a USB stick, an SSD and NVMe drives (read tests only).
--USB stick 2.0 test
test@test>time dd if=test_write.img of=/dev/null bs=8k
102400+0 records in
102400+0 records out
838860800 bytes (839 MB, 800 MiB) copied, 49,626 s, 16,9 MB/s

real    0m49,641s
user    0m0,127s
sys     0m1,105s

--SSD internal drive test
test@test>time dd if=test_write.img of=/dev/null bs=8k
102400+0 records in
102400+0 records out
838860800 bytes (839 MB, 800 MiB) copied, 1,95717 s, 429 MB/s

real    0m1,972s
user    0m0,028s
sys     0m0,338s

--NVMe internal drives (Volume Group) test
102400+0 records in
102400+0 records out
838860800 bytes (839 MB, 800 MiB) copied, 0,574935 s, 1,5 GB/s

real    0m0,595s
user    0m0,029s
sys     0m0,245s
The test results for the SSD (429 MB/s) and for the volume group consisting of two NVMe disks (1.5 GB/s) are much better compared with the external HDD (115 MB/s) and the USB stick (16.9 MB/s), which is also limited by its USB 2.0 connection speed.
Now that you have the first feedback on disk speed, you can start building the complete picture and tuning the whole environment.
In performance tuning, one of the main skills you should acquire is knowing how to read the results you get.
Creating reports is usually an easy task.
What really matters is knowing how to correctly interpret the results, and how to create the action plan you should follow.
This HDD performance test exercise is very similar to reading AWR reports or any other technical report.
It’s very easy to get the report out, but it’s difficult to properly explain its meaning and to create an action plan based on it.
Although this case is very simple, to correctly interpret the test results you should be aware of:
– dd command limitations (e.g. it cannot measure Oracle ASM or random disk access)
– what kind of IO is measured (sequential read/write, not random)
– the technology behind classic spinning disks, USB sticks, SSDs and NVMe disks
– the interface (in this case the USB stick is USB 2.0 and the external HDD is USB 3.0, while the SSDs are internal)
– disk configuration (Volume Group, ASM diskgroups, RAID…)
– file system (ext4 or Oracle ASM)
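To illustrate the random-access limitation from the list above, here is a rough, hypothetical sketch (scratch file name and sizes are mine, not from the tests above): reading a file in small chunks at random offsets via skip= behaves very differently on a spinning disk from one sequential pass, yet this is still not a proper random-IO benchmark – purpose-built tools measure that correctly.

```shell
# Read 100 random 8k chunks from a 100 MB scratch file; on a spinning
# disk the seeks dominate, so throughput collapses compared with a
# sequential read of the same data. $RANDOM requires bash.
dd if=/dev/zero of=rand_scratch.img bs=8k count=12800 conv=fdatasync 2>/dev/null
for i in $(seq 1 100); do
    dd if=rand_scratch.img of=/dev/null bs=8k count=1 \
       skip=$((RANDOM % 12800)) 2>/dev/null
done
rm -f rand_scratch.img
```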
In the Oracle Base article mentioned at the beginning, you can find some other ways and tools to measure disk subsystem performance.
I found some dangerous pitfalls and limitations that you should be aware of before you start testing other tools (ORION, DBMS_RESOURCE_MANAGER.CALIBRATE_IO, SLOB, Swingbench…).
If you intend to use those tools, I suggest acquiring some knowledge about them first.
In one of the following articles I’m going to explain more realistic ways to measure disk IO performance with an Oracle database.