HP SSD Smart Path not always a smart choice

Jan 23, 2015 11:35

Inkbunny uses one of HP's RAID controllers. I've gotten intimately familiar with it over the past week while transforming Inkbunny's submissions array to RAID 5. It's pretty sweet.

I recently found out about a new feature it supports called SSD Smart Path. Unfortunately, for our purposes, it's not all that smart to use it.

One of the ways RAID is meant to help is that it gives you more I/O operations per second (IOPS) by ganging together multiple drives behind a dedicated processor. However, many enterprise SSDs are capable of so many IOPS that the controller itself becomes a bottleneck when you place large numbers of them together in an array.

The solution? Rewrite the driver so that reads go almost directly to the disk, bypassing the RAID stack as far as possible.

Sounds sweet. Here's the problem: it disables the controller's write cache for your SSDs, and the SSDs themselves may be slow at writes.

Inkbunny doesn't do a huge amount of I/O - maybe 110 reads/15 writes per second on the hard disks, and 20 reads/140 writes per second on the SSDs. Most I/O on the SSDs is writes; we only have two of them; and they're not that fast (MLC, 64GB, second-generation SandForce).

In practice, it ends up being significantly faster to write to the RAID controller's cache and have it say "you're good" immediately (this is safe because the cache is flash-backed). What was meant to be a performance optimization resulted in a measurable performance degradation, and a factor-of-ten latency increase - 0.3ms vs. 3ms.
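
Those latency figures are per-device averages of the sort iostat reports. A quick way to spot-check them yourself - a sketch only, assuming a reasonably recent sysstat and that the SSD array shows up as sdb (substitute your own device name) - is:

# Extended per-device statistics, refreshed every 60 seconds.
# w_await is the average write latency in milliseconds;
# older sysstat versions only show a combined "await" column.
iostat -x sdb 60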



[Graph: SSD write latency across the three configurations]
Thursday morning: Linux 3.2.0-4, hpsa 2.0.2-1, controller caching
Thursday night: Linux 3.16.0-0, hpsa 3.4.4-1, Smart Path
Friday morning: Linux 3.16.0-0, hpsa 3.4.4-1, controller caching
Hardware: HP P420i, 1GB FBWC, firmware 6.00, 2x Transcend SSD320 64GB (RAID 1)
On the plus side, it looks like the new hpsa driver is better overall; with caching back on, write latency improved markedly, to 0.1ms.

To be completely fair to HP, they don't say that this will be useful for two bargain-basement SSDs - the smallest configuration they show in their PDF graph is four 6Gb/s SAS SSDs. They also highlight the fact that write-heavy environments "might not benefit as much", and that database logging "will not benefit". (And, in point of fact, read I/O time did improve.)

However, HP don't disclose that Smart Path can be a bad thing in these situations. Worse, I believe it is now set as the default if you create a new array with SSDs. So, take heed: it's important to test both configurations with your workload.

Also, be sure to enable caching explicitly once you have disabled Smart Path. While the command-line utilities will gladly turn off caching when you enable SSD Smart Path, they don't turn it back on when you disable it!
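
For reference, this is roughly how that looks with HP's hpssacli utility - a sketch only, assuming the controller is in slot 0 and the SSDs are array B / logical drive 2 (substitute your own slot, array letter and logical drive number):

# turn SSD Smart Path off for the array
hpssacli ctrl slot=0 array B modify ssdsmartpath=disable
# the controller cache is NOT re-enabled automatically - turn the
# array accelerator back on for the logical drive
hpssacli ctrl slot=0 logicaldrive 2 modify arrayaccelerator=enable
# confirm both settings took effect
hpssacli ctrl slot=0 show config detail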

Here's some testing with PostgreSQL's pg_test_fsync:
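
Each run is stock pg_test_fsync with its default five-second tests, pointed at a file on the volume under test - something along these lines (the path is just an example):

pg_test_fsync -s 5 -f /mnt/target/pg_test_fsync.out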

SSD, Smart Path disabled, caching enabled

5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
open_datasync 18458.734 ops/sec 54 usecs/op
fdatasync 16547.748 ops/sec 60 usecs/op
fsync 15453.193 ops/sec 65 usecs/op
fsync_writethrough n/a
open_sync 20808.213 ops/sec 48 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
open_datasync 9766.422 ops/sec 102 usecs/op
fdatasync 14776.967 ops/sec 68 usecs/op
fsync 15690.580 ops/sec 64 usecs/op
fsync_writethrough n/a
open_sync 10881.626 ops/sec 92 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
1 * 16kB open_sync write 20205.040 ops/sec 49 usecs/op
2 * 8kB open_sync writes 10705.867 ops/sec 93 usecs/op
4 * 4kB open_sync writes 5397.342 ops/sec 185 usecs/op
8 * 2kB open_sync writes 2746.414 ops/sec 364 usecs/op
16 * 1kB open_sync writes 1330.621 ops/sec 752 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
write, fsync, close 14111.681 ops/sec 71 usecs/op
write, close, fsync 14439.795 ops/sec 69 usecs/op

Non-Sync'ed 8kB writes:
write 294564.516 ops/sec 3 usecs/op

SSD, Smart Path enabled, DWC disabled

5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
open_datasync 8113.737 ops/sec 123 usecs/op
fdatasync 7080.239 ops/sec 141 usecs/op
fsync 7144.653 ops/sec 140 usecs/op
fsync_writethrough n/a
open_sync 6521.053 ops/sec 153 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
open_datasync 3830.510 ops/sec 261 usecs/op
fdatasync 5681.908 ops/sec 176 usecs/op
fsync 6019.883 ops/sec 166 usecs/op
fsync_writethrough n/a
open_sync 3426.762 ops/sec 292 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
1 * 16kB open_sync write 6432.940 ops/sec 155 usecs/op
2 * 8kB open_sync writes 3310.152 ops/sec 302 usecs/op
4 * 4kB open_sync writes 2047.958 ops/sec 488 usecs/op
8 * 2kB open_sync writes 907.577 ops/sec 1102 usecs/op
16 * 1kB open_sync writes 541.312 ops/sec 1847 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
write, fsync, close 6383.491 ops/sec 157 usecs/op
write, close, fsync 7278.264 ops/sec 137 usecs/op

Non-Sync'ed 8kB writes:
write 344621.304 ops/sec 3 usecs/op

Hard disk, caching enabled (just for kicks)

5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
open_datasync 13295.531 ops/sec 75 usecs/op
fdatasync 14070.271 ops/sec 71 usecs/op
fsync 12962.255 ops/sec 77 usecs/op
fsync_writethrough n/a
open_sync 47.513 ops/sec 21047 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
open_datasync 8236.926 ops/sec 121 usecs/op
fdatasync 12051.472 ops/sec 83 usecs/op
fsync 11455.867 ops/sec 87 usecs/op
fsync_writethrough n/a
open_sync 28.710 ops/sec 34831 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
1 * 16kB open_sync write 47.050 ops/sec 21254 usecs/op
2 * 8kB open_sync writes 23.962 ops/sec 41734 usecs/op
4 * 4kB open_sync writes 10.965 ops/sec 91198 usecs/op
8 * 2kB open_sync writes 7.862 ops/sec 127198 usecs/op
16 * 1kB open_sync writes 3.314 ops/sec 301781 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
write, fsync, close 10743.682 ops/sec 93 usecs/op
write, close, fsync 9926.386 ops/sec 101 usecs/op

Non-Sync'ed 8kB writes:
write 292369.381 ops/sec 3 usecs/op