btrfs being notably slow in HDD RAID6

Disclaimer: I don't want to blame anyone. I love btrfs and would very much like to get it running well. This is documentation of a use case where I ran into some issues

I am setting up a new server for our telescope systems. The server acts as a supervisor for a handful of virtual machines that control the telescope and instruments and provide some small services such as a file server. More or less just very basic services, nothing fancy, nothing exotic.

The server has a total of 8 HDDs, configured as RAID6 and connected via a Symbios Logic MegaRAID SAS-3 controller, and I would like to set up btrfs on the disks, which turns out to be horribly slow. The decision for btrfs came from resilience considerations, especially the possibility of creating snapshots and conveniently transferring them to another storage system. I am using openSUSE Leap 15.0, because I wanted a not-too-old kernel and because openSUSE and btrfs should work nicely together (a small side jab in the direction of Red Hat to rethink their strategy of abandoning btrfs in the first place)

The whole project now runs into problems, because in comparison to xfs or ext4, btrfs performs horribly in this environment. And by horribly I mean a factor of 10 and more!

The problem

The results as text are listed below; scroll to the end of the section.

I use the tool dbench to measure IO throughput on the RAID. When comparing btrfs to ext4 or xfs, I notice that the overall throughput is about a factor of 10 (!!) lower, as can be seen in the following figure:

Throughput measurement using dbench - xfs's throughput is 11.60 times as high as that of btrfs !!!
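For reference, a typical dbench invocation on the mounted test filesystem might look like the following sketch; the mount point and runtime are assumptions, not taken from the original runs:

```shell
# Run dbench with 10 client processes for 60 seconds
# against a directory on the filesystem under test.
# /mnt/test is a hypothetical mount point.
dbench -D /mnt/test -t 60 10

# For the synchronous-IO variant mentioned below, dbench offers
# -s (open files with O_SYNC) and -S (synchronous directory operations).
dbench -s -S -D /mnt/test -t 60 10
```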

sda1 and sda2 are obviously on the same disk and were created with default parameters, as the following figure demonstrates

My first suspicion was that CoW (copy-on-write) could be the reason for the performance issue. So I tried to disable CoW by setting chattr +C and by remounting the filesystem with nodatacow. Neither made any noticeable difference, as shown in the next two figures

Same dbench run as before, but with chattr +C. No noteworthy difference
Same dbench run as in the beginning, but with the nodatacow mount option. Still a negligible difference
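The two no-CoW attempts can be sketched as follows; one caveat worth noting is that chattr +C only takes effect on newly created (empty) files, so it has to be applied to the test directory before dbench creates its files. Paths here are hypothetical:

```shell
# Variant 1: disable CoW per directory; newly created files inherit the flag.
# +C has no effect on files that already contain data.
mkdir /mnt/btrfs/dbench-test
chattr +C /mnt/btrfs/dbench-test
lsattr -d /mnt/btrfs/dbench-test   # should list the 'C' attribute

# Variant 2: disable CoW filesystem-wide via a remount.
# Note that nodatacow also disables data checksumming and compression.
mount -o remount,nodatacow /mnt/btrfs
```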

Hmmmm, looks like something's fishy with the filesystem. For completeness I also wanted to swap /dev/sda1 and /dev/sda2 (in case something is wrong with the partition alignment) and, for comparison, also include ext4. So I reformatted /dev/sda1 with btrfs (it previously held the xfs filesystem) and /dev/sda2 with ext4 (previously the btrfs partition). The results stayed the same: although there are some minor differences between ext4 and xfs (not discussed here), the order-of-magnitude difference between btrfs and ext4/xfs remained
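The reformatting for the swap test used default parameters throughout; roughly like this (device names as in the text; these commands are destructive):

```shell
# /dev/sda1: previously xfs, now btrfs.
# /dev/sda2: previously btrfs, now ext4.
# All filesystems created with default options, as in the original test.
mkfs.btrfs -f /dev/sda1
mkfs.ext4 -F /dev/sda2

# Hypothetical mount points for the benchmark runs.
mount /dev/sda1 /mnt/btrfs
mount /dev/sda2 /mnt/ext4
```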

Swapping /dev/sda1 and /dev/sda2 and testing dbench on ext4 yields the same results: btrfs performs badly in this configuration

So, here are the results from the figures.

/dev/sda1 - xfs                 Throughput 7235.15 MB/sec, 10 procs
/dev/sda2 - btrfs               Throughput 623.654 MB/sec, 10 procs
/dev/sda2 - btrfs (chattr +C)   Throughput 609.842 MB/sec, 10 procs
/dev/sda2 - btrfs (nodatacow)   Throughput 606.169 MB/sec, 10 procs
/dev/sda1 - btrfs               Throughput 636.441 MB/sec, 10 procs
/dev/sda2 - ext4                Throughput 7130.93 MB/sec, 10 procs
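The headline factor can be recomputed directly from these numbers; a quick sanity check with awk:

```shell
# Ratio of xfs to btrfs throughput (first test series)
awk 'BEGIN { printf "%.2f\n", 7235.15 / 623.654 }'   # prints 11.60

# Ratio of ext4 to btrfs throughput after the swap
awk 'BEGIN { printf "%.2f\n", 7130.93 / 636.441 }'   # prints 11.20
```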

And here is also the text file of the detailed initial test series. In the beginning I said something like a factor of 10 and more; this relates to the results in the text file. When setting dbench to use synchronous IO operations, btrfs gets even worse.

Other benchmarks

Phoronix did a comparison between btrfs, ext4, f2fs and xfs on Linux 4.12, 4.13 and 4.14 that also revealed differences of varying magnitude between the filesystems.

As far as I can interpret their results, they remain rather inconclusive. Sequential reads perform fastest on btrfs, while sequential writes are apparently horrible (a factor of 4.3 difference).
