You could even do it manually if you felt such an inclination...
Note: I find the images produced by dracut to be somewhat ... bloated.
I commented that I didn't think it helpful to post a benchmark which you well know will show very poor btrfs performance, when that performance is not indicative of the filesystem in general but of a bug in the specific kernel being benchmarked... most people will just look at the graphs and not realize that this was the expected outcome, or why.
As for the upcoming distributions, most people will not want to take such a vital piece of the system out of the auto-update mechanism, etc, etc... and some binary drivers depend on the kernel version... not to mention that installing on btrfs with this bug can take up to 10 times longer than installing on another filesystem... some people have reported installs of more than 10 hours, and you cannot switch to a different kernel before you have installed!
TBH, it's really still an experimental filesystem, regardless of how it's billed... it's been incredibly stable for me, but until the tools ship with a working "fsck"... what can I say. But it needs people to want to start testing, to develop tools that take advantage of its features, and just to get to know how to manipulate the filesystem... and the unfortunate fact that this kind of bug slipped into the kernel that will be used for the next round of distributions will set back people's willingness to do that, and hence the uptake of the filesystem.
Of course, that may be a good thing in the long run... 6 months more to stabilise before widespread testing could be a good thing: a working 'fsck', as well as raid5/6, more debugging and optimisation, etc etc... but it won't help with getting it into people's hands to develop tools that make use of it for inclusion in later distributions... and without those kinds of tools the filesystem doesn't provide the advantage it has the potential to.
Using iozone to claim that btrfs with compression has better performance is bulls*it. iozone uses a very simple write pattern, so it is no big miracle that this data compresses so well. Please use realistic benchmarks, not microbenchmarks (which are useful, but need careful interpretation).
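The compressibility point is easy to demonstrate yourself: a trivially repetitive buffer (like a simple benchmark write pattern) shrinks to almost nothing, while random data barely compresses at all. A quick sketch, using gzip as a stand-in for btrfs's lzo (file names are just illustrative):

```shell
# 1 MiB of a trivially repetitive pattern vs 1 MiB of random data.
head -c 1048576 /dev/zero    > pattern.bin
head -c 1048576 /dev/urandom > random.bin

# Compress both; gzip here stands in for btrfs's lzo.
gzip -c pattern.bin > pattern.bin.gz
gzip -c random.bin  > random.bin.gz

# The pattern file shrinks to a few KiB; the random file stays ~1 MiB.
ls -l pattern.bin.gz random.bin.gz
```

Any benchmark whose write buffer looks like `pattern.bin` will make a compressing filesystem look far better than it is on real data.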
I have a computer with 2 SSDs; one is 2~4x faster than the other... so I installed debian7 on the faster one, and created one ext4 and one btrfs partition on the slow one.
I did some tests running apache from there and also running virtual machines (counting the boot time plus Visual Studio compile time). The differences were minimal, but they were there, and they match the most discrepant test I'm going to relate here:
the naive FS benchmark test! Simply copying /usr to the new drive. First with the drive empty, then with the drive filled by the previous copy. Twice.
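For anyone wanting to reproduce it, that procedure can be sketched roughly like this (the function name and the `/mnt/btrfs-test` path are just illustrative, not what I actually typed):

```shell
# Sketch of the naive copy benchmark: time a recursive copy onto the
# target filesystem, repeating so that later passes run against a
# partially filled target.
copy_bench() {
    src=$1; dst=$2; passes=${3:-3}
    i=1
    while [ "$i" -le "$passes" ]; do
        mkdir -p "$dst/pass$i"
        sync                                  # flush dirty pages before timing
        start=$(date +%s)
        cp -a "$src" "$dst/pass$i/"
        sync                                  # include write-back in the timing
        echo "pass $i: $(( $(date +%s) - start ))s"
        i=$((i + 1))
    done
}
# e.g.: copy_bench /usr /mnt/btrfs-test 3
```

The `sync` after each copy matters: without it a fast filesystem can "finish" with most of the data still sitting in the page cache.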
ext4 did it in 4m, 5m40, 6m02. (I didn't write down the exact seconds for the first two tests before closing the terminal.)
btrfs did it in 4m, 4m20, 4m36.
btrfs with lzo compression took >6m on the first two passes, so I ignored it... I was planning to write zeroes to the whole disk and then do the copy with and without compression to see how much it actually impacts media usage, but after that performance hit I gave up.
So these are the results: ext4 and btrfs, untuned, debian7 defaults. btrfs without compression is faster for file copies and arguably for running VMs from.
Update: re-ran the tests with noauto. btrfs without compression now runs in the same time as before, or a little worse :/ can't explain it. And ext4 is down to 3m27, 3m32. It is the clear winner if you are just setting up a new laptop with an SSD and don't want to overthink it too much.
Testing reads only (my VM tests had a lot of writing; now I'm just cat-ing a bunch of small files (~1500 files, totaling ~160MB) to /dev/null): ext4 always wins by ~0.5s out of a total of 6s.
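That read-only pass looks roughly like the sketch below (function name and path are illustrative; dropping the page cache needs root, and without it repeat runs are served mostly from RAM):

```shell
# Sketch of the read-only test: walk a tree and cat every file to
# /dev/null, timing the whole pass.
read_bench() {
    dir=$1
    sync
    # Drop the page cache so we measure the disk, not RAM (root only;
    # silently skipped if we lack permission).
    echo 3 2>/dev/null > /proc/sys/vm/drop_caches || true
    start=$(date +%s%N)
    find "$dir" -type f -exec cat {} + > /dev/null
    echo "read pass: $(( ($(date +%s%N) - start) / 1000000 )) ms"
}
# e.g.: read_bench /mnt/btrfs-test/usr
```

With ~1500 small files, `find ... -exec cat {} +` batches many files per `cat` invocation, so the timing reflects the filesystem rather than process startup overhead.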