Why is there such concern over which kernel has a small performance regression in it (in a not-for-production-use FS, no less)? Can you not upgrade kernels in ubuntu/fedora/suse/etc? Does a vanilla kernel not work? If 2.6.35 is bad for the default FS of ubuntu/$DISTRO, surely they would ship 2.6.34 or some other version? If you don't like changes, run one of the long-term kernels; take your pick of older kernels listed as stable on http://www.kernel.org/ : 2.6.34.x, 2.6.33.x, 2.6.27.x. All of these receive backported fixes for bugs and security issues.
I'm sure I'm missing something, as I switched to Gentoo some 7 years ago after getting grumpy at not being able to use a vanilla kernel with some DRM patches on redhat (it was redhat then) and suse. It sure would be nice if someone made a "make config" option for the kernel, but gentoo has genkernel and it tends to work. Do ubuntu/fedora kernels have config.gz support turned on? If so, it should be very easy to rebuild a kernel. Although I'm guessing that ubuntu/etc use initramfs-es these days, making it a bit harder to make your own kernel. Is there a reason to always use the provided Ubuntu kernel? Or is it impossible to use a non-Ubuntu-packaged kernel?
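For what it's worth, the config.gz route looks roughly like this. This is only a sketch: it assumes the distro kernel was built with CONFIG_IKCONFIG_PROC (so /proc/config.gz exists), an unpacked vanilla source tree, and that you sort out the initramfs/bootloader step your distro needs afterwards.

```shell
# Reuse the running distro kernel's configuration for a vanilla build.
cd linux-2.6.35                  # an unpacked vanilla kernel tree (path is illustrative)
zcat /proc/config.gz > .config   # start from the distro's config, if IKCONFIG_PROC is on
make oldconfig                   # answer prompts for any options new to this tree
make -j"$(nproc)"                # build kernel and modules
sudo make modules_install install
# On initramfs-based distros you'd still need to generate an initramfs
# (e.g. with the distro's mkinitramfs/dracut tooling) and update the bootloader.
```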
Really though, I'm curious why it's always "THE SKY IS FALLING" sort of news related to some version/check-in of the kernel as it relates to ext4 or btrfs. Don't get me wrong, I like to see people testing new code, and if I had more time/hardware I would be as well.
I'm not sure what you are complaining about in particular, and I don't think it's really about whether it's easy to build a new kernel or not.
I commented that I thought it was not really helpful to post a benchmark which you well know will show very poor btrfs performance, when this is not indicative of the filesystem in general but of a bug in the specific kernel you are benchmarking... most people will just look at the graphs and not realize that this was the expected outcome, or why.
As for the upcoming distributions, most people will not want to take such a vital piece of the system out of the mechanism for auto-update etc., and some binary drivers are dependent on kernel version... not to mention that installing on btrfs with this bug can take up to 10 times longer than installing on another filesystem. Some people have reported more than 10 hours to install, and you cannot install a different kernel before installing!
TBH, it's really still an experimental filesystem, regardless of how it's billed. It's been incredibly stable for me, but until the tools ship with a working "fsck"... what can I say. It needs people to want to start testing, developing tools to take advantage of the features, and just getting to know how to manipulate the filesystem... and the unfortunate fact that this kind of bug slipped into the kernel which will be used for the next round of distributions will set back people's willingness to do that, and hence the uptake of the filesystem.
Of course, that may be a good thing in the long run... 6 more months to stabilise before widespread testing could be a good thing: a working 'fsck', as well as raid5/6, more debugging and optimisation, etc. But it won't help with getting it into people's hands to develop tools which make use of it for inclusion in later distributions... and without those kinds of tools the filesystem doesn't provide the advantage that it has the potential to.
Using iozone to claim that btrfs with compression has better performance is bulls*it. iozone uses a very simple pattern for writing, so it is no big miracle that this data compresses so well. Please use realistic benchmarks, not microbenchmarks (which are useful, but need careful interpretation).
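You can see the effect in seconds without iozone at all. This sketch uses gzip on a stream of zeros versus random bytes as a stand-in for what a compressing filesystem sees: a highly regular benchmark pattern shrinks to almost nothing, so very little data actually reaches the disk, while realistic (incompressible) data does not.

```shell
# Compare how a trivially regular pattern compresses versus random data.
# 1 MiB of each; gzip here is just a proxy for the filesystem's compressor.
zeros=$(head -c 1048576 /dev/zero | gzip -c | wc -c)
random=$(head -c 1048576 /dev/urandom | gzip -c | wc -c)
echo "1 MiB of zeros     -> ${zeros} bytes compressed"
echo "1 MiB of random    -> ${random} bytes compressed"
# The zero stream collapses to roughly a kilobyte; the random stream
# stays at (slightly above) its original size.
```

Any write benchmark whose payload behaves like the first stream will make a compressing filesystem look implausibly fast.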