I think I'm on the fence on this one (ha ha, no pun intended, silly graphics developers and your fences). On the one hand, bureaucracy sucks and slows down innovation (it also chills motivation and progress, because developers would rather spend time developing than writing boring documentation and tests). On the other hand, a more stable master would be pretty nice, and the "same" level of experimental code would probably still be available in branches for the extremely adventurous.
I'm worried that requiring this type of thing to go in "all at once" might hinder multi-driver development, though.
Let's say somebody is implementing XYZ feature (a new libdrm API or something) that will impact all Gallium3d drivers. But they lack the hardware knowledge or motivation to implement it for all G3D drivers, so they just implement it for the one(s) they know best.
Now, if another developer working on another G3D driver wants to use this new common code in their driver, they'll have to pull from, and maintain patches against, somebody else's branch, rather than just pulling it from master. The common code probably wouldn't be merged, because it couldn't be tested until it was fully implemented and working in at least one driver.
So if intel or r300g does something neat with the mesa common code, they'd have to completely polish off a workable demo on at least intel hardware to get it merged, and in the meantime the other g3d developers would have to sit on their hands and wait for the thing to be ready before they could even START their own work on it.
That would be the least optimal outcome, of course. More likely, in practice, is that the common interfaces could be developed in a branch, and then have the driver-specific code live in yet another branch (forked from the common), so you'd have something like branch "A" being the basis for branches "B" and "C" (where B is for e.g. intel and branch C is for e.g. r600g). I could see this working fairly well, because git is so easy to use.
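The A/B/C layout above can be sketched with plain git commands. This is a minimal, hypothetical demo: the branch names (common-api, intel-impl, r600g-impl) and commit messages are placeholders I've made up for illustration, not actual Mesa branches.

```shell
set -e

# Toy repo standing in for mesa; config is local so the demo is self-contained
git init -q demo && cd demo
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "master baseline"

# Branch "A": the shared common-interface work
git checkout -q -b common-api
git commit -q --allow-empty -m "add new common API"

# Branches "B" and "C": per-driver work, each forked from the common branch
git checkout -q -b intel-impl common-api
git commit -q --allow-empty -m "wire up the new API in the intel driver"

git checkout -q -b r600g-impl common-api
git commit -q --allow-empty -m "wire up the new API in r600g"

# When the common branch moves, each driver branch rebases onto it
git checkout -q intel-impl
git rebase -q common-api
```

The nice property of this shape is that driver developers track one shared branch instead of each other's trees, and once "A" is polished enough to merge, the driver branches just rebase onto master.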
So I guess I want this post to be kind of a "hesitant +1" in favor of the new rules, but I'd want to make sure that all the core developers know what their alternatives are to continue to develop new features at a rapid pace and release them in a way that the enthusiast/testing community can play with them.
Then again, Jerome's patches for 2d color tiling for r600g went fairly well, and THEY weren't merged to master until they were practically in their final iteration (I think it went through like 8 or 9 iterations in Jerome's directory on people.fdo before it got merged). So it's not impossible to continue to develop/test even in the face of a pickier master branch.
Anyway, I'm rambling, but +1 for stability and all that good stuff...
I think there have been some clarifications that lessen the downside of these changes:
1st - it just has to be working in one driver, so there's no argument that it must work across all of gallium, or specifically in the Intel or other classic drivers. The idea is that if it's implemented for any one driver, there may still be changes needed later, but that's good enough to verify the feature is largely working, and porting it to other drivers should be relatively simple.
2nd - it seems like they are more concerned about this with large new features - UBOs, geometry shaders, etc. It sounds like anything smaller will probably not see much of a change. A lot of the larger features are already in now as part of GL3, so this may not be as big a change as it first seems.