Yep, this is really annoying. I wish GPU makers would flat-out stop making huge changes to their graphics architectures every release (or every few releases). We're at a point now where a GPU is basically just a massively parallel general-purpose CPU, whose only material distinctions from an actual CPU are that it sacrifices per-core serial performance for higher aggregate parallel throughput, and that there's relatively higher latency between submitting a job and getting the result back. As for the programming model, I don't think it needs to change.
The industry really did need this distinction between "fast serial performance" and "fast parallel performance"; now we've got it -- CPU and GPU. Done deal, right? Stop reinventing the wheel and just make the parts smaller and smaller, and maybe toss in incremental changes that are no more disruptive than what the HD 6xxx series did. Let's stick with an architecture like GCN for at least a decade and just keep rolling. The open drivers would be able to stabilize in that time, which would be neat.