I guess the main advantage is the ability to have pointers in shader programs without letting shaders run amok across system and video memory.
The second advantage is performance -- using VM eliminates the need for the kernel driver to translate ("relocate") all the buffer addresses in the command stream from handles to physical addresses during command submission.
EDIT -- sorry, just realized your question was more about the way it was implemented -- one VA space per client -- rather than advantages of VM in general. There it's mostly client separation, but in the future we want to have GPU and CPU using the same per-process address space and this is a step in that direction.
Then we all know what that means: the GPU will be re-centered at the core of the kernel, tied intimately to kernel structures and functioning. That would be for APUs, where the CPU and GPU share the same memory controller directly. For discrete cards you would still have the PCIe bus and the CPU IOMMU in between (with the GPU's DMA engines).
Two really different models.
AMD's Heterogeneous System Architecture roadmap is probably of interest here. Unified memory and GPU compute context switching sound to me like you'll move from the CPU to the GPU just like the CPU context-switches between processes. I'm thinking SSE on steroids here, at the cost of a context switch. Of course this is still a couple of years out, and then you have to actually get application/compiler support for it. I'm sure this is important for where AMD wants to be 3-5 years from now, but for the rest of us it's a bit early to get concerned with, I think.