As you pointed out, all your modern X apps are already doing all the rendering locally. Fonts are rendered client-side, toolkits use APIs like RENDER (not easy to accelerate over a network protocol), and so on. Your client machine will need a "powerful enough" GPU, but literally every major CPU being sold now has a GPU built in, and ones that lack an integrated GPU will disappear over time. This is not something worth worrying about.

The other problem is that the remote machine now must handle drawing the windows instead of the machine with the display. In X, the machine with the display (the server) does the drawing, which puts less load on the remote client machine (imagine a remote machine that has to draw for several users' displays at once, and you can see how this could become a problem). Wayland windows are all drawn via the GPU too, correct? That means the remote machine sending the windows over the network would also need a GPU powerful enough to handle all of the drawing.
First off, Wayland does no such thing, as Wayland is just a protocol. Weston very well might (I don't know), but it's FOSS and you can change it if you want, or just use a different Wayland server implementation (I believe at least one other is already in development by the Qt folks, but I'm not sure).

Are the Wayland devs still locking things to the refresh rate of the monitor? For most people 60 fps is fine, but there may be cases where this is not sufficient. I know this has been debated on this forum in the past, but there are times when the latency from 60 fps may be too high (especially when you add it to the latency of the monitor). How do I know? I own an arcade where we regularly throw tournaments. We have some professional players come in (and even occasionally some semi-big ones, but I won't drop any names), and latency is a big concern of theirs.
"Professional gamers" (ugh) usually imagine the latency problem; they're interested in getting any possible edge, real or not, and are all too ready to blame anything and anyone except themselves when they don't play perfectly ("I don't suck, this computer just had vsync turned on, this is bullcrap!"). You get a lot of that in any competitive sport; people who believe in lucky socks, crap like that. There's feedback bias going into it. The monitor is physically incapable of displaying faster than its refresh rate, with or without vsync. That's not to say that it wouldn't better to have higher refresh rates than the relatively low 60hz that has become standard (120hz being the golden number, as it is slightly higher than what a human can perceive and has numerical properties that make it ideal for compatibility with legacy tech), but unless the monitor supports 120hz there is zero reason to actually render at 120hz. Note also that many games are just coded improperly and suffer additional lag in the whole system with vsync on; this is not the fault of vsync or the display system in general, but rather the game engines blocking everything until a vsync event instead of merely delaying rendering (triple buffering is the easy way to fix this for games that work that way, if your driver allows you to force it on).
And this is why we try to get such a high unlocked FPS in an engine and game despite there being no need to actually render that fast. If your game can only just barely handle 60 fps on the target hardware, then yes, you have the problem where you drop to 30 fps. If you can render 200 frames per second, then even if you hit a major problem and start running at 50% speed you're still well over the minimum 60 fps. For general gaming, it's better to just lock the game to 30 fps for consistency if it can't handle 60, and for competitive play it's better to turn off all the graphical bells and whistles that you don't care about so the game always runs at 60 fps with no problems.

Then there's the issue of when the framerate can't keep up with the refresh rate... What if the framerate dips to 59? Well, you're not going to see 59 fps; with vsync it's going to drop to something like 30 or 45 fps to sync up with the monitor, which can be jarring and ugly (but is at least still tear-free).
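That quantization follows directly from the arithmetic: with double-buffered vsync, a finished frame has to wait for the next vblank, so the effective rate is the refresh rate divided by the whole number of vblank intervals each frame occupies (45 fps is what you see when the engine alternates between one-vblank and two-vblank frames). A small illustrative snippet, not tied to any real display API:

```c
#include <math.h>
#include <stdio.h>

/* Effective on-screen rate with double-buffered vsync: a frame that misses
 * a vblank waits for the next one, so the rate snaps to refresh/1,
 * refresh/2, refresh/3, ... */
static double effective_fps(double render_ms, double refresh_hz)
{
    double vblank_ms = 1000.0 / refresh_hz;
    double intervals = ceil(render_ms / vblank_ms);  /* whole vblanks used */
    return refresh_hz / intervals;
}

int main(void)
{
    printf("16.0 ms/frame -> %.1f fps\n", effective_fps(16.0, 60.0)); /* 60.0 */
    printf("17.0 ms/frame -> %.1f fps\n", effective_fps(17.0, 60.0)); /* 30.0 */
    printf("34.0 ms/frame -> %.1f fps\n", effective_fps(34.0, 60.0)); /* 20.0 */
    return 0;
}
```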
That said, adaptive vsync is a thing (sync to 60 fps if the game can handle it, and disable vsync when it can't), and it would be very easy to make a Wayland compositor do that if this turns out to be a problem. I imagine that full-screen apps could be allowed to set the scanout buffer directly as well. As it is, there is only a single extra context switch required when flipping buffers, but if Wayland had a command (or possibly it could all be automated by the server with no protocol changes at all!) to map a fullscreen window to that display's scanout buffer, the app/game would be in full control of when buffers are swapped.
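For what adaptive vsync looks like from the application side today, here's a minimal sketch using SDL2 with OpenGL (just a convenient existing API for illustration; the point above is about a compositor doing the equivalent automatically). Passing -1 to SDL_GL_SetSwapInterval requests adaptive vsync where the driver supports it: swap on the vblank when the frame is ready in time, tear instead of halving the rate when it's late.

```c
#include <SDL2/SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("adaptive vsync sketch",
                                       SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                       640, 480,
                                       SDL_WINDOW_OPENGL | SDL_WINDOW_FULLSCREEN_DESKTOP);
    SDL_GLContext ctx = SDL_GL_CreateContext(win);

    /* Try adaptive vsync first; fall back to regular vsync if the driver
     * doesn't support late-swap tearing. */
    if (SDL_GL_SetSwapInterval(-1) != 0)
        SDL_GL_SetSwapInterval(1);

    /* ... render loop would go here: draw, then SDL_GL_SwapWindow(win) ... */

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```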