
Thread: Wayland & The Network; Gallium3D Netpipe?

  1. #1
    Join Date
    Jan 2007
    Posts
    15,133

    Default Wayland & The Network; Gallium3D Netpipe?

    Phoronix: Wayland & The Network; Gallium3D Netpipe?

    In recent days on the Wayland development mailing list there's been a discussion about a HPC (High Performance Compute) architecture for Wayland. A few interesting ideas have been brought up...

    http://www.phoronix.com/vr.php?view=OTIyNw

  2. #2
    Join Date
    Jan 2009
    Posts
    1,709

    Default

What would be the benefit of having something like that in the driver?

    I mean, the goal should be - or at least that makes sense to me - to have something that is platform/OS/toolkit independent.


  3. #3
    Join Date
    Jan 2009
    Posts
    291

    Default

Don't forget there are X11 state trackers too, so this would also be a new render path that X11 could use. I've actually run the r300g Xorg state tracker; when I tried it a few weeks ago it seemed a bit buggy, but it did work.

  4. #4
    Join Date
    Jan 2009
    Posts
    1,709

    Default

    Quote Originally Posted by cb88 View Post
    Don't forget there are X11 state trackers too, so this would also be a new render path that X11 could use. I've actually run the r300g Xorg state tracker; when I tried it a few weeks ago it seemed a bit buggy, but it did work.
    Yes, but does X need that?

    I mean, X can run on many platforms and do its network stuff without a problem, and it will be legacy software sooner or later (hopefully sooner).

  5. #5
    Join Date
    Jan 2011
    Posts
    46

    Default

What I've been wishing for a long time is a way to offload HD video decoding/postprocessing to a more powerful machine. I once tried to use X forwarding for this purpose, but discovered that my gigabit LAN didn't have enough bandwidth for uncompressed video streaming. Also, NVIDIA's proprietary video acceleration refused to work this way - it only supports rendering directly to the monitor attached to the video card.

  6. #6
    Join Date
    Feb 2011
    Posts
    1,244

    Default GSOC

    It isn't surprising no one has stepped up yet for the GSOC project, applications are not supposed to start coming in until the 28th. Some people seem to be jumping the gun a little but we still have a few weeks.

  7. #7
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,514

    Default

    Quote Originally Posted by kirillkh View Post
    What I've been wishing for a long time is a way to offload HD video decoding/postprocessing to a more powerful machine. I once tried to use X forwarding for this purpose, but discovered that my gigabit LAN didn't have enough bandwidth for uncompressed video streaming. Also, NVIDIA's proprietary video acceleration refused to work this way - it only supports rendering directly to the monitor attached to the video card.
    You probably want to offload decode to a more powerful machine, but postprocessing (aka render, aka present) should stay on your local machine, both because of the large volume of data involved and because generating the result directly in the frame buffer saves a lot of big copies.

  8. #8
    Join Date
    Jan 2011
    Posts
    46

    Default

    Sure, if that's the best way to split the work.

  9. #9
    Join Date
    Sep 2010
    Posts
    474

    Default

    Quote Originally Posted by bridgman View Post
    You probably want to offload decode to a more powerful machine, but postprocessing (aka render, aka present) should stay on your local machine, both because of the large volume of data involved and because generating the result directly in the frame buffer saves a lot of big copies.
    Whatever you do, you're going to have to compress the output from the screen/framebuffer and then transmit it to the device. Working with video practically requires that, since nothing supports streaming it uncompressed at that rate.

    I'm surprised nobody is doing anything based on Kernel Virtual Machine.
    Making that network transparent would be a very universal solution.

    I also see that there are two kinds of traffic: sending calls, which requires little bandwidth, and sending pieces of the screen (or whole screen buffers), which will require a video codec that can do streaming.

    And the best solution is, of course, a protocol specification that can do both, so a program can be as efficient as possible while allowing all kinds of content.
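    As a rough sketch of that two-channel idea (all names here are hypothetical, not from any real Wayland or X protocol), a one-byte type tag plus a length prefix would let low-bandwidth call traffic and bulky frame data share a single connection:

    ```python
    import struct
    from enum import IntEnum

    class MsgType(IntEnum):
        CALL = 1    # small control/drawing calls: little bandwidth
        FRAME = 2   # compressed pixel data: needs a streaming codec

    # One-byte type tag followed by a 32-bit big-endian body length.
    _TAG = struct.Struct("!BI")

    def encode(msg_type: MsgType, body: bytes) -> bytes:
        """Frame a message so both traffic kinds can share one stream."""
        return _TAG.pack(msg_type, len(body)) + body

    def decode(data: bytes):
        """Split a framed message back into its type and body."""
        msg_type, length = _TAG.unpack_from(data)
        return MsgType(msg_type), data[_TAG.size:_TAG.size + length]
    ```

    A real protocol would of course multiplex many such messages over a socket; this only shows how the two content kinds could coexist in one specification.
    
    
    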

    Forget all this stuff about X.Org doing it; we need Linux kernel infrastructure for this, because we need a universal system. KVM would be great for that.

  10. #10
    Join Date
    Oct 2010
    Posts
    461

    Default

    Quote Originally Posted by plonoma View Post
    I also see that there are two kinds of traffic: sending calls, which requires little bandwidth, and sending pieces of the screen (or whole screen buffers), which will require a video codec that can do streaming.
    I don't know about that....

    A stream of video would require that data be constantly sent from the server to the client. If you want to be sure the stream isn't corrupted, you'd need either TCP or some other mechanism to verify the data, meaning extra round-trips (adding latency). You could just as easily do the same with individual (and smaller) frames that represent only a portion of the screen, sent with coordinates and a checksum over UDP (or some other simple protocol) and use a lot less bandwidth and have lower latency. If ever there is corruption (the checksum doesn't match), the client could send back a request for the full screen, and then it could continue from there with the small frames.
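    The partial-frame scheme described above could look something like this (a minimal sketch with an invented packet layout; the field names and CRC32 choice are assumptions, not any real protocol): each datagram carries the region's coordinates, a checksum of the pixel payload, and the payload itself, and a checksum mismatch signals the client to request a full-screen refresh.

    ```python
    import struct
    import zlib

    # Hypothetical datagram header: x, y, width, height of the updated
    # region, then a CRC32 of the pixel payload that follows.
    HEADER = struct.Struct("!HHHHI")

    def pack_update(x, y, w, h, pixels: bytes) -> bytes:
        """Build a partial-frame update datagram with a checksum."""
        return HEADER.pack(x, y, w, h, zlib.crc32(pixels)) + pixels

    def unpack_update(datagram: bytes):
        """Verify and unpack an update. Returns None on a checksum
        mismatch, which the client treats as a cue to request a full
        frame from the server."""
        x, y, w, h, crc = HEADER.unpack_from(datagram)
        pixels = datagram[HEADER.size:]
        if zlib.crc32(pixels) != crc:
            return None  # corrupted in transit -> ask for full refresh
        return (x, y, w, h, pixels)
    ```

    Because the verification happens client-side, no per-packet acknowledgements are needed on the happy path, which is where the latency saving over TCP would come from.
    
    
    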

    The problem with that is that, on unstable/unreliable connections, the client could potentially "freeze" (no input, keyboard or mouse, would do anything to the widgets on the screen) and nothing (except the mouse, if it's rendered locally) would move or animate (if it was in the first place). The client could, of course, detect that it's no longer receiving any frames/updates and display a notice to the user, say, "No longer receiving communications from server, connection may be unstable or broken". Don't ask me how the client would reconnect; that'd be implementation specific (depending on whether it's the whole desktop being remoted, full-screen; just an app; or the whole desktop within a "virtual desktop" window).
