
Thread: Open-Source Linux Driver Support For 4K Monitors

  1. #11
    Join Date
    Oct 2009
    Posts
    2,108

    Somehow, I suspect that most of this "4k support" stuff involves two considerations:
    1) That the necessary protocols (e.g. HDMI 2.0, DP 1.2) are actually implemented in software;
    2) Making sure that things aren't *broken* in ways that just weren't discovered with conventional displays. E.g., a mode that divides the screen into 4 quadrants and shuffles them around would be considered "broken".

  2. #12
    Join Date
    Jan 2013
    Posts
    147

    I can tell you this: once those 4K monitors with the new HDMI 2.0 are available next year, I will be doing some testing of Ubuntu gaming and media (maybe Debian too, depending on how I can set things up) with the latest GPUs I can get my grubby little paws on, and I will share the results with you wonderful penguin-loving oddballs.

  3. #13
    Join Date
    Jun 2012
    Posts
    343

    Quote Originally Posted by schmidtbag View Post
    And how exactly do you expect the software to support something the hardware doesn't support?
    Forward-thinking specs. Why shouldn't the kernel support 10k resolutions? Maybe someone will announce a 10k TV tomorrow. There's no technical reason to limit screen resolution output within the kernel. Sure, no device on the planet may support them yet, but the modes are there for the day one will.

    Now, if you want to limit what outputs you can use within the graphics card's driver layer, that's fine. But there's NO reason the kernel should be arbitrarily limited in this way. The Kernel should be able to support any possible screen resolution.

    It doesn't matter what the software is capable of if the hardware can't do it, so it's better to not let the drivers say "hey look, I can do 4k screens!" and someone attaches one only to find out it doesn't work due to a hardware limitation.
    Driver != Kernel. If the GPU can handle 4k output, the display device can handle 4k output, and the transport layer can handle 4k output, then it should be allowed as an output. Even if this isn't the case, the Kernel should have the ability to output 4k should those conditions become true.

    This isn't the same thing as having a CPU too slow to play a game, because the game will still run. If you buy a 4K screen because your drivers SAY they can support it, you're going to be pretty unhappy to find out the screen won't even leave standby.
    Again, Driver != Kernel. If the screen doesn't support 4k, the drivers should disable it as an option. The Kernel, however, should be fully capable of handling 4k. And 8k. And 200k. It's the driver layer which specifies what the actual output is going to be for a given device.

    Setting screen resolutions is a lot more complicated than most people are aware of - there's a lot more than width, height, color depth, and refresh rate. It isn't as simple as just flicking a switch and suddenly getting 4K resolutions.
    Which you can also handle. There is no technical reason why the Kernel shouldn't have support for every possible screen resolution, color depth, and refresh rate under the sun. If you want to limit what is output for a connected device in driver land, fine, but the kernel itself shouldn't have any limitations because "it's not supported by hardware yet".
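
    To illustrate the point that a mode is more than width x height x refresh: here's a minimal Python sketch (not kernel code; field names are borrowed from the Linux DRM drm_display_mode struct for familiarity, and the 4K timing values are the CEA-861 ones for 3840x2160@60, VIC 97).

```python
from dataclasses import dataclass

@dataclass
class DisplayMode:
    clock_khz: int    # pixel clock in kHz
    hdisplay: int     # active width
    hsync_start: int  # active + horizontal front porch
    hsync_end: int    # hsync_start + sync pulse width
    htotal: int       # full raster width, including blanking
    vdisplay: int     # active height
    vsync_start: int
    vsync_end: int
    vtotal: int       # full raster height, including blanking

    def refresh_hz(self) -> float:
        # Refresh rate falls out of the full timings; it isn't
        # stored directly as "60 Hz" anywhere.
        return self.clock_khz * 1000 / (self.htotal * self.vtotal)

# CEA-861 timing for 3840x2160@60: 594 MHz pixel clock, 4400x2250 raster
mode_4k60 = DisplayMode(594000, 3840, 4016, 4104, 4400,
                        2160, 2168, 2178, 2250)
print(round(mode_4k60.refresh_hz()))  # 60
```

    Note that the blanking intervals (the difference between hdisplay/vdisplay and htotal/vtotal) are part of the mode, which is why two "4K at 60 Hz" modes can still disagree on pixel clock.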

  4. #14
    Join Date
    Dec 2007
    Posts
    2,371

    While someone could design hardware to potentially work with 10k modes or whatever, it's probably not a good use of resources. Support for arbitrary modes comes down to a few things:
    - link support. You need to make sure the spec supports high enough clocks over the link to support the mode. E.g., HDMI uses single-link TMDS. The original TMDS spec only supported clocks <= 165 MHz. Newer versions of HDMI have increased this.
    - clock support. The clock hardware on the ASIC has to be validated against higher clock rates. If there is no equipment to use the higher clocks, it's harder to validate, and it increases validation costs to support something that may or may not come to be during the useful lifetime of the card.
    - line buffer support. Once again, it takes die space for the line buffers. Extra die space costs money. You don't want to add and validate more line buffer than you need to cover a reasonable range of monitors that will likely come to be during the card's useful lifetime.
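
    The link-support bullet can be made concrete with a rough sanity check (a sketch, not driver code; the TMDS clock limits per HDMI version and the CEA-861 raster totals are quoted from the specs, but treat the exact figures as assumptions):

```python
# Maximum TMDS clock per HDMI version (MHz), single-link
TMDS_LIMITS_MHZ = {
    "HDMI 1.0 (165 MHz)": 165.0,
    "HDMI 1.3/1.4 (340 MHz)": 340.0,
    "HDMI 2.0 (600 MHz)": 600.0,
}

def pixel_clock_mhz(htotal, vtotal, refresh_hz):
    # Full raster (active + blanking) times refresh gives the pixel clock
    return htotal * vtotal * refresh_hz / 1e6

# CEA-861 full raster sizes: 1080p is 2200x1125, 2160p is 4400x2250
for name, (ht, vt) in {"1080p60": (2200, 1125),
                       "2160p60": (4400, 2250)}.items():
    clk = pixel_clock_mhz(ht, vt, 60)
    for link, limit in TMDS_LIMITS_MHZ.items():
        verdict = "fits" if clk <= limit else "too fast"
        print(f"{name} ({clk:.1f} MHz) over {link}: {verdict}")
```

    This is why 1080p60 (148.5 MHz) was fine over original HDMI, while 2160p60 (594 MHz) needs the HDMI 2.0 link.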

    Also, the current kernel and even older kernels can handle 4k timings just fine using the standard mode structs. What got added in 3.12 was support for fetching 4K modes from the extended tables in the monitor's EDID.
    Last edited by agd5f; 09-10-2013 at 10:21 AM.
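
    The point about fetching 4K modes from the EDID's extended tables can be sketched roughly like this (an illustration, not the actual kernel code: CEA-861 extension blocks list short video descriptors whose low 7 bits are a VIC code; the 4K VIC numbers below are from CEA-861-F, so double-check them against the spec):

```python
# Assumed CEA-861-F VIC codes for the 3840x2160 formats
FOUR_K_VICS = {93: "2160p24", 94: "2160p25", 95: "2160p30",
               96: "2160p50", 97: "2160p60"}

def four_k_modes(svd_bytes):
    """Return the 4K modes advertised by a CEA video data block's SVDs."""
    modes = []
    for b in svd_bytes:
        vic = b & 0x7F  # bit 7 is the "native mode" flag, not part of the VIC
        if vic in FOUR_K_VICS:
            modes.append(FOUR_K_VICS[vic])
    return modes

# Hypothetical video data block: native 1080p60 (VIC 16), 4K60, 4K30
print(four_k_modes([0x90, 97, 95]))  # ['2160p60', '2160p30']
```

    Monitors with only a base EDID block (no CEA extension) simply never advertise these, which is why older parsing paths missed 4K modes.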

  5. #15
    Join Date
    Jan 2012
    Posts
    151

    Quote Originally Posted by gamerk2
    Forward thinking specs. Why shouldn't the kerne..
    Quote Originally Posted by agd5f View Post
    While someone could design hardware....
    thank you for clearing this up!
