
Thread: S3TC Is Still Problematic For Mesa Developers, Users

  1. #21
    Join Date
    Jul 2009
    Location
    Germany
    Posts
    527

    Default

    Quote Originally Posted by Kivada View Post
    Uh, if you need to go THAT big, why not go projection or get a few 100" screens? If you actually had to render something on screens of a decent resolution, instead of just playing back video or static images, then even a GTX Titan wouldn't be anywhere near enough grunt to not end up a choppy mess.
    Getting one large screen isn't the only use case for such cards. If you want to display simple things at different locations, this is a perfect way to do this.

  2. #22

    Default

    Quote Originally Posted by droste View Post
    Getting one large screen isn't the only use case for such cards. If you want to display simple things at different locations, this is a perfect way to do this.
    As opposed to lining out to coax and using a splitter and a spool of RG-6 cable to run it to a bunch of cheap TVs? Convert to video over Cat5/5e/6 and back again?

    If you don't already have the screens, it's cheaper than a VGA/DVI/HDMI/DisplayPort cable if you need to run it more than the few feet of cable the screens came with.

    If you don't already have some of the pieces on hand, there are a lot of ways to skin this cat.

  3. #23
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,269

    Default

    Yes, Via is still fairly strong in the embedded sector (gambling machines, digital signage).

  4. #24
    Join Date
    Feb 2013
    Posts
    371

    Default

    Did they ever consider asking for permission to be allowed to use it?

  5. #25
    Join Date
    Jan 2010
    Posts
    367

    Default

    Quote Originally Posted by curaga View Post
    Yes, Via is still fairly strong in the embedded sector (gambling machines, digital signage).
    Maybe, but I really wonder why. VIA's products simply cannot compete in terms of power consumption, features or performance. I remember the last time I heard about VIA, they tried to peddle a 25 W CPU as a "market-leading energy efficient" solution, which was quite ridiculous. On top of that, VIA's hardware tends to be quite buggy and software/driver support is pretty bad, even on Windows.

    I guess most embedded hardware that still uses VIA solutions only does so because it was designed ages ago when VIA still had a small edge. I can't imagine anyone using VIA-based hardware for new developments.

  6. #26
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,269

    Default

    Quote Originally Posted by brent View Post
    Maybe, but I really wonder why. VIA's products simply cannot compete in terms of power consumption, features or performance. I remember the last time I heard about VIA, they tried to peddle a 25 W CPU as a "market-leading energy efficient" solution, which was quite ridiculous. On top of that, VIA's hardware tends to be quite buggy and software/driver support is pretty bad, even on Windows.

    I guess most embedded hardware that still uses VIA solutions only does so because it was designed ages ago when VIA still had a small edge. I can't imagine anyone using VIA-based hardware for new developments.
    Yes, inertia is a big reason for any entrenched industry.

    However, they do still have some edge: as far as I know, no Atom is capable of 1 W max / 0.1 W idle. The lowest-powered Atom is around 3 W, IIRC. Their CPUs aren't really buggy, but the same can't be said for their graphics and, to some extent, their chipsets.

  7. #27
    Join Date
    Jan 2010
    Posts
    367

    Default

    Quote Originally Posted by curaga View Post
    Yes, inertia is a big reason for any entrenched industry.

    However, they do still have some edge: as far as I know, no Atom is capable of 1 W max / 0.1 W idle. The lowest-powered Atom is around 3 W, IIRC. Their CPUs aren't really buggy, but the same can't be said for their graphics and, to some extent, their chipsets.
    Well, VIA may have a 1 W TDP CPU, but it is extremely slow (C7 @ 500 MHz), and it still requires a two-die chipset to function. On the other hand, both Intel and AMD have SoCs (chipset fully integrated) with an overall TDP of < 5 W. Intel even has < 3 W TDP parts. I don't see an edge for VIA here at all. VIA-based designs are more complex (three dies on the PCB instead of one), will chug more power and perform worse.

  8. #28
    Join Date
    Jul 2012
    Posts
    5

    Default

    Quote Originally Posted by GreatEmerald View Post
    Eh? So you're saying that you think it'll fail due to it being too new?.. IIRC ASTC is part of the OpenGL spec, so drivers will have to support it if they want to claim OpenGL compliance. Not necessarily in hardware, but if you're supporting it in software, might as well also have it accelerated. So yes, it's still not the fault of the developers that S3TC is preferred, but it will be in a few years. Or, you know, they will stop preferring it.
    ASTC was announced with (IIRC) OpenGL 4.3, but it didn't become part of the core profile then, nor in 4.4. It is still only an extension, and it isn't implemented in any desktop driver. So at the moment there is no ASTC in practice; it exists only on paper. And because of its complexity, I assume it will stay that way for a while.
    Nevertheless, as long as drivers implement a texture compression format in software, it is worthless (like ETC). Software implementations can't be accelerated, because they send the decompressed bitmap to the GPU, greatly increasing bandwidth when accessing the texture (the whole point of texture compression on PCs is to reduce GPU<->VRAM bandwidth; reducing memory requirements doesn't matter with >512 MB of VRAM). So the only thing that can be accelerated is the decompression before sending the data to the GPU, but that's worthless and has no impact on the _final_ performance (it just reduces a lag when uploading the texture).
    Last edited by -jK-; 08-15-2013 at 03:39 PM.
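    The bandwidth point above can be sketched with a quick back-of-the-envelope calculation. This is a minimal illustration, assuming a hypothetical 1024x1024 RGBA8 texture and the standard S3TC block sizes; it shows how much the in-VRAM footprint (and thus fetch bandwidth) shrinks only when the GPU samples the compressed blocks directly:

    ```python
    # S3TC/DXT compresses fixed 4x4 pixel blocks: DXT1 uses 8 bytes per
    # block, DXT5 uses 16. If the driver decompresses in software, the GPU
    # instead reads the full RGBA8 bitmap (4 bytes/pixel) on every fetch.
    W = H = 1024
    uncompressed = W * H * 4                  # RGBA8 bitmap in VRAM
    dxt1 = (W // 4) * (H // 4) * 8            # DXT1: 0.5 bytes per pixel
    dxt5 = (W // 4) * (H // 4) * 16           # DXT5: 1 byte per pixel

    print(uncompressed // dxt1)               # 8  -> 8:1 bandwidth saving
    print(uncompressed // dxt5)               # 4  -> 4:1 bandwidth saving
    ```

    Those 8:1 and 4:1 ratios only materialize when the texture units decode blocks on the fly; a software fallback that expands to RGBA8 before upload gives up all of it.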

  9. #29
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,635

    Default

    Quote Originally Posted by -jK- View Post
    ASTC was announced with (IIRC) OpenGL 4.3, but it didn't become part of the core profile then, nor in 4.4. It is still only an extension, and it isn't implemented in any desktop driver. So at the moment there is no ASTC in practice; it exists only on paper. And because of its complexity, I assume it will stay that way for a while.
    Nevertheless, as long as drivers implement a texture compression format in software, it is worthless (like ETC). Software implementations can't be accelerated, because they send the decompressed bitmap to the GPU, greatly increasing bandwidth when accessing the texture (the whole point of texture compression on PCs is to reduce GPU<->VRAM bandwidth; reducing memory requirements doesn't matter with >512 MB of VRAM). So the only thing that can be accelerated is the decompression before sending the data to the GPU, but that's worthless and has no impact on the _final_ performance (it just reduces a lag when uploading the texture).
    It's not yet part of it? Hmm, well, that can be a problem indeed, then.
    As for the acceleration part, I meant "if the hardware has no such capability, do it in software; else do it completely in hardware; since it's much faster in hardware and we have a software implementation for it written already, let's add hardware support for it in our new graphics cards".
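    As an aside on the complexity mentioned above: unlike S3TC's single fixed 4x4 block, ASTC keeps a fixed 128-bit block but lets the block footprint vary, so hardware has to handle many bit rates. A rough sketch of the resulting rates (using a subset of the 2D footprints the spec defines):

    ```python
    # ASTC stores every block in 128 bits regardless of footprint, so the
    # bit rate is set purely by how many pixels one block covers.
    footprints = [(4, 4), (5, 5), (6, 6), (8, 8), (10, 10), (12, 12)]
    for w, h in footprints:
        bpp = 128 / (w * h)
        print(f"{w}x{h}: {bpp:.2f} bpp")
    # 4x4 gives 8.00 bpp (the same rate as DXT5); 12x12 gives 0.89 bpp.
    ```

    Supporting that whole range in fixed-function decoders is a big part of why desktop hardware adoption lagged behind the spec.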

  10. #30

    Default

    Quote Originally Posted by brent View Post
    Well, VIA may have a 1W TDP CPU, but it is extremely slow (C7 @ 500 MHz), and still requires a two-die chipset to function. On the other hand, both Intel and AMD have SoCs (chipset fully integrated) with an overall TDP of < 5 W. Intel even has < 3 W TDP parts. I don't see an edge for VIA here at all. VIA-based designs are more complex (three dice on the PCB instead of one), will chug more power and perform worse.
    Yep, the 4.5 W AMD G-T16R is one of the most current parts for low-draw x86.
