
Thread: AMD's Catalyst Evolution For The Radeon HD 7000 Series

  1. #21
    Join Date
    Aug 2007
    Posts
    6,622

    Default

    @kwahoo

    I would not say that Rage is that demanding. I played it with my old NV 8800 GTS 512 @ 1920x1200 without AA using Wine, and it was fast enough. Games that are console ports are often NOT as demanding as PC-only games. Of course, if you want to enable CUDA mode you also need to compile the required Wine lib; I did not do that for this game. You cannot play it with the Intel OSS drivers, however; Windows + HD 4000 would work.

    fglrx sucked badly with Wine. I did not check it with that game, but fglrx has got some weird Wine "optimisations": if you compare glxinfo -l against the output of the same binary copied to a file named wine (basically any name starting with "w" had this effect for me), the reported vendor switches from AMD to ATI. A few settings also report lower numbers, most likely due to a bug. Feel free to compare yourself; the diff in the bug report does not show the AMD -> ATI change.

    http://ati.cchtml.com/show_bug.cgi?id=528

    Maybe somebody wants to rename the Wine BINARY to zine or so. It does not apply to scripts.
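    A minimal sketch of that rename trick (hypothetical names; it assumes /usr/bin/wine on your distro is the actual ELF loader and not a wrapper script, in which case you would copy the real binary instead):
    Code:
    # Run the Wine loader under a name that does not start with "w",
    # so fglrx's name-based detection does not trigger.
    # "game.exe" is just a placeholder for whatever you launch.
    cp /usr/bin/wine ./zine
    ./zine game.exe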

  2. #22
    Join Date
    Mar 2008
    Location
    40N 105W
    Posts
    45

    Default Well Said

    Quote Originally Posted by Nasenbaer View Post
    Maybe because the tested games are faaaar from demanding for this graphics card.

    I would highly suggest rethinking this kind of test. >400 fps: what in hell does such a result tell us? Nothing!

    Well said, lol..

    Thanks for the post.

    Be real, be sober.

  3. #23
    Join Date
    Mar 2008
    Location
    40N 105W
    Posts
    45

    Default

    @artivision

    I think I see your point, but if the tests are run at high enough resolution the slower card will fail, yes? No?

    I think D3D is a huge conflict of interest with the D3D-based console game sales.

    Thanks for the post.

    A contribution share ratio gives context to the contribution and credits the contributor.

    Frustration makes the wine taste sweetest, the truth most bitter.
    Last edited by WSmart; 07-10-2012 at 06:20 AM. Reason: Reply didn't appear to have an address; added.

  4. #24
    Join Date
    Jul 2012
    Posts
    148

    Default

    Quote Originally Posted by Kano View Post
    @kwahoo

    I would not say that Rage is that demanding. I played it with my old NV 8800 GTS 512 @ 1920x1200 without AA using Wine, and it was fast enough. Games that are console ports are often NOT as demanding as PC-only games.
    Carmack claims "Rage is not a console port." A manually tweaked Rage should be more demanding.
    Last edited by kwahoo; 07-10-2012 at 06:40 AM. Reason: I missed a point

  5. #25
    Join Date
    Dec 2009
    Posts
    492

    Default

    Quote Originally Posted by kwahoo View Post
    Carmack claims "Rage is not a console port." A manually tweaked Rage should be more demanding.
    One could argue it's not a game either; it's a tech demo. Good enough for a benchmark, tho.

  6. #26
    Join Date
    Apr 2011
    Posts
    328

    Default

    Quote Originally Posted by smitty3268 View Post
    You realize that the # of stream processors doesn't directly equal the final performance, right? That's like saying a 3Ghz CPU will always be the same speed, whether it was made by Intel, AMD, or based off an ARM design.

    And theoretical performance is just that - theoretical. There are all kinds of reasons hardware never reaches those kinds of numbers in practice - the caches might be too small, not enough bandwidth to feed the processors, etc. There are hundreds of possible reasons, and it will be different for each and every design.

    As far as the precision - ha, i still remember the 9700 vs FX days, when NVidia was insisting the 16 bits was all you ever needed, and no one could tell the difference between that and those fancy 24bit precision ATI cards.

    Re: a grand conspiracy by MS to help AMD and hurt NVidia - uh, ok. whatever dude.
    I hope you realize that the kind of execution unit does not matter for stream processing; only the throughput matters. Kepler has 3.5 TFLOPS @ 64-bit FMAC or 7 TFLOPS @ 32-bit FMAC, while Radeon 7000 has 4 TFLOPS @ 32-bit FMAC. Second, there is no way for someone to produce a GPU that cannot fill its shaders because of memory, for example; if there were, where is the proof? (Why does the card reach maximum thermals and utilisation?) As I said before, that is impossible for stream processing. Kepler is 70+% faster than Radeon 7000 and twice as efficient, and it is also 2x faster than Fermi. In benchmarks it is only 20% faster than Radeon 7000 and 40% faster than Fermi, because the driver switches into a quality and precision mode. So when you have 2 GPUs you get only +50% framerate. Today's benchmarks can't measure quality and precision, and they can't force the hardware to a specific precision, because they talk to the driver and not directly to the hardware.
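    For reference, the theoretical peaks being argued over come from simple arithmetic; a sketch using publicly listed specs, counting one FMA as two FLOPs:
    Code:
    # peak SP throughput = shader count x clock x 2 (FMA = 2 FLOPs)
    # Radeon HD 7970 (Tahiti): 2048 x 0.925 GHz x 2 = ~3.79 TFLOPS
    # GeForce GTX 680 (GK104): 1536 x 1.006 GHz x 2 = ~3.09 TFLOPS
    Whether real workloads get anywhere near those peaks is exactly what is in dispute here.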

  7. #27
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,551

    Default

    Quote Originally Posted by Kano View Post
    but fglrx has got some weird Wine "optimisations".
    And you made sure to turn off Catalyst AI?

  8. #28
    Join Date
    Aug 2007
    Posts
    6,622

    Default

    Try it yourself.

  9. #29
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,551

    Default

    Quote Originally Posted by Kano View Post
    Try it yourself.
    Renaming the executable? I'm pretty sure that's not healthy in a package-based environment...

  10. #30
    Join Date
    Aug 2007
    Posts
    6,622

    Default

    You're overcomplicating it.
    Code:
    # Copy glxinfo to a file named "wine" so fglrx's name-based
    # application detection kicks in, then diff the reported limits.
    mkdir -p /tmp/fglrx-test
    cd /tmp/fglrx-test
    cp /usr/bin/glxinfo wine
    ./wine -l > w.l    # GL limits as seen by a binary called "wine"
    glxinfo -l > g.l   # GL limits as seen by plain glxinfo
    diff g.l w.l       # any output means the driver treats them differently
