
Thread: Intel Ivy Bridge: UXA vs. SNA - Updated Benchmarks

  1. #11
    Join Date
    Sep 2012
    Posts
    665

    Default

    Quote Originally Posted by kigurai View Post
    Showing completion time as frequency would be really annoying. Frequency tends to mean that something is repeating, which is not what's going on in the tests here.
Well, [nb of tasks]/[time] is not only a frequency, but also a speed. And that is meaningful even for single test units (how much time does it need to complete a given test => at what speed does it run through this given test).

  2. #12
    Join Date
    Nov 2012
    Location
    France
    Posts
    564

    Default

    Quote Originally Posted by mendieta View Post
    I think two simple things would make tests easier to digest:

1. always show things in a way where more is better. For instance, if something is measured in seconds to completion, show it as a frequency (1 / time-to-completion)
2. Once a battery of tests is performed, each configuration gets a geometric average of all its measurements, so you get one global number for each configuration

Optionally, one could add one final plot where all averages are normalized to the slowest. Tom's Hardware presents its reviews like that, and it's pretty awesome.

    Back to Intel, I am so glad to see full open source support. My next desktop rig will be Intel (for the first time, ever)

    Cheers!
    +1 for 2., but for 1., if the test is about "Less is better", bars and the "Less is better" text could be colored in dark red instead of dark blue.
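The two suggestions quoted above (convert times to rates so more is better, then take a geometric mean per configuration and optionally normalize to the slowest) can be sketched as follows. The configuration names and all timing numbers here are made-up placeholders, not results from the article:

```python
from math import prod

# Made-up completion times in seconds for two hypothetical configurations
# across the same battery of tests (illustrative numbers only).
times = {
    "UXA": [120.0, 45.0, 8.0],
    "SNA": [90.0, 30.0, 6.0],
}

# 1. Convert "seconds to completion" into a rate so that more is better.
rates = {cfg: [1.0 / t for t in ts] for cfg, ts in times.items()}

# 2. Collapse each configuration's results into a single geometric mean.
geomean = {cfg: prod(rs) ** (1.0 / len(rs)) for cfg, rs in rates.items()}

# Optionally, normalize every average to the slowest configuration,
# so the slowest scores exactly 1.0 and everything else is a speedup factor.
slowest = min(geomean.values())
normalized = {cfg: g / slowest for cfg, g in geomean.items()}

print(normalized)
```

The geometric mean (rather than the arithmetic mean) is the usual choice here because it is invariant to the units of the individual tests: rescaling one benchmark's numbers multiplies every configuration's score equally.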

  3. #13
    Join Date
    Jul 2008
    Location
    Berlin, Germany
    Posts
    821

    Default

    Quote Originally Posted by mendieta View Post
    I think two simple things would make tests easier to digest:

1. always show things in a way where more is better. For instance, if something is measured in seconds to completion, show it as a frequency (1 / time-to-completion)
2. Once a battery of tests is performed, each configuration gets a geometric average of all its measurements, so you get one global number for each configuration
    I disagree that making the article easier to digest in that particular regard is a worthwhile goal. This would just make it easier for readers to skim the article, counting wins and losses. In my opinion, users need to be educated that the benchmark numbers have to be viewed in context and their validity and significance be understood, and that it is not possible to boil down the measured performance to a single number. Figuring out whether more or less is better in a figure is the first tiny step to this.

    Readers who are not willing to read and understand the article can simply skip to the conclusion, they just won't be able to delude themselves into thinking that they formed their opinion based on the data.

  4. #14
    Join Date
    Jul 2008
    Location
    Berlin, Germany
    Posts
    821

    Default

    Quote Originally Posted by erendorn View Post
Well, [nb of tasks]/[time] is not only a frequency, but also a speed. And that is meaningful even for single test units (how much time does it need to complete a given test => at what speed does it run through this given test).
Some publications just measure time to completion, and then calculate an arbitrary score from it. E.g. you sometimes find a Linux kernel compile score given as 1000/s. (So if the compile finishes in 1000 seconds, one gets a score of 1, and if it finishes in 500 seconds, a score of 2.)
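That scoring scheme is just a reciprocal with a fixed reference time. A minimal sketch (the function name is mine; the 1000-second reference comes from the post above):

```python
def compile_score(seconds: float) -> float:
    """Arbitrary score = 1000 / time-to-completion, as described above."""
    return 1000.0 / seconds

# A 1000 s compile scores 1, a 500 s compile scores 2.
print(compile_score(1000))  # 1.0
print(compile_score(500))   # 2.0
```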

  5. #15
    Join Date
    Apr 2009
    Posts
    529

    Default

    Quote Originally Posted by erendorn View Post
    Well [nb of tasks]/[time] is not only a frequency, but also a speed. And that is meaningful even for single test units (how many time does it need to complete a given test => at which speed does it run through this given test).
Exactly! I'm a physicist, so I said "frequency", but the concept is exactly as you stated it.

  6. #16
    Join Date
    Apr 2009
    Posts
    529

    Default

    Quote Originally Posted by Rexilion View Post
    Same here, just too bad that there are only integrated variants of this great product :/ .
Yeah, IGPs are bad, but APUs (on-die graphics) are advancing at a tremendous rate. That's probably where the future lies: CPUs with lots of cores, and fully integrated in-CPU graphics. Cheers!

  7. #17
    Join Date
    Apr 2009
    Posts
    529

    Default

    Quote Originally Posted by mendieta View Post
Exactly! I'm a physicist, so I said "frequency", but the concept is exactly as you stated it.
And to be concrete: you can use a convenient time unit. For instance, for the kernel, you may want to report "number of kernel compiles per hour", which will hopefully be a few.

    We may want to pull Michael into this conversation
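The "compiles per hour" conversion suggested above is just 3600 divided by the build time in seconds. A tiny sketch (the function name and the 1200 s example build time are mine):

```python
def compiles_per_hour(seconds_per_compile: float) -> float:
    # There are 3600 seconds in an hour, so rate = 3600 / time per compile.
    return 3600.0 / seconds_per_compile

# A hypothetical 20-minute (1200 s) kernel build comes out to 3 compiles/hour.
print(compiles_per_hour(1200))  # 3.0
```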

  8. #18
    Join Date
    Jan 2011
    Posts
    360

    Default

    These benchmarks show an incredible improvement on some operations, but I wonder how that translates in real-world use; it seems that 2D operations are already very fast on my desktop (my netbook on the other hand is slow but I always thought it was due to the processor, not the GPU). The only thing that I (think I) understand is the Firefox canvas test; a 3 times improvement in drawing speed could be useful at times.

    With regards to PTS graphs, +1 for always using units where “more is better”, or (maybe simpler) drawing bars in a different color when “less is better”.

  9. #19

    Default

    Quote Originally Posted by stqn View Post
    These benchmarks show an incredible improvement on some operations, but I wonder how that translates in real-world use; it seems that 2D operations are already very fast on my desktop (my netbook on the other hand is slow but I always thought it was due to the processor, not the GPU). The only thing that I (think I) understand is the Firefox canvas test; a 3 times improvement in drawing speed could be useful at times.
Indeed, the reality as shown by those benchmarks is that the application/toolkit is more often the rate-limiting factor in 2D tasks. For example, the qgears2 "XRender" benchmark does all the image processing and shape rasterisation client-side and fails to use XRender at all for GPU offload. The gtkperf demos spend more time doing runtime type checking of pointers than actually rendering. About the only time the DDX affects those results at all is when it performs atrociously.

Firefox is the standout example; everything from page loading to scrolling and canvas noticeably benefits from improvements in the DDX. What is harder to measure are the latency improvements that result in X requiring less CPU time to do the same amount of work - especially on these "big core" processors. Where this work matters most is on those slow devices, such as the Atom netbook and its descendants. You would be surprised by how much of what you ascribed to poor hardware was in fact atrocious software and drivers.

  10. #20
    Join Date
    Sep 2012
    Posts
    665

    Default

    Quote Originally Posted by ickle View Post
Where this work matters most is on those slow devices, such as the Atom netbook and its descendants. You would be surprised by how much of what you ascribed to poor hardware was in fact atrocious software and drivers.
This. Of course, on my overclocked 2600K + discrete GPU, the refresh rate of my screen is the limit, but I have some Atom netbooks and HTPCs too, and they could do with a bit more snappiness.
