
Thread: NVIDIA GeForce GTX 550 Ti

  1. #21
    Join Date
    Dec 2011
    Posts
    4

    Default GeForce GTS 250

    Thank you for your advice!
    Before I pull the trigger I just want to make sure this specific card is fast enough:
    Galaxy 25SGF6HX1RUV GeForce GTS 250 1GB 256-bit DDR3
    I can get it for $50 which is dirt cheap!
    Comments?

  2. #22
    Join Date
    Apr 2011
    Posts
    359

    Default

    You're totally wrong about the VGA part. A shader can process raster and texture-mapping data by itself, but using ROPs and TMUs makes it faster (if you theoretically cut out all the TMUs and ROPs, the card would still produce the same graphics at 60% of the speed). ROPs and TMUs are present only in the numbers needed to assist the stream processors, so you should count only teraflops. A 512-bit Fermi does about 700 gigaflops of 64-bit instructions at 1.3-1.4 GHz, but you have to count the simple 32-bit stream add operations too. So you multiply by 6 (FMAC = 3 ops, 64-bit dual-issue cores = 2 ops), and it's 4+ teraflops. Cell BE, for example, does 250 integer gflops or 3 single-precision teraflops; the RSX (200 gflops) plus 6 SPEs gives +1.85 tflops.

    In D3D-to-OGL translation, gflops don't matter much; what matters is the instruction set. If you have many emulation and JIT instructions, then you are fast. See the L3C part here, for example: http://en.wikipedia.org/wiki/Loongson
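
    For what it's worth, here is that multiplier arithmetic spelled out in a quick Python sketch (the x6 factor and the 700-gigaflop base figure come from the post above, not from any official spec sheet):

        # Peak-throughput arithmetic as claimed in the post above.
        base_gflops_64bit = 700                 # claimed 64-bit instruction rate at 1.3-1.4 GHz
        fmac_ops = 3                            # an FMAC counted as 3 ops
        dual_issue_ops = 2                      # 64-bit dual-issue cores counted as 2 ops
        multiplier = fmac_ops * dual_issue_ops  # = 6
        print(base_gflops_64bit * multiplier / 1000, "TFLOPS")  # -> 4.2, i.e. "4+ teraflops"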

  3. #23
    Join Date
    Apr 2011
    Posts
    359

    Default

    1) D3D-to-OGL translation has nothing to do with VGA power or generation. The best VGA is the one with good emulation instructions in its instruction set. So CUDA has 90% translation efficiency while VLIW has 30%.
    2) Wine supports state trackers for D3D and HLSL. You can install DX11 from winetricks and Wine will run it on top of OpenGL, with heavy translation for things like tessellation. So newer VGAs are better.
    3) The newest graphics engines like Unigine, Unreal 3, Cry 3, id Tech 4, and id Tech 5 are unified and API-less. So the newest games will be equal in D3D and OGL, without the need for translation. Old ones like Unreal 2 need translation, or bigger effort, to be OGL-friendly.
    4) Do as I say and buy a $90 GTX 460, do a BIOS update to free more cores, and do some overclocking.

  4. #24
    Join Date
    Dec 2011
    Posts
    4

    Default 90??

    Quote Originally Posted by artivision View Post
    4) Do as I say and buy a $90 GTX 460, do a BIOS update to free more cores, and do some overclocking.
    OK, where do I find a $90 GTX 460? They start from $160!

  5. #25
    Join Date
    Nov 2010
    Location
    Moscow, Russian Federation
    Posts
    44

    Default

    Quote Originally Posted by artivision View Post
    You're totally wrong about the VGA part...
    Making such statements without quoting the original statements that you believe to be "totally wrong" is nothing other than trolling.

    Quote Originally Posted by artivision View Post
    ... so you should count only teraflops...
    A video card end user who uses the GPU for games usually counts exactly one thing: FPS. Teraflops, VLIW4 vs. classical RISC, and all the other things are only interesting to the curious, to geeks, and to GPGPU users. This fact is pretty simple and obvious.

  6. #26
    Join Date
    Nov 2010
    Location
    Moscow, Russian Federation
    Posts
    44

    Default

    Quote Originally Posted by jarg View Post
    Galaxy 25SGF6HX1RUV GeForce GTS 250 1GB 256-bit DDR3
    DDR3 is a show-stopper here (for a comparison diagram look here: http://www.ixbt.com/video3/guide/guide-06.shtml ; the article is written in Russian, but the diagram I'm referring to - the one with "Far Cry 2" at the top right - has only English words on it and is pretty self-explanatory). Look for GDDR5 or, at the very least, GDDR3. Ordinary DDR3 uses 4x data transfers per clock, while GDDR3 is based on DDR2 and uses 2x transfers per tick. So if you do aim at a GDDR3-equipped card, make sure it has at least a 256-bit memory bus and memory chips running at the highest frequency possible.
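
    To put numbers on why the memory type matters, here is a rough peak-bandwidth calculation in Python (the clock figures are approximate reference specs for the cards discussed in this thread, not measured values):

        # Peak memory bandwidth = bus width in bytes * effective transfer rate.
        def bandwidth_gbps(bus_bits, effective_mtps):
            """bus_bits: memory bus width; effective_mtps: effective rate in MT/s."""
            return bus_bits / 8 * effective_mtps / 1000

        print(bandwidth_gbps(256, 2200))  # GTS 250, 256-bit GDDR3 @ ~2200 MT/s -> ~70.4 GB/s
        print(bandwidth_gbps(192, 4100))  # GTX 550 Ti, 192-bit GDDR5 @ ~4100 MT/s -> ~98.4 GB/s
        print(bandwidth_gbps(256, 1600))  # the same 256-bit bus with plain DDR3 -> ~51.2 GB/s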

    As for the GPU the GTS 250 card is based on vs. SC2 under Wine: my old GTS 250 from GigaByte was able to run SC2 under Wine at around 32-40 FPS when playing full screen at 1680x1050 without AA, with forced 16x aniso and in-game graphics settings all set to max except for shaders. The latter were set to "medium", which forced lighting to "low" and post-processing to "medium". Now, with a GeForce GTX 550 Ti equipped with 1GB of GDDR5 VRAM on a 192-bit bus, I get around 40-45 FPS with the same settings. Setting shaders to "high" or "ultra" resulted in a huge FPS drop with older versions of Wine. Nowadays the performance drop isn't that big, but it causes rendering glitches, so I prefer to stick with lower quality settings and play with higher FPS and a visually correct picture. Lowering the shader setting to "low" raised the FPS to 50-60 on the GTS 250. Doing the same on the 550 Ti results in smooth 60 FPS, so if you're really after smoothness and don't mind lowering quality, that's the way to go. The typical difference in speed between older generations of GPUs is illustrated by this graph: http://www.ixbt.com/video/itogi-vide...1680-pcie.html

    P.S. If you feel adventurous, have a good power supply installed in your PC, and are experienced enough to hack around with a video card BIOS reflash to unlock cores - it might be OK to head the way artivision suggests. Be prepared, though: "Fermi" cards are pretty hot and power-hungry, and you would also have to be lucky enough to find a card that suits your needs at a low enough cost on the second-hand market. If you don't want all that adventure and simply want to buy-install-play something cheap (i.e. less than 150-200 USD) yet fast enough, aim at something like a second-hand factory-overclocked GTS 250 from GigaByte with a Zalman cooler (http://www.ixbt.com/video3/images/gi...scan-front.jpg, model GV-N250OC-1GI) or anything more modern you are lucky enough to get for a good price. What to avoid: GT 240 and below, a GTX 260 with fewer than 216 stream processors unlocked, any card with a memory bus narrower than 192 bits, any card with non-GDDRx memory installed, and dual-GPU models.

  7. #27
    Join Date
    Nov 2010
    Location
    Moscow, Russian Federation
    Posts
    44

    Default

    Quote Originally Posted by artivision View Post
    1) D3D-to-OGL translation has nothing to do with VGA power or generation. The best VGA is the one with good emulation instructions in its instruction set. So CUDA has 90% translation efficiency while VLIW has 30%.
    That's exactly the point: VGA power, generation, etc. have nothing to do with the D3D-to-OGL translation done by Wine, and they also don't matter at all to the end user as long as the card does its job and renders the desired picture at 60 FPS. 90% or 30% efficiency down at the instruction-set level doesn't matter to most people out there, as not everyone is a GPGPU user and/or a geek.

    2) Wine supports state trackers for D3D and HLSL. You can install DX11 from winetricks and Wine will run it on top of OpenGL, with heavy translation for things like tessellation. So newer VGAs are better.
    Do you really believe what you wrote here :-)? Have you ever tried it at home with any of the latest DirectX 11-capable titles? If "yes" and "yes", then: did it work? If "yes" again, would you mind posting a video on YouTube that proves, for example, that the DX11 features really work in, say, "Batman" when played under Wine with native DX11 libs installed through winetricks?

    3) The newest graphics engines like Unigine, Unreal 3, Cry 3, id Tech 4, and id Tech 5 are unified and API-less. So the newest games will be equal in D3D and OGL, without the need for translation. Old ones like Unreal 2 need translation, or bigger effort, to be OGL-friendly.
    Tell that to the engine creators out there. The idTech4 engine you mentioned has an OGL-based built-in renderer and is pretty old. idTech5 was based on idTech4 and also officially uses OGL on PC; essentially it is the only "triple-A" engine out there that has an OGL rendering backend for the win32/64 platform. Unigine is a cool, Linux-friendly engine with multiple rendering backends, including OGL and various versions of D3D, but unfortunately this engine isn't widely used by gamedev companies yet (I hope the situation will change in the near future). Everything else out there officially supports only DirectX 9, 10, or 11 on PCs, despite the fact that the engines are really API-agnostic and an OGL render backend could be written as easily as an EGL or D3D9 one. SC2 is a pretty good example: its engine supports OGL (it's the API SC2 uses on Mac OS X to do rendering), but it was cut out of the Windows build of SC2 during the open beta-testing phase.

  8. #28
    Join Date
    Apr 2011
    Posts
    359

    Default

    I think we agree on most things, but I still disagree about the newest graphics engines like Unigine. In Unigine you don't have the choice (as the documents say) to create different backends (D3D, OGL); you can only auto-generate both. There is no choice of a Linux or Windows policy. And as for the $90 GTX 460, I meant used, not new.

  9. #29
    Join Date
    Nov 2010
    Location
    Moscow, Russian Federation
    Posts
    44

    Default

    Quote Originally Posted by artivision View Post
    I think we agree on most things, but I still disagree about the newest graphics engines like Unigine. In Unigine you don't have the choice (as the documents say) to create different backends (D3D, OGL); you can only auto-generate both. There is no choice of a Linux or Windows policy. And as for the $90 GTX 460, I meant used, not new.
    Indeed we agree on most things, and it's also obvious that both you and I are geek GPGPU users :-). As for Unigine - I haven't had a chance to look into the sources/SDK/anything like that; I've only run their products, including benchmarks like Tropics and Heaven and the OilRush game. I tried them on both Linux and Windows hosts, and from what I've seen, the engine lets you select the rendering backend you wish to use (of course the required API has to be supported by the underlying OS), and the visual results for, say, the DX10 vs. OpenGL 3/4 backends look pretty much the same. Knowing that they have ported the engine to Android (so they must be using something like EGL/OpenGL ES there) and that the feature set available through D3D9 is pretty limited compared to D3D10/11 or OGL3/4 makes me believe that the engine itself is pretty modular, and that adding another rendering backend using yet another API (PowerVR, Glide, name-anything-you-like) shouldn't be a big deal. IMO that's just the way I expect a modern game engine to be - I don't like being artificially limited to a selected API (or its subset), as that limits the portability of the resulting product - and that's not a good thing in today's gaming market, where you want to support as many target platforms/devices as possible. Xbox, PS3, iOS, Android, Win32/64, Linux/FreeBSD, Mac OS X - the more the better.

    Actually this discussion is getting a bit offtopic here, but to conclude I want to mention one thing: as a developer I don't want to use any of these APIs and would prefer direct access to hardware features. Multiple levels of abstraction are a thing that plagues the PC platform and drains a lot of performance. Yeah, having a "uniform spherical hardware in a vacuum" exposed through a higher-level API is convenient when writing some quick-n-hackish 3D app using a utility lib like freeglut, but as soon as things get complicated you eventually hit some strange behavior that can only be explained by diving deeper into the real hardware capabilities, and you end up finding out that you're actually hitting some hw limitation that the API ICD silently tries to work around using a slow-as-hell software fallback. That's why I like OpenCL and look forward to its wide adoption - it can be used in a way that gives you almost direct access to the underlying ASIC, so you can drive it up to its limits.
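
    To illustrate that last point, a minimal capability query through OpenCL could look like the sketch below (assuming the pyopencl bindings are installed; the attributes used are pyopencl's standard Device info properties):

        import pyopencl as cl

        # Walk every OpenCL platform/device and print the raw hardware limits
        # that a higher-level graphics API would normally hide behind its abstraction.
        for platform in cl.get_platforms():
            for device in platform.get_devices():
                print(device.name)
                print("  compute units:      ", device.max_compute_units)
                print("  max work-group size:", device.max_work_group_size)
                print("  local mem (KiB):    ", device.local_mem_size // 1024)
                print("  global mem (MiB):   ", device.global_mem_size // (1024 * 1024))
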
    Last edited by lexa2; 12-13-2011 at 02:03 PM.

  10. #30
    Join Date
    Apr 2011
    Posts
    359

    Default

    Totally agree that OpenCL and LLVM are the future in our situation. I will go a step farther and say that non-reconfigurable processors must go extinct. See "Tabula Abax 8-20 floors" for example, and imagine it flash-based (10+ times more energy efficient). It could even match an 8x Fermi server in a cellphone with a soft IP core like MIPS or OpenCores.
