
Thread: Gallium3D / LLVMpipe With LLVM 2.8

  1. #31 (smitty3268)

    Can anyone explain what the difference between LLVMpipe and Gallivm is? Both seem to be projects that use LLVM, but why do they both exist? As I understand it, LLVMpipe is based on the same design as a normal softpipe CPU driver; does Gallivm take a different approach somehow?

  2. #32

    Quote Originally Posted by pingufunkybeat View Post
    Crysis needs at least Windows Vista, not sure about the others.

    Which version of Windows are you trying to run them with?

    You didn't understand my last sentence...

    Even Crysis runs faster than 5 fps on a 4850...

  3. #33 (V!NCENT)

    I buy a new computer every six years. By the time UT3 was released and I still dual-booted Windows, my then six-year-old 2800 XP+, 1GB RAM and 9800 pro could still run it at medium settings with a res of 1024x768 @ 24-30fps.

    Now I have a Phenom 9950 X4, 8GB RAM and a 5770, and I don't want to upgrade whenever I get a 'sorry, unsupported' error simply because my hardware can't do some OpenGL version introduced in the next four years, while my rig would still be able to somewhat cope with a semi-HW fallback.

    Remember that my old 9800 pro was OpenGL 1.5! :/

  4. #34 (V!NCENT)

    PS: Almost 3 years old already. Damn, time flies...

  5. #35

    Quote Originally Posted by V!NCENT View Post
    I buy a new computer every six years. By the time UT3 was released and I still dual-booted Windows, my then six-year-old 2800 XP+, 1GB RAM and 9800 pro could still run it at medium settings with a res of 1024x768 @ 24-30fps.

    Now I have a Phenom 9950 X4, 8GB RAM and a 5770, and I don't want to upgrade whenever I get a 'sorry, unsupported' error simply because my hardware can't do some OpenGL version introduced in the next four years, while my rig would still be able to somewhat cope with a semi-HW fallback.

    Remember that my old 9800 pro was OpenGL 1.5! :/
    Your rig should be able to cope with full-HW rendering, no fallback needed, for a few years. Most games still offer D3D9/GL2.1 codepaths and you have D3D11/GL4.1 hardware! It's highly unlikely you will ever need to fall back to software rendering during the next 4-5 years (and probably more).

    Modern hardware has evolved faster than your typical software/games (a mid-range card can play everything at 1920x1200), so unless you are using specialized programs you probably won't face performance issues during your system's lifetime. Add an SSD next year or so and you are golden! (Not kidding, a fast SSD will make your run-of-the-mill netbook feel like a supercomputer.)

  6. #36

    Quote Originally Posted by smitty3268 View Post
    Can anyone explain what the difference between LLVMpipe and Gallivm is? Both seem to be projects to use LLVM, but why do they both exist? As I understand it, LLVMpipe is based on the same design as a normal softpipe CPU driver, does Gallivm take a different approach somehow?
    LLVMpipe is a driver that uses LLVM to turn TGSI into CPU code so shaders (and possibly other operations) don't have to be interpreted at runtime. I imagine there is a TGSI to LLVM IR converter followed by LLVM itself.
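
    To give a flavour of the idea, here is a toy sketch using LLVM's C API (circa the 2.x releases). This is not Mesa's actual gallivm code, and "madd" is just a made-up stand-in for a single compiled shader instruction:

    Code:
    /* Build dst = a * b + c as LLVM IR at runtime and JIT it to native
     * code, instead of interpreting it. LLVMpipe does the same thing
     * for whole TGSI shaders. Toy example, not Mesa code. */
    #include <stdio.h>
    #include <llvm-c/Core.h>
    #include <llvm-c/ExecutionEngine.h>
    #include <llvm-c/Target.h>

    int main(void)
    {
        LLVMLinkInJIT();              /* pull in the (pre-MCJIT) JIT */
        LLVMInitializeNativeTarget(); /* emit code for the host CPU */

        LLVMModuleRef mod = LLVMModuleCreateWithName("toy_shader");

        /* float madd(float a, float b, float c) { return a * b + c; } */
        LLVMTypeRef f32 = LLVMFloatType();
        LLVMTypeRef params[3] = { f32, f32, f32 };
        LLVMValueRef fn = LLVMAddFunction(mod, "madd",
                                          LLVMFunctionType(f32, params, 3, 0));

        LLVMBuilderRef b = LLVMCreateBuilder();
        LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(fn, "entry"));
        LLVMValueRef mul = LLVMBuildFMul(b, LLVMGetParam(fn, 0),
                                         LLVMGetParam(fn, 1), "mul");
        LLVMBuildRet(b, LLVMBuildFAdd(b, mul, LLVMGetParam(fn, 2), "add"));

        /* JIT-compile the module, then call the generated native code */
        LLVMExecutionEngineRef ee;
        char *err = NULL;
        if (LLVMCreateJITCompilerForModule(&ee, mod, 2, &err)) {
            fprintf(stderr, "JIT error: %s\n", err);
            return 1;
        }
        LLVMGenericValueRef args[3] = {
            LLVMCreateGenericValueOfFloat(f32, 2.0),
            LLVMCreateGenericValueOfFloat(f32, 3.0),
            LLVMCreateGenericValueOfFloat(f32, 1.0),
        };
        LLVMGenericValueRef res = LLVMRunFunction(ee, fn, 3, args);
        printf("madd(2,3,1) = %f\n", LLVMGenericValueToFloat(f32, res));
        return 0;
    }

    Scale that up to whole shaders (gallivm works on SoA vectors) and you have roughly the gist of it.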

    Gallivm is part of LLVMpipe -- think of it as LLVM (C++) wrapped up for use in a C driver -- but it is also likely to be used "above the line" for *generating* TGSI (as part of the OpenCL stack and possibly as part of a future GLSL compiler).

    From the llvmpipe README: "The driver-independent parts of the LLVM / Gallium code are found in src/gallium/auxiliary/gallivm/."
    http://cgit.freedesktop.org/mesa/mes...lvmpipe/README

    Gallivm is in the src/gallium/auxiliary folder, along with other "helper" routines like draw.

    Disclaimer - I found this on the internet so while it was probably true at one time it may no longer be true today

  7. #37 (V!NCENT)

    I don't trust an SSD with my data yet. Better to let this tech mature a little more, and even then I'd have to do some homework. Of course I could put /home on my HDD, but file operations currently don't feel too slow for my liking.

    And upgrading midway through the lifetime of my rig is a little pointless; I paid money for a computer that I configured to last six years.

  8. #38

    Quote Originally Posted by V!NCENT View Post
    I don't trust an SSD with my data yet. Better to let this tech mature a little more, and even then I'd have to do some homework. Of course I could put /home on my HDD, but file operations currently don't feel too slow for my liking.

    And upgrading midway through the lifetime of my rig is a little pointless; I paid money for a computer that I configured to last six years.
    Fair enough. SSDs are rather more reliable than spinning ceramic platters, however.

  9. #39 (curaga)

    'til they reach their write limit, or if they have flaky firmware, or a flaky controller and don't save data in the first place, or...

  10. #40

    Quote Originally Posted by curaga View Post
    'til they reach their write limit, or if they have flaky firmware, or a flaky controller and don't save data in the first place, or...
    Intel SSDs are rather more reliable than spinning ceramic platters, then. :P

    The nice thing is that SMART data from SSDs is actually accurate, so you can predict failures well in advance. Much nicer than spinning disks, which can fail catastrophically without any prior indication. Google's big hard-disk study concluded that "models based on SMART parameters alone are unlikely to be useful for predicting individual drive failures."

    After a year of hard use (1.65TB of writes, virtual machines, browser caches, etc.) my Intel SSD (X25-M G2) has a wearout indicator of 98/100 and a single reallocated sector. The drive is considered worn out when the indicator reaches 10/100, so at the current rate of ~2 points per year the remaining 88 points are good for roughly 44 years, comfortably more than 10. Not bad!
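
    For the curious, that estimate is just a linear extrapolation (it assumes my write rate stays constant). A trivial sketch with the numbers above hard-coded:

    Code:
    /* Back-of-the-envelope SSD lifetime estimate from the SMART media
     * wearout indicator: it counts down from 100, and the drive is
     * considered worn out at 10. Numbers are from the post above. */
    #include <stdio.h>

    int main(void)
    {
        double start = 100.0, now = 98.0, limit = 10.0;
        double years_elapsed = 1.0;

        double points_per_year = (start - now) / years_elapsed;  /* 2  */
        double years_left = (now - limit) / points_per_year;     /* 44 */

        printf("wear rate: %.1f points/year\n", points_per_year);
        printf("estimated years left: %.0f\n", years_left);
        return 0;
    }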

    (I'm gaining levels in thread derailment, btw.)
