
Thread: TitaniumGL 3D drivers (linux version)


  1. #1
    Join Date
    May 2011
    Posts
    107

    TitaniumGL 3D drivers (linux version)

    ohai Phoronix users, it's me again,
    delivering another piece of my software:

    Linux version of TitaniumGL is now finally released!



    TitaniumGL is a FREEWARE driver architecture. The project's goal is to provide OpenGL on graphics cards with broken, bad, or missing OpenGL drivers.
    ( WARNING: The author does not possess an OpenGL license from SGI, and makes no claim that TitaniumGL is in any way a compatible replacement for OpenGL or associated with SGI. USE AT YOUR OWN RISK. )

    *************************
    WHAT IS THIS, I DO NOT EVEN...
    *************************
    The TitaniumGL driver is meant for computers where no OpenGL drivers are available for the system. It is compatible with several platforms, and now that Cthulhu no longer comes out of the computer when the Linux version is used, I have released it publicly.

    *****************
    HOW DOES IT WORK?
    *****************
    Just like any other OpenGL implementation for Linux, this is a libGL.so.1 that you copy into your system library directory (read the readme for more information). No other installation steps are required.
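    A minimal install sketch of that step (the exact library directory is distro-specific and the readme is authoritative; `/usr/lib32` here is only an assumed example for a 32-bit library on a 64-bit distro):

    ```shell
    # First check where the dynamic linker currently resolves libGL.so.1:
    ldconfig -p | grep libGL.so.1
    # Back up the distribution's libGL, then drop TitaniumGL's copy in place:
    sudo cp /usr/lib32/libGL.so.1 /usr/lib32/libGL.so.1.bak
    sudo cp TitaniumGL_linux_version/libGL.so.1 /usr/lib32/libGL.so.1
    ```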

    TitaniumGL renders the scene on all of your CPU cores (well, up to 4 max). It can run almost ALL popular open-source games at PLAYABLE speed (over 24 fps) on a Core 2 Duo CPU at enjoyable screen resolutions.

    Warning: TitaniumGL is NOT meant to run CAD applications or built-in legacy/useless gear-renderer tools (!!!!!!!!); it is ONLY meant to run REAL games! (However, other kinds of software may also run; there is a compatibility list available on the webpage.)


    Why wasn't it released earlier?
    ***********************
    There were a lot of bugs and almost unfixable compatibility issues, because the Linux graphics API and X11's documentation are simply useless, and the quality of the Linux rendering infrastructure simply makes no sense. My code also had a few bugs that only caused a mess on the Linux platform. But they no longer exist, so it is a good time to release the software.

    Limitations:
    ********
    -The software is somewhat slower under Linux than the Windows version, due to the Linux kernel's thread handling.
    -On some systems, for some strange reason, only one CPU core is used.
    -On some systems, rendering speed is capped at VSYNC.
    -On some systems, rendering speed is capped at 70 or 40 fps, or some similarly strange number.

    I wish you all happy gaming.
    Download and read more information on the webpage and in the readme file.

    download: http://titaniumgl.tk

  2. #2
    Join Date
    Jun 2010
    Location
    ฿ 16LDJ6Hrd1oN3nCoFL7BypHSEYL84ca1JR
    Posts
    1,052


    Quote Originally Posted by Geri View Post
    and the quality of linux rendering infrastructures simply makes no sense
    Wat.

    Maybe people are wondering:
    It really is only a libGL.so.1 and nothing else. It doesn't even need much in the way of dependencies:
    Code:
     % ldd libGL.so.1
    ldd: warning: you do not have execution permission for `./libGL.so.1'
            linux-gate.so.1 =>  (0xf7741000)
            libX11.so.6 => /usr/lib32/libX11.so.6 (0xf7431000)
            libstdc++.so.6 => /usr/lib32/libstdc++.so.6 (0xf7348000)
            libm.so.6 => /usr/lib32/libm.so.6 (0xf731a000)
            libgcc_s.so.1 => /usr/lib32/libgcc_s.so.1 (0xf72ff000)
            libpthread.so.0 => /usr/lib32/libpthread.so.0 (0xf72e3000)
            libc.so.6 => /usr/lib32/libc.so.6 (0xf7140000)
            libxcb.so.1 => /usr/lib32/libxcb.so.1 (0xf711e000)
            libdl.so.2 => /usr/lib32/libdl.so.2 (0xf7119000)
            /lib/ld-linux.so.2 (0xf7742000)
            libXau.so.6 => /usr/lib32/libXau.so.6 (0xf7115000)
            libXdmcp.so.6 => /usr/lib32/libXdmcp.so.6 (0xf710d000)
    You can use LD_PRELOAD if you don't want to replace system files.
    Code:
     % LD_PRELOAD=/home/chris/TitaniumGL_linux_version/libGL.so.1 glxinfo32| grep -E 'version|render'
    direct rendering: Yes
    server glx version string: 1.3
    client glx version string: GLX_ARB_create_context GLX_ARB_get_proc_address GLX_SGIX_fbconfig
    GLX version: 1.2
    OpenGL renderer string: TitaniumGL/4 THREADs/SOFTWARE RENDERING/4 TMUs
    OpenGL version string: 1.4 v2009-2012/3/08 (c)Kovacs Gergo
    It does say on his website that it's only OpenGL version 1.4...

    He says it's not for glxgears but it runs (but looks funny :)):
    Code:
     % LD_PRELOAD=/home/chris/TitaniumGL_linux_version/libGL.so.1 glxgears32
    915 frames in 5.0 seconds = 182.916 FPS
    899 frames in 5.0 seconds = 179.675 FPS
    832 frames in 5.0 seconds = 166.311 FPS
    I don't have many 32-bit games here... The Second Life client runs, but rendering is very much not accurate.

    Every time I close an X window while using that library, it opens the homepage in my browser. Wtf.

  3. #3
    Join Date
    May 2011
    Posts
    107


    There will be a 64-bit version eventually, but not in the coming months. I don't have much time these days; I have to take care of other projects as well.

  4. #4
    Join Date
    Jun 2010
    Location
    ฿ 16LDJ6Hrd1oN3nCoFL7BypHSEYL84ca1JR
    Posts
    1,052


    Since in the last 5+ years there has been no real reason for most people to compile software for 32-bit, a 64-bit version would probably be more useful to people (assuming it's faster than llvmpipe?)...


    Also, Windows programmers...
    Code:
     % strings libGL.so.1 | grep 'C:\\'
    C:\%s
    C:\Windows\%s
    C:\Documents and Settings\%s
    C:\%s.txt
    C:\Windows\%s.txt
    C:\%s.txt.txt
    C:\Windows\%s.txt.txt
    C:\%s.doc
    C:\Windows\%s.doc
    C:\%s.doc.txt
    C:\Windows\%s.doc.txt
    C:\%s.doc.doc
    C:\Windows\%s.doc.doc
    C:\%s.txt.doc
    C:\Windows\%s.txt.doc
    C:\Documents and Settings\%s.txt
    C:\Documents and Settings\%s.txt.txt
    C:\Documents and Settings\%s.doc
    C:\Documents and Settings\%s.doc.txt
    C:\Documents and Settings\%s.doc.doc
    C:\Documents and Settings\%s.txt.doc
    What about people that don't use firefox?
    Code:
     % strings libGL.so.1 | grep 'http://'
    firefox http://lgfxadserv.no-ip.org/titaniumgladssrvc/titaniumgladssrvc.php
    http://LegendgrafiX.tk
    http://TitaniumGL.tk

  5. #5
    Join Date
    May 2011
    Posts
    107


    assuming it's faster than llvmpipe
    I haven't compared them.

    Also, Windows programmers...
    oooops


    What about people that don't use firefox?
    They can still enjoy this awesome software without making money for me :|
    But no problem; I'm happy to see that people love my software so much that they even look at the code inside it.

  6. #6
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,185


    Maybe I'm remembering wrong, but wasn't TitaniumGL an old fork of Mesa?

  7. #7
    Join Date
    Jul 2008
    Location
    Greece
    Posts
    3,798


    Quote Originally Posted by Geri View Post
    they can still enjoy this awesome software without creating money for meh :|
    On Linux, you open URLs not with "firefox <URL>" but with "xdg-open <URL>". It's similar to the Windows "start" command; it will use the user's browser (whatever that is) to open links.
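    A sketch of that fallback logic in shell (the `open_url` function name is mine, not anything shipped with TitaniumGL):

    ```shell
    # Prefer xdg-open (part of xdg-utils, present on virtually every desktop
    # distro); fall back to the user's $BROWSER, then to firefox as a last resort.
    open_url() {
        url="$1"
        if command -v xdg-open >/dev/null 2>&1; then
            xdg-open "$url"
        elif [ -n "${BROWSER:-}" ]; then
            "$BROWSER" "$url"
        else
            firefox "$url"
        fi
    }
    ```

    For example, `open_url http://TitaniumGL.tk` launches whatever browser the user's desktop environment has configured, rather than hard-coding one.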

    I guess I just doubled your income. Do you need my PayPal account? ;-)
    Last edited by RealNC; 03-09-2012 at 01:05 PM.

  8. #8
    Join Date
    Jan 2007
    Posts
    459


    Quote Originally Posted by ChrisXY View Post
    Wat.


    You can use LD_PRELOAD if you don't want to replace system files.
    Code:
     % LD_PRELOAD=/home/chris/TitaniumGL_linux_version/libGL.so.1 glxinfo32| grep -E 'version|render'
    direct rendering: Yes
    server glx version string: 1.3
    client glx version string: GLX_ARB_create_context GLX_ARB_get_proc_address GLX_SGIX_fbconfig
    GLX version: 1.2
    OpenGL renderer string: TitaniumGL/4 THREADs/SOFTWARE RENDERING/4 TMUs
    OpenGL version string: 1.4 v2009-2012/3/08 (c)Kovacs Gergo
    I'm not sure if it's distro-agnostic, but you could probably use the /etc/ld.so.preload file instead, i.e. something like
    echo /home/chris/TitaniumGL_linux_version/libGL.so.1 > /etc/ld.so.preload
    (Note that ld.so.preload applies system-wide, to every dynamically linked binary.)
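    A safer alternative to the system-wide preload file is a small per-application wrapper script (the `titanium-run` name and the library path are examples, not anything from TitaniumGL itself):

    ```shell
    # Create a wrapper that preloads TitaniumGL only for the command it launches.
    mkdir -p "$HOME/bin"
    cat > "$HOME/bin/titanium-run" <<'EOF'
    #!/bin/sh
    # $HOME and $@ expand at run time, not at creation time (heredoc is quoted).
    exec env LD_PRELOAD="$HOME/TitaniumGL_linux_version/libGL.so.1" "$@"
    EOF
    chmod +x "$HOME/bin/titanium-run"
    ```

    Then e.g. `~/bin/titanium-run glxgears32`; if the preloaded path doesn't exist, ld.so just prints a warning and continues with the system libGL.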

    Geri:there will be 64 bit version, once. but not in these months. i dont have so mutch time in these days, i must care about different projects also.
    What are these "different projects" you also care about? Anything we might be interested in too?

    BTW Geri, why are you making it "freeware" (very good) but not putting it on GitHub and releasing the code (not so good)? And did you write a lot of benchmarked SIMD code in your routines, or did you pull in a fast third-party SIMD library like Eigen
    (http://eigen.tuxfamily.org/index.php?title=Main_Page) to make your life easier and get the far better speed than llvmpipe etc.?
    Last edited by popper; 03-10-2012 at 09:23 AM.

  9. #9
    Join Date
    May 2011
    Posts
    107


    ,,what are these "different projects" you also care about, anything we might be interested in too ?''

    1. I have worked for the past few weeks without a pause (24/7 minus sleeping) on my software. I must rest a bit.

    2. I have a 3D RPG maker and I need to fix it under Windows with Intel GPUs. Something is terribly wrong with the Intel graphics drivers for Windows; I can run it on Intel now, but it still falls apart. I don't have an Intel test machine, so it's FUGLY work to do.

    3. GENERAL THINKING FROM THE INDUSTRY - I am working on an experimental real-time ray tracer for the CPU. I want to get rid of GPUs; I want to get rid of rasterization altogether. We are living in the last minutes of the era of raster-based 3D. The rendering mechanism we still use, essentially the one 3dfx brought into wide use, will maybe start dying soon. We don't need GPUs at all. IMHO, attempting to run generic code on a GPU is the wrong way to create applications, and it is only pushed by NVIDIA because they can't ship x86(-64) CPUs - not only because they have no license to do it, but because they don't have the technology to build one (NVIDIA's x86 CPU cores are probably still stuck around 500-600 MHz in their laboratories).

    The problem with OpenCL/CUDA/shaders and everything related to them is that they don't give you the ability to create real applications. They create an asymmetric platform, and no serious application has EVER been written in them, because they are useless to anyone who wants to build a real application. This conception produces an incohesive, non-monolithic style of programming that cannot be written in simple, algorithmic ways. Yes, I know everyone with an interest in AMD, NVIDIA, PowerVR, and related products says the opposite. But in reality, no real programmer will ever touch these platforms of his own free will. This asymmetric style of development multiplies development time, and in some cases it is simply useless. 90% of algorithms cannot be efficiently ported to GPUs, because in the real world an application is not just a bunch of data flowing over a pipeline. You also cannot use a GPU the way we use the FPU: reaching the FPU costs just a few clock cycles, while utilizing the GPU needs a driver and an operating system; it cannot simply be reached with interrupts. That alone means this technology will never step past this point.

    Creating a real application works like this: the programmer writes functions, arrays, loops, and ifs in a COHESIVE way through the constructs of the chosen programming language, and the application then uses the operating system's libraries to read files, get the mouse cursor, and so on. That is impossible with the GPGPU conception, and it is the main reason nobody uses it (except the never-heard-of projects that directly target the GPGPU industry). This programming "ideology" hangs on the umbilical cord of NVIDIA and the other corporations invested in rasterization, because their products rely on it - the newest GPUs are still just tweaked Voodoo1s, with additional circuits bolted on for general-purpose functionality and shaders. And they CANNOT simply create a new kind of product, because even if they can see beyond this, they depend on development methods built on the previous, wrong style of software creation. I also decided to pull the last skin off my foxes; this is why TitaniumGL is released now. It is still just a rasterizer. When the time comes, I will switch to software ray tracing in my products and press Shift-Delete on my old GPU-related code. It is worth nothing.

    I investigated the possibility of a real-time ray-tracing architecture based ONLY on pure algorithmic implementations, and I found that we (the whole industry) have been tricked - again. In reality, we already reached the point where ray tracing could be made faster than rasterization around 2006-2009. By my calculations, ray-tracing hardware could be built from roughly the same transistor budget as the raster hardware of recent years, while allowing a basically unlimited polygon count (~1-20 billion distinct polygons in real time) at full-HD resolution, with shadows and refraction (i.e., truly ray traced), above 24 fps. Of course, if NVIDIA built hardware like this, ALL of their techniques could go literally into the garbage and they would have to start from zero. For those into these things, this was the real reason NVIDIA bought PhysX back then: they lacked the proper technology to reach this goal. Shamefully, they still lack it; PhysX is not suited to the very specific kind of physical simulation needed for real-time ray tracing. And the real-time ray tracers implemented on GPUs today are still just generic-ish programs by individuals who wish the whole conception into the garbage and would rather code a monolithic environment instead.

    So I decided to jump off this boat, which will sink - OpenCL, DirectX, OpenGL, CUDA, shaders, GPUs, the whole GPGPU conception, whatever. And no, GPGPU cannot become more than it is: to be more, it would need hard disks connected, it would need to boot Windows (or Linux, doesn't matter), it would need to be able to run Firefox... at which point it would have become a CPU and would no longer be a GPU. The definition of GPGPU contains its conceptual failure in itself. So I decided to try to create a real-time ray tracer; however, I am unsuccessful at the moment, because I have a really large number of bugs (and some limitations I still accept, because I don't want platform-specific things in the code). My ray tracer is a very epic, probably undoable, painful project. It is still falling apart, produces ugly graphics, and the effects are not yet properly implemented. I must force it together until I get some enjoyable graphics quality; right now it still looks like some bad Quake 2 clone from 1993. But the dinosaurs are going to die out; it cannot be avoided. And I want to have a finished, usable, good technology when (if?) that happens. This project has real-time priority in my brain, and it is what really keeps me from proceeding with the others.

    "SIMD"
    Oh, I accidentally partly answered this already. No, I don't use SIMD; I don't like to ruin my algorithms. TitaniumGL doesn't even have inline SSE or any kind of inline assembly - it is pure algorithm. However, some of my other projects may have inline assembly, though not as a concept or base thing, just to implement a simple function.
    Last edited by Geri; 03-10-2012 at 10:26 AM.

  10. #10
    Join Date
    Jul 2008
    Location
    Greece
    Posts
    3,798


    How do you intend to run vastly parallel algorithms on the CPU then? The GPU can run them orders of magnitude faster. For university (numerical analysis), I wrote some C code to solve matrices (for linear equations). No matter how good the algorithm was (even taking brain-dead shortcuts like not checking for divisions by zero), as soon as it was ported to CUDA, it solved them dozens of times faster.

    Unless CPUs can do that stuff, GPUs are here to stay.
