
Thread: Windows 7 & Windows 8 vs. Ubuntu 13.04 & Fedora 18

  1. #21

    Default

    Quote Originally Posted by smitty3268 View Post
    Edit: you are supposed to be able to go to the results on OpenBenchmarking.org somehow, but I can't figure it out. I thought there was supposed to be a link in all the charts, but I can't find one.

    If you can get to it, I think it records a list of the glxinfo results, which would have the extensions available, so you could check whether S3TC is showing up there.
    If you are on a modern web browser (anything supporting SVG, i.e. most anything from the past few years aside from IE), you get the SVG graphs, which all have links to the OpenBenchmarking.org results... But if you're using AdBlock or similar, it seems to think the result graphs are ads, so it might be blocking you from seeing them.

    Yes, all of the key system log files are on OpenBenchmarking.org.

  2. #22
    Join Date
    Oct 2008
    Posts
    3,219

    Default

    Quote Originally Posted by Michael View Post
    If you are on a modern web browser (anything supporting SVG, i.e. most anything from the past few years aside from IE), you get the SVG graphs, which all have links to the OpenBenchmarking.org results... But if you're using AdBlock or similar, it seems to think the result graphs are ads, so it might be blocking you from seeing them.
    Guilty

    Oh, that's weird. I tried opening the site in Chrome (26 on Windows) and I'm getting the images instead of SVG. Every other browser is giving me SVG. Even IE.

  3. #23
    Join Date
    Jan 2011
    Posts
    470

    Default

    Quote Originally Posted by smitty3268 View Post
    I think Intel actually enabled S3TC/floating point textures in Mesa by default on their hardware. Presumably some lawyer types at Intel told them their hardware was covered, although that didn't happen for other drivers (radeon/nouveau).

    And then Fedora promptly disabled it in their distro. So I'm not sure what Michael's got.


    Edit: you are supposed to be able to go to the results on OpenBenchmarking.org somehow, but I can't figure it out. I thought there was supposed to be a link in all the charts, but I can't find one.

    If you can get to it, I think it records a list of the glxinfo results, which would have the extensions available, so you could check whether S3TC is showing up there.
    OK, cool. Hope someone works it out.

    It seems strange that low resolutions and some skewed formats cause a slowdown in the Ubuntu score.

    By the way, for other readers, I know some of the history of S3TC. There used to be an S3 graphics card that had this texture compression feature, so the company was obviously bought out and/or continues to hold the rights today. I recall them demoing the feature with an Egyptian temple.
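    (For anyone who wants to check the S3TC question on their own machine rather than digging through the OpenBenchmarking.org logs, here is a minimal sketch, assuming the mesa-utils glxinfo tool is installed and an X session is running, that looks for the relevant extension string:)

    import subprocess

    # Run glxinfo and capture its output; this is the same information that
    # ends up in the system logs attached to the OpenBenchmarking.org results.
    output = subprocess.run(["glxinfo"], capture_output=True, text=True).stdout

    if "GL_EXT_texture_compression_s3tc" in output:
        print("S3TC is advertised by the OpenGL driver")
    else:
        # On Mesa of this era, S3TC typically required the external libtxc_dxtn
        # library, or the force_s3tc_enable=true driconf option where supported.
        print("S3TC is not advertised")

    (From a shell, glxinfo | grep -i s3tc does the same thing, of course.)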

  4. #24

    Default

    Quote Originally Posted by smitty3268 View Post
    Guilty

    Oh, that's weird. I tried opening the site in Chrome (26 on Windows) and I'm getting the images instead of SVG. Every other browser is giving me SVG. Even IE.

    What's your HTTP user agent string there?
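    (The User-Agent question matters because, per the earlier post, the SVG graphs are only served to browsers believed to support SVG. Purely as a hypothetical illustration, and not OpenBenchmarking.org's actual code, a check of that kind might look like this:)

    # Hypothetical sketch of a User-Agent based SVG/raster decision; the real
    # server-side logic is not shown anywhere in this thread.
    def wants_svg(user_agent: str) -> bool:
        ua = user_agent.lower()
        # Old Internet Explorer releases lacked SVG support; anything else
        # reasonably modern is assumed to handle it.
        return not any(old in ua for old in ("msie 6", "msie 7", "msie 8"))

    # An example Chrome 26 on Windows UA string of the kind under discussion:
    ua = ("Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 "
          "(KHTML, like Gecko) Chrome/26.0.1410.43 Safari/537.31")
    print("svg" if wants_svg(ua) else "png")

    (If a UA like that still produces raster images, the fallback is more likely coming from something like AdBlock than from the browser check itself.)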

  5. #25
    Join Date
    Feb 2012
    Location
    Kingston, Jamaica
    Posts
    316

    Default

    Quote Originally Posted by liam View Post
    The "kernel" can be run in most any situation really well. You can even run it on systems that don't have an MMU.
    I think what you mean is that no SINGLE kernel is good at everything?
    The RT guys are working on that, actually. One of the main devs, Gleixner, claims that RT will eventually be as fast as Linus' tree, and faster at some things (network packet handling).
    So can most, if not all, kernels. You just need to do the work to enable it, right? This is not exclusive to Linux.

    There are various things that Linux cannot do without drastic changes that the developers probably wouldn't agree with.

  6. #26
    Join Date
    Jan 2009
    Posts
    1,480

    Default

    Quote Originally Posted by jayrulez View Post
    So can most, if not all, kernels. You just need to do the work to enable it, right? This is not exclusive to Linux.

    There are various things that Linux cannot do without drastic changes that the developers probably wouldn't agree with.

    You'll have to point me to the NT and XNU RT kernels, as well as cooperative kernels.
    IOW, no, not every kernel can make these changes as easily. I don't think either of those kernels can make these changes without drastic changes. In Linux, these patch sets have evolved alongside the kernel.
    There's a reason Linux gets the "toaster to a supercomputer" crowd and not, say, a BSD kernel.

  7. #27
    Join Date
    Feb 2012
    Location
    Kingston, Jamaica
    Posts
    316

    Default

    Quote Originally Posted by liam View Post
    You'll have to point me to the NT and XNU RT kernels, as well as cooperative kernels.
    There are various kernels/OSes that I can mention, like Integrity, Fiasco.OC, QNX... There are quite a few others. (Real-time is not the only shortcoming of the Linux kernel.)

    Quote Originally Posted by liam View Post
    IOW, no, not every kernel can make these changes as easily.
    I didn't say whether the changes could be made easily or otherwise.

    Quote Originally Posted by liam View Post
    I don't think either of those kernels can make these changes without drastic changes. In Linux, these patch sets have evolved alongside the kernel.
    There's a reason Linux gets the "toaster to a supercomputer" crowd and not, say, a BSD kernel.
    Just to mention one use case where it is not ideal to use Linux: where fault isolation is desired or deemed necessary. A practical example of this is the Samsung Galaxy S III. We all know it is using the Linux kernel. However, not everyone knows that Linux is actually running alongside a microkernel (an L4 derivative).

    There are safety-critical systems where Linux is not even an option at the moment or for the foreseeable future.

    I know Linux is "awesome" and ubiquitous and is used in many places (probably more than any other kernel). However, you'd be doing a disservice to yourself to believe that it is or will be the best choice for all or even most use cases.
    Last edited by jayrulez; 04-06-2013 at 01:06 AM.

  8. #28
    Join Date
    Aug 2009
    Location
    south east
    Posts
    342

    Cool Surprise

    Let's realign perceptions:

    1. The target tests for Ubuntu should be LTS releases only!
    2. Ubuntu 12.04 is comparable to Windows 8.
    3. Ubuntu 10.04 is comparable to Windows 7.
    4. An extra 6-10 frames per second (FPS) beyond 30 FPS is in that 1% that is undetectable.


    Ultimately it is up to Intel to ensure their hardware operates with Linux/Xorg.
    The same is said when Microsoft releases an OS to hardware vendors.

    Intel needs a stand-alone driver external to Xorg. The reason: Ubuntu 10.04 can run the same driver, downloaded from Nvidia, that Ubuntu 12.04 utilises.

    Heads up, Microsoft died in 2012.
    Read here.
    http://tech.slashdot.org/story/13/04...e-ever-in-2013
    Last edited by squirrl; 04-06-2013 at 12:57 AM.

  9. #29
    Join Date
    Jan 2009
    Posts
    1,480

    Default

    Quote Originally Posted by jayrulez View Post
    There are various kernels/OSes that I can mention, like Integrity, Fiasco.OC, QNX... There are quite a few others. (Real-time is not the only shortcoming of the Linux kernel.)

    Umm, why are you mentioning microkernels? Those certainly don't run on anything close to the range of hardware I was speaking of.
    I don't think anyone who wasn't a fanboy would argue that Linux is the best at everything, but it is really, really good at many, many things. More so than any other kernel I've heard of. Lastly, you said the kernel "is not good at everything", and I am simply pointing out that that doesn't seem to be the case (with the exception of safety-critical applications, but those are amongst the most specialized systems you can imagine, and certainly not general purpose in the sense I've been speaking of).

    Quote Originally Posted by jayrulez View Post
    I didn't say whether the changes could be made easily or otherwise.
    No, but since we are, hopefully, talking about what is actually available, that matters.
    The point I was making is that you can, right now, put Linux on most any "computer", other than a supercomputer (that usually requires a good deal of customization, as I've been told), and it will run "reasonably" well.
    If you want to throw out realism, then let me tell you about this completely lockless kernel I'm working on that is going to be faster at serving web pages than a Linux coop...


    Quote Originally Posted by jayrulez View Post
    Just to mention one use case where it is not ideal to use Linux: where fault isolation is desired or deemed necessary. A practical example of this is the Samsung Galaxy S III. We all know it is using the Linux kernel. However, not everyone knows that Linux is actually running on top of a microkernel (an L4 derivative) rather than on the hardware.
    Somebody reads OSNews! I read everything I could find about MobiCore (not much), so I don't know exactly how it performs its functions. That said, I wouldn't be surprised if that were the case. It's not unusual for Linux to run as a process inside a microkernel in RT environments.
    Also, you didn't say "ideal", you said "not good".


    Quote Originally Posted by jayrulez View Post
    There are safety-critical systems where Linux is not even an option at the moment or for the foreseeable future.
    https://www.osadl.org/Presentations-...cuments.0.html
    There have been a number of papers written about this exact topic.
    The thrust being that SIL 3 should be achievable, and that SIL 4 might be possible. Both of those would require substantial changes and testing, but this is the lone area where I'd agree Linux wouldn't be a good choice.

    Quote Originally Posted by jayrulez View Post
    I know Linux is "awesome" and ubiquitous and is used in many places (probably more than any other kernel). However, you'd be doing a disservice to yourself to believe that it is or will be the best choice for all or even most use cases.

    I don't think I ever said it was always the best choice, and if I did I'd appreciate a quote of that, since I don't want to do a "disservice" to myself.
    You said it wasn't always a good choice, and, with the exception of safety-critical systems, that isn't true (and even in those cases, it might be, given sufficient paring of the kernel and tracing).

  10. #30
    Join Date
    Feb 2012
    Location
    Kingston, Jamaica
    Posts
    316

    Default

    Quote Originally Posted by liam View Post
    Quote Originally Posted by jayrulez View Post
    There are various kernels/OSes that I can mention, like Integrity, Fiasco.OC, QNX... There are quite a few others. (Real-time is not the only shortcoming of the Linux kernel.)

    Umm, why are you mentioning microkernels? Those certainly don't run on anything close to the range of hardware I was speaking of.
    I don't think anyone who wasn't a fanboy would argue that Linux is the best at everything, but it is really, really good at many, many things. More so than any other kernel I've heard of. Lastly, you said the kernel "is not good at everything", and I am simply pointing out that that doesn't seem to be the case (with the exception of safety-critical applications, but those are amongst the most specialized systems you can imagine, and certainly not general purpose in the sense I've been speaking of).



    No, but since we are, hopefully, talking about what is actually available, that matters.
    The point I was making is that you can, right now, put Linux on most any "computer", other than a supercomputer (that usually requires a good deal of customization, as I've been told), and it will run "reasonably" well.
    If you want to throw out realism, then let me tell you about this completely lockless kernel I'm working on that is going to be faster at serving web pages than a Linux coop...




    Somebody reads OSNews! I read everything I could find about MobiCore (not much), so I don't know exactly how it performs its functions. That said, I wouldn't be surprised if that were the case. It's not unusual for Linux to run as a process inside a microkernel in RT environments.
    Also, you didn't say "ideal", you said "not good".




    https://www.osadl.org/Presentations-...cuments.0.html
    There have been a number of papers written about this exact topic.
    The thrust being that SIL 3 should be achievable, and that SIL 4 might be possible. Both of those would require substantial changes and testing, but this is the lone area where I'd agree Linux wouldn't be a good choice.




    I don't think I ever said it was always the best choice, and if I did I'd appreciate a quote of that, since I don't want to do a "disservice" to myself.
    You said it wasn't always a good choice, and, with the exception of safety-critical systems, that isn't true (and even in those cases, it might be, given sufficient paring of the kernel and tracing).
    I read OSNews but I wasn't aware there was an article on MobiCore. Do you mind linking to it?

    There are quite a few solutions available for running Linux as a task on microkernels. Some are Codezero, L4Linux, the Robin project, and the solution from OK Labs, now owned by General Dynamics.

    As to why I mentioned those microkernels: they could be ported to all the architectures that Linux runs on if one were motivated enough to do so.
    But fair enough, since you mentioned that you are talking about what is available now; and it is also a fair point that I did not mention that I was talking about "ideal" solutions.

    Just a note: QNX supports ARM, POWER, x86, SH, and MIPS. Obviously not close to the range of architectures supported by Linux, but it shows that it could be done if desired.

    Most people I've had to deal with regarding these topics are actually fanboys who believe that Linux is the be-all and end-all of operating system development, so forgive me for lumping you in with them initially.
