KVM Drops Support For IA64 While Adding Various x86 Improvements


  • Phoronix: KVM Drops Support For IA64 While Adding Various x86 Improvements

    The KVM changes have been queued up and called for pulling into the Linux 3.19 kernel...


  • #2
    Why was IA-64 dropped?

    Why was the IA-64 (Itanium) architecture dropped?

    The Itanium processors failed commercially. People wanted backwards compatibility so they could keep running their legacy x86 applications, so Intel implemented an x86 compatibility layer in Itanium.
    Then people complained that Itanium was slower at executing x86 code than a native x86 processor.

    So people had the wrong expectations of Itanium and judged it on how well it executed non-native instructions, something it was never designed for.
    Perhaps the microprocessors themselves, the implementations of the architecture, weren't great either.

    But is the architecture itself sane?

    x86 is an old legacy architecture. Intel and HP spent a lot of time and resources designing IA-64 as a modern architecture, and it held promise.



    • #3
      Originally posted by uid313
      Why was the IA-64 (Itanium) architecture dropped? [...]
      No: if you make stuff that sucks, people won't use it.



      • #4
        Originally posted by uid313
        Why was the IA-64 (Itanium) architecture dropped? [...]
        One problem with Itanium is that it's really hard to write a compiler for it.
        Another thing I've heard is that it had memory access problems.

        Other than that, I think it's an awesome architecture.



        • #5
          This architecture failed mostly because dedicated IA-64 memory was horribly expensive at a time when x86 memory was already pricey.

          Memory manufacturers were later convicted of price-fixing for that period...
          but consumers had already made their choice: they refused to pay three times as much for the same performance, and they kept x86.

          AMD was much smarter and won the 64-bit battle with full-speed 32-bit compatibility.

          Anyway, we will never know whether this architecture was brilliant or bad: no widespread benchmark optimized for it was ever compiled for IA-64, AFAIK,
          and it is unlikely that its perf/$ ratio could have beaten x86 for several years...



          • #6
            Originally posted by Passso
            no widespread benchmark optimized for it was ever compiled for IA-64, AFAIK
            Actually, an Itanium system placed #2 on the 24th Top500 supercomputers list. The ranking was determined by the Linpack benchmark, which is the classic benchmark for HPC systems.

            You can be sure that Intel and SGI optimized the heck out of the benchmark to look good in that list. Still, it wasn't enough to beat BlueGene/L.

            Originally posted by uid313
            Why was the IA-64 (Itanium) architecture dropped?
            The real reason is presumably that, with the exception of Gentoo, all popular Linux distributions have dropped ia64 from their supported architectures.



            • #7
              Originally posted by chithanh
              Actually, an Itanium system placed #2 on the 24th Top500 supercomputers list. The ranking was determined by the Linpack benchmark, which is the classic benchmark for HPC systems.
              That's also inherent to the VLIW type of architecture used in Itanium.
              (i.e.: one "instruction" is actually a very long bundle of micro-ops that each subunit of the CPU runs in parallel during that cycle, instead of the classical approach where the CPU's pipeline is in charge of scheduling micro-ops for each unit).


              For specific workloads, like running a small piece of code over and over on a huge amount of data, VLIW designs perform well and are more or less well understood.
              That's why Itanium performs well in benchmarks and on HPC workloads.
              (And that's why VLIW was used in GPUs, which are geared to apply the same shaders over and over to millions of pixels.)


              For real-world complex computation, they are a pain to optimize for. The general approach of HP, Intel, etc. was "it will all be handled by the compiler". Sadly, such high-efficiency compilers weren't available back then and are only slowly getting better now.
              (Which is also part of the reason GPUs are moving away from VLIW, now that GP-GPU computing (CUDA, OpenCL, etc.) is gaining traction and GPUs have to handle more general workloads.)



              So that's why you can't even base your opinion on benchmarks. Making a simple computation run fast isn't the hard part. The hard part is turning your complex problem into efficient VLIW code.
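
              To make the scheduling problem concrete, here is a minimal sketch (not modeled on the real Itanium encoding; the 3-slot bundle width, instruction format, and function names are all illustrative assumptions) of the kind of dependency-aware packing a VLIW compiler must do: independent operations can share a bundle and issue together, while a dependent chain forces one bundle per operation.

              ```python
              # Hypothetical sketch of greedy VLIW-style bundling; the 3-slot
              # width and (dest, srcs) instruction format are illustrative,
              # not taken from the actual IA-64 ISA.

              def schedule(instrs, width=3):
                  """Pack (dest, srcs) tuples into bundles of independent ops.

                  An instruction may join the current bundle only if none of
                  its sources is written by an instruction already in it and
                  the bundle still has a free slot.
                  """
                  bundles = []
                  current, written = [], set()
                  for dest, srcs in instrs:
                      if len(current) == width or written & set(srcs):
                          bundles.append(current)   # flush: start a new bundle
                          current, written = [], set()
                      current.append((dest, srcs))
                      written.add(dest)
                  if current:
                      bundles.append(current)
                  return bundles

              # Three independent adds pack into one bundle; a dependent
              # chain degrades to one instruction per bundle.
              parallel = [("a", ("x",)), ("b", ("y",)), ("c", ("z",))]
              chain = [("a", ("x",)), ("b", ("a",)), ("c", ("b",))]
              print(len(schedule(parallel)))  # 1: all three issue together
              print(len(schedule(chain)))     # 3: each waits on the previous
              ```

              The second case is exactly the "real-world complex computation" problem above: when the code is mostly dependency chains, most bundle slots sit empty and the hardware parallelism is wasted.
              
              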

              Thus Itanium ended up successful in the very narrow field of HPC, but didn't gain as much traction in the server market as Intel had hoped.
