Linux 6.8 To Add Support For The AMD MicroBlaze V Soft-Core RISC-V Processor

  • Linux 6.8 To Add Support For The AMD MicroBlaze V Soft-Core RISC-V Processor

    Phoronix: Linux 6.8 To Add Support For The AMD MicroBlaze V Soft-Core RISC-V Processor

    A few months back AMD announced the MicroBlaze V processor as a soft-core RISC-V processor for embedded system use. With Linux 6.8 the necessary DeviceTree support is landing for the AMD MicroBlaze V...
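
    As context for what that DeviceTree support means in practice: the pull consists of DeviceTree sources and bindings rather than driver code, but a rough, hypothetical sketch of how a Linux platform driver binds against such a node may help. The "amd,mbv-demo" compatible string and the entire driver below are invented for illustration and are not the actual upstream MicroBlaze V bindings:

    /* Hypothetical sketch: how a platform driver binds to a DeviceTree
     * node. The compatible string "amd,mbv-demo" is made up; the real
     * MicroBlaze V support uses its own upstream bindings. */
    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/of.h>

    static int mbv_demo_probe(struct platform_device *pdev)
    {
            dev_info(&pdev->dev, "MicroBlaze V demo device probed\n");
            return 0;
    }

    static const struct of_device_id mbv_demo_of_match[] = {
            { .compatible = "amd,mbv-demo" },       /* hypothetical */
            { }
    };
    MODULE_DEVICE_TABLE(of, mbv_demo_of_match);

    static struct platform_driver mbv_demo_driver = {
            .probe  = mbv_demo_probe,
            .driver = {
                    .name           = "mbv-demo",
                    .of_match_table = mbv_demo_of_match,
            },
    };
    module_platform_driver(mbv_demo_driver);

    MODULE_LICENSE("GPL");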


  • #2
    I would love a MicroBlaze V 64-bit "soft core" for high-end AMD CPUs, plus massive FPGA cores added instead of the silly AI stuff.



    • #3
      Originally posted by timofonic View Post
      I would love a MicroBlaze V 64-bit "soft core" for high-end AMD CPUs, plus massive FPGA cores added instead of the silly AI stuff.
      What use case would you love it for? Any products you have waiting for this?

      As for AI being silly... well, Lisa Su seems to think it's important enough to say "AI" 250 times during their product introduction (as covered by Ian Cutress).

      For the people who say 32-bit is dead... yes, on the ever-fattening desktop, and even high-end mobile, but this 32-bit RV core is mostly going to be used as a soft microcontroller to run firmware which marshals C/Fortran code compiled to FPGA for multimedia and AI workloads. AMD spent tens of billions to bring this into their product suite... so I think 32-bit has a home for a long time... at least until firmware exceeds 4GB ;-)
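
      To make that "soft microcontroller" role concrete, here is a minimal bare-metal sketch of firmware on a 32-bit RISC-V core handing buffers to an FPGA accelerator block over memory-mapped registers. Every address, offset, and register name is invented purely for illustration:

      /* Bare-metal sketch: 32-bit soft-core firmware feeding jobs to a
       * hypothetical FPGA accelerator through an MMIO mailbox. */
      #include <stdint.h>

      #define ACCEL_BASE  0x40000000u  /* hypothetical MMIO window */
      #define REG_SRC     (*(volatile uint32_t *)(ACCEL_BASE + 0x00))
      #define REG_DST     (*(volatile uint32_t *)(ACCEL_BASE + 0x04))
      #define REG_LEN     (*(volatile uint32_t *)(ACCEL_BASE + 0x08))
      #define REG_CTRL    (*(volatile uint32_t *)(ACCEL_BASE + 0x0c))
      #define REG_STATUS  (*(volatile uint32_t *)(ACCEL_BASE + 0x10))
      #define CTRL_START  1u
      #define STATUS_DONE 1u

      /* Hand one buffer to the accelerator and spin until it finishes. */
      static void accel_run(uint32_t src, uint32_t dst, uint32_t len)
      {
              REG_SRC  = src;
              REG_DST  = dst;
              REG_LEN  = len;
              REG_CTRL = CTRL_START;
              while (!(REG_STATUS & STATUS_DONE))
                      ;  /* busy-wait; a real design would likely use IRQs */
      }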
      Last edited by linuxgeex; 07 January 2024, 01:58 PM.



      • #4
        Originally posted by linuxgeex View Post
        What use case would you love it for? Any products you have waiting for this? As for AI being silly... well, Lisa Su seems to think it's important enough to say "AI" 250 times during their product introduction (as covered by Ian Cutress). For the people who say 32-bit is dead... yes, on the ever-fattening desktop, and even high-end mobile, but this 32-bit RV core is mostly going to be used as a soft microcontroller to run firmware which marshals C/Fortran code compiled to FPGA for multimedia and AI workloads. AMD spent tens of billions to bring this into their product suite... so I think 32-bit has a home for a long time... at least until firmware exceeds 4GB ;-)
        The AI bubble is making them richer for now, but that bubble will burst and cause massive destruction across the tech industry. I don't give a crap about their corporate bullshit. Current AI is just evolved statistical processes and such.

        A RISC-V instruction set on a massive high-end CPU? Very big and powerful FPGA cores with it? A lot…



        • #5
          Originally posted by timofonic View Post
          The AI bubble is making them richer for now, but that bubble will burst and cause massive destruction across the tech industry. I don't give a crap about their corporate bullshit. Current AI is just evolved statistical processes and such.

          A RISC-V instruction set on a massive high-end CPU? Very big and powerful FPGA cores with it? A lot…
          Nah, AI is gonna be huge. That's all AMD cares about, plus gaming to a somewhat lesser extent. Practically nothing else.



          • #6
            If we could get away from the artificially limited (Intel, AMD, "tradition") desktop / platform architecture, then there would be a lot more opportunities for innovation and architectural efficiency / expansion.

            The only things "graphical" about GPUs are the display interfaces and whatever ray tracing / video CODEC HW they embody.

            The big pile of fast, wide VRAM and the 8192+ "SIMD" compute cores are both things that should benefit the core computer architecture and not "merely" one plug-in card on some dead-end slowpoke dirt-road PCIe bus segment. Put it front and center on the motherboard, or at least go back to a motherboard / module interconnect system where we've got fast and wide bandwidth between add-on chips / modules / cards / boxes.

            FPGA? Absolutely. Same as with GPU SIMD processors, put some in there and let it be used synergistically to accelerate whatever things the RISC CPU and SIMD "GPU" processor cores don't handle efficiently. Or at least make a high-bandwidth pipe possible where you can plug FPGA modules, GPU modules, and compute modules together and get decent interconnect bandwidth between them.

            IDK that the AI stuff is "silly" -- it depends on the architecture as to how usable it is for general compute vs., say, processing integer / FP tensor operations or whatever. GPUs do AI/ML fairly efficiently, despite the misnomer in the GPU/NPU split of use cases. Anyway, big FPGAs and big GPUs have almost been the only game in town, though we're seeing more ML accelerator / TPU type things that aren't merely CPUs / GPUs / FPGAs.

            We already see what data center computing gear looks like today for NUMA servers, GPU servers, etc. A question that seems open is, for prosumer desktops / workstations / servers as well as SMB computers, what can happen to get beyond the single boring old CISC CPU, a tiny number of anemic PCIe slots, a couple of anemic slow RAM channels, and no meaningful way to deviate / scale the RAM, CPU, or interconnect bandwidth / architecture toward more interesting / scalable things (GPU, FPGA, NPU, ...).

            I keep wondering when I'll just see a standalone box with a big GPU in there and an on-board 24-core RISC-V application processor, or something like that, as a standalone unit of computing. Forget about ATX and x86.




