Removing Some Old Arm Drivers & Board/Machine Code To Lighten The Kernel By 154k Lines

  • Removing Some Old Arm Drivers & Board/Machine Code To Lighten The Kernel By 154k Lines

    Phoronix: Removing Some Old Arm Drivers & Board/Machine Code To Lighten The Kernel By 154k Lines

    The SoC tree's "for-next" branch has picked up a big set of patches that is set to lighten the kernel by 154k lines of code, documentation, and DeviceTree files in clearing out some old drivers and obsolete board/machine support...
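
    For readers unfamiliar with what "board/machine support" means here: before DeviceTree, each Arm board was described by hand-written C code compiled into the kernel. The sketch below is purely illustrative (the board and function names are made up), but it shows the general shape of the legacy board files this kind of cleanup removes.

    ```c
    /*
     * Illustrative only: a minimal legacy Arm board file of the sort being
     * removed. MYBOARD and my_old_board_init are hypothetical names; real
     * files like this live under arch/arm/mach-*/ and hard-code in C what
     * a DeviceTree source file describes as data today.
     */
    #include <asm/mach/arch.h>
    #include <asm/mach-types.h>

    static void __init my_old_board_init(void)
    {
            /* Register platform devices, pin muxing, clocks, etc. by hand. */
    }

    MACHINE_START(MYBOARD, "My Old Board")
            .init_machine   = my_old_board_init,
    MACHINE_END
    ```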


  • #2
    That's a lot of weight loss. I should ask the kernel devs for a diet plan.



    • #3
      Good move - that's a surprising amount of code for architectures old enough that a modern system would struggle to run on them.

      As is always the case, for such old devices, just use an older kernel.


      I really wish kernel versions were based on dropped architectures. It'd be much clearer what is supported that way, rather than some arbitrary rule of "#.20 is getting big".



      • #4
        I'm surprised to learn ARM9 used the Harvard architecture (I had to Google the boards to get an idea of how old the devices are). I wonder if that introduces significant extra work to maintain.

        Originally posted by schmidtbag
        As is always the case, for such old devices, just use an older kernel.
        It depends. If those devices are connected to the internet*, that just won't do, due to lacking security fixes (though I think there's like 7 years more on the latest LTS?). In the long term, in the unlikely case they don't get replaced in that time frame, they may need to switch to something like OpenBSD, I guess.
        In any case, if nobody's putting in the work, then there's pretty much no choice but to drop it.

        *While this may sound counter-intuitive given these are most likely not powerful enough for the web, those are clearly meant for embedded devices that may need to be online to provide service or to monitor their status. Assuming any of them are still alive anyway.



        • #5
          Originally posted by sinepgib
          ...(though I think there's like 7 years more on the latest LTS?)...
          I have not seen how many years 6.1 will be maintained for, but 5.10 has a projected EOL of December 2026, about four years from now.



          • #6
            Originally posted by sinepgib
            I'm surprised to learn ARM9 used the Harvard architecture (I had to Google the boards to get an idea of how old the devices are). I wonder if that introduces significant extra work to maintain.
            Every ARM has been a Harvard chip. Pretty much every chip outside of tiny microcontrollers is these days. If you see separate data and instruction caches, then it's Harvard at least in the core--everything merges memory accesses by the time they get to main memory, though--except for some tiny microcontrollers. The only real variation is how much effort is put into projecting the illusion of a unified memory to the core--that is, how far along the memory hierarchy a data write has to travel before it can alter the instruction cache. In most modern processors that happens either within the L1 or between the L1 and L2.

            No general-purpose machine is a *strict* Harvard machine, as it wouldn't be able to be self-hosting--if there's no mixing between data and instructions, you can't compile new code and run it, because that's treating data as instructions. So, clearly, nothing is that silly--except for some microcontrollers.

            For every rule, there is an exception and in computing, it's probably some damn strange microcontroller.
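
            To make that last point concrete, here is a minimal sketch (my own illustration, assuming AArch64 Linux with GCC/Clang) of what "treating data as instructions" asks of the hardware: after code bytes are written through the data side, the D-cache has to be cleaned and the I-cache invalidated before the core can safely execute them.

            ```c
            /* Sketch, assuming AArch64 Linux with GCC/Clang: emit two
             * instructions through the data side, synchronize the caches,
             * then execute them. */
            #include <stdint.h>
            #include <stdio.h>
            #include <string.h>
            #include <sys/mman.h>

            int main(void)
            {
                /* AArch64 encodings for:  mov x0, #42 ; ret */
                uint32_t insns[] = { 0xd2800540, 0xd65f03c0 };

                /* One page, writable and executable at once purely for brevity. */
                void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (buf == MAP_FAILED)
                    return 1;

                memcpy(buf, insns, sizeof insns);   /* data-side stores */

                /* Clean the D-cache and invalidate the I-cache over the range
                 * so the core fetches the new instructions, not stale ones. */
                __builtin___clear_cache((char *)buf, (char *)buf + sizeof insns);

                int (*fn)(void) = (int (*)(void))buf;
                printf("%d\n", fn());               /* prints 42 */
                return 0;
            }
            ```

            On 32-bit Arm, I believe the same builtin ends up making the cacheflush system call, since the relevant cache-maintenance instructions are privileged there--the kernel-side half of that "illusion of unified memory".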



            • #7
              Originally posted by schmidtbag
              Good move - that's a surprising amount of code for architectures old enough that a modern system would struggle to run on them.

              As is always the case, for such old devices, just use an older kernel.


              I really wish kernel versions were based on dropped architectures. It'd be much clearer what is supported that way, rather than some arbitrary rule of "#.20 is getting big".
              I still have an Intel XScale/PXA device in my closet that's fully functional, if I could manage to power it. It's an old HP PDA that runs Windows Mobile 2005. There was some effort to get it to run Linux using Angstrom back then, but it never really went anywhere useful. I have my doubts that devices like this are still being used beyond some retro-computing hobbyists and set-top/microcontroller/kiosk type devices, none of which are likely to be actively supported and probably run something else like VxWorks or Windows CE. Linux wasn't ubiquitous in that realm back then.



              • #8
                Originally posted by willmore

                Every ARM has been a Harvard chip. Pretty much every chip outside of tiny microcontrollers is these days. If you see separate data and instruction caches, then it's Harvard at least in the core--everything merges memory accesses by the time they get to main memory, though--except for some tiny microcontrollers. The only real variation is how much effort is put into projecting the illusion of a unified memory to the core--that is, how far along the memory hierarchy a data write has to travel before it can alter the instruction cache. In most modern processors that happens either within the L1 or between the L1 and L2.

                No general-purpose machine is a *strict* Harvard machine, as it wouldn't be able to be self-hosting--if there's no mixing between data and instructions, you can't compile new code and run it, because that's treating data as instructions. So, clearly, nothing is that silly--except for some microcontrollers.

                For every rule, there is an exception and in computing, it's probably some damn strange microcontroller.
                That makes sense. Thanks for the explanation.

                Originally posted by elatllat

                I have not seen how many years 6.1 will be maintained for, but 5.10 has a projected EOL of December 2026, about four years from now.
                Thanks for the correction.



                • #9
                  Originally posted by willmore

                  Every ARM has been a Harvard chip. Pretty much every chip outside of tiny microcontrollers is these days. If you see separate data and instruction caches, then it's Harvard at least in the core--everything merges memory accesses by the time they get to main memory, though--except for some tiny microcontrollers. The only real variation is how much effort is put into projecting the illusion of a unified memory to the core--that is, how far along the memory hierarchy a data write has to travel before it can alter the instruction cache. In most modern processors that happens either within the L1 or between the L1 and L2.

                  No general-purpose machine is a *strict* Harvard machine, as it wouldn't be able to be self-hosting--if there's no mixing between data and instructions, you can't compile new code and run it, because that's treating data as instructions. So, clearly, nothing is that silly--except for some microcontrollers.

                  For every rule, there is an exception and in computing, it's probably some damn strange microcontroller.
                  I would say it is a bit more complicated. To be honest, "Harvard architecture" is kind of misleading, because what makes data "data"? Technically speaking, yes, you have the chip trying to predict which instructions and which data will be used so they get loaded ahead of time.

                  In classic microcontrollers, Harvard architecture meant you had a ROM with instructions, all executable code could only live there, and you had a separate memory space for everything else. In fact, some Harvard architectures were strict: all executable code was simply not writable, and writes could only go to a writable but non-executable memory space.

                  In the case of ARM, however, there is no separate memory space; it is simply a Harvard bus architecture, so you can fetch instructions and data separately at the same time. But that describes every modern, halfway decent CPU out there - you have predictors feeding the instruction cache and predictors feeding the data cache. It has nothing to do with separate memory spaces: on ARM you can mark all of RAM as read/executable just as easily as read/write. You could even mark entire pages as read/write/executable (something not liked from a security perspective), and when you do, there is nothing Harvard about it from the RAM's perspective - every address can hold anything, and anything can be executed.
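
                  A small userspace sketch of that last point (make_code_executable is a hypothetical helper, not any library API): permissions are a property of the page tables, so the very same addresses can be writable one moment and executable the next, regardless of any Harvard split inside the core.

                  ```c
                  /* Sketch: flip one page-aligned mapping from read/write to
                   * read/execute. `page` must be a page-aligned region obtained
                   * from mmap(); `code` holds valid machine code for the CPU. */
                  #include <stddef.h>
                  #include <string.h>
                  #include <sys/mman.h>

                  int make_code_executable(void *page, size_t len,
                                           const void *code, size_t code_len)
                  {
                      /* Stage 1: the page is ordinary read/write data. */
                      if (mprotect(page, len, PROT_READ | PROT_WRITE) != 0)
                          return -1;
                      memcpy(page, code, code_len);

                      /* Stage 2: the very same addresses become read/execute (a
                       * W^X pattern); only the page-table permission bits change. */
                      if (mprotect(page, len, PROT_READ | PROT_EXEC) != 0)
                          return -1;

                      /* Still need the D-cache/I-cache synchronization described
                       * above before jumping to the new code. */
                      __builtin___clear_cache((char *)page, (char *)page + code_len);
                      return 0;
                  }
                  ```
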
                  Last edited by piotrj3; 15 January 2023, 10:20 PM.



                  • #10
                    Originally posted by sinepgib
                    It depends. If those devices are connected to the internet*, that just won't do, due to lacking security fixes (though I think there's like 7 years more on the latest LTS?). In the long term, in the unlikely case they don't get replaced in that time frame, they may need to switch to something like OpenBSD, I guess.
                    In any case, if nobody's putting in the work, then there's pretty much no choice but to drop it.

                    *While this may sound counter-intuitive given these are most likely not powerful enough for the web, those are clearly meant for embedded devices that may need to be online to provide service or to monitor their status. Assuming any of them are still alive anyway.
                    As far as I'm aware, the vast majority of those security fixes only apply to newer architectures. As far as I'm concerned, the kernel shouldn't be responsible for most userland security, and most tasks associated with the internet happen in userland.

