
Many Networking Improvements Land In Linux 5.10


    Phoronix: Many Networking Improvements Land In Linux 5.10

    The big networking pull request has landed in Linux 5.10 Git...


  • #2
    And here I was... thinking that the headline meant improvements that mere mortals were likely to benefit from.
    Err. Not so much.

  • #3
    Originally posted by milkylainen View Post
    And here I was... thinking that the headline meant improvements that mere mortals were likely to benefit from.
    Err. Not so much.
    What home user wouldn't benefit from 200G Ethernet?

    Now I can transcode h264 to raw over to my PS4 and really saturate the network.

    Or RAID10 384 SSDs on two different systems and do a ZFS send at terabit speeds.

  • #4
    15Gbps between container and host is still a little low, considering NICs are touching 200Gbps now. Now I understand why performance people hate the kernel networking stack so much.
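
    A claim like that 15 Gbps figure is easy to re-test yourself. Below is a hypothetical minimal probe in Python (the port, chunk size, and duration are arbitrary assumptions, and pure Python will bottleneck well before the kernel does, so treat it as a sketch and use iperf3 for serious numbers):

    # Hypothetical host<->container TCP throughput probe (not from the thread).
    # Run "python3 probe.py server" on one side and
    # "python3 probe.py client <ip>" on the other.
    import socket, sys, time

    PORT = 5201        # arbitrary assumption (happens to be iperf3's default)
    CHUNK = 1 << 20    # 1 MiB buffer per send/recv
    DURATION = 5       # seconds to transmit

    def server():
        with socket.create_server(("0.0.0.0", PORT)) as srv:
            conn, _ = srv.accept()
            total, start = 0, time.monotonic()
            # Receive until the client closes the connection.
            while data := conn.recv(CHUNK):
                total += len(data)
            elapsed = time.monotonic() - start
            print(f"{total * 8 / elapsed / 1e9:.2f} Gbit/s received")

    def client(host):
        payload = b"\0" * CHUNK
        with socket.create_connection((host, PORT)) as conn:
            deadline = time.monotonic() + DURATION
            while time.monotonic() < deadline:
                conn.sendall(payload)

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[2])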

  • #5
    It's funny that the Linux BPF implementation has much wider usage than the FreeBSD one. It says a lot about the development health of both systems.

  • #6
    Originally posted by dxin View Post
    15Gbps between container and host is still a little low, considering NICs are touching 200Gbps now. Now I understand why performance people hate the kernel networking stack so much.
    Don't confuse wire speeds with actual transfer speeds. They are worlds apart.

    Look here. Let's abuse stuff in a mind game just for the fun of it.
    At minimal UDP packet size, a 100 Mbit Ethernet link can generate 148.8k packets per second.
    A 1 Gbit link, 1.488 Mpps; times 100 for a 100 Gbit link, ~150 Mpps.
    A packet needs to traverse the entire firewall stack, packet identification and placement,
    the entire routing stack, and traffic control and queueing before leaving via the NIC driver.
    Now assume only 1 Mpps on, say, a quad core with 3 GHz CPUs
    (12G cycles available per second; very simplistic, with no reduction for all the complexities).
    That would mean you have 12,000 cycles per packet,
    excluding all context switches, load/branch misses, page faults,
    interrupt handling, a zillion other tasks, etc.
    Also: TCP is more complex than UDP.
    Usually, the packet rate is more of a problem than the packet size.
    The sheer administration that happens for each and every packet in a modern stack is pretty mind-boggling.

    Now up that rate to 100 Mpps and that poor box has only 120 cycles per packet to play with.
    Yeah. You realize the issue at hand here? The problem blows up. Right in your face.
    A protocol stack in an OS that literally has more features than you can shake a stick at
    is not going to shuffle data events (packets) at ludicrous speeds.
    Harsh reality.
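
    Those numbers are easy to reproduce. A quick sketch (my own, not from the post; it assumes the 64-byte minimum Ethernet frame plus 20 bytes of preamble and inter-frame gap, which is where the 148.8k and 1.488M figures come from):

    # Packets per second at line rate for minimum-size Ethernet frames:
    # 64-byte frame + 8-byte preamble + 12-byte inter-frame gap = 84 bytes.
    MIN_FRAME_BITS = 84 * 8  # 672 bits on the wire per packet

    for name, bps in (("100M", 100e6), ("1G", 1e9), ("100G", 100e9)):
        print(f"{name:>4}: {bps / MIN_FRAME_BITS / 1e6:8.3f} Mpps")
    # 100M: 0.149 Mpps,  1G: 1.488 Mpps,  100G: 148.810 Mpps

    # Cycle budget per packet on a quad-core 3 GHz machine:
    cycles_per_second = 4 * 3e9  # 12G cycles/s, ignoring all real-world losses
    for pps in (1e6, 100e6):
        print(f"{pps / 1e6:5.0f} Mpps -> {cycles_per_second / pps:6.0f} cycles/packet")
    # 1 Mpps -> 12000 cycles/packet;  100 Mpps -> 120 cycles/packet
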
    Last edited by milkylainen; 16 October 2020, 03:28 PM.
