Podman Desktop 1.0 Released As An Alternative To Docker Desktop

  • Podman Desktop 1.0 Released As An Alternative To Docker Desktop

    Phoronix: Podman Desktop 1.0 Released As An Alternative To Docker Desktop

    Released this week from the Red Hat Summit is Podman Desktop 1.0 as a container management tool akin to Docker Desktop...

  • #2
    The last thing Linux needs is a desktop container management tool.

    On Linux there are tons of missing pieces at the kernel layer. A few of the most important:
    - a container has no isolated procfs
    -- anyone logged into a container can see bare-metal system details
    -- rebooting the container has no impact on the container's uptime (it shares uptime with the bare-metal system)
    -- core dumps are not isolated and cannot be caught inside the container, while at the same time the container can "spam" bare-metal storage
    - devfs is not isolated and it is not possible to restrict a container's access to specific block devices
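
    A quick way to see the procfs points in practice (a sketch, assuming podman and an alpine image; any OCI runtime behaves the same on a default configuration):

    # /proc inside the container still exposes host-wide details
    podman run --rm alpine cat /proc/cpuinfo                  # lists the host's CPUs
    podman run --rm alpine cat /proc/uptime                   # the host's uptime, not the container's
    podman run --rm alpine cat /proc/sys/kernel/core_pattern  # core-dump handling is a system-wide setting shared with the host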

    • #3
      Originally posted by kloczek View Post
      The last thing Linux needs is a desktop container management tool.

      On Linux there are tons of missing pieces at the kernel layer. A few of the most important:
      - a container has no isolated procfs
      -- anyone logged into a container can see bare-metal system details
      -- rebooting the container has no impact on the container's uptime (it shares uptime with the bare-metal system)
      -- core dumps are not isolated and cannot be caught inside the container, while at the same time the container can "spam" bare-metal storage
      - devfs is not isolated and it is not possible to restrict a container's access to specific block devices
      I think containers are mostly just for ease of deployment.
      Ideally you shouldn't need too much security, as the software you're using should itself be secure. If you can't even trust it in a VM, why use it at all?
      (This is why nobody should ever use anything written in Java, but they do anyway. VMs should not be a stop-gap for the world's most insecure language.)

      • #4
        Originally posted by Ironmask View Post
        I think containers are mostly just for ease of deployment.
        Ideally you shouldn't need too much security, as the software you're using should itself be secure. If you can't even trust it in a VM, why use it at all?
        (This is why nobody should ever use anything written in Java, but they do anyway. VMs should not be a stop-gap for the world's most insecure language.)
        All of those things, plus things like FULL network-stack virtualisation (no network bridges needed to bring up the network layer), were present from the FIRST DAY of Solaris zones' availability, 14+ years ago.
        The effect of not finishing namespaces on Linux is that we now have 10+ implementations of namespace management and none of them fully works.

        The result of the lack of privilege management on Linux is that namespace UIDs must be mapped to bare-metal/global-zone system UIDs.
        On Solaris you can easily strip down a zone's privileges, and even if someone runs something with UID=0 they will not be able to step outside those privileges.
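
        For reference, the Linux side of that UID mapping looks roughly like this (a sketch with rootless podman; the user name and ranges are only examples):

        grep "$USER" /etc/subuid                # e.g. alice:100000:65536 - the host UID range handed to the user
        podman unshare cat /proc/self/uid_map   # the container-to-host UID mapping in effect
        podman run --rm alpine id -u            # prints 0 inside, but that root maps back to an unprivileged host UID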


        On Linux you can cap memory and CPU usage, however you cannot cap, per zone, the memory used to cache the VFS layer.
        The result is that if you have a namespace reading its own VFS data like crazy, and the size of that data exceeds physical memory, it can evict other namespaces' cached data, causing performance issues.
        More: by running "echo 3 >/proc/sys/vm/drop_caches" in a namespace you can flush the cached data used by other namespaces.
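
        For reference, the knobs being discussed look like this (a sketch; how page-cache accounting behaves depends on the kernel and cgroup version):

        podman run --rm --memory=512m --cpus=1 alpine sleep 60   # per-container cgroup caps on memory and CPU
        echo 3 > /proc/sys/vm/drop_caches                        # global knob: flushes page cache, dentries and inodes for the
                                                                 # whole host; needs a privileged container with writable /proc/sys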

        All of that is only the tip of the iceberg, and the fact that namespaces are still not finished after more than a decade means that nobody uses them for serious use cases.

        • #5
          They need to finally implement WSL integration. Until then, it's utterly useless for me. All my container workflow is in WSL.

          • #6
            Originally posted by kloczek View Post
            - a container has no isolated procfs
            -- anyone logged into a container can see bare-metal system details
            -- rebooting the container has no impact on the container's uptime (it shares uptime with the bare-metal system)
            -- core dumps are not isolated and cannot be caught inside the container, while at the same time the container can "spam" bare-metal storage
            You should not bind the host /proc, but instead mount your own procfs inside the container. This only leaves a bit of the second and the third point, which do not sound too bad.
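
            A minimal sketch of that, using plain util-linux unshare rather than a full container runtime:

            sudo unshare --fork --pid --mount-proc /bin/sh   # new PID namespace with a freshly mounted /proc
            ps aux                                           # inside: only this shell and ps are visible
            # /proc/cpuinfo and /proc/uptime still reflect the host, since those files are not per-PID-namespace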

            - devfs is not isolated and it is not possible to restrict a container's access to specific block devices
            I'm not sure what you really mean here, but defining device access inside the container is in fact pretty annoying. The easiest way is to bind the allowed device files from the host, with obvious limitations.
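
            For example, something like this (the device path is only a placeholder):

            # pass a single device node into the container instead of binding all of /dev;
            # the trailing "r" keeps it read-only via the device cgroup
            podman run --rm --device /dev/sdb:/dev/sdb:r alpine ls -l /dev/sdb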

            • #7
              IMHO they should first make podman-compose feature-equivalent to docker-compose. So far the podman tooling is a joke.

              • #8
                podman "desktop" for when your podman is incomplete, buggy and missing features. gg redhat.

                • #9
                  Just curious what all the hate for podman is about. My use cases are relatively light, but they have included a few compose tasks and everything seemed to work fine. I haven't had to do lots of local bridge networking / sidecar proxying etc. with podman, but I did implement a couple of FreeIPA containers with SELinux rulesets, using podman for a small cloud VPN/auth stack, so I'm really curious what is lacking.

                  • #10
                    Originally posted by archkde View Post
                    You should not bind the host /proc, but instead mount your own procfs inside the container. This only leaves a bit of the second and the third point, which do not sound too bad.
                    Please try mounting procfs inside an LXC container, for example.

                    I'm not sure what you really mean here, but defining device access inside the container is in fact pretty annoying. The easiest way is to bind the allowed device files from the host, with obvious limitations.
                    If you have no devfs isolation it is possible, for example, to mount the bare-metal system's root fs inside the zone and change whatever you want.
                    In the case of ZFS there is a concept called "zoned", which also allows masking the parent fs in the ZFS pool to hide where the zone's fs was mounted from.
                    You can also limit the zone's access to only specific storage devices.
                    (See the Solaris documentation sections "File Systems Mounted in Zones" and "File System Mounts and Updating".)
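
                    On the Solaris/illumos side that looks roughly like this (a sketch; the pool, dataset and zone names are only examples):

                    zfs create rpool/zonedata
                    zonecfg -z myzone "add dataset; set name=rpool/zonedata; end"   # delegate the dataset to the zone
                    zfs get zoned rpool/zonedata                                    # the zoned property marks it as zone-managed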

                    With Linux namespaces there is no kernel API at all that allows limiting access to specific storage devices.
                    In any Linux namespace you have FULL access to everything you want.
