oVirt 4.1.5 Provides Performance Boost With Gluster


  • oVirt 4.1.5 Provides Performance Boost With Gluster

    Phoronix: oVirt 4.1.5 Provides Performance Boost With Gluster

    For those making use of Red Hat's oVirt as a virtualization management platform, there should be better performance now when using the Gluster network-attached file-system...


  • #2
    I recently finished my homelab with Proxmox 5. Now I am reading about oVirt with Gluster; this is bad.



    • #3
      Does oVirt provide a standalone web UI on top of the hypervisor, and HA like Proxmox does? Last I checked it didn't. Proxmox is amazing, though.



      • #4
        Originally posted by sarfarazahmad View Post
        Does oVirt provide a standalone web UI on top of the hypervisor, and HA like Proxmox does? Last I checked it didn't. Proxmox is amazing, though.
        Since version 4 they have relied on the Cockpit project for web-based management.

        oVirt is a free open-source virtualization solution for your entire enterprise


        Cockpit makes it easy to administer your Linux servers via a web browser.
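
        If you want to try it, the setup is tiny. A minimal sketch (assuming an EL7-style oVirt node; the cockpit-ovirt-dashboard plugin package and the firewalld service name are standard for oVirt 4.x, but verify them for your distro):

            import subprocess

            # Install Cockpit plus oVirt's dashboard plugin, then enable the
            # socket-activated service. The UI is served at https://<host>:9090/.
            for cmd in (
                ["yum", "-y", "install", "cockpit", "cockpit-ovirt-dashboard"],
                ["systemctl", "enable", "--now", "cockpit.socket"],
                ["firewall-cmd", "--permanent", "--add-service=cockpit"],
                ["firewall-cmd", "--reload"],
            ):
                subprocess.run(cmd, check=True)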



        • #5
          Looks like I dodged a bullet. Still, it would have been comforting to know about it before I started. So much virtual machine software, so little time.



          • #6
            Originally posted by Jabberwocky View Post
            I recently finished my homelab with Proxmox 5. Now I am reading about oVirt with Gluster; this is bad.
            oVirt is commercially branded as "Red Hat Virtualization". It's what I use at work for hosting all our RHEL VMs. But at home, I use Proxmox. oVirt is large and complex; Proxmox is small and light in comparison, and also easier to set up and use. I work with oVirt every day at work, and I do not recommend it for a home lab.



            • #7
              Well, I had the opposite situation. I knew about oVirt and not Proxmox when I built my build cluster. My use case is mostly booting more computers when the build queue grows. Not the typical use case for such systems, but not unheard of either.



              From my experience, oVirt is very complex and unstable. It has a hard time synchronizing itself with itself... It has as many bugs as features and tends to toggle between online and offline for no reason multiple times a day (the VMs stay up, but the engine health goes red). It also has serious issues restoring itself after a power loss or maintenance. The documentation is mostly the bug tracker and the mailing list. The devs are unresponsive (on IRC) and unconcerned unless you have 1000 servers. It works, but I would not recommend it in production without a support contract. It has cool features, but I preferred the ancient VMware Server 2.x (the free one). With the hosted engine, it's better than it was, but upgrades are still a pain (they almost universally fail to restore the cluster to a working state and require manual work).
              Last edited by Elv13; 23 August 2017, 05:07 PM.
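
              That flapping engine health can at least be watched from any host in the cluster. A rough sketch (hosted-engine --vm-status is the real command; the --json flag and the exact field names are from memory of 4.1, so treat them as assumptions):

                  import json
                  import subprocess

                  # Ask the HA agent for cluster-wide hosted-engine state.
                  out = subprocess.run(
                      ["hosted-engine", "--vm-status", "--json"],
                      capture_output=True, text=True, check=True,
                  ).stdout

                  # Output is keyed by host id, plus scalar flags such as
                  # "global_maintenance"; skip anything that isn't a host entry.
                  for key, host in json.loads(out).items():
                      if isinstance(host, dict):
                          print(host.get("hostname"), host.get("engine-status"),
                                "score:", host.get("score"))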



              • #8
                OpenNebula was also amazing. I haven't followed up on these projects, but OpenNebula felt promising.



                • #9
                  Proxmox has used libgfapi since Gluster gfapi support was first introduced in QEMU (2014). It seems incredible that oVirt is only getting support now (given that Gluster and oVirt are both Red Hat projects).
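
                  For anyone wondering what "libgfapi" buys you: QEMU's gluster block driver opens the image straight over Gluster's library API instead of through a FUSE mount, cutting out the kernel round trips. A minimal sketch of the two access paths when launching QEMU by hand (host, volume, and image names are made up):

                      import subprocess

                      HOST = "gluster1.example.com"   # hypothetical Gluster server
                      VOLUME = "vmstore"              # hypothetical volume name

                      # FUSE path: QEMU sees an ordinary file on a mounted
                      # filesystem, so every I/O crosses the FUSE boundary.
                      fuse_drive = "file=/mnt/vmstore/disk0.qcow2,format=qcow2,if=virtio"

                      # libgfapi path: the gluster:// URI makes QEMU talk to the
                      # volume directly, no FUSE mount involved.
                      gfapi_drive = (
                          f"file=gluster://{HOST}/{VOLUME}/disk0.qcow2"
                          ",format=qcow2,if=virtio"
                      )

                      subprocess.run([
                          "qemu-system-x86_64", "-enable-kvm", "-m", "2048",
                          "-drive", gfapi_drive,
                      ], check=True)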



                  • #10
                    Originally posted by Elv13 View Post
                    Well, I had the opposite situation. I knew about oVirt and not Proxmox when I built my build cluster. My use case is mostly booting more computers when the build queue grows. Not the typical use case for such systems, but not unheard of either.

                    From my experience, oVirt is very complex and unstable. It has a hard time synchronizing itself with itself... It has as many bugs as features and tends to toggle between online and offline for no reason multiple times a day (the VMs stay up, but the engine health goes red). It also has serious issues restoring itself after a power loss or maintenance. The documentation is mostly the bug tracker and the mailing list. The devs are unresponsive (on IRC) and unconcerned unless you have 1000 servers. It works, but I would not recommend it in production without a support contract. It has cool features, but I preferred the ancient VMware Server 2.x (the free one). With the hosted engine, it's better than it was, but upgrades are still a pain (they almost universally fail to restore the cluster to a working state and require manual work).
                    Yup, it is very complex for sure. In the earlier v4.x releases I experienced much of the buggy behavior you describe, but it has actually been quite stable for us since about v4.1.2. I upgraded our production clusters to v4.1.5 earlier today. The Red Hat Virtualization documentation is actually quite good, but it's behind their support-contract paywall, so it's not freely available. There's no way I would have been able to set it up successfully without Red Hat's documentation. I agree, the freely available oVirt documentation is pretty sparse.

                    IMO the hosted engine makes things more complicated, as it has special commands and procedures to manage and update it, compared with a standalone engine. Either way, it's a lot more complicated than Proxmox or VMware. For example, there is no way to migrate the hosted engine from one storage pool to another. So if you bought a new SAN and want to migrate everything off the old one, you cannot do it directly. You have to shut down the hosted engine, manually back up its database, deploy a new hosted engine on the new storage, and then restore the database. And everything must be done in the precise sequence or it will fail. Again, Red Hat's documentation is excellent in this regard and walks you through the entire process - but there's no way for anyone to figure this stuff out on their own; you have to have access to the Red Hat docs.
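
                    For anyone staring down that migration, here is the rough shape of it. engine-backup and global maintenance mode are the real tools involved, but the deploy-side restore steps changed across oVirt versions, so treat this as an outline rather than a procedure:

                        import subprocess

                        def run(*cmd):
                            """Echo and execute a step, aborting on failure."""
                            print("+", " ".join(cmd))
                            subprocess.run(cmd, check=True)

                        # On any hosted-engine host: keep the HA agents from
                        # restarting the engine VM out from under you.
                        run("hosted-engine", "--set-maintenance", "--mode=global")

                        # Inside the engine VM (engine-backup ships with the
                        # engine, not the hosts): snapshot the database and config.
                        run("engine-backup", "--mode=backup",
                            "--file=/root/engine-backup.tar.gz",
                            "--log=/root/engine-backup.log")

                        # Then: copy the backup off, power the old engine VM down,
                        # run `hosted-engine --deploy` against the new storage, and
                        # finish with `engine-backup --mode=restore` inside the new
                        # engine VM - in that exact order, as noted above.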

