A key drawback in the use of full system virtualization is the performance penalty introduced by hypervisors. This problem is especially pronounced on ARM, which has significantly higher overhead for some workloads compared to x86 due to differences in hardware virtualization support. The key reason for the overhead on ARM is the need to multiplex kernel mode state between the hypervisor and the VMs, each of which runs its own kernel. This talk will cover how we have redesigned and optimized KVM/ARM, resulting in an order-of-magnitude reduction in overhead and less overhead than x86 on key hypervisor operations. Our optimizations rely on new hardware support in ARMv8.1, the Virtualization Host Extensions (VHE), but also support legacy hardware through invasive modifications to Linux that allow running the kernel in the hypervisor-specific CPU mode, EL2.
Libvirt has long provided the standard API for managing virtual machines on individual hosts. It has delegated the task of managing clusters of hosts to higher-level applications like OpenStack, oVirt, or Proxmox, to name just a few. Despite their differences, these applications have many infrastructure needs in common and as a result have often re-invented the same solutions to the same problems.
In this talk we are going to look at how to leverage libvirt and KVM to enable general-purpose management of virtual machines with Kubernetes. It will show how the Kubernetes platform can be used to support application container, data center virtualization, and cloud virtualization use cases from a single application and API. At a technical level it will examine some of the challenges of integrating virtual machines with the Kubernetes architecture.
Failing migrations: How & Why (David Gilbert, Red Hat) - Despite our best efforts, QEMU migrations sometimes fail. This talk will describe some of the types of failures we see, their causes, and how device emulation developers can help ensure successful migrations in production. Hints on debugging failed migrations will be included, as well as what information should be provided to make troubleshooting easier.
QEMU is a large program that links with hundreds of libraries. Over a million lines of C code in a single process to run a VM: this leaves a lot of room for security vulnerabilities, even when using sandboxing. The sandbox has to be quite permissive to accommodate all the code, and it doesn't prevent QEMU from crashing. Having a single process also makes it harder to run concurrent work and schedule it well, and can lead to more memory fragmentation. QEMU could use multiple processes for various tasks, such as device emulation. vhost-user is a solution for virtio devices, mostly used for networking, but it can be applied to other kinds of devices. This brings modularity and allows device emulation to live in external projects. However, new interfaces would have to be created for other kinds of devices and tasks.
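To illustrate the vhost-user approach described above, the invocation below is a minimal sketch of attaching a virtio-net device whose data path is served by an external backend process; the socket path and object names are illustrative, not from the talk. Guest RAM must be file-backed and shared so the external process can map it:

```shell
# Guest memory is shared (share=on) so the vhost-user backend can access it;
# the chardev socket is where the external device-emulation process listens.
qemu-system-x86_64 -m 1G \
    -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/shm,share=on \
    -numa node,memdev=mem0 \
    -chardev socket,id=char0,path=/tmp/vhost-user.sock \
    -netdev vhost-user,id=net0,chardev=char0 \
    -device virtio-net-pci,netdev=net0
```

A backend such as DPDK's testpmd (or any other vhost-user server) would listen on the socket and handle the virtqueues, keeping the packet-processing code entirely outside the QEMU process.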