A practical guide to setting up QEMU VMs for a local Kubernetes cluster
In this article I will explain how to set up QEMU VMs for your local Kubernetes cluster. This guide will not cover Kubernetes itself.

By Omkaram - Jan 4th, 2025


Now before we do a deep dive, I really expect you to be familiar with the Linux FHS and general Linux concepts like what a service is, what a daemon is, what a socket is, etc. Without this knowledge it would be hard to follow. However, I will explain as much as I can, but if you are still in doubt, try searching the Gentoo, Arch, Ubuntu and other wikis and forums.

If you wish to learn about the Linux FHS, I have already written an article on it.

Alright, I think we are good to proceed. Let's start with some basic terminology in my own words. If you already know this, then skip to the next part.


Terminologies

QEMU: QEMU (Quick EMUlator) is a machine emulator and userland program which is baked into most Linux distributions. The emulator itself is responsible for running the VMs which you launch either via the terminal or a GUI.

KVM: KVM (Kernel-based Virtual Machine) is a kernel module which ships with Linux, and it is responsible for executing the guest VMs' CPU instructions directly on the host CPU.

It can do much more, such as delivering hardware interrupts, hardware acceleration for the VMs, etc. QEMU depends on KVM for direct guest-to-host CPU instruction execution. But it is not necessary to use KVM to spin up VMs in QEMU. When KVM is not enabled or used, QEMU emulates the guest VM's CPU, devices, network, RAM, etc. using pure software abstractions and data structures. This takes a performance hit on the VMs because QEMU has to do the vCPU and virtual memory mapping from guest to host using software alone.
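
To make this concrete, QEMU lets you pick the accelerator explicitly. A minimal sketch (disk.qcow2 is a placeholder disk image, not a file from this guide):

# with KVM: guest instructions run natively on the host CPU
qemu-system-x86_64 -accel kvm -m 1G disk.qcow2

# without KVM: pure software emulation via TCG, noticeably slower
qemu-system-x86_64 -accel tcg -m 1G disk.qcow2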

libvirtd: libvirt is a virtualization management daemon which works with tools like Virt-Manager, Virt-viewer and virsh. You do not need this daemon to be running to spin up VMs directly with the qemu-system-x86_64 command in the terminal.

Virt-Manager is a GUI tool to create, manage and destroy VMs and networks. Virsh is a CLI alternative that does the same. Virt-viewer is a CLI program that opens a running VM in a window.




So what's the big idea?

My plan is to create a few nodes for my local Kubernetes cluster, because doing it with minikube is no fun, although this can be achieved the easy way using solutions like VirtualBox or VMware. However, I like to keep my VMs simple. Virt-Manager is quick and simple too, but I like to do it via the command line so I can write a simple script which can:

  1. Spin up two ubuntu VMs
  2. Have them both linked on a bridge network interface

In the process of doing the above I have found several good ways to do it, and tortured myself for several hours. TBH, it's mostly the QEMU bridge networking that is the pain point, unlike spinning up the VMs.

If you are time constrained, then it's better to use Virt-Manager, create a "default" NAT interface and attach the VMs to it.


Let's begin

First we need to install the necessary packages. I use Fedora as my host system, so we use dnf to install them and check that they are present.

                            
sudo dnf install bridge-utils libvirt qemu-kvm virt-manager
dnf list installed bridge-utils libvirt qemu-kvm virt-manager
                              
                        

Next, check if you have virtualization enabled on your system. The first two commands below should return either vmx (for Intel) or svm (for AMD), and lsmod should show the kvm module loaded.

                            
cat /proc/cpuinfo | grep vmx
cat /proc/cpuinfo | grep svm
lsmod | grep kvm
                              
                        

Now add your user to the kvm and libvirt groups and validate.

                            
sudo usermod -aG libvirt guest # 'guest' is my username; use yours
sudo usermod -aG kvm guest
groups
id -nG # or use this to check
libvirtd --version # check your libvirtd version
                              
                        

You can stop libvirtd for now, but if it does not stop properly, try stopping the sockets too.

                            
sudo systemctl stop libvirtd
sudo systemctl stop libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket #if necessary
                              
                        

Sometimes you would want to run virsh without sudo.

                                
ls -l /var/run/libvirt/libvirt-sock
sudo chmod 660 /var/run/libvirt/libvirt-sock
sudo chown root:libvirt /var/run/libvirt/libvirt-sock
                                  
                            
But this change is temporary; the socket is recreated when libvirtd restarts, so it won't stick.
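
The lasting fix is the libvirt group membership we set up earlier; it only takes effect in a new login session. A quick way to test without logging out (assuming you already ran the usermod commands above):

newgrp libvirt # start a shell with the new group active
virsh -c qemu:///system list --all # should now work without sudo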


It's time to create our VM. You can create as many VMs as you want the same way.

First, create a directory where you will place your QEMU VM disk file. These files use the .qcow2 format, and this is where the VM is installed. Each VM instance must have its own disk file.

                            
# here guest is my username, and this is the place where I store the disk files
mkdir -p /home/guest/kvm/images/ 

qemu-img create -f qcow2 /home/guest/kvm/images/ubuntu_kube_master.qcow2 20G
                              
                        

This creates the ubuntu_kube_master node disk file with a 20 GB virtual size. It won't allocate all the space upfront though; qcow2 files grow as the guest writes data.
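
You can verify the thin allocation yourself; `qemu-img info` reports both the virtual size and the actual bytes used on disk:

qemu-img info /home/guest/kvm/images/ubuntu_kube_master.qcow2
# compare "virtual size" (20 GiB) with "disk size" (tiny for a fresh file)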

Now we need to install the ubuntu VM into this disk file and there are two ways to do it.

One method is to manage your VMs using the virsh CLI program. Using virsh, you don't have to spin up your VM directly with the `qemu-system-x86_64` command each time. Virsh manages and remembers the state of the VMs it creates, and VMs created by Virt-Manager are also visible in virsh, since both talk to the same libvirt daemon under the hood.

The other method is to use qemu emulator directly using `qemu-system-x86_64`.


Method 1: Using virt-install and virsh to create and manage VMs

Honestly, this is the simpler way to do this. However, virsh is tricky with the bridge and NAT network devices it creates. I will show the pitfalls later.

Also, libvirt by default comes with a "default" NAT network. This is a virtual network which only libvirt understands, and it is the thing we connect our VMs to. The "default" network performs all the NAT through libvirtd, and mimicking manually what it does out of the box is a pain.

Moreover, if you mistakenly delete this default network, you need to recreate it, and for that you have to manually define it using a default.xml file.

The tricky business I was talking about with virsh is that if you don't use sudo, it will either not allow you to list, define or start this default network, or it will show a default network which is at the session level and not the system level.

                            
# enable and start libvirtd
sudo systemctl enable --now libvirtd

# list your networks. You should see only "default"
sudo virsh net-list --all

# if the state is inactive then you need to start it
# -c qemu:///system is the connection URI we need to use with qemu, and this is
# the tricky business I had to break my head over.
# If you don't provide it, you get qemu:///session instead.

sudo virsh -c qemu:///system net-start default

# also make it auto-start the next time libvirtd starts
sudo virsh -c qemu:///system net-autostart default
                            
                        

If for some reason you destroyed your "default" network last time, then recreate it:

                            
cat << EOF > default.xml
<network>
  <name>default</name>
  <uuid>afd4e923-66cb-45ca-9120-1e46e72899a3</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
      <host mac='52:54:00:ab:cd:ef' ip='192.168.122.3'/>
      <host mac='52:54:00:12:34:56' ip='192.168.122.4'/>
    </dhcp>
  </ip>
</network>
EOF
                            
                        

then do the below, which is very important

                            
sudo sysctl -w net.ipv4.ip_forward=1 # temporarily enable IP forwarding
# or permanently allow IP forwarding by uncommenting this line in /etc/sysctl.conf:
# net.ipv4.ip_forward = 1

sudo virsh -c qemu:///system net-define default.xml
sudo virsh -c qemu:///system net-autostart default
sudo virsh -c qemu:///system net-start default
                            
                        

When you're using QEMU with a bridge (like virbr0) and a wireless interface (wlo1), enabling IP forwarding allows the host to act as a router and forward network packets between the virtual bridge (virbr0) and the wireless interface (wlo1). This means VMs can access the outside world through the host's wireless network connection.
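
On systems that ship /etc/sysctl.d/, a drop-in file is an alternative way to make the setting persistent (a minimal sketch; the file name 99-ipforward.conf is my own choice):

echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ipforward.conf
sudo sysctl --system # reload all sysctl configuration files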

Now, before you apply net-start with virsh, I want you to check whether the bridge virbr0 appears when you do `ip a`. The reason is that `default` is a virtual network, not a virtual interface; it creates the actual virtual interfaces like virbr0 and tap0. Only libvirtd and virsh understand how to work with `default`.

If virbr0 is already there, then check whether it is DOWN or UNKNOWN. Either way, when you do `net-start default`, a virbr0 entry will appear in the `ip a` output.

As soon as you connect a VM instance to it using `virt-install` or `qemu-system-x86_64`, a tap0 interface will automatically be created and attached to your virbr0. The virbr0 will come UP with the CIDR specified in the default.xml.

You can also check that your virtual interfaces virbr0 and tap0 appear using one of the following:

    
nmcli dev status
brctl show
    

So what exactly is a bridge, tun and tap?

At a very high level, these are virtual network interfaces which are used by tools like Docker, VMs, etc. to either bridge, tunnel or tap two or more virtual or physical interfaces.

You might have seen `docker0`, which is also a bridge, created by the dockerd daemon. A bridge helps connect other virtual interfaces, also called vnets (like tun0, tap0), to a real interface like eth0 or wlo1.

You cannot connect a VM directly to a bridge without tapping into it using a vnet like tap0.
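
You can inspect this chain yourself with iproute2; a quick sketch:

ip -br link show type bridge # list bridge interfaces in brief format
bridge link show # show which interfaces (taps) are attached to which bridge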


A painful problem with wireless interfaces like wlo1

Another thing I want to talk about is the issue with wlo1. It is almost impossible to attach a bridge to it, unlike eth0, which is easy, because the wlo1 wireless adaptor does not support something called promiscuous mode. It's too technical and I don't want to waste more of my time going too deep, so let's ignore it.

So the workaround here is to do NAT forwarding with iptables to wlo1 from virbr0 bridge. But this is automatically taken care of by the default device which comes with libvirt.

In later sections you will see how I do this NAT forwarding by manually creating my own virbr1 and vnet1 (analogous to virbr0 and tap0), attaching a VM to it and making it work.

A highlight I need to mention here: using default and virsh, libvirt creates a dnsmasq service for you automatically, and hence it handles DHCP. So, essentially, with this Method 1 you don't have to worry about manually adding IP addresses to your VMs using `ip addr add`.
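
You can confirm this yourself: libvirt spawns a dnsmasq instance for the default network, and its generated config usually lives under /var/lib/libvirt/dnsmasq/ (the path may vary by distro):

ps aux | grep dnsmasq # a dnsmasq process bound to virbr0 should be running
sudo cat /var/lib/libvirt/dnsmasq/default.conf # the config libvirt generated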

But when I tried doing the same manually (assigning an IP range and MAC IDs) to virbr1, for some reason it did not work. It could be that I messed up the iptables rules for DNS port 53, etc.
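
For reference, the kind of invocation involved looks roughly like this. This is a sketch only, not a tested recipe; it did not work for me, and the flags shown are just standard dnsmasq options:

sudo dnsmasq --interface=virbr1 --bind-interfaces --except-interface=lo \
    --dhcp-range=192.168.100.10,192.168.100.100,12h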


Explaining default.xml contents

If we go back to our default.xml, virbr0 gets its own IP, `192.168.122.1`. Once you assign a VM and a tap0 or vnet1 is created automatically or manually, you will see that the tap interfaces don't get their own IP, but virbr0 does. Both virbr0 and tap0 will be UP in the `ip a` output.

Also, this default network is a NAT forwarder to wlo1 for ports higher than 1024.

The dhcp entries in the default.xml will assign the specified IP to the target VM if the VM is launched with that respective MAC address as a CLI option.
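
Once a VM boots with one of those MAC addresses, you can confirm the lease it received:

sudo virsh -c qemu:///system net-dhcp-leases default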


Time to install Ubuntu VM

If you remember the step where we created a qcow2 disk file, assigning 20 GB of virtual space to it, now is the time to install the Ubuntu VM onto it. I decided to install Ubuntu Server.

    
qemu-system-x86_64 -m 2.5G \
    -cdrom /opt/libvirt/images/pool/ubuntu-24.10-live-server-amd64.iso \
    -hda /home/guest/kvm/images/ubuntu_kube_master.qcow2 \
    -boot d -enable-kvm -smp 4
    

The above command is pretty self-explanatory. After successfully installing the server onto the VM, you can run the following to quickly boot into it and check that it is properly set up.


qemu-system-x86_64 -m 2.5G -hda /home/guest/kvm/images/ubuntu_kube_master.qcow2 -enable-kvm -smp 4    

By default QEMU uses its built-in user-mode NAT to set up networking for the above running VM, which allows the VM to access external networks (like the internet) but prevents external systems from directly accessing the VM (see the port-forwarding sketch after the list below).

With QEMU there are three network variants you can configure:

  1. User-mode NAT (which I explained above)
  2. Bridged NAT via "default" (which we talked a lot about above)
  3. Host-only networking (meaning host and VMs can talk, but are isolated from the external world)
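
As promised above, here is a minimal sketch of reaching a user-mode-NAT VM from the host by forwarding a port (the host port 2222 is my own choice):

# forward host port 2222 to guest port 22 so you can SSH in
qemu-system-x86_64 -m 2.5G -hda /home/guest/kvm/images/ubuntu_kube_master.qcow2 \
    -enable-kvm -smp 4 \
    -netdev user,id=net0,hostfwd=tcp::2222-:22 \
    -device virtio-net-pci,netdev=net0

# then, from the host ('youruser' is whatever user you created inside the guest)
ssh -p 2222 youruser@localhost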


From here on there are two ways to move forward: either you spin up your VM using virt-install, virsh or virt-viewer, or you use qemu-system-x86_64 directly.

No matter which route you choose, since you already decided to go with the virbr0/default network which virsh created in the previous steps, you will end up with a working VM which can talk to other VMs, the host and the internet.


Allow virbr0 to talk to and from the outside world!

    
sudo iptables -I FORWARD -i virbr0 -j ACCEPT
sudo iptables -I FORWARD -o virbr0 -j ACCEPT
sudo iptables -L
sudo iptables-save # prints the current rules; persisting them across reboots is distro-specific
    
    

Connect to your VM using virt-install or virt-viewer

    
# This will create a VM instance (more like registers a VM with Virsh)
# and also connect to it by default
sudo virt-install --name kube-master --memory 2048 --vcpus 3 --disk path=/home/guest/kvm/images/ubuntu_kube_master.qcow2,format=qcow2 --import --network network=default --osinfo linux2022

# to view your registered VM
sudo virsh list

# If you stopped the VM and want to start it next time
sudo virsh --connect qemu:///system start kube-master

# If your VM is redirecting its tty to a serial console then
sudo virsh console kube-master

# to connect to a running VM with a viewer window
sudo virt-viewer kube-master

# to shutdown a VM
sudo virsh shutdown kube-master

# to unregister a VM
sudo virsh undefine kube-master

I would say not to mess with the default virsh network at all, but if you decided to remove it, then do:


sudo virsh -c qemu:///system net-destroy default   # stop the running network
sudo virsh -c qemu:///system net-undefine default  # remove its definition


What if you don't want to use virsh, but still want to use the virbr0 which it manages?

No problem, just do net-start on default, make sure a virbr0 shows up in the `ip a` output, and then:


qemu-system-x86_64 -m 2.5G -hda /home/guest/kvm/images/ubuntu_kube_master.qcow2 \
    -enable-kvm -smp 4 \
    -netdev bridge,id=net0,br=virbr0 \
    -device virtio-net-pci,netdev=net0,mac=52:54:00:ab:cd:ef
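
One gotcha worth knowing: the bridge netdev goes through the setuid qemu-bridge-helper, which only attaches to bridges whitelisted in its ACL file. On most distros that file is /etc/qemu/bridge.conf (create it if missing):

echo "allow virbr0" | sudo tee -a /etc/qemu/bridge.conf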



Congratulations if you made it this far. Now why would we go further and torture ourselves if we have already achieved our original goal, i.e., create VMs and make them ping each other, the host and the outside world?

Because it's fun. Why would I need virsh to do this for me if I can do it myself?


Method 2: Create my own virbr1 and vnet1

BTW, there is a big problem with this approach: you won't get an IP assigned to your guest VM, so pinging basically anything will fail. To overcome this, you either add an IP and default gateway manually, or set up a dnsmasq DHCP service and bind it to virbr1. I tried the second approach but it didn't work. Manually adding the IP and gateway works though.

Steps to follow

1. Create a bridge, assign ip range and UP


sudo ip link add virbr1 type bridge
sudo ip addr add 192.168.100.1/24 dev virbr1
sudo ip link set virbr1 up
        

With the above, 192.168.100.1 becomes your bridge gateway, and the usable addresses range from 192.168.100.2 to 192.168.100.254. Bringing the bridge UP actually throws it into an UNKNOWN state. Let it be.


2. Create a tap interface named vnet1


sudo ip tuntap add dev vnet1 mode tap
sudo ip link set vnet1 master virbr1
sudo ip link set vnet1 up # not strictly necessary, because qemu will bring it up
        

3. Apply NAT forwarding rules


sudo iptables -t nat -A POSTROUTING -o wlo1 -j MASQUERADE
sysctl net.ipv4.ip_forward # check if this is 1
sudo iptables -I FORWARD -i virbr1 -j ACCEPT
sudo iptables -I FORWARD -o virbr1 -j ACCEPT
        

In the above, the first command allows virtual machines on virbr1 to access the internet by using the host's external interface (wlo1) through NAT, replacing the VMs' private IPs with the host's IP for outgoing traffic.

The second command checks whether IP forwarding is enabled.

The third and fourth commands allow all incoming and outgoing traffic on virbr1. If these last two commands are not applied, then pings to the outside world (e.g. 1.1.1.1) from the VMs will time out.
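
To verify the rules are actually matching traffic, watch the packet counters while a VM pings out:

sudo iptables -t nat -L POSTROUTING -n -v # MASQUERADE counters should increase
sudo iptables -L FORWARD -n -v # same for the two ACCEPT rules on virbr1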


4. Start and connect to the VM using qemu and tap


qemu-system-x86_64 -m 2.5G -hda /home/guest/kvm/images/ubuntu_kube_master.qcow2 \
    -enable-kvm -smp 4 \
    -netdev tap,id=net0,ifname=vnet1,script=no,downscript=no \
    -device virtio-net-pci,netdev=net0

With this your VM will start. It will take a while to boot because systemd-networkd waits for the ens3 network interface inside the guest to get an IP (which it won't), but it will boot eventually.

If you check virbr1 and vnet1 by doing `ip a` on your host, you will see both UP.


5. Add ip, gateway to ens3 inside guest VM


# 192.168.100.10 will be your VM's IP
sudo ip addr add 192.168.100.10/24 dev ens3
ip route # check that a route entry for 192.168.100.0/24 appears in the output
sudo systemctl restart systemd-networkd # if necessary

# 192.168.100.1 is the gateway; you know this from virbr1
sudo ip route add default via 192.168.100.1

# this should work now
ping 1.1.1.1

# writing to /etc/resolv.conf needs root, so use tee
sudo tee /etc/resolv.conf << EOF
nameserver 1.1.1.1
nameserver 9.9.9.9
EOF

# these should also work
ping www.google.com
ping [your host ip]
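
Note that the `ip addr` and `ip route` commands above do not survive a guest reboot. On Ubuntu Server the persistent route is a netplan file; a minimal sketch, assuming your guest interface is ens3 and netplan is in use (the file name 99-static.yaml is my own choice):

sudo tee /etc/netplan/99-static.yaml << EOF
network:
  version: 2
  ethernets:
    ens3:
      addresses: [192.168.100.10/24]
      routes:
        - to: default
          via: 192.168.100.1
      nameservers:
        addresses: [1.1.1.1, 9.9.9.9]
EOF
sudo netplan apply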

Congratulations, you have successfully created a VM using QEMU. All you need to do now is create more VMs and give each one its own tap attached to the bridge (vnet2, vnet3 and so on), or use the default network with virsh.

Hope my suffering brought you joy :)