QEMU System Emulation: Low-Level Virtualization on Linux
Knowledge Overview
Prerequisites
- Basic Linux command line skills
- Understanding of virtualization concepts
- System administration fundamentals
- Network basics
- Hardware architecture awareness
Time Investment
28 minutes reading time
56-84 minutes hands-on practice
Guide Content
Understanding QEMU System Emulation
QEMU system emulation is a powerful, open-source low-level virtualization tool that enables you to run virtual machines with different CPU architectures and custom hardware configurations on Linux. Unlike traditional hypervisors, QEMU provides complete hardware emulation capabilities, allowing you to test cross-platform software, develop embedded systems, and create isolated virtualization environments without requiring matching physical hardware. When combined with KVM (Kernel-based Virtual Machine), QEMU delivers near-native performance for same-architecture virtualization while maintaining the flexibility to emulate entirely different processor types.
Quick Start Command:
# Create a virtual disk image
qemu-img create -f qcow2 mydisk.img 20G
# Launch a basic VM with 2GB RAM
qemu-system-x86_64 -enable-kvm -m 2048 -drive file=mydisk.img,format=qcow2
Table of Contents
- What is QEMU system emulation and how does it differ from other virtualization?
- How does QEMU system emulation work with KVM acceleration?
- How to install QEMU on major Linux distributions?
- What are QEMU image formats and which should you choose?
- How to create and configure virtual machines with QEMU?
- How to optimize QEMU system emulation performance?
- What networking modes does QEMU support?
- How to manage QEMU virtual machines effectively?
- FAQ
- Troubleshooting Common QEMU Issues
- Additional Resources
What is QEMU System Emulation and How Does It Differ from Other Virtualization?
QEMU system emulation stands out as one of the most versatile virtualization solutions available for Linux systems. Furthermore, understanding its unique position in the virtualization landscape helps system administrators make informed decisions about deployment strategies.
Core QEMU Architecture
QEMU (Quick EMUlator) functions as both a full system emulator and a virtualizer. Consequently, it operates in two distinct modes that serve different purposes:
Pure Emulation Mode:
- Translates instructions between different CPU architectures
- Runs ARM code on x86 processors or vice versa
- Provides complete hardware emulation including peripherals
- Ideal for cross-platform development and testing
Virtualization Mode (with KVM):
- Leverages hardware virtualization extensions (Intel VT-x, AMD-V)
- Delivers near-native performance for same-architecture VMs
- Integrates with Linux kernel through KVM module
- Suitable for production workloads requiring high performance
Check CPU virtualization support:
# Verify hardware virtualization capabilities
egrep -c '(vmx|svm)' /proc/cpuinfo
# Non-zero result indicates support
# Check if KVM modules are loaded
lsmod | grep kvm
# Should show kvm_intel or kvm_amd
QEMU vs Other Virtualization Technologies
1. QEMU System Emulation vs VirtualBox:
| Feature | QEMU System Emulation | VirtualBox |
|---|---|---|
| Architecture Support | Multiple CPU architectures | x86/x86_64 only |
| Performance with KVM | Near-native speed | Good (uses its own kernel driver) |
| GUI Management | Command-line focused | User-friendly GUI |
| Licensing | GPL (fully open source) | GPL with proprietary additions |
| Use Case | Professional/embedded development | Desktop virtualization |
2. QEMU System Emulation vs KVM:
This comparison often confuses newcomers because QEMU and KVM work together rather than competing. Specifically, KVM is a Linux kernel module that provides hardware virtualization support, while QEMU handles device emulation and user-space operations. Therefore, when running same-architecture VMs, QEMU leverages KVM for acceleration, creating a powerful combination commonly referred to as QEMU/KVM.
Verify KVM acceleration availability:
# Test KVM access permissions
kvm-ok  # provided by the cpu-checker package on Debian/Ubuntu
# Should return: KVM acceleration can be used
# Alternative check
[ -e /dev/kvm ] && echo "KVM available" || echo "KVM not available"
3. QEMU System Emulation vs Xen:
While both provide enterprise-grade virtualization, QEMU offers greater flexibility for mixed workloads. Xen uses a different architectural approach with a dedicated hypervisor layer, whereas QEMU integrates more tightly with the Linux kernel through KVM. In practice, QEMU is easier to set up for smaller deployments, while Xen excels in large-scale enterprise environments.
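If you're unsure which stack a given host is actually running, a minimal check using standard kernel interfaces distinguishes the two:
# KVM: the kvm modules are loaded into the running kernel
lsmod | grep -E '^kvm'
# Xen: the hypervisor exposes /proc/xen to dom0 and guests
[ -d /proc/xen ] && echo "Xen detected" || echo "No Xen hypervisor"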
How Does QEMU System Emulation Work with KVM Acceleration?
Understanding the technical architecture of QEMU system emulation reveals why it's such a powerful tool for Linux virtualization. Therefore, let's examine the internal mechanisms that make QEMU both flexible and performant.
The QEMU/KVM Integration Model
When QEMU system emulation operates with KVM acceleration, the workload is intelligently distributed across several components. Consequently, this architecture delivers optimal performance while maintaining security and isolation.
Component Responsibilities:
- KVM Kernel Module - Handles CPU virtualization and memory management
- QEMU User-Space Process - Emulates devices, BIOS, and I/O operations
- Virtual CPU (vCPU) Threads - Execute guest instructions with hardware assistance
- Device Emulation Layer - Provides virtual hardware to guest systems
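You can observe this division of labor on a running VM: each guest vCPU runs as a dedicated thread of the QEMU process. A minimal sketch (assumes a VM is already running; exact thread names vary by QEMU version):
# List threads of the newest QEMU process
# vCPU threads appear with names like "CPU 0/KVM", "CPU 1/KVM", ...
ps -T -p "$(pgrep -n -f qemu-system)"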
Enable KVM acceleration:
# Load KVM kernel modules
sudo modprobe kvm
sudo modprobe kvm_intel # For Intel processors
# sudo modprobe kvm_amd # For AMD processors
# Verify module loading
dmesg | grep kvm
# Should show KVM initialization messages
# Check KVM device permissions
ls -l /dev/kvm
# Should be readable/writable by your user or virtualization group
CPU Virtualization Modes
Furthermore, QEMU system emulation supports multiple CPU emulation strategies depending on your requirements:
Hardware-Assisted Virtualization (KVM):
# Enable KVM with optimal CPU settings
qemu-system-x86_64 \
-enable-kvm \
-cpu host \
-smp 4,cores=2,threads=2 \
-m 4096 \
-drive file=vm.img,format=qcow2
Binary Translation (TCG - Tiny Code Generator):
# Pure emulation without KVM (slower, but works cross-architecture)
qemu-system-aarch64 \
-M virt \
-cpu cortex-a57 \
-m 2048 \
-drive file=arm64-vm.img,format=qcow2
Memory Management in QEMU
QEMU system emulation implements sophisticated memory management to ensure guest systems operate efficiently. Therefore, understanding these mechanisms helps optimize VM performance.
Configure memory settings:
# Allocate 8GB RAM to VM with huge pages support
qemu-system-x86_64 \
-enable-kvm \
-m 8192 \
-mem-path /dev/hugepages \
-mem-prealloc \
-drive file=vm.img,format=qcow2
# Check huge pages configuration on host
cat /proc/meminfo | grep -i huge
How to Install QEMU on Major Linux Distributions?
Installing QEMU system emulation varies slightly across distributions, but the core process remains straightforward. Moreover, proper installation ensures you have all necessary components for optimal performance.
Ubuntu/Debian Installation
On Debian-based systems, QEMU system emulation packages are well-maintained and regularly updated. Therefore, installation is simple and reliable.
# Update package repositories
sudo apt update
# Install QEMU system emulation with KVM support
sudo apt install qemu-system-x86 qemu-kvm libvirt-daemon-system \
libvirt-clients bridge-utils virt-manager
# Install additional architecture support (optional)
sudo apt install qemu-system-arm qemu-system-aarch64 qemu-system-mips
# Add your user to required groups
sudo usermod -aG kvm $USER
sudo usermod -aG libvirt $USER
# Log out and back in (or reboot) to apply group membership
# Or start a subshell with the new group: newgrp kvm
# Verify QEMU installation
qemu-system-x86_64 --version
which qemu-img
Red Hat/CentOS/Fedora Installation
Similarly, Red Hat-based distributions provide comprehensive QEMU system emulation packages optimized for enterprise environments.
# Enable required repositories (RHEL 9; adjust the release number for other versions)
sudo subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms
# Install QEMU with KVM (Fedora/RHEL 9)
sudo dnf install qemu-kvm libvirt virt-install virt-manager \
virt-viewer bridge-utils
# For RHEL/CentOS 7
sudo yum install qemu-kvm qemu-img libvirt libvirt-python \
libvirt-client virt-install virt-viewer bridge-utils
# Start and enable libvirt service
sudo systemctl enable --now libvirtd
# Add user to libvirt group
sudo usermod -aG libvirt $USER
sudo usermod -aG kvm $USER
# Verify virtualization support
virt-host-validate qemu
Arch Linux Installation
Furthermore, Arch Linux provides bleeding-edge QEMU system emulation packages with extensive customization options.
# Install QEMU and virtualization tools
sudo pacman -S qemu-full libvirt virt-manager dnsmasq bridge-utils
# Enable and start libvirt
sudo systemctl enable --now libvirtd
# Configure user permissions
sudo usermod -aG libvirt $USER
sudo usermod -aG kvm $USER
# Install optional UEFI support
sudo pacman -S edk2-ovmf
# Verify installation
qemu-system-x86_64 --version
virsh version
Post-Installation Configuration
After installation, several configuration steps optimize QEMU system emulation for your environment. Consequently, these tweaks improve both performance and usability.
# Configure default libvirt network
sudo virsh net-list --all
sudo virsh net-start default
sudo virsh net-autostart default
# Configure KVM nested virtualization (if needed)
# For Intel CPUs:
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm.conf
# For AMD CPUs:
echo "options kvm_amd nested=1" | sudo tee /etc/modprobe.d/kvm.conf
# Reload KVM modules (shut down all running VMs first)
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel
# Or for AMD: sudo modprobe -r kvm_amd && sudo modprobe kvm_amd
# Verify nested virtualization
cat /sys/module/kvm_intel/parameters/nested # Should show Y or 1
What Are QEMU Image Formats and Which Should You Choose?
QEMU system emulation supports multiple disk image formats, each offering distinct advantages for different use cases. Therefore, selecting the appropriate format significantly impacts storage efficiency and VM performance.
QCOW2: QEMU Copy-On-Write Version 2
QCOW2 represents the most popular image format for QEMU system emulation due to its advanced features and flexibility. Moreover, it provides the best balance between performance and functionality.
QCOW2 Advantages:
- Thin provisioning (sparse file allocation)
- Built-in compression support
- Snapshot and backing file capabilities
- AES encryption support
- Improved performance over QCOW1
Create QCOW2 images:
# Create a 50GB QCOW2 image (thin provisioned)
qemu-img create -f qcow2 myvm.qcow2 50G
# Create QCOW2 with specific options
qemu-img create -f qcow2 \
-o preallocation=metadata,cluster_size=64K,lazy_refcounts=on \
optimized-vm.qcow2 100G
# Create a LUKS-encrypted QCOW2 image
# (the legacy encryption=on option is deprecated in modern QEMU)
qemu-img create -f qcow2 \
--object secret,id=sec0,data=mypassphrase \
-o encrypt.format=luks,encrypt.key-secret=sec0 \
encrypted-vm.qcow2 30G
# Check QCOW2 image information
qemu-img info myvm.qcow2
RAW Format: Maximum Performance
In contrast, the RAW format offers the simplest disk image structure with minimal overhead. Consequently, it delivers the best I/O performance but lacks advanced features.
When to use RAW format:
- Maximum I/O performance requirements
- Direct device-like access needed
- Minimal QEMU overhead desired
- Compatibility with other hypervisors
# Create RAW disk image (fully allocated)
qemu-img create -f raw rawdisk.img 20G
# Create sparse RAW image (thin provisioned)
dd if=/dev/zero of=sparse-raw.img bs=1 count=0 seek=50G
# Convert between formats
qemu-img convert -f qcow2 -O raw myvm.qcow2 myvm.raw
# Compare performance
time qemu-io -f qcow2 -c "read 0 1M" myvm.qcow2
time qemu-io -f raw -c "read 0 1M" myvm.raw
VMDK and VDI: Cross-Platform Compatibility
Additionally, QEMU system emulation supports formats used by other virtualization platforms. Therefore, you can easily migrate VMs between different hypervisors.
# Create VMDK format (VMware compatible)
qemu-img create -f vmdk vmware-compatible.vmdk 40G
# Create VDI format (VirtualBox compatible)
qemu-img create -f vdi virtualbox-compatible.vdi 40G
# Convert existing VM to VMDK
qemu-img convert -f qcow2 -O vmdk myvm.qcow2 myvm.vmdk
# Convert VMDK to QCOW2
qemu-img convert -f vmdk -O qcow2 imported.vmdk converted.qcow2
Image Format Comparison Table
| Format | Snapshot Support | Compression | Encryption | Performance | Best Use Case |
|---|---|---|---|---|---|
| QCOW2 | Yes | Yes | Yes | Good | General purpose, development |
| RAW | No | No | No | Excellent | Production, performance-critical |
| VMDK | Limited | No | No | Good | VMware migration |
| VDI | Limited | No | No | Good | VirtualBox migration |
| VHD/VHDX | Limited | No | No | Good | Hyper-V compatibility |
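The available formats also depend on how your QEMU build was compiled, so it's worth confirming what your installation supports:
# List image formats supported by this qemu-img build
qemu-img --help | grep 'Supported formats'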
Advanced Image Management
Furthermore, QEMU provides powerful tools for managing disk images efficiently. Consequently, you can optimize storage usage and improve VM manageability.
# Resize QCOW2 image (expand)
qemu-img resize myvm.qcow2 +10G
# Check for corruption and repair
qemu-img check -r all myvm.qcow2
# Compress QCOW2 image (reclaim unused space)
qemu-img convert -f qcow2 -O qcow2 -c original.qcow2 compressed.qcow2
# Create backing file chain for snapshots
qemu-img create -f qcow2 -b base-image.qcow2 -F qcow2 snapshot1.qcow2
# Commit changes from snapshot to backing file
qemu-img commit snapshot1.qcow2
# Rebase snapshot on different backing file
qemu-img rebase -b new-base.qcow2 -F qcow2 snapshot1.qcow2
How to Create and Configure Virtual Machines with QEMU?
Creating virtual machines with QEMU system emulation provides complete control over VM configuration and hardware emulation. Therefore, understanding the command-line options enables you to build customized virtualization environments.
Basic VM Creation Workflow
The fundamental process for creating QEMU virtual machines follows a consistent pattern. Moreover, mastering these steps enables rapid VM deployment.
Step 1: Create Disk Image
# Create 30GB QCOW2 disk image for new VM
qemu-img create -f qcow2 ubuntu-server.qcow2 30G
# Verify image creation
qemu-img info ubuntu-server.qcow2
Step 2: Download Installation Media
# Download Ubuntu Server ISO
wget https://releases.ubuntu.com/22.04/ubuntu-22.04.3-live-server-amd64.iso
# Verify ISO checksum
sha256sum ubuntu-22.04.3-live-server-amd64.iso
Step 3: Launch Installation
# Start VM installation with optimal settings
qemu-system-x86_64 \
-enable-kvm \
-m 4096 \
-cpu host \
-smp 4 \
-drive file=ubuntu-server.qcow2,format=qcow2,if=virtio \
-cdrom ubuntu-22.04.3-live-server-amd64.iso \
-boot order=d,menu=on \
-device virtio-net-pci,netdev=net0 \
-netdev user,id=net0,hostfwd=tcp::2222-:22 \
-vnc :0 \
-monitor stdio
Step 4: Connect to Installation
# Connect via VNC viewer (in another terminal)
vncviewer localhost:5900
# Or use SPICE for better performance
# Add to QEMU command: -spice port=5930,disable-ticketing=on
# Connect with: spicy -h localhost -p 5930
Essential QEMU Command-Line Options
Understanding QEMU system emulation command-line options enables fine-grained VM configuration. Consequently, you can optimize virtual machines for specific workloads.
CPU Configuration:
# Host CPU passthrough (best performance)
-cpu host
# Specific CPU model
-cpu Skylake-Client
# Multiple CPU cores and threads
-smp 8,cores=4,threads=2,sockets=1
# List available CPU models
qemu-system-x86_64 -cpu help
Memory Configuration:
# 8GB RAM allocation
-m 8192
# 8GB at boot, hot-pluggable up to 16GB total across 4 memory slots
-m 8G,slots=4,maxmem=16G
# Enable memory ballooning
-device virtio-balloon-pci,id=balloon0
Storage Configuration:
# VirtIO disk (best performance for Linux guests)
-drive file=vm.qcow2,format=qcow2,if=virtio
# IDE disk (broad compatibility)
-drive file=vm.qcow2,format=qcow2,if=ide
# Multiple disks
-drive file=system.qcow2,format=qcow2,if=virtio,index=0 \
-drive file=data.qcow2,format=qcow2,if=virtio,index=1
# Read-only disk
-drive file=shared-data.qcow2,format=qcow2,if=virtio,readonly=on
Boot Configuration:
# Boot from hard disk
-boot order=c
# Boot from CD-ROM first, then hard disk
-boot order=dc,menu=on
# UEFI boot (requires OVMF; the firmware path varies by distribution)
-bios /usr/share/ovmf/OVMF.fd
Advanced VM Configuration
Furthermore, QEMU system emulation supports sophisticated configurations for specialized use cases. Therefore, advanced users can create highly customized virtualization environments.
USB Device Passthrough:
# Enable USB support
-device qemu-xhci,id=xhci \
-device usb-tablet,bus=xhci.0
# Pass through USB device by vendor and product ID
-device usb-host,vendorid=0x1234,productid=0x5678
# List connected USB devices
lsusb
PCI Device Passthrough:
# Requires IOMMU enabled on the kernel command line (intel_iommu=on or amd_iommu=on)
# Identify PCI device
lspci -nn | grep -i network
# Unbind from host driver and bind to VFIO
echo "0000:01:00.0" | sudo tee /sys/bus/pci/drivers/e1000e/unbind
echo "8086 10d3" | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id
# Pass through to VM
-device vfio-pci,host=01:00.0
Graphics Configuration:
# VNC graphics (default)
-vnc :1
# SPICE graphics (better performance)
-spice port=5930,disable-ticketing=on \
-device qxl-vga,vgamem_mb=64
# No graphics (headless server)
-nographic
# Virtio GPU for 3D acceleration
-device virtio-vga-gl \
-display gtk,gl=on
Creating Production-Ready VMs
Additionally, production environments require careful VM configuration to ensure reliability and performance. Consequently, these examples demonstrate enterprise-ready QEMU system emulation setups.
Complete Production VM Example:
#!/bin/bash
# production-vm-start.sh
VM_NAME="prod-web-server"
VM_DISK="/var/lib/libvirt/images/${VM_NAME}.qcow2"
ISO_FILE="/var/lib/libvirt/images/ubuntu-22.04-server.iso"
MEMORY="8192"
CPUS="4"
VNC_PORT="5901"
qemu-system-x86_64 \
-name "${VM_NAME}" \
-enable-kvm \
-cpu host,kvm=on \
-smp ${CPUS},cores=${CPUS},threads=1,sockets=1 \
-m ${MEMORY} \
-machine type=q35,accel=kvm \
-drive file="${VM_DISK}",format=qcow2,if=virtio,cache=writeback \
-device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56 \
-netdev tap,id=net0,ifname=tap0,script=/etc/qemu-ifup \
-device virtio-balloon-pci,id=balloon0 \
-device virtio-rng-pci,rng=rng0 \
-object rng-random,id=rng0,filename=/dev/urandom \
-rtc base=utc,clock=host \
-vnc :1 \
-monitor unix:/var/run/qemu-${VM_NAME}.monitor,server,nowait \
-pidfile /var/run/qemu-${VM_NAME}.pid \
-daemonize
echo "VM ${VM_NAME} started on VNC port ${VNC_PORT}"
echo "Monitor socket: /var/run/qemu-${VM_NAME}.monitor"
How to Optimize QEMU System Emulation Performance?
Optimizing QEMU system emulation performance requires understanding multiple factors affecting VM execution. Therefore, implementing these techniques can dramatically improve virtualization efficiency.
CPU Performance Tuning
CPU optimization represents the most impactful area for QEMU system emulation performance improvements. Moreover, proper CPU configuration ensures optimal guest system responsiveness.
Enable KVM Hardware Acceleration:
# Always use KVM for same-architecture VMs
qemu-system-x86_64 -enable-kvm ...
# Alternative syntax
qemu-system-x86_64 -accel kvm ...
# Verify KVM is active inside VM
lscpu | grep Hypervisor
# Should show: Hypervisor vendor: KVM
CPU Pinning for Dedicated Resources:
# Pin vCPUs to specific physical cores
taskset -c 0-3 qemu-system-x86_64 \
-enable-kvm \
-cpu host \
-smp 4 \
-m 4096 \
-drive file=vm.qcow2,format=qcow2,if=virtio
# Check CPU affinity
ps aux | grep qemu
taskset -cp <PID>
CPU Model Selection:
# Use host CPU model for best performance
-cpu host
# Explicitly enable specific feature flags
# (already included with -cpu host; useful with named CPU models)
-cpu host,+x2apic,+avx,+avx2
# List available CPU models
qemu-system-x86_64 -cpu help | less
# Check guest CPU features
# Inside VM:
cat /proc/cpuinfo | grep flags
Storage I/O Optimization
Storage performance critically impacts overall QEMU system emulation efficiency. Consequently, optimizing disk I/O delivers substantial performance gains.
VirtIO Block Device:
# Use VirtIO for best I/O performance
-drive file=vm.qcow2,format=qcow2,if=virtio,cache=none,aio=native
# Dedicated I/O thread (note if=none plus an explicit virtio-blk device)
-drive file=vm.qcow2,format=qcow2,if=none,id=drive0,cache=none,aio=native \
-object iothread,id=iothread0 \
-device virtio-blk-pci,drive=drive0,iothread=iothread0
Cache Modes Comparison:
| Cache Mode | Host Cache | Guest Cache | Performance | Data Safety | Use Case |
|---|---|---|---|---|---|
| writeback | Yes | Yes | Best | Lower | Development/testing |
| writethrough | Yes | No | Good | Medium | General purpose |
| none | No | No | Medium | Best | Production databases |
| directsync | No | No | Lower | Highest | Critical data |
| unsafe | Yes | Yes | Highest | Lowest | Disposable VMs |
Benchmarking Storage Performance:
# Inside VM - test disk I/O
sudo apt install fio
# Random read/write test
fio --name=randrw --ioengine=libaio --iodepth=16 \
--rw=randrw --bs=4k --direct=1 --size=1G \
--numjobs=4 --runtime=60 --group_reporting
# Sequential write test
fio --name=seqwrite --ioengine=libaio --iodepth=32 \
--rw=write --bs=1M --direct=1 --size=2G \
--numjobs=1 --runtime=60 --group_reporting
Memory Optimization
Memory management significantly affects QEMU system emulation performance and host resource utilization. Therefore, proper memory configuration prevents bottlenecks.
Huge Pages Configuration:
# Enable huge pages on host
echo 1024 | sudo tee /proc/sys/vm/nr_hugepages
# Verify huge pages allocation
cat /proc/meminfo | grep -i huge
# Use huge pages in QEMU
qemu-system-x86_64 \
-enable-kvm \
-m 8192 \
-mem-path /dev/hugepages \
-mem-prealloc \
-drive file=vm.qcow2,format=qcow2,if=virtio
Memory Ballooning:
# Enable memory ballooning device
-device virtio-balloon-pci,id=balloon0
# Inside VM - install balloon driver (usually automatic in modern kernels)
# On host - adjust VM memory dynamically
echo "balloon 4096" | socat - UNIX-CONNECT:/var/run/qemu-monitor.sock
echo "info balloon" | socat - UNIX-CONNECT:/var/run/qemu-monitor.sock
NUMA Configuration:
# Configure NUMA topology for multi-socket performance
qemu-system-x86_64 \
-enable-kvm \
-cpu host \
-smp 16,sockets=2,cores=4,threads=2 \
-m 32G \
-numa node,memdev=mem0,cpus=0-7,nodeid=0 \
-numa node,memdev=mem1,cpus=8-15,nodeid=1 \
-object memory-backend-ram,size=16G,id=mem0 \
-object memory-backend-ram,size=16G,id=mem1
Network Performance Tuning
Network optimization ensures QEMU virtual machines achieve maximum throughput and minimum latency. Moreover, proper network configuration prevents performance degradation.
VirtIO Network Driver:
# Use VirtIO network for best performance
-device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56 \
-netdev tap,id=net0,ifname=tap0,script=no,downscript=no,vhost=on
# Enable multi-queue VirtIO
-device virtio-net-pci,netdev=net0,mq=on,vectors=10 \
-netdev tap,id=net0,queues=4,vhost=on
Network Benchmarking:
# Install iperf3 on both host and guest
sudo apt install iperf3
# On host - start iperf3 server
iperf3 -s
# Inside VM - test throughput
iperf3 -c <host-ip> -t 60 -P 4
# Test latency with ping
ping -c 100 <host-ip> | tail -1
What Networking Modes Does QEMU Support?
QEMU system emulation provides multiple networking modes to accommodate different use cases and security requirements. Therefore, selecting the appropriate networking configuration ensures proper VM connectivity and isolation.
User Mode Networking (SLIRP)
User mode networking offers the simplest QEMU network configuration without requiring root privileges. However, it provides limited functionality and moderate performance.
Basic User Mode Setup:
# Simple user mode networking with port forwarding
qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-drive file=vm.qcow2,format=qcow2,if=virtio \
-netdev user,id=net0,hostfwd=tcp::2222-:22,hostfwd=tcp::8080-:80 \
-device virtio-net-pci,netdev=net0
# Access SSH from host
ssh -p 2222 user@localhost
# Access web server from host
curl http://localhost:8080
User Mode Features:
- No root privileges required
- Automatic DHCP and DNS
- Built-in port forwarding
- NAT for outbound connections
- Cannot receive inbound connections without forwarding
- Limited performance compared to TAP
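SLIRP is also configurable beyond its defaults. As an illustrative sketch (the subnet and DHCP start address below are arbitrary values), you can control the guest-visible network:
# User mode networking with a custom subnet and DHCP range
qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-drive file=vm.qcow2,format=qcow2,if=virtio \
-netdev user,id=net0,net=192.168.76.0/24,dhcpstart=192.168.76.9 \
-device virtio-net-pci,netdev=net0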
TAP/Bridge Networking
TAP networking provides full network integration, allowing VMs to appear as physical machines on your network. Consequently, this mode delivers the best performance and complete network functionality.
Create Bridge Interface:
# Install bridge utilities
sudo apt install bridge-utils
# Create bridge configuration (Ubuntu with Netplan)
sudo nano /etc/netplan/01-bridge.yaml
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: no
  bridges:
    br0:
      interfaces: [enp0s3]
      dhcp4: yes
      parameters:
        stp: false
        forward-delay: 0
# Apply bridge configuration
sudo netplan apply
# Verify bridge creation
ip addr show br0
brctl show
TAP Interface Script: Create /etc/qemu-ifup:
#!/bin/bash
# /etc/qemu-ifup
BRIDGE=br0
INTERFACE="$1"
ip link set "$INTERFACE" up
brctl addif "$BRIDGE" "$INTERFACE"
Create /etc/qemu-ifdown:
#!/bin/bash
# /etc/qemu-ifdown
BRIDGE=br0
INTERFACE="$1"
brctl delif "$BRIDGE" "$INTERFACE"
ip link set "$INTERFACE" down
Make scripts executable:
sudo chmod +x /etc/qemu-ifup /etc/qemu-ifdown
Launch VM with TAP Networking:
# Start VM with bridged network
sudo qemu-system-x86_64 \
-enable-kvm \
-m 4096 \
-cpu host \
-smp 4 \
-drive file=vm.qcow2,format=qcow2,if=virtio \
-device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56 \
-netdev tap,id=net0,ifname=tap0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
-vnc :0
# Verify TAP interface creation
ip link show tap0
brctl show br0
VDE (Virtual Distributed Ethernet)
Additionally, VDE provides software-based Ethernet switching for complex virtual network topologies. Therefore, it's ideal for creating multi-VM network scenarios.
# Install VDE tools
sudo apt install vde2
# Start VDE switch
vde_switch -s /tmp/vde.ctl -daemon
# Connect VMs to VDE switch
qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-drive file=vm1.qcow2,format=qcow2,if=virtio \
-netdev vde,id=net0,sock=/tmp/vde.ctl \
-device virtio-net-pci,netdev=net0,mac=52:54:00:11:11:11
# Second VM on same switch
qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-drive file=vm2.qcow2,format=qcow2,if=virtio \
-netdev vde,id=net0,sock=/tmp/vde.ctl \
-device virtio-net-pci,netdev=net0,mac=52:54:00:22:22:22
Socket Networking
Socket networking enables direct VM-to-VM communication without external network infrastructure. Consequently, it's perfect for isolated multi-tier application testing.
# First VM (server)
qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-drive file=vm1.qcow2,format=qcow2,if=virtio \
-netdev socket,id=net0,listen=:1234 \
-device virtio-net-pci,netdev=net0
# Second VM (client)
qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-drive file=vm2.qcow2,format=qcow2,if=virtio \
-netdev socket,id=net0,connect=127.0.0.1:1234 \
-device virtio-net-pci,netdev=net0
How to Manage QEMU Virtual Machines Effectively?
Effective QEMU system emulation management requires understanding monitoring, control, and automation tools. Therefore, mastering these techniques enables professional-grade VM operations.
QEMU Monitor Console
The QEMU monitor provides real-time VM control and diagnostics. Moreover, it enables administrative operations without stopping the virtual machine.
Access Monitor Console:
# Start VM with monitor on stdio
qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-drive file=vm.qcow2,format=qcow2,if=virtio \
-monitor stdio
# Or use Unix socket
qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-drive file=vm.qcow2,format=qcow2,if=virtio \
-monitor unix:/var/run/qemu-monitor.sock,server,nowait
# Connect to monitor socket
socat - UNIX-CONNECT:/var/run/qemu-monitor.sock
Essential Monitor Commands:
# Inside QEMU monitor:
(qemu) help # List all commands
(qemu) info status # Check VM status
(qemu) info cpus # Display CPU information
(qemu) info block # Show block devices
(qemu) info network # Display network devices
(qemu) info snapshots # List snapshots
(qemu) info migrate # Migration status
# System control
(qemu) system_reset # Reboot VM
(qemu) system_powerdown # Graceful shutdown
(qemu) stop # Pause VM
(qemu) cont # Resume VM
(qemu) quit # Terminate QEMU
# Snapshot management
(qemu) savevm snapshot1 # Create snapshot
(qemu) loadvm snapshot1 # Restore snapshot
(qemu) delvm snapshot1 # Delete snapshot
# Device hot-plugging
(qemu) device_add virtio-net-pci,netdev=net1,id=nic1
(qemu) device_del nic1
# Change removable media
(qemu) change ide1-cd0 /path/to/new.iso
(qemu) eject ide1-cd0
Snapshot Management
QEMU system emulation snapshots enable point-in-time recovery and testing scenarios. Consequently, they're essential for development and disaster recovery planning.
Internal Snapshots (QCOW2 only):
# Create internal snapshot (use qemu-img snapshot only while the VM is powered off)
qemu-img snapshot -c snapshot1 vm.qcow2
# List existing snapshots
qemu-img snapshot -l vm.qcow2
# Apply snapshot (restore)
qemu-img snapshot -a snapshot1 vm.qcow2
# Delete snapshot
qemu-img snapshot -d snapshot1 vm.qcow2
# Get detailed snapshot information
qemu-img info vm.qcow2
External Snapshots:
# Create external snapshot (backing file)
qemu-img create -f qcow2 -b vm.qcow2 -F qcow2 vm-snapshot1.qcow2
# Use snapshot as active disk
qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-drive file=vm-snapshot1.qcow2,format=qcow2,if=virtio
# Commit snapshot changes back to base
qemu-img commit vm-snapshot1.qcow2
# Create snapshot chain
qemu-img create -f qcow2 -b vm-snapshot1.qcow2 -F qcow2 vm-snapshot2.qcow2
Automation Scripts
Furthermore, automation streamlines QEMU virtual machine management and ensures consistent deployments. Therefore, these scripts demonstrate professional VM operations.
VM Startup Script:
#!/bin/bash
# /usr/local/bin/start-qemu-vm.sh
set -euo pipefail
VM_NAME="${1:-default-vm}"
VM_DIR="/var/lib/qemu/vms"
VM_DISK="${VM_DIR}/${VM_NAME}.qcow2"
PID_FILE="/var/run/qemu-${VM_NAME}.pid"
MONITOR_SOCK="/var/run/qemu-${VM_NAME}.monitor"
MEMORY="${2:-2048}"
CPUS="${3:-2}"
VNC_PORT="${4:-0}"
# Check if VM is already running
if [ -f "$PID_FILE" ]; then
    PID=$(cat "$PID_FILE")
    if kill -0 "$PID" 2>/dev/null; then
        echo "Error: VM $VM_NAME is already running (PID: $PID)"
        exit 1
    fi
    rm -f "$PID_FILE"
fi
# Verify disk image exists
if [ ! -f "$VM_DISK" ]; then
    echo "Error: VM disk $VM_DISK not found"
    exit 1
fi
echo "Starting VM: $VM_NAME"
echo " Memory: ${MEMORY}MB"
echo " CPUs: $CPUS"
echo " VNC: localhost:${VNC_PORT}"
qemu-system-x86_64 \
-name "$VM_NAME" \
-enable-kvm \
-cpu host \
-smp "$CPUS" \
-m "$MEMORY" \
-drive file="$VM_DISK",format=qcow2,if=virtio,cache=writeback \
-device virtio-net-pci,netdev=net0,mac=52:54:00$(od -An -N3 -tx1 /dev/urandom | tr ' ' ':') \
-netdev tap,id=net0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
-device virtio-balloon-pci \
-rtc base=utc,clock=host \
-vnc ":${VNC_PORT}" \
-monitor unix:"$MONITOR_SOCK",server,nowait \
-pidfile "$PID_FILE" \
-daemonize
# Verify startup
sleep 2
if [ -f "$PID_FILE" ]; then
    PID=$(cat "$PID_FILE")
    if kill -0 "$PID" 2>/dev/null; then
        echo "VM started successfully (PID: $PID)"
        echo "Monitor: $MONITOR_SOCK"
        echo "VNC: vncviewer localhost:$((5900 + VNC_PORT))"
        exit 0
    fi
fi
echo "Error: Failed to start VM"
exit 1
VM Shutdown Script:
#!/bin/bash
# /usr/local/bin/stop-qemu-vm.sh
set -euo pipefail
VM_NAME="${1:-}"
MONITOR_SOCK="/var/run/qemu-${VM_NAME}.monitor"
PID_FILE="/var/run/qemu-${VM_NAME}.pid"
TIMEOUT=60
if [ -z "$VM_NAME" ]; then
    echo "Usage: $0 <vm-name>"
    exit 1
fi
# Check if VM is running
if [ ! -f "$PID_FILE" ]; then
    echo "VM $VM_NAME is not running"
    exit 0
fi
PID=$(cat "$PID_FILE")
if ! kill -0 "$PID" 2>/dev/null; then
    echo "VM $VM_NAME PID file exists but process not found"
    rm -f "$PID_FILE" "$MONITOR_SOCK"
    exit 0
fi
echo "Stopping VM: $VM_NAME (PID: $PID)"
# Try graceful shutdown via monitor
if [ -S "$MONITOR_SOCK" ]; then
    echo "system_powerdown" | socat - UNIX-CONNECT:"$MONITOR_SOCK" 2>/dev/null || true
    # Wait for shutdown
    for i in $(seq 1 $TIMEOUT); do
        if ! kill -0 "$PID" 2>/dev/null; then
            echo "VM stopped gracefully"
            rm -f "$PID_FILE" "$MONITOR_SOCK"
            exit 0
        fi
        sleep 1
    done
    echo "Timeout waiting for graceful shutdown"
fi
# Force termination
echo "Forcing VM termination..."
kill -TERM "$PID" 2>/dev/null || true
sleep 2
if kill -0 "$PID" 2>/dev/null; then
    kill -KILL "$PID" 2>/dev/null || true
fi
rm -f "$PID_FILE" "$MONITOR_SOCK"
echo "VM terminated"
Make scripts executable:
sudo chmod +x /usr/local/bin/start-qemu-vm.sh
sudo chmod +x /usr/local/bin/stop-qemu-vm.sh
# Usage examples
sudo /usr/local/bin/start-qemu-vm.sh webserver 4096 4 1
sudo /usr/local/bin/stop-qemu-vm.sh webserver
FAQ
What is the difference between QEMU and KVM?
QEMU is a complete system emulator that can run virtual machines in pure software mode. In contrast, KVM is a Linux kernel module that provides hardware virtualization support. Therefore, when used together, QEMU handles device emulation and user-space operations while KVM accelerates CPU execution using hardware virtualization extensions. This combination delivers near-native performance for same-architecture virtualization.
Can QEMU run virtual machines without KVM?
Yes, QEMU system emulation can run VMs without KVM through binary translation (TCG mode). However, performance will be significantly slower since all CPU instructions must be translated in software. Consequently, KVM acceleration is recommended whenever possible. Additionally, QEMU without KVM enables cross-architecture emulation, allowing you to run ARM virtual machines on x86 hosts or vice versa.
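For example, the accelerator can be selected explicitly. A minimal sketch assuming an existing vm.qcow2:
# Force pure TCG emulation (portable, but slow)
qemu-system-x86_64 -accel tcg -m 2048 -drive file=vm.qcow2,format=qcow2,if=virtio
# Prefer KVM, fall back to TCG when /dev/kvm is unavailable
qemu-system-x86_64 -machine accel=kvm:tcg -m 2048 -drive file=vm.qcow2,format=qcow2,if=virtio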
What is the best disk image format for QEMU?
QCOW2 represents the best general-purpose format for QEMU system emulation due to its snapshot support, thin provisioning, and compression capabilities. However, RAW format delivers superior I/O performance for production databases and performance-critical applications. Therefore, choose QCOW2 for development and testing, while RAW suits production workloads requiring maximum throughput.
How do I convert between different VM image formats?
Use the qemu-img convert command to transform images between formats:
# Convert QCOW2 to RAW
qemu-img convert -f qcow2 -O raw source.qcow2 destination.raw
# Convert VMDK to QCOW2
qemu-img convert -f vmdk -O qcow2 vmware.vmdk qemu.qcow2
# Compress during conversion
qemu-img convert -f qcow2 -O qcow2 -c original.qcow2 compressed.qcow2
Can I run multiple VMs simultaneously with QEMU?
Absolutely! QEMU system emulation supports running multiple virtual machines concurrently. However, ensure your host system has sufficient resources (CPU, memory, and I/O bandwidth) to accommodate all VMs. Therefore, monitor host resource utilization using htop, iotop, and vnstat to prevent overcommitment.
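A quick way to gauge per-VM load, sketched with standard procps tools:
# Show PID, CPU%, memory%, and uptime for each running QEMU process
for pid in $(pgrep -f qemu-system); do
    ps -o pid,pcpu,pmem,etime,args --no-headers -p "$pid" | cut -c1-120
done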
How do I enable nested virtualization in QEMU?
Nested virtualization allows running VMs inside VMs. First, enable it on the host:
# Intel CPUs
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm.conf
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel
# AMD CPUs
echo "options kvm_amd nested=1" | sudo tee /etc/modprobe.d/kvm.conf
sudo modprobe -r kvm_amd && sudo modprobe kvm_amd
# Then expose CPU virtualization features to the guest
qemu-system-x86_64 -enable-kvm -cpu host,vmx=on ... # use svm=on on AMD CPUs
What's the recommended memory size for QEMU VMs?
Memory requirements depend on the guest operating system and workload. As a starting point, allocate at minimum:
- Linux server (minimal): 512MB-1GB
- Linux desktop: 2GB-4GB
- Windows 10/11: 4GB-8GB
- Database servers: 8GB-16GB+
Moreover, leave at least 20-30% of host memory free to prevent swapping and maintain host responsiveness.
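Before sizing a new VM, a quick host-side check helps. This sketch assumes your QEMU process command lines contain qemu-system:
# Free memory on the host
free -h
# Approximate resident memory already used by running QEMU VMs
ps -o rss= -p "$(pgrep -d',' -f qemu-system)" 2>/dev/null | awk '{sum+=$1} END {printf "%.1f GiB\n", sum/1048576}'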
How do I backup QEMU virtual machines?
Several backup strategies work effectively with QEMU system emulation:
# Shut down VM and copy disk image (safest)
qemu-img convert -f qcow2 -O qcow2 vm.qcow2 backup.qcow2
# Create external snapshot for live backup
qemu-img create -f qcow2 -b vm.qcow2 -F qcow2 snapshot.qcow2
# Backup original vm.qcow2 while VM runs from snapshot
# Then commit snapshot: qemu-img commit snapshot.qcow2
# Use rsync for incremental backups
rsync -avP /var/lib/qemu/vms/ /backup/qemu/
Why is my QEMU VM running slowly?
Several factors can degrade QEMU system emulation performance. Therefore, check these common issues:
- KVM acceleration not enabled (use -enable-kvm)
- Using IDE instead of VirtIO drivers
- Incorrect cache mode (try cache=none or cache=writeback)
- Insufficient host resources or overcommitment
- Missing virtio drivers in the guest OS
- Slow storage backend (use SSD when possible)
- Network bottlenecks (use the VirtIO network driver)
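To quickly confirm whether missing acceleration is the culprit, check both sides of the stack (systemd-detect-virt ships with systemd):
# Inside the guest: "kvm" means hardware acceleration, "qemu" means pure TCG
systemd-detect-virt
# On the host: confirm the QEMU process was started with KVM
tr '\0' ' ' < /proc/"$(pgrep -n -f qemu-system)"/cmdline | grep -oE 'enable-kvm|accel[= ]kvm'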
Troubleshooting Common QEMU Issues
KVM Acceleration Not Available
Symptom: QEMU runs very slowly, or you see errors about KVM not being available.
Diagnosis:
# Check CPU virtualization support
egrep -c '(vmx|svm)' /proc/cpuinfo
# Should return non-zero value
# Check if KVM modules are loaded
lsmod | grep kvm
# Verify KVM device exists
ls -la /dev/kvm
# Test KVM accessibility
kvm-ok
Solution:
# Enable virtualization in BIOS/UEFI first!
# Load KVM modules
sudo modprobe kvm
sudo modprobe kvm_intel # or kvm_amd
# Fix permissions on /dev/kvm
sudo usermod -aG kvm $USER
newgrp kvm
# Verify access
[ -r /dev/kvm ] && [ -w /dev/kvm ] && echo "KVM accessible" || echo "KVM not accessible"
Network Connectivity Problems
Symptom: VM has no network access or cannot communicate with host.
Diagnosis:
# Inside VM - check network interface
ip addr show
ip route show
# Check if VM received DHCP address (user mode networking)
cat /etc/resolv.conf
# On host - verify TAP interface (bridged networking)
ip link show tap0
brctl show br0
# Test connectivity from VM
ping -c 4 8.8.8.8
ping -c 4 google.com
Solution for User Mode Networking:
# Restart VM with proper port forwarding
qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-drive file=vm.qcow2,format=qcow2,if=virtio \
-netdev user,id=net0,hostfwd=tcp::2222-:22,dns=8.8.8.8 \
-device virtio-net-pci,netdev=net0
# Inside VM - verify networking
sudo dhclient eth0
sudo systemctl restart systemd-networkd
Solution for Bridge Networking:
# Recreate bridge on host
sudo ip link del br0
sudo ip link add name br0 type bridge
sudo ip link set enp0s3 master br0
sudo ip link set br0 up
sudo dhclient br0
# Fix TAP interface scripts
sudo chmod +x /etc/qemu-ifup /etc/qemu-ifdown
# Restart VM with sudo for TAP access
sudo qemu-system-x86_64 ...
Disk Image Corruption
Symptom: VM fails to boot, or you see I/O errors in guest system.
Diagnosis:
# Check image for corruption
qemu-img check vm.qcow2
# Get detailed image information
qemu-img info vm.qcow2
# Verify file system integrity (if RAW format)
sudo fsck -f /dev/nbd0p1 # After attaching the image with qemu-nbd
Solution:
# Attempt automatic repair
qemu-img check -r all vm.qcow2
# If repair fails, recover data to new image
qemu-img convert -f qcow2 -O qcow2 corrupted.qcow2 recovered.qcow2
# Restore from backup if available
cp backup-vm.qcow2 vm.qcow2
# Mount image with NBD to recover files
sudo modprobe nbd max_part=8
sudo qemu-nbd -c /dev/nbd0 vm.qcow2
sudo mkdir -p /mnt/recovery
sudo mount /dev/nbd0p1 /mnt/recovery
# Copy important files
sudo umount /mnt/recovery
sudo qemu-nbd -d /dev/nbd0
High CPU Usage on Host
Symptom: Host system becomes unresponsive when QEMU VMs are running.
Diagnosis:
# Check overall system load
top
htop
# Identify QEMU processes
ps aux | grep qemu
# Check CPU usage per VM
top -p $(pgrep -d',' qemu)
# Verify KVM is being used (checks the newest QEMU process)
tr '\0' ' ' < /proc/$(pgrep -n qemu)/cmdline | grep -i kvm
Solution:
# Ensure KVM acceleration is enabled
# Check QEMU command includes -enable-kvm
# Limit VM CPU usage with cgroups (cgroup v1 tools from the cgroup-tools package)
sudo cgcreate -g cpu:/qemu-vms
sudo cgset -r cpu.shares=512 qemu-vms
sudo cgclassify -g cpu:qemu-vms $(pgrep qemu)
# On cgroup v2 hosts, use systemd instead, e.g.:
# systemd-run --scope -p CPUQuota=200% qemu-system-x86_64 ...
# Pin QEMU to specific CPU cores
taskset -c 0-3 qemu-system-x86_64 ...
# Reduce VM CPU count if overcommitted
# Change -smp 8 to -smp 4 in VM configuration
VNC Connection Refused
Symptom: Cannot connect to QEMU VM via VNC viewer.
Diagnosis:
# Check if QEMU is listening on VNC port
sudo netstat -tlnp | grep qemu
sudo ss -tlnp | grep qemu
# Verify VNC display number
ps aux | grep qemu | grep vnc
# Test VNC port accessibility
telnet localhost 5900 # For VNC :0
Solution:
# Restart VM with correct VNC settings
qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-drive file=vm.qcow2,format=qcow2,if=virtio \
-vnc :0,password=on # To listen on all interfaces: -vnc 0.0.0.0:0,password=on
# Set VNC password in monitor
# Connect to QEMU monitor and run:
change vnc password
# Enter password when prompted
# Allow VNC through firewall
sudo ufw allow 5900/tcp
sudo firewall-cmd --add-port=5900/tcp --permanent
# Use SSH tunnel for remote access
ssh -L 5900:localhost:5900 user@remote-host
vncviewer localhost:5900
QEMU Monitor Not Responding
Symptom: Cannot access QEMU monitor console for VM management.
Diagnosis:
# Check monitor socket exists
ls -la /var/run/qemu-*.monitor
# Verify socket is accessible
file /var/run/qemu-vm.monitor
# Test socket connection
echo "info status" | socat - UNIX-CONNECT:/var/run/qemu-vm.monitor
Solution:
# Restart VM with proper monitor configuration
qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-drive file=vm.qcow2,format=qcow2,if=virtio \
-monitor unix:/var/run/qemu-vm.monitor,server,nowait
# Fix socket permissions
sudo chmod 660 /var/run/qemu-vm.monitor
sudo chown $USER:kvm /var/run/qemu-vm.monitor
# Alternative: use stdio monitor
# Start with: -monitor stdio
# Monitor appears in QEMU process terminal
Additional Resources
Official Documentation and References
QEMU Project Resources:
- QEMU Official Documentation - Comprehensive QEMU system emulation documentation
- QEMU Wiki - Community-maintained guides and tutorials
- QEMU Source Code - Official Git repository
Linux Virtualization Documentation:
- KVM Kernel Documentation - KVM acceleration technical details
- Libvirt Documentation - VM management library documentation
- Linux Virtualization Guide - Red Hat's comprehensive virtualization guide
Hardware Virtualization Standards:
- Intel VT-x Technology - Intel virtualization extensions
- AMD-V Virtualization - AMD virtualization technology
Community Forums and Support
QEMU Community:
- QEMU Mailing Lists - Official discussion forums
- QEMU IRC Channel - #qemu on irc.oftc.net
- Stack Overflow QEMU Tag - Q&A community
Linux Virtualization Communities:
- LinuxQuestions Virtualization Forum - Community support
- Reddit r/QEMU - QEMU discussions and troubleshooting
- Reddit r/linux_virtualization - General Linux virtualization
Related LinuxTips.pro Articles
Foundation Virtualization Topics:
- Post #66: KVM Kernel-based Virtual Machine Setup - Understanding KVM integration
- Post #67: VirtualBox on Linux Desktop Virtualization - Alternative virtualization solution
- Post #15: Cron Jobs and Task Scheduling - Automate VM management tasks
Advanced Cloud and DevOps Integration:
- Post #71: AWS CLI Managing AWS Resources from Linux - Cloud virtualization
- Post #74: Terraform Infrastructure as Code on Linux - Automate VM deployment
- Post #76: Jenkins on Linux CI/CD Pipeline Setup - Integrate QEMU in CI/CD pipelines
Storage and Performance:
- Post #18: LVM Logical Volume Management Complete Guide - Advanced storage for VMs
- Post #42: Disk I/O Performance Analysis - Optimize VM storage performance
- Post #50: Custom Monitoring Scripts and Alerts - Monitor QEMU performance
Books and In-Depth Learning
Recommended Reading:
- Mastering KVM Virtualization by Humble Devassy Chirammal
- Linux Kernel Development by Robert Love (KVM internals)
- The Linux Programming Interface by Michael Kerrisk (System-level virtualization concepts)
Tools and Utilities
Essential QEMU Management Tools:
- virt-manager - Graphical VM management interface
- virsh - Command-line VM control utility
- libguestfs - Tools for accessing and modifying VM disk images
Performance and Monitoring:
- perf - Linux performance analysis tool
- bpftrace - High-level tracing language for eBPF
- atop - Advanced system and process monitor
Network Virtualization:
- Open vSwitch - Advanced virtual networking
- Linux Bridge - Kernel-level bridging
- iproute2 - Modern network configuration
Conclusion
QEMU system emulation represents a powerful, flexible virtualization solution for Linux systems. Throughout this comprehensive guide, we've explored how QEMU combines low-level hardware emulation with KVM acceleration to deliver both versatility and performance. From basic VM creation to advanced optimization techniques, you now possess the knowledge to deploy production-ready virtualization environments.
Remember that QEMU system emulation excels in scenarios requiring cross-architecture emulation, custom hardware configurations, and fine-grained control over virtual machine resources. Moreover, when combined with modern tools like libvirt and virt-manager, QEMU provides an enterprise-grade virtualization platform rivaling commercial alternatives.
As you continue your Linux mastery journey, experiment with different QEMU configurations, explore advanced features like PCI passthrough and SR-IOV, and integrate QEMU into your DevOps workflows. The skills you've developed here form the foundation for cloud computing, container orchestration, and infrastructure automation: essential competencies for modern system administrators.
Next Steps:
- Proceed to Post #69: Vagrant Automated Development Environments for VM automation workflows
- Explore Post #70: Proxmox VE Complete Virtualization Platform for enterprise virtualization management
- Review Post #61: Docker Fundamentals to compare container vs VM virtualization approaches
Published: 08.11.2025
Author: LinuxTips.pro Team
Category: Linux Virtualization
Tags: QEMU, KVM, system emulation, virtualization, Linux virtual machines, hypervisor, QEMU/KVM, VM management