Linux, AMD Ryzen 3900X, X570, NVIDIA GTX 1060, AMD 5700XT, Looking Glass, PCI Passthrough and Windows 10
In the first part I explained my hardware setup and the BIOS configuration. Now we prepare everything we need on the Linux side to make PCI passthrough (among other things) work.
First we need to install a few packages to create and manage virtual machines and to make Looking Glass work later:
pacman -S qemu libvirt ovmf ebtables dnsmasq bridge-utils openbsd-netcat virt-manager
From the Archlinux User Repository (AUR) we can already install Looking Glass, but that's optional: if you want your Windows VM output on a second screen or a different monitor input, you don't need it (yay is the AUR helper I'm using, but you can use the AUR helper of your choice):
yay looking-glass
At the time of writing this post B1-1 was the latest version (make sure later that you use the same Looking Glass version for Windows too). You will see a few other packages as well, but we're only interested in the looking-glass package.
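If you want to double-check which Looking Glass version actually got installed (so you can match it on the Windows side later), a simple pacman query is enough; looking-glass is the package name we installed above:
pacman -Qi looking-glass | grep -i version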
Further information about QEMU, KVM, and related topics can be found in the Archlinux and libvirt wiki pages:
QEMU wiki
libvirt
libvirt networking
Since we need the Windows 10 installation medium, you can already start downloading the Windows 10 64bit ISO file here. For performance reasons it also makes sense to use the Windows VirtIO drivers during installation, so download that ISO file here as well. In my case the ISO file was called virtio-win-0.1.172.iso. For more information about the VirtIO drivers see Creating Windows virtual machines using virtIO drivers.
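If you like, you can verify that the VirtIO ISO downloaded correctly by comparing its checksum with the one published next to the download (the file name below is simply the version I downloaded):
sha256sum virtio-win-0.1.172.iso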
Next I added two kernel boot parameters in order to enable IOMMU support during boot. I'm using systemd-boot as boot loader and my EFI system partition is /boot. If you use Grub the next step might be different. In /boot/loader/entries/ you should find a .conf file for the kernel you booted. In my case it's 5.4.0-rc3-mainline (use uname -r) and the file is called mainline.conf. The content looks like this:
title Arch Linux - Mainline
linux /vmlinuz-linux-mainline
initrd /amd-ucode.img
initrd /initramfs-linux-mainline.img
options root=UUID=f6b073af-5d8f-45ee-aca8-88712b3e19eb rw amd_iommu=on iommu=pt
The important parameters are the ones in the options line: amd_iommu=on iommu=pt. iommu=pt will prevent Linux from touching devices which cannot be passed through, and amd_iommu=on turns IOMMU on, of course. Add them accordingly.
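If you use Grub instead of systemd-boot, the rough equivalent (not tested on my setup) is to append the two parameters to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and regenerate the Grub configuration afterwards:
# in /etc/default/grub, add the parameters to the options already present:
GRUB_CMDLINE_LINUX_DEFAULT="... amd_iommu=on iommu=pt"
# then regenerate the config:
grub-mkconfig -o /boot/grub/grub.cfg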
I'm not sure if this is needed (it doesn't hurt either), but I also blacklisted all NVIDIA drivers in /etc/modprobe.d/blacklist.conf to prevent them from loading:
blacklist nouveau
blacklist rivafb
blacklist nvidiafb
blacklist rivatv
blacklist nv
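After the next reboot you can quickly check that none of these modules got loaded; if the command prints nothing, the blacklist did its job:
lsmod | grep -iE 'nouveau|nvidia'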
Now add your username to the libvirt, kvm and input groups (replace “user” with your actual username, of course):
usermod -aG libvirt user
usermod -aG kvm user
usermod -aG input user
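To verify the new group membership you can run the following (again with your actual username); note that your current session only picks up the new groups after you log out and back in, which the reboot below takes care of:
id user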
Next reboot your computer. After the reboot, log in and enter dmesg | grep -i iommu. You should see something like this:
[ 0.000000] Command line: initrd=\amd-ucode.img initrd=\initramfs-linux-mainline.img root=UUID=f6b073af-5d8f-45ee-aca8-88712b3e19eb rw amd_iommu=on iommu=pt
[ 0.000000] Kernel command line: initrd=\amd-ucode.img initrd=\initramfs-linux-mainline.img root=UUID=f6b073af-5d8f-45ee-aca8-88712b3e19eb rw amd_iommu=on iommu=pt
[ 1.245161] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[ 1.245445] pci 0000:00:01.0: Adding to iommu group 0
[ 1.245469] pci 0000:00:01.0: Using iommu direct mapping
[ 1.245539] pci 0000:00:01.1: Adding to iommu group 1
[ 1.245560] pci 0000:00:01.1: Using iommu direct mapping
[ 1.245621] pci 0000:00:01.2: Adding to iommu group 2
[ 1.245642] pci 0000:00:01.2: Using iommu direct mapping
...
[ 1.248399] pci 0000:0f:00.3: Adding to iommu group 10
[ 1.248411] pci 0000:0f:00.4: Adding to iommu group 10
[ 1.248426] pci 0000:10:00.0: Adding to iommu group 10
[ 1.248440] pci 0000:11:00.0: Adding to iommu group 10
[ 1.248583] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 1.250040] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
[ 1.689319] AMD-Vi: AMD IOMMUv2 driver by Joerg Roedel <jroedel@suse.de>
Next I needed to ensure that the IOMMU groups are valid. An IOMMU group is the smallest set of physical devices that can be passed to a virtual machine. To get all groups we can use this Bash script:
#!/usr/bin/env bash
# List every IOMMU group and the PCI devices it contains.
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
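I saved the script as iommu_groups.sh (pick any name you like) and ran it like this:
bash iommu_groups.sh | less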
This produces quite some output, but here is the interesting part:
IOMMU Group 28:
0d:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] [10de:1c03] (rev a1)
0d:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)
This is exactly what we want and what we need: the NVIDIA graphics card which I want to use for the Windows VM and its HDMI audio output are the only devices in this IOMMU group. That's perfect :-) If there had been other devices in this IOMMU group, that would have been less pleasant, because then the additional devices would either also have to be passed through to the Windows VM, or you would have to try different PCIe slots and hope that the Nvidia card ends up in a different IOMMU group next time.
Next we need to isolate the NVIDIA GPU. For this we use the VFIO driver in order to prevent the host machine from interacting with these devices. The Archlinux linux kernel and the linux-mainline kernel (from AUR) contain a module called vfio-pci. vfio-pci normally targets PCI devices by ID, meaning you only need to specify the IDs of the devices you intend to pass through. If you have a look above, IOMMU Group 28 has two PCI device IDs: 10de:1c03 and 10de:10f1. So we create a file called /etc/modprobe.d/vfio.conf and pass the two IDs as an option for the kernel module:
options vfio-pci ids=10de:1c03,10de:10f1
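As an alternative (I stick with the modprobe.d file), the same IDs can also be handed to the module directly on the kernel command line, i.e. appended to the options line of the boot entry shown earlier:
options root=UUID=f6b073af-5d8f-45ee-aca8-88712b3e19eb rw amd_iommu=on iommu=pt vfio-pci.ids=10de:1c03,10de:10f1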
Setting the module option alone, however, does not guarantee that vfio-pci will be loaded before other graphics drivers. To ensure that, we need to load it early via the initramfs along with its dependencies. Also ensure that the modconf hook is included in the HOOKS list. To accomplish this we edit /etc/mkinitcpio.conf. In my case the important lines look like this (amdgpu is needed for my AMD 5700XT graphics card as it is the one I use for the Linux graphics display):
# For Linux kernel <= 6.1
MODULES=(vfio_pci vfio vfio_iommu_type1 vfio_virqfd amdgpu)
# For Linux kernel >= 6.2
MODULES=(vfio_pci vfio vfio_iommu_type1 amdgpu)
...
HOOKS=(base udev autodetect modconf block filesystems keyboard fsck)
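Don't forget to regenerate the initramfs after editing /etc/mkinitcpio.conf, otherwise the change won't end up in the image. On my system that means (the preset name may differ on yours, e.g. linux instead of linux-mainline):
sudo mkinitcpio -P
# or only the preset for the kernel in use:
sudo mkinitcpio -p linux-mainline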
Now reboot again and verify that the configuration worked. If you run dmesg | grep -i vfio you should see something like this:
[ 1.639984] VFIO - User Level meta-driver version: 0.3
[ 1.643092] vfio-pci 0000:0d:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
[ 1.656699] vfio_pci: add [10de:1c03[ffffffff:ffffffff]] class 0x000000/00000000
[ 1.673573] vfio_pci: add [10de:10f1[ffffffff:ffffffff]] class 0x000000/00000000
[ 60.098993] vfio-pci 0000:0d:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
If I now execute lspci -nnk -d 10de:1c03 I can see that vfio-pci is in use for the Nvidia GTX 1060:
0d:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] [10de:1c03] (rev a1)
Subsystem: ASUSTeK Computer Inc. GP106 [GeForce GTX 1060 6GB] [1043:85ac]
Kernel driver in use: vfio-pci
Kernel modules: nouveau
The same is true if I execute lspci -nnk -d 10de:10f1 for the Audio Controller:
0d:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)
Subsystem: ASUSTeK Computer Inc. GP106 High Definition Audio Controller [1043:85ac]
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
So everything is perfect so far :-)
Next we need to set up OVMF. OVMF is an open-source UEFI firmware for QEMU virtual machines. We already installed the ovmf package. To manage VMs we use the utilities provided by the libvirt package, which we also already installed. Now we need to make libvirt aware of the UEFI firmware. To do this we edit /etc/libvirt/qemu.conf:
nvram = [
"/usr/share/ovmf/x64/OVMF_CODE.fd:/usr/share/ovmf/x64/OVMF_VARS.fd"
]
Afterwards restart two services (also make sure that both services are enabled with sudo systemctl enable ...):
sudo systemctl restart libvirtd.service
sudo systemctl restart virtlogd.socket
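In case they are not enabled yet, this should take care of it (assuming the standard unit names shipped with the libvirt package):
sudo systemctl enable libvirtd.service
sudo systemctl enable virtlogd.socket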
You may also need to activate the default libvirt network. The default libvirt networking mode is NAT forwarding. I don't like that one. I'm using bridged networking (aka “shared physical device”) instead, which makes life way easier in the long run. For this you obviously need a bridge ;-) How to set up such a bridge is beyond the scope of this guide, but to give you a few hints: you basically need three files. I used the systemd-networkd approach. My physical network card is called enp6s0 and my bridge is called br0.
First, create a virtual bridge interface. We tell systemd to create a device named br0 that functions as an ethernet bridge. So we create a file /etc/systemd/network/99-br0.netdev:
[NetDev]
Name=br0
Kind=bridge
Then run sudo systemctl restart systemd-networkd.service to have systemd create the bridge. The next step is to add a network interface (enp6s0 in my case) to the newly created bridge. For this we create a file called /etc/systemd/network/98-enp6s0.network:
[Match]
Name=enp6s0
[Network]
Bridge=br0
Since I wanted to have a static IP for my bridge br0, I needed one more file called /etc/systemd/network/99-br0.network:
[Match]
Name=br0
[Network]
Address=192.168.2.254/24
Gateway=192.168.2.1
DNS=1.1.1.1
DNS=9.9.9.9
As you can see, br0 will get the IP 192.168.2.254/24 assigned. The gateway is 192.168.2.1 and I'm using the DNS servers from Cloudflare (https://1.1.1.1/) and Quad9 (https://quad9.net/) (no, it doesn't always have to be Google DNS :D ). Finally make sure that systemd-networkd.service is enabled and started. For further information please also see the Archlinux Wiki Network bridge page.
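Once systemd-networkd has been restarted, you can check that the bridge came up and got its static IP; br0 should be reported as routable:
networkctl status br0
ip addr show br0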
While not strictly needed, I would now reboot the computer and check that everything is up and running as expected. We're now ready to create our Windows VM, which we'll do in the next part!