Linux, AMD Ryzen 3900X, X570, NVIDIA GTX 1060, AMD 5700XT, Looking Glass, PCI Passthrough and Windows 10

Note (2024-03-26): Time has gone by and since this blog post was written some things have changed. If you want to run games with Windows running as a KVM guest you might be out of luck for newer games. Fortnite, along with most competitive games, detects and prevents playing inside a VM, because it would be impossible to prevent manipulation coming from the host OS. Basically all games protected by the BattlEye anti-cheat most probably won't work anymore. Also see the post on X by BattlEye.


I love Linux! I haven't used Windows in more than 15 years. But once you have children they want to play computer games sooner or later. And they want to play multi-player games with you. And despite all efforts, let's be honest: either buy a Playstation, Xbox or something like that, or install Windows. Apart from some really nice titles, the big productions don't run on Linux. Yes, I know about Steam, but I don't want any 32bit libs on my computer anymore 😉 So let's face it: you need some closed source stuff. Windows, in my case…

As my computer was already 10 years old I thought it was time to get a new one with enough power for Linux and Windows, but I didn't want dual boot. That means you run either Windows or Linux, and I didn't want to shut down Linux just to start Windows. Running Windows in a virtual machine is quite nice, but the graphics output isn't fast enough if you want to play current games. So I had already read about PCI passthrough with Linux. That basically means you have two graphics cards installed. You use one for Linux. The second one you basically blacklist for Linux so that the kernel doesn't use it for graphics output. As this second card isn't occupied by Linux, it can be used by a virtual machine by passing the PCI device through to KVM (the Kernel-based Virtual Machine), which runs Windows.
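
To give a rough idea of what that blacklisting looks like in practice, here is a minimal sketch of a modprobe configuration that binds the passthrough GPU to the vfio-pci stub driver instead of a graphics driver. The PCI IDs below are examples for a GTX 1060 (GPU plus its HDMI audio function); yours may differ, so check them with lspci -nn. We'll get to the details later.

# /etc/modprobe.d/vfio.conf
# Bind the passthrough GPU and its HDMI audio function to vfio-pci.
# The IDs are examples for a GTX 1060; check yours with: lspci -nn
options vfio-pci ids=10de:1c03,10de:10f1

Note that vfio-pci has to load before any graphics driver claims the card, e.g. by adding it to the MODULES array in /etc/mkinitcpio.conf on Archlinux.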

This is cool of course, but for this you also need a second monitor (or at least a monitor with two inputs, e.g. HDMI and DisplayPort). But at least you only need one computer to run both OSes at the same time. You can either share your keyboard and mouse, or use an additional keyboard and mouse and pass those two USB devices to the Windows VM. The latter option reduces latency a little bit, which might matter for some first-person shooters. You can also route the Windows sound output through Pulseaudio and use your usual Linux sound settings.
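
Just to illustrate what passing a USB device to the VM looks like, here is a sketch of a libvirt hostdev entry; the vendor and product IDs are examples (a Logitech keyboard), so look up your own with lsusb:

<!-- libvirt domain XML: hand a USB device (e.g. a keyboard) to the VM -->
<!-- vendor/product IDs are examples; find yours with: lsusb -->
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x046d'/>
    <product id='0xc31c'/>
  </source>
</hostdev>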

But we can do even better: by using a fantastic tool called Looking Glass we can have the Windows screen output in a window in the Linux Desktop Environment (KDE Plasma in my case) at near native speed! In my case 60 frames per second (fps) at FullHD (1920x1080) worked perfectly. Even fast enough to play the latest first-person shooters 😄 Well, I don't care about first-person shooters, but even for something like Anno 1800 you need a bit of compute power 😉
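
Looking Glass works by copying the guest's framebuffer into a shared memory (IVSHMEM) device that both host and guest can see. As a sketch, the corresponding libvirt device entry looks roughly like this; per the Looking Glass documentation 32 MB is enough for 1920x1080, and bigger screens need a larger (power of two) size:

<!-- libvirt domain XML: IVSHMEM shared memory device for Looking Glass -->
<shmem name='looking-glass'>
  <model type='ivshmem-plain'/>
  <!-- 32 MB suffices for 1920x1080; scale up in powers of two for bigger screens -->
  <size unit='M'>32</size>
</shmem>

On the host you then simply start looking-glass-client to get the Windows desktop as a window.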

So after this little introduction I'll describe my setup and how I installed everything. Maybe it's helpful for other people too. While most of the information provided here should apply to other recent hardware as well, this blog post isn't meant to be a general PCI passthrough guide. It's written as a HOWTO for my hardware only (but as already said, it may be helpful even for different hardware). Most of the content here is based on the excellent Archlinux wiki page PCI passthrough via OVMF. So if you need more help or information, consult that wiki page.

The hardware I'm using is mostly quite recent. And as usual with Linux you also need a very recent distribution to get it working out of the box. But even that wasn't enough in this case. I'm using Archlinux, and besides Linux kernel 5.3 (which you need for the 5700XT) I needed a lot of stuff from AUR. Meanwhile a nice user put together a wiki page that contains all the information to get the 5700(XT) (Navi10 chip) up and running: Navi 10. There is also a thread in the Archlinux forum about this topic. I also created a bug at Freedesktop Bugzilla because of the way too high power consumption of the graphics card: while I expected 8W at idle I got 34W instead. It turned out to be partly an issue with my big resolution (5120x1440) and partly with kernel 5.3, which has a setting that causes the graphics card to always run at its highest memory clock speed. This is only fixed in the graphics driver patch set the developers prepared for kernel 5.4. So if you read this text in a few months and you have the following software versions installed, the AMD 5700(XT) Navi10 chip should work out of the box (these are the most important packages; a quick version check follows the list):

  • Kernel 5.3 (5.4 for the fixed idle power consumption issue; you can use linux-mainline from AUR if you use Archlinux, which contained kernel 5.4rc4 while I wrote this text)
  • Mesa 19.2
  • LLVM 9.0
  • libdrm 2.4.99
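
On Archlinux you can verify the installed versions quickly, for example:

# check the relevant versions on Archlinux
uname -r
pacman -Q mesa llvm libdrm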

So here is my hardware setup:

Graphics card for Linux - Sapphire Pulse 5700XT: I chose that card because I have a quite big screen, I wanted a fast and quite recent graphics card, and more importantly I wanted open source drivers. This card also has the advantage of having only two fans, so it's not as long as some other cards with three fans. Also: it has idle fan stop. That means if you only work in your Desktop Environment (like KDE) and maybe even watch a FullHD or 4K video, the fans won't spin. So it is really silent :-) Even playing Minecraft at 5120x1440 the fans don't spin. But be aware that like most of the Navi10 cards it occupies more than two slots (only by a few millimeters, but still more than two slots…). And it only uses 8W of power in idle mode (which is basically always the case if you just work in KDE and don't play some first-person shooter ;-) ). But you need Linux kernel >= 5.4 for this low power consumption. Additionally it's a PCIe 4.0 card. I placed this card in the first PCIe slot.

Graphics card for Windows 10 - Asus GeForce GTX 1060 ROG Strix 6GB: I don't like NVIDIA. Their open source support is just non-existent. But I wanted to do some Machine/Deep Learning stuff too, and basically all the software out there only supports NVIDIA. For that you need the binary driver anyway, plus a supported OS like Ubuntu, so using the NVIDIA binary driver in a VM was OK for me. This is one of the cheapest but still useful cards for Machine/Deep Learning and the like that you can currently get. The other use case is of course Windows and Windows gaming, and in that case you just use the Windows drivers from NVIDIA anyway. Additionally this card is exactly two slots high; with a few other GTX 1060 cards I would have gotten into trouble because I needed space for one additional PCIe card. The card has three fans, quite a bit of memory, is able to run current Windows games in FullHD at 50-60 fps, and also has a quite low idle power consumption of about 8W. I placed this card in PCIe slot 3. An NVIDIA GTX 1660 Super or GTX 1650 might be a good alternative.

Board - Asus ROG STRIX X570-E GAMING: I chose this board because it has enough PCIe slots, it has 8 SATA connectors, you can install two NVMe drives, and the pricing was OK for what you get for your money. Even though the X570 chipset draws around 10-12W at idle and needs its own chipset fan, I wanted a board with the X570 chipset because it gives you plenty of PCIe lanes, and Asus used them wisely on this board. My hope was also that such a board/chipset puts most PCIe devices into their own IOMMU groups. This is important for PCI passthrough because you have to pass through all devices of an IOMMU group together. And it held true: the NVIDIA card and the NVIDIA HDMI audio output sit in one group with no other devices, as we'll see later.
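
To check the grouping on your own board, the Archlinux wiki provides a small shell script that prints every IOMMU group together with the devices it contains:

#!/bin/bash
# List every IOMMU group and its devices (script from the Archlinux wiki)
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done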

M.2 NVMe SSD - I decided to install two M.2 NVMe disks: a Samsung 970 EVO for Linux and a Silicon Power for Windows. As disk speed is important I decided to use the whole Silicon Power M.2 NVMe for Windows and passed it to the VM as a raw device, so I don't use qcow2 files or anything like that for the VM image. Both disks are Gen3x4 with read/write speeds of up to 3400/3000MB/s. The PCIe 4.0 NVMe SSDs are currently way too expensive.
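
For illustration, a sketch of what such a raw whole-disk passthrough looks like in the libvirt domain XML; the device path is a placeholder, so use the stable name of your own drive from /dev/disk/by-id/:

<!-- libvirt domain XML: pass the whole NVMe disk to the VM as a raw block device -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <!-- placeholder path; look up your own under /dev/disk/by-id/ -->
  <source dev='/dev/disk/by-id/nvme-YOUR_DISK_ID'/>
  <target dev='vda' bus='virtio'/>
</disk>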

RAM - Corsair Vengeance RGB Pro Black 64GB DDR4 Kit 3200 (4x16GB) C16 K4: You can't have enough. Period. :D I would recommend at least 8 GB for Linux and 8 GB for Windows.

CPU - AMD Ryzen 3900X: Just because :D The power of this CPU is just incredible for the price. Compiling a Linux kernel or Chromium is plain fun… You can use 6 CPU cores for the Windows VM and still have 6 CPU cores left for Linux and even other VMs. But Linux support isn't perfect yet: idle power consumption could be lower. Currently idle power consumption is about 10W higher for me than when running Windows. I suspect that by mid 2020, when all pending patches for the new 3rd generation Ryzen processors are finally merged and rolled out, it should be in good shape.
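
If you hand 6 of the 12 cores to the Windows VM, it's worth pinning the vCPUs to fixed host cores so the guest keeps its SMT sibling pairs. Here's a hedged sketch of the relevant libvirt XML; the numbering assumes the common Linux enumeration on a 3900X (threads 0-11 are the physical cores, 12-23 their SMT siblings), so check your own topology with lscpu -e:

<!-- libvirt domain XML: pin 6 cores (12 threads) of a 3900X to the VM -->
<!-- assumes threads 0-11 = physical cores, 12-23 = SMT siblings; verify with: lscpu -e -->
<vcpu placement='static'>12</vcpu>
<cputune>
  <vcpupin vcpu='0'  cpuset='6'/>  <vcpupin vcpu='1'  cpuset='18'/>
  <vcpupin vcpu='2'  cpuset='7'/>  <vcpupin vcpu='3'  cpuset='19'/>
  <vcpupin vcpu='4'  cpuset='8'/>  <vcpupin vcpu='5'  cpuset='20'/>
  <vcpupin vcpu='6'  cpuset='9'/>  <vcpupin vcpu='7'  cpuset='21'/>
  <vcpupin vcpu='8'  cpuset='10'/> <vcpupin vcpu='9'  cpuset='22'/>
  <vcpupin vcpu='10' cpuset='11'/> <vcpupin vcpu='11' cpuset='23'/>
</cputune>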

Power supply - be quiet! BN253 Dark Power Pro 11 (850 Watt)

CPU cooler - Noctua NH-U12A

That said, let's start!

You will probably want to have a spare monitor, or one with multiple input ports connected to the two different GPUs (the passthrough GPU will not display anything if no screen is plugged in, and using a VNC or Spice connection will not help your performance). A spare (USB) mouse and keyboard that you can pass to your Windows VM are also quite helpful: this improves reaction time and is easier to set up in general. And if anything goes wrong, you will at least have a way to control your host machine this way. In my case the 5700XT is connected via DisplayPort and the GTX 1060 via HDMI to one monitor that has multiple input ports.

After turning on your computer, first make sure that a few BIOS settings are enabled:

Advanced:
-> SVM Mode: Enabled

SVM Mode turns on IOMMU support (AMD-Vi); for more information see Memory Management (computer programming): Could you explain IOMMU in plain English? The important takeaway is: IOMMUs are primarily used for protecting system memory against erring I/O devices. If you run virtualized environments you want to be sure that e.g. one virtual machine can't access the memory of another virtual machine.
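
After enabling SVM and booting Linux you can check that the IOMMU actually came up, for example:

# verify that the kernel initialized the AMD IOMMU
dmesg | grep -i -e AMD-Vi -e iommu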

While not needed for GPU passthrough I also changed the following settings:

Advanced:
-> SMT Mode: Auto
-> PCI Subsystem Setting
   -> SR-IOV-Support: Enabled

SMT Mode turns on simultaneous multithreading (AMD's equivalent of Intel's Hyper-Threading). Single Root I/O Virtualization (SR-IOV) may get interesting if you want to share a PCIe device with more than one virtual machine, e.g. the 2.5Gbit network card that the mainboard mentioned above offers.
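
Whether a device actually advertises SR-IOV can be checked from Linux; the bus address below is a placeholder, find the right one with lspci:

# check whether a device advertises the SR-IOV capability
# (replace 04:00.0 with the bus address of your device, see: lspci)
sudo lspci -vvv -s 04:00.0 | grep -i 'SR-IOV'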

AI-Tweaker:
-> Memory Frequency
   -> DDR4 - 3400 MHz

The Corsair RAM mentioned above allows a little bit of overclocking. The default setting was DDR4 - 2600 MHz, which is quite low. With 3400 MHz the host runs very stable. I haven't tried 3600 MHz yet, but that would definitely be the upper limit; going beyond it makes much less sense for the AMD Ryzen 3900X CPU. AMD themselves recommend 3600 MHz CL16 as the sweet spot for Ryzen 3000 series processors in terms of price/performance ratio. Thanks to the Infinity Fabric technology, memory scales at a 1:1 ratio up to 3733 MHz; above that clock speed it begins to use dividers, and the performance effect of higher clock speeds becomes marginal.
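
You can verify from Linux which speed the modules are actually running at, e.g.:

# show the configured vs. maximum RAM speed from the SMBIOS tables
sudo dmidecode -t memory | grep -i speed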

Monitor:
-> Qfan Tuning

In this menu you can let the BIOS tune your CPU fan automatically. That's quite handy to lower the CPU fan noise, especially if you have a big CPU cooler.

Boot:
-> Secure Boot
   -> Other OS

Also make sure that UEFI is enabled (which is the default). Afterwards boot Archlinux. In the next part we’ll configure everything needed on the Linux side to be able to use PCI passthrough.