VFIO - How I game on Linux

For the better part of my life I've used Linux as either a secondary or primary OS. Over the past 8 years it's taken over the majority of my computing activities. For the past year or so at Discord I've been running Linux on a custom desktop (for day-to-day operations), a Lenovo ThinkPad, and more recently a Dell XPS 15 (9560). In my personal time all of my development happens over SSH on my Linux server, either via a laptop or my Windows desktop + Chrome SSH. The only holdout to going entirely Linux has been the elephant in the room for all Linux desktop users: gaming.

I've played games for longer than I've used Linux, and they've grown to be a huge portion of my life. I met many of my best friends through games, and our friendships have continued because of them. I met my girlfriend of two years online over games of Overwatch. In the past, when I was a big CSGO player, I actually managed to run Linux as my primary OS for quite a while, somewhere around a year from my memory. While the CSGO-on-Linux experience was certainly not perfect, it worked well enough for my needs and avoided the pain of dual booting or maintaining even more machines.

However, I've moved on from CSGO in recent years, with most of my gaming activity taking place in Overwatch, PUBG, or Fortnite. During this time I've grown accustomed to running Windows on my home PC and just using SSH or laptops to do development work. I've considered a lot of options for improving my home-development workflow, from having multiple desktop setups to building complicated display/input (KVM) switchers. After a plethora of research I finally decided to take a stab at what's colloquially known as VFIO.

VFIO is a technology in the Linux kernel which exposes direct device access to userspace. The most common use case is passing physical hardware devices (most notably GPUs) through to VMs running on a Linux machine, improving their connectivity or performance. As people have learned more about VFIO and the technology behind it has improved, there have been some seriously awesome projects built, like LinusTechTip's 7 Gamers, 1 CPU.

Having decided to bite the bullet, I wiped my desktop's Windows 10 install for a nice fresh Kubuntu 18.04 + i3wm. I was lucky enough to have my girlfriend's old Nvidia 960 sitting around, so that quickly got added to my setup and I began the process of setting up VFIO.


Like any gamer, I'm fairly picky about my setup and its performance, so to start out let's define some requirements:

  1. My primary gaming monitor is a Dell S2716DG, a 144Hz 1440p G-Sync monitor, flanked on either side by Dell U2715Hs. This setup needs to drive my primary monitor at its full 144Hz + 1440p + G-Sync capabilities.
  2. My normal keyboard and mouse must be able to pass through into the VM with extremely low to zero latency. Ideally this is toggleable without shutting off the VM.
  3. The overall performance of the VM has to be good enough to run Overwatch at 144Hz/1440p, with effectively zero input lag, screen tearing, or other common woes.

Zero to VFIO Hero

Getting the GPU setup with VFIO

At this point I had done enough research to have developed an idea of what needed to be done. However, there is so much varying information on VFIO out there (mostly because setups change depending on the hardware you have, the distro you run, etc.) that I found quite a bit of misinformation and contradictory information. Regardless, I ended up using the Arch Linux wiki page extensively. I got a little unlucky with my motherboard (Z270 chipset), since all the available PCIe slots landed in the same IOMMU group.

For those who don't know (or don't care), IOMMU groups are a separation of devices and the minimal unit that can be passed through to a VM. This was no good for me, since I didn't want to pass through both of my GPUs, and it would be a very bad idea to even attempt passing through my PCI bridges (lol). Luckily the internet has a remedy for all woes, and in this case I was able to just install the pre-built ACS override kernel for my system (available here).
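If you want to check how your own hardware is grouped, the usual approach is a small shell loop over sysfs (this is the generic script that floats around the VFIO community, not something specific to my setup):

```shell
#!/bin/bash
# Print every IOMMU group and the PCI devices inside it.
# Devices sharing a group can only be passed through to a VM together.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${group##*/}:"
    for device in "$group"/devices/*; do
        # The directory name is the PCI address, e.g. 0000:02:00.0
        echo -e "\t$(lspci -nns "${device##*/}")"
    done
done
```

If this prints nothing, IOMMU support (intel_iommu=on or amd_iommu=on on the kernel command line, plus VT-d/AMD-Vi in the BIOS) probably isn't enabled yet.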

After doing the rest of the configuration, which mostly consisted of enabling kernel modules and specifying the IDs of my GPU and its audio device, a reboot landed me in an OS with a GPU configured for VFIO:

$ lspci -v
02:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: ASUSTeK Computer Inc. GM206 [GeForce GTX 960]
        Flags: bus master, fast devsel, latency 0, IRQ 17
        Memory at dc000000 (32-bit, non-prefetchable) [size=16M]
        Memory at a0000000 (64-bit, prefetchable) [size=256M]
        Memory at b0000000 (64-bit, prefetchable) [size=32M]
        I/O ports at d000 [size=128]
        Expansion ROM at dd000000 [disabled] [size=512K]
        Capabilities: <access denied>
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau

02:00.1 Audio device: NVIDIA Corporation Device 0fba (rev a1)
        Subsystem: ASUSTeK Computer Inc. Device 854d
        Flags: bus master, fast devsel, latency 0, IRQ 18
        Memory at dd080000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: <access denied>
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel
And the IOMMU groups confirmed the two cards now sat in separate groups:

IOMMU Group 12 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1080] [10de:1b80] (rev a1)
IOMMU Group 12 01:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
IOMMU Group 13 02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM206 [GeForce GTX 960] [10de:1401] (rev a1)
IOMMU Group 13 02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fba] (rev a1)
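For reference, the "specifying the IDs" step boils down to a modprobe config along these lines. The IDs are my 960's (visible in the IOMMU group listing above); the file path and softdep lines are the common Ubuntu/Arch convention, so adapt them to your distro:

```
# /etc/modprobe.d/vfio.conf
# Claim the GTX 960 and its HDMI audio function for vfio-pci
# before any graphics/audio driver can grab them.
options vfio-pci ids=10de:1401,10de:0fba
softdep nouveau pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci
```

Remember to rebuild your initramfs afterwards (update-initramfs -u on Ubuntu) so the binding happens at boot.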

Creating the VM

One of the first steps to getting your VM up and running is installing and setting up libvirt/QEMU. Part of this process includes getting the OVMF EDK II UEFI firmware set up so QEMU can use it. I found that this forum thread was the best source for getting that configured correctly such that virt-manager could see the firmware.

The rest of the VM creation process was fairly straightforward and similarly followed the Arch wiki's guide. After installing Windows 10 I was ready to attempt GPU passthrough. I swapped my DP cable onto the secondary GPU and moved to HDMI for my primary GPU (losing 144Hz/G-Sync on the Linux side, a trade-off I'm OK with). After adding the two PCI devices via virt-manager, restarting the VM, and swapping display inputs, I could see my Windows install!
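Under the hood, "adding the two PCI devices" just produces a hostdev entry per PCI function in the domain XML. Mine for the 960's VGA function (host address 02:00.0, per the lspci output earlier) looks like:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <!-- host PCI address of the function being passed through -->
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

The audio function (02:00.1) gets an identical second entry, and with managed='yes' libvirt handles binding the device to vfio-pci when the VM starts.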

For the moment I lacked any proper hardware input to the VM, so to resolve this temporarily I plugged in a separate keyboard/mouse and used USB passthrough to force-add them to the VM.

I was now ready to install the Nvidia drivers on the guest. I first tried doing this via the drivers provided on Nvidia's site. However, after a reboot it appeared Windows was not taking to the driver and was still stuck at a low resolution. I attempted to install the drivers two more times, once through the OEM Nvidia pages and once through Windows Device Manager. Neither of these worked, and I finally debugged the problem enough to discover the following message on my GPU device:

Windows has stopped this device because it has reported problems. (Code 43)

Some quick research online provided a few suggestions, but the most promising was that Nvidia's drivers appear to detect the existence of a VM and disable themselves. I had read a few prior posts about people tweaking the configuration in their libvirt/QEMU XML to hide the hypervisor, so I popped open virsh edit and added the following:

  <features>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='whatever'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
  </features>

Rebooting with this configuration caused the driver to work, and a full-resolution screen (with 144Hz and G-Sync no less) was displayed!

Keyboard & Mouse

My next task was getting keyboard and mouse input working. Most of my reading up to this point suggested that the only way to get cross host/guest input was using something like Synergy, which would have been unacceptable given my latency requirements. I finally found a few posts on r/vfio that showed using evdev device passthrough, and after some more research I discovered this blog post which described the process well. I added the XML required for this and also made sure to flip the virtual keyboard/mouse from PS/2 to virtio (which solves a key-sticking issue). After booting the VM I found my keyboard and mouse immediately passed through to it. Pressing the left and right Ctrl keys at the same time toggles the input between guest and host. While the input felt different in Windows, I wasn't able to discern any input lag or other input problems.
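Concretely, the evdev passthrough is a handful of extra QEMU arguments in the domain XML (it requires the xmlns:qemu namespace on the domain element). The /dev/input/event9 and event10 paths below are what my machine enumerated; the stable /dev/input/by-id/ symlinks are a more robust choice since event numbers can change across reboots:

```xml
<qemu:commandline>
  <qemu:arg value='-object'/>
  <qemu:arg value='input-linux,id=mouse1,evdev=/dev/input/event9'/>
  <qemu:arg value='-object'/>
  <!-- grab_all=on makes the left+right ctrl grab toggle apply to
       the other input-linux devices (the mouse) as well -->
  <qemu:arg value='input-linux,id=kbd1,evdev=/dev/input/event10,grab_all=on,repeat=on'/>
</qemu:commandline>
```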


I ended up having a few in-game issues with keys sticking, and things that felt like they were related to key rollover. I was under the impression that adding a virtio keyboard/mouse would solve these, but I wasn't seeing that result. After some further investigation I found that I had to install the virtio drivers within Windows. To do this I downloaded the latest virtio-win ISO from this Fedora project page and then followed the instructions on this Red Hat page. After installing the drivers all the key sticking and other issues went away, along with the "different" feel I described above (however arbitrary that is...).


Audio

Again, based on my research I had heard a lot of varying information on how to get sound working. Most people seemed to fail or describe the process as complicated and error-prone. It turned out to be fairly easy, and I used this reddit post for instructions. After enabling everything and rebooting, I swapped my Windows output device and everything seemed to work. I have yet to notice any crackling/snapping/lag/etc. with audio, but I don't plan on passing through any mic input (which makes this easier).
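For completeness, the host side of that setup is just pointing QEMU's PulseAudio driver at your user's Pulse socket via environment variables in the domain XML (1000 is my Linux user ID, so substitute your own; the guest-side device is the ich9 sound model). In my config these live in the same qemu:commandline element as the evdev arguments:

```xml
<qemu:commandline>
  <qemu:env name='QEMU_AUDIO_DRV' value='pa'/>
  <qemu:env name='QEMU_PA_SERVER' value='/run/user/1000/pulse/native'/>
</qemu:commandline>
```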


Performance

There is a ton of information online about improving VM performance; however, in my initial tests I was very pleased with the results I was getting in Overwatch. Within the test range (which is fairly synthetic, but a useful data point nonetheless) I managed upwards of 160-180FPS, more than enough for my standard 143FPS cap. I did end up adding a few performance tweaks I read about online, mostly because I understood what they would do based on my own past experience/knowledge, which made them feel a bit less snake-oily. The two big ones are CPU pinning (do this!) and using huge pages for memory backing.
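The pinning half is visible in the full config below: each guest vcpu is pinned 1:1 to a host core via cputune. Huge pages aren't shown in my config, but the libvirt side is a memoryBacking element, plus actually reserving the pages on the host first; a sketch, assuming 2MiB pages and the VM's 16GiB of RAM:

```xml
<!-- Pin each guest vcpu to a dedicated host core -->
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
</cputune>
<!-- Back guest RAM with host huge pages; reserve them first,
     e.g. sysctl vm.nr_hugepages=8192 for 16GiB of 2MiB pages -->
<memoryBacking>
  <hugepages/>
</memoryBacking>
```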

Monitor Swapping

Dell's monitor firmware lets you configure some custom buttons as "quick" options. I set these buttons to swap between the two inputs, so getting control over the VM is two button presses (one selection, one confirmation) away. This makes the entire process of "swapping" to my Windows side a matter of pressing two buttons on the monitor, plus the left/right Ctrl keys on my keyboard.


Overall I'm extremely pleased with the result. I've spent a bit of time testing various games (hilariously enough, I haven't fully played a game yet, so we'll see how that goes) and the performance/latency seems to be nearly the same as my original desktop setup. I may at some point swap the host's and guest's cards so that my Linux side uses the 960 and my Windows VM can take full advantage of the 1080; however, that will depend on the in-game performance I see. The ability to run my standard i3/Linux setup for everything I need while having an almost frictionless swap to a performant Windows VM is so far unmatched in my opinion.

There are various additional benefits to running Windows in a VM (snapshots, etc.) that I won't enumerate here, but I suggest doing your own research and figuring out if this strategy is right for you. While this process took a few hours in total to set up and work out the kinks, anyone with general Linux knowledge will probably feel comfortable getting things working. I would caution folks attempting this that the sheer quantity of information out there can be more of a curse than a blessing at most points, so do a lot of research and figure out what will work well for your setup.

Finally, if you have any specific questions or just wanna show off your VFIO setup, feel free to hit me up on Twitter.

libvirt/QEMU VM Configuration

<domain type='kvm' id='1' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-bionic'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/OVMF-pure-efi.fd</loader>
    <boot dev='hd'/>
  </os>
  <features>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='whatever'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
  </features>
  <cpu mode='host-passthrough' check='none'/>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' io='threads'/>
      <source file='/var/lib/libvirt/images/diskstore/win10.qcow2'/>
      <target dev='hda' bus='ide'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:b9:e3:e0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='virtio'>
      <alias name='input0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </input>
    <input type='keyboard' bus='virtio'>
      <alias name='input1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input4'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input5'/>
    </input>
    <sound model='ich9'>
      <alias name='sound0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <redirdev bus='usb' type='spicevmc'>
      <alias name='redir0'/>
      <address type='usb' bus='0' port='2'/>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
      <alias name='redir1'/>
      <address type='usb' bus='0' port='3'/>
    </redirdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'/>
  <qemu:commandline>
    <qemu:arg value='-object'/>
    <qemu:arg value='input-linux,id=mouse1,evdev=/dev/input/event9'/>
    <qemu:arg value='-object'/>
    <qemu:arg value='input-linux,id=kbd1,evdev=/dev/input/event10,grab_all=on,repeat=on'/>
    <qemu:env name='QEMU_AUDIO_DRV' value='pa'/>
    <qemu:env name='QEMU_PA_SERVER' value='/run/user/1000/pulse/native'/>
  </qemu:commandline>
</domain>