NOTE: Since writing this article, much has changed in the Linux gaming landscape, and passing through a GPU to a Windows VM is no longer really necessary thanks to Proton/DXVK/VKD3D/etc. As such, I'm no longer running this setup myself.
Windows 10 being the straw that broke the camel's back, a switch to GNU/Linux on my desktop machine was inevitable. Not wanting to fully give up on certain Windows-only games, however, the easiest solution short of dual-booting seemed to be turning my existing Windows 7 install on an old SSD into a virtual machine running on my Arch Linux host, passing through a dedicated GPU to it. Unlike most existing configurations, I wanted to skip libvirt and use 'raw' QEMU instead.
Below, I'll walk through the steps I took to set this up.
Steps
- Enabling KVM
- Making sure the passthrough GPU is using the vfio driver
- Preparing for bridged network connectivity inside the VM
- Allowing your user to pass through devices
- Increasing process memory limits for your user
- Preparing the Windows guest
- Running the VM
Enabling KVM
Note: this part assumes an AMD-based CPU, but Intel should be as easy as replacing amd with intel in the following samples.
To enable CPU hardware acceleration, the kvm kernel modules must be loaded (and active):
# modprobe kvm_amd nested=1

Ensure that this is loaded on boot:
# vim /etc/modules-load.d/kvm_amd.conf
--------------------------------------
kvm_amd
And persist the nested page table option:
# vim /etc/modprobe.d/kvm.conf
------------------------------
options kvm_amd nested=1
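If you want to double-check the nested option without rebooting, the parameter can be read back through sysfs (the exact output, 1 or Y, varies by kernel version):

$ cat /sys/module/kvm_amd/parameters/nested
1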
Ensure that kvm was successfully loaded by scouring dmesg or journalctl for any erroneous kvm-related output, or by simply checking whether lsmod | grep kvm returns anything at all.
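For example, either of the following should turn something up on a working setup:

$ lsmod | grep kvm
$ journalctl -kb | grep -i kvm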
Making sure the passthrough GPU is using the vfio driver
Find your GPU's vendor and product IDs by running lspci -nn. It should look something like this (the example used is my GTX 970):
$ lspci -nn
...
2e:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)
2e:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)
...
The relevant parts in this case are "10de:13c2" and "10de:0fbb". Make sure to replace occurrences of these in any further code samples with your own.
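If the full listing is hard to scan, you can narrow it down; for an NVIDIA card, for example:

$ lspci -nn | grep -i nvidia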
Next, you'll have to blacklist the nouveau driver (or amdgpu for an AMD card), which the kernel will otherwise always attempt to autoload for this device:
# vim /etc/modprobe.d/nouveau.conf
----------------------------------
blacklist nouveau
And make the vfio module attach itself to the IDs noted above:
# vim /etc/modprobe.d/vfio.conf
-------------------------------
options vfio-pci ids=10de:13c2,10de:0fbb
The easiest way for these changes to take effect is to reboot; that way you'll also be sure that they persist properly. If rebooting is not an option, you may try the following:
# rmmod nouveau
# modprobe vfio-pci ids=10de:13c2,10de:0fbb
To make sure that this was successful, you can scour the output of lspci -nnk:
$ lspci -nnk
...
2e:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)
        Subsystem: Micro-Star International Co., Ltd. [MSI] GM204 [GeForce GTX 970] [1462:3171]
        Kernel driver in use: vfio-pci
        Kernel modules: nouveau
2e:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)
        Subsystem: Micro-Star International Co., Ltd. [MSI] GM204 High Definition Audio Controller [1462:3171]
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel
...
The confirmation you're looking for is the value of "Kernel driver in use".
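Alternatively, you can query the bound driver for a single device directly through sysfs (substitute your own PCI address):

$ basename "$(readlink /sys/bus/pci/devices/0000:2e:00.0/driver)"
vfio-pci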
Preparing for bridged network connectivity inside the VM
As I was already using NetworkManager, I used it to set up a bridge interface containing my actual Ethernet connection. A tap device belonging to the VM is then added to this bridge each time the VM is started. I chose this type of setup to ensure that I could reliably connect to the Synergy server inside the guest from the host, in order to share the mouse and keyboard.
Since a picture says more than a thousand words, I'll just link to the simple guide I used to create the bridge. Ultimately, there are many ways to create a bridge and add devices to it, and you can accomplish this however you wish; to me this seemed the easiest.
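For reference, a rough nmcli equivalent might look like this (interface and connection names taken from my setup; NetworkManager behaviour varies a bit between versions, so treat this as a sketch):

# nmcli connection add type bridge ifname bridge0 con-name bridge0
# nmcli connection add type bridge-slave ifname enp42s0 master bridge0
# nmcli connection up bridge0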
After creating the bridge, your networking setup should look something like this. In my case, my Ethernet connection is enp42s0 and the bridge it is a slave to is bridge0:
$ ip a
...
2: enp42s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000
    link/ether [...]
5: bridge0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether [...]
    inet [...]
       valid_lft 80878sec preferred_lft 80878sec
    inet6 [...]
       valid_lft forever preferred_lft forever
...
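The tap device itself is created and enslaved to the bridge by the VM startup script each time it runs; done by hand, that boils down to something like the following (tap0 and youruser are placeholders):

# ip tuntap add dev tap0 mode tap user youruser
# ip link set tap0 master bridge0
# ip link set tap0 up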
Allowing your user to pass through devices
By default, only the root user can allow devices to be handed over to the QEMU process. This can be rectified by creating a few specific udev rules that will also grant these rights to your user.
First, you'll need to find the vendor and product IDs for the USB devices you wish to pass through (generally your keyboard and mouse):
$ lsusb
...
Bus 003 Device 002: ID 046d:c085 Logitech, Inc.
...
In the above case, my mouse's vendor ID is 046d and its product ID is c085. Note this down for every USB device you want to pass through. Next, create a new udev rule file:
# vim /etc/udev/rules.d/10-qemu-hw-users.rules
----------------------------------------------
# GPU
SUBSYSTEM=="vfio", TAG+="uaccess"
# Keyboard
SUBSYSTEM=="usb", ATTRS{idVendor}=="04d9", ATTRS{idProduct}=="0296", TAG+="uaccess"
# Mouse
SUBSYSTEM=="usb", ATTRS{idVendor}=="046d", ATTRS{idProduct}=="c085", TAG+="uaccess"
Adjust the above and/or add/remove items as necessary.
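New rules are picked up on reboot; to apply them immediately instead, reload udev and re-trigger device events:

# udevadm control --reload-rules
# udevadm trigger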
Increasing process memory limits for your user
Most systems will by default not allow a single process to lock large amounts of memory by itself (which QEMU must do to keep the guest's RAM pinned for VFIO). This limit can be raised by adding the following to /etc/security/limits.conf (replace username with your own):
# vim /etc/security/limits.conf
-------------------------------
...
username hard memlock 9000000
username soft memlock 9000000
...
This change will not take effect until you relog (e.g. quit X and log back in).
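Once logged back in, you can verify the new limit (reported in KiB) with:

$ ulimit -l
9000000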
Preparing the Windows guest
For better performance, the virtio storage and network drivers must be installed in Windows. Instead of repeating verbatim what can be found on the Arch Wiki, I will just link to the relevant section here.
Running the VM
You should now be able to attempt to boot the VM. I use a basic shell script to do this; the latest version can be found on GitHub.
Modifications to this script should be fairly straightforward. Of note are the explanatory variable declarations near the top, and the setup() and teardown() functions; these should be modified to fit your setup and requirements. The ID-type variables required should already have been obtained in earlier parts of this post.
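To give an idea of what such a script ends up executing, here is a stripped-down sketch of a QEMU invocation for this kind of setup. The PCI addresses and USB IDs are the examples from earlier in this post, the disk path and tap name are placeholders, and plenty of refinements (CPU pinning, hugepages, etc.) are left out; treat it as a starting point rather than the actual script:

#!/bin/sh
# Sketch: boot the Windows disk with the GPU (and its audio function)
# passed through, the USB mouse attached, and a tap NIC on bridge0.
# kvm=off hides the KVM signature from the guest, which NVIDIA's
# driver checks for (the infamous Code 43).
qemu-system-x86_64 \
    -enable-kvm \
    -cpu host,kvm=off \
    -smp 4 \
    -m 8G \
    -device vfio-pci,host=2e:00.0,multifunction=on \
    -device vfio-pci,host=2e:00.1 \
    -usb -device usb-host,vendorid=0x046d,productid=0xc085 \
    -drive file=/dev/disk/by-id/your-windows-ssd,format=raw,if=virtio \
    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
    -device virtio-net-pci,netdev=net0 \
    -vga none \
    -nographic

Note that the tap device (tap0 here) is expected to exist already; in my script it is created in setup() and removed again in teardown().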
Further sources of information
I obviously didn't figure all of this out on my own; this post owes much to the following more detailed sources:
- The QEMU page on the Arch Wiki
- The VFIO page on the Arch Wiki
- The inspiration for the startup script
- The VFIO subreddit
Comments? Feedback?
Feel free to leave me a note at kenneth at this domain.