
5 May 2025

Installing Windows for Dual Boot Without a USB, Using QEMU Drive Passthrough

Windows wanted to touch drives it had no business touching, I had no USB stick, and my BIOS couldn't help. So I installed it in a VM with full drive passthrough.

linux · qemu · kvm · virtualization · windows

I wanted Windows on a second drive for occasional use, mostly games that anti-cheat refuses to run on Linux. The setup should have been simple: boot a USB installer, install to the second drive, done. It wasn't.

The problem stack

Four things conspired against the normal approach:

Windows will overwrite your EFI partition on a drive it has no business touching. My Linux install lives on an NVMe drive. The Windows target was a separate SATA SSD. Windows doesn't care. It scans every disk for an EFI System Partition and writes its bootloader to whichever one it feels like, regardless of which disk you're actually installing to. If it picks your Linux drive, your existing boot entries get clobbered.

I didn't have a USB stick. The 8GB drive I would normally use was physically somewhere else, and I wasn't going to wait for delivery.

I couldn't easily disconnect the Linux drive. My GPU is large enough that reaching the M.2 slot behind it requires pulling the card. I wasn't doing that for what should be a 20-minute task.

My BIOS (a mid-range ASUS board) has an option to disable SATA ports, but not NVMe slots individually. So I couldn't tell the firmware to just pretend drive 0 didn't exist during the Windows install.

The actual solution: QEMU with full drive passthrough

The key thing about QEMU: it can pass a raw block device (an entire physical disk, not a virtual disk image) directly to a virtual machine. Windows running in that VM sees the drive as if it were real hardware, writes its bootloader to it, and sets up the partition layout normally. The EFI entry Windows creates goes into the VM's own firmware, not your host UEFI. After the install, you boot the physical drive directly from your BIOS and it works.
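To make that concrete, here is roughly what the same setup looks like as a bare QEMU invocation, without virt-manager. The paths, sizes, and ISO name are placeholders; the OVMF location in particular varies by distro:

```shell
# Boot the Windows ISO in a UEFI VM, with the physical disk passed
# through as a raw block device. /dev/sdX and Win11.iso are placeholders.
sudo qemu-system-x86_64 \
  -enable-kvm \
  -machine q35 \
  -cpu host -smp 4 -m 8G \
  -bios /usr/share/edk2/x64/OVMF.fd \
  -drive file=/dev/sdX,format=raw,media=disk \
  -cdrom Win11.iso \
  -boot order=d
```

`-boot order=d` boots the CD-ROM first, which is the same boot-order trick described below for virt-manager.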

Preparing the target drive

First, wipe the drive and create a fresh GPT partition table. Do this from Linux before handing the drive to the VM; it avoids Windows making decisions about your existing layout:

bash
# Identify your target drive, double-check before wiping
lsblk

# Wipe existing partition table
sudo wipefs -a /dev/sdX

# Create new GPT layout (Windows needs GPT for UEFI install)
sudo parted /dev/sdX mklabel gpt

Leave the rest to the Windows installer; it will create its own EFI, MSR, and primary partitions.

Setting up the VM in virt-manager

Virtual Machine Manager (the virt-manager GUI for QEMU/KVM) makes drive passthrough straightforward. When creating a new VM:

  1. Choose "Import existing disk image" as the install method (we're not actually using a disk image; we'll attach the physical disk in the next step)
  2. For the install media, attach the Windows ISO as a virtual CD-ROM
  3. Add the physical disk via Add Hardware → Storage → Select or create custom storage
    • Set device type to "Disk"
    • Set bus to "VirtIO" or "SATA" (SATA is safer for Windows compatibility)
    • In the source field, type the raw device path: /dev/sdX

The critical setting: in the VM's boot order, put the ISO (CD-ROM) first so it boots the installer, with the physical disk as the install target.

You also need to make sure the VM is using UEFI firmware (OVMF), not legacy BIOS. Windows 11 requires it, and it's what you'll be booting from on real hardware anyway. In virt-manager: Overview → Firmware → UEFI x86_64: /usr/share/edk2/x64/OVMF.fd (package name varies by distro).
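If you'd rather script the whole VM definition than click through virt-manager, virt-install can express the same setup in one command. This is a hedged equivalent of the steps above, not what I ran verbatim; the name, memory size, and paths are placeholders:

```shell
# One-shot VM definition: Windows ISO as install media, the raw
# physical disk as the target, UEFI (OVMF) firmware.
sudo virt-install \
  --name win-installer \
  --memory 8192 --vcpus 4 \
  --cdrom /path/to/Win11.iso \
  --disk path=/dev/sdX,bus=sata,format=raw \
  --os-variant win11 \
  --boot uefi
```

`--boot uefi` requires a reasonably recent virt-install and an installed OVMF package; otherwise pick the firmware manually as described above.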

The install

Boot the VM, go through the Windows installer normally, select the physical drive as the install target. The installer will partition it, write the bootloader, and reboot the VM a few times. This all works just like real hardware. Windows is writing to a real disk, just routed through the hypervisor.

One thing to watch: Windows ships no VirtIO drivers, so if you picked the VirtIO bus for the disk, the installer won't even see the drive until you load one. Either use the SATA bus (slower but driver-free), or attach the VirtIO driver ISO and load the disk driver during the installer's "Load driver" step.

After the install: booting directly

Once the Windows install finishes inside the VM, shut it down. Now you can boot the physical drive directly from your BIOS boot menu, no VM needed.

On actual hardware, your motherboard's UEFI will see the Windows Boot Manager entry on the SATA drive and boot straight into Windows. Your Linux bootloader on the NVMe drive is untouched because Windows never had access to it during the install.

From here, you can set up your preferred dual-boot method:

  • BIOS boot menu (F12 / F11): simplest, no shared bootloader config
  • rEFInd: a graphical boot manager that auto-detects both OS entries
  • GRUB with os-prober: sudo grub-mkconfig -o /boot/grub/grub.cfg will add a Windows entry if os-prober finds the drive

I went with GRUB. Running os-prober and regenerating the GRUB config picked up the Windows Boot Manager on the SATA drive automatically, so both entries appear at boot without any manual config editing.
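One gotcha worth noting if you go the GRUB route: GRUB 2.06 and later disable os-prober by default, so regenerating the config can silently skip the Windows entry unless you enable it first. Paths below assume a typical Arch/Debian layout:

```shell
# Allow os-prober to run during config generation
echo 'GRUB_DISABLE_OS_PROBER=false' | sudo tee -a /etc/default/grub

# Regenerate; the Windows Boot Manager entry should appear in the output
sudo grub-mkconfig -o /boot/grub/grub.cfg
```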

One-command reboots with grub-reboot-manager

The only remaining friction was rebooting into Windows. GRUB defaults to Linux, so switching meant either sitting at the menu or changing the default. I wrote a small helper script, grub-reboot-manager, to deal with this.

It lists all GRUB menu entries, lets you pick one by number, then calls grub-reboot to set that entry as a one-time boot target before rebooting. The next boot goes to Windows (or whatever you picked); after that, GRUB falls back to the default again. No permanent config changes.

bash
sudo grub-reboot-manager.sh
# lists entries, prompts for selection, confirms, reboots

Drop it in /usr/local/bin and alias it. Switching to Windows becomes a single command from the terminal.
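I won't reproduce the full script here, but the core of it is only a few lines. A minimal sketch of the same idea, assuming the standard /boot/grub/grub.cfg path and top-level menuentry lines (my actual script differs in the details):

```shell
#!/usr/bin/env bash
# Sketch of a one-time boot-target picker. Requires GRUB_DEFAULT=saved
# in /etc/default/grub, or grub-reboot has nothing to override.
set -euo pipefail

cfg=/boot/grub/grub.cfg

# Pull the titles of top-level menu entries out of the generated config
mapfile -t entries < <(grep -oP "^menuentry '\K[^']+" "$cfg")

for i in "${!entries[@]}"; do
    printf '%2d) %s\n' "$i" "${entries[$i]}"
done

read -rp "Reboot into entry number: " n
sudo grub-reboot "${entries[$n]}"
sudo systemctl reboot
```

The important part is `grub-reboot`, which sets the chosen entry for the next boot only; the default snaps back afterwards.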

Why this works better than it sounds

The drive passthrough approach gets talked about mostly in the context of GPU passthrough builds, but it's useful any time you want to install an OS without risking your existing setup. The VM provides isolation: Windows can't touch your Linux drive because the hypervisor doesn't expose it. The physical disk ends up with a native install that runs at full speed when booted directly.

One thing to know: Windows activation is tied to hardware fingerprints, and those fingerprints differ between running in a VM and running on bare metal, so if you activate Windows inside the VM you may need to reactivate when you boot it directly. With a digital license tied to a Microsoft account this is usually automatic; with a volume key you may need the activation troubleshooter.

Total time from "I have no USB stick" to "working Windows install": about 45 minutes, mostly waiting for the installer to copy files.
