TL;DR: tell me if this is a waste of time before I spend forever tinkering on something that will always be janky

I want to run multiple OSes on one machine, including Linux, Windows, and maybe OSX, from a host with multiple GPUs + iGPU. I know there are multiple solutions, but I’m looking for advice, opinions, and experience. I know I can google the how-to, but is this worth pursuing?

I currently dual boot Bazzite and Ubuntu, for gaming and development respectively. I love Bazzite’s ease of updates, and Ubuntu is where it’s at for testing and building frontier AI/ML tools.

What if I kept my computer running a thin hypervisor 24/7 and switched VMs based on my working context? I could pass through hardware as needed.

Proxmox? XCP-NG? Debian + QEMU? Anyone living with one of these as their daily computing machine (not a homelab/server host)?
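For context, what “pass through hardware as needed” would look like in practice: a minimal sketch of the libvirt guest-XML fragment for VFIO GPU passthrough (the PCI address is a placeholder for one of my cards; it assumes IOMMU is enabled in BIOS/kernel and the card is already bound to vfio-pci):

```xml
<!-- Sketch: hand one GPU to the guest via VFIO.
     0000:03:00.0 is a placeholder host PCI address (find yours with lspci).
     managed='yes' lets libvirt detach/reattach the host driver itself. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```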

This is inspired by Chris Titus’s (YouTube) setup on Arch, but 1) I don’t know Arch, 2) I have a fairly beefy i7 265K / 192 GB build, but he’s on an enterprise Xeon DDR5 build, so in a different power class, and 3) I have a heterogeneous mix of graphics cards I’m hoping to pass through depending on workload

Use cases:

  • Bazzite + 1 gpu for gaming
  • Ubuntu + 1 or more GPUs for work
  • Windows + 0 or more GPUs for music production (paid VSTs) and kernel-level anti-cheat games (GTA V, etc.)
  • OSX? Lightroom? GPU?

Edit: Thank you all for your thoughts and contributions

Edit: what I’ve learned

  • this is viable but might be a pain
  • a Windows VM for getting around anti-cheat in games defeats the purpose, since kernel-level anti-cheat tends to detect virtualization. I’d need a dual boot for that use case
  • Hyper-V is a no. Qubes / QEMU / libvirt, yes
  • may want to just put everything on separate disks and boot / VM into them as needed
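On the separate-disks point: a hedged sketch of the libvirt disk stanza that hands a VM a whole physical drive (the by-id path is a placeholder), which is what would let the same install boot either bare metal or as a VM:

```xml
<!-- Sketch: pass a whole physical drive to the guest as a raw block device.
     /dev/disk/by-id/nvme-EXAMPLE is a placeholder; by-id paths are stable
     across reboots, unlike /dev/nvme1n1. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-id/nvme-EXAMPLE'/>
  <target dev='vda' bus='virtio'/>
</disk>
```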

Edit: distrobox/Docker works great but doesn’t fit all my needs, because I can’t install kernel-level modules inside them (AFAIK)

  • BananaTrifleViolin@lemmy.world

    It’d be an interesting project, but it seems like overkill and overcomplication when the simplest solution is dual booting and giving each OS complete access to the hardware. Hypervisors for all your systems would be a lot of configuration, and some constant overhead you can’t escape, for a potentially minimal convenience gain?

    Are you hoping to run these OSes at the same time and switch between them? If so, I’m not sure the pain of the setup is worth a little less time switching between OSes to switch tasks. If you’re hoping to run one task in one machine (like video editing) while gaming in another, it makes more sense, but you’re still running a single i7 chip, so it’ll still be a bottleneck even with all the GPUs and that RAM. Sure, you can share out the cores, but you won’t get the performance of one chip and chipset dedicated to one machine that a server stack gives (and which hypervisors can make good use of).

    Also, I’d question how good the performance would be on a desktop motherboard with multiple GPUs assigned to different tasks. It’s doubtful you’d hit data-transfer bottlenecks, but it’s still asking a lot of hardware not designed for that purpose, I think?

    If you intend to run the systems one at a time, then you might as well dual boot and not share system resources with an otherwise unneeded host running hypervisor software.

    I think if you wanted to do this and run the machines in parallel, then a server stack or enterprise-level hardware would probably be better. I think it’s a case of “just because you can do something doesn’t mean you should”? Unless it’s just a “for fun” project and you don’t mind the downsides? Then I can see the lure.

    But if I were in your position and wanted the “best” solution, I’d probably go for a dual boot with Linux and Windows. In Linux I’d run games natively in the host OS, and use QEMU for a development virtual machine (passing through one of the GPUs for AI work). The good thing about this setup is that you can back up your whole development machine’s disk and restore it easily if you make big changes to the host Linux. Windows I’d use for kernel anti-cheat games and just boot into it when I wanted.
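    To illustrate the back-up point: a QEMU VM’s disk is just a file, so backing up the whole development machine is an ordinary copy. A minimal sketch (paths are hypothetical, and the printf stands in for a real qcow2 image):

    ```shell
    # Hypothetical paths; the printf stands in for a real VM disk image.
    VM_DISK=dev-vm.qcow2
    BACKUP=dev-vm.qcow2.bak
    printf 'stand-in qcow2 contents' > "$VM_DISK"
    # qcow2 files are sparse; --sparse=always keeps the copy from ballooning
    cp --sparse=always "$VM_DISK" "$BACKUP"
    # verify the backup byte-for-byte before trusting it
    cmp -s "$VM_DISK" "$BACKUP" && echo "backup verified"
    ```

    Restoring after a botched host change is just copying the backup file back (with the VM shut down).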

    Personally, I dual boot Linux and Windows. I barely use Windows now, but in Linux I do use QEMU and have multiple virtual machines. I have a few test environments for Linux because I like to tinker, plus a Docker server stack that I use for testing before deploying to a separate homelab device. I do have a Win11 VM, barely used: it doesn’t have a discrete GPU and it’s sluggish. If you’re gaming, I’d dual boot and give it access to the best GPU as and when you need it.

    And if you want the best performance, triple boot. Storage is cheap and you could easily have separate drives for separate OSes. I have an NVMe for Linux and another NVMe for Windows, for example. You could easily have two separate Linux installs and a Windows install. In some ways that may be best, as you’d separate Linux gaming from Linux working and reduce distractions.