Have been a happy Fusion Pro customer for years, including buying upgrades. A user can install 5 seats, which includes both Workstation and Fusion, so I could have it on my Mac desktop, iMac, and Linux desktop. The licence was very clear that it was kosher for commercial use. And it was perpetual, no worrying about annual renewals.
Beware the new “free” licence, which is emphatically not for commercial use. Get caught accidentally using the personal licence and your company will be on the hook to pay whatever Broadcom wants you to pay. Oracle did similar shenanigans with VirtualBox (watch out if you download the extension pack) and Java (watch out if you install a JRE on your desktop and use it to compile/develop certain software!)
This does open a market opportunity for Corel/Parallels, which is mostly at feature parity with VMware… the main reason I liked using VMware Fusion was its solid integration with ESXi, which also won’t be a concern anymore, as with the Broadcom acquisition that’s a platform I’ll be trying to avoid.
> Beware the new “free” licence, which is emphatically not for commercial use. Get caught accidentally using the personal licence and your company will be on the hook to pay whatever Broadcom wants you to pay
A common way to do this would be to keep an eye out for large fundraises, and then track back to see if you can catch them using the personal license early on.
Given that this is owned by Broadcom now, and they are going all in on squeezing every last drop from ESXi and similar offerings, I wonder what's gonna happen with Fusion in the future; while now you only pay for commercial usage, maybe they are going to let it rot over the years until it's no longer cutting-edge software? Would they keep it as a loss-leader?
If you want to virtualize something with good performance on desktop Windows, you use Hyper-V; if you want to do it on a Mac, you use Apple's Virtualization Framework; if you want to do it on Linux, you use KVM.
Desktop virtualization products used to bring the secret sauce with them; now that every OS ships with a well-integrated and well-supported type 1 hypervisor, they have lost much of their reason for existing. There's only so much UI you can put in front of off-the-shelf OS features and still charge hundreds of dollars per year for.
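To make that concrete, here's a rough sketch (my own illustration, in Python for convenience) of how you'd probe for each platform's built-in hypervisor; /dev/kvm, the kern.hv_support sysctl, and systeminfo's hypervisor line are the usual tells:

    import os
    import platform
    import subprocess

    def native_hypervisor():
        system = platform.system()
        if system == "Linux":
            # the KVM modules expose /dev/kvm when loaded
            return "KVM" if os.path.exists("/dev/kvm") else None
        if system == "Darwin":
            # kern.hv_support reports Hypervisor.framework availability
            out = subprocess.run(["sysctl", "-n", "kern.hv_support"],
                                 capture_output=True, text=True).stdout.strip()
            return "Hypervisor.framework" if out == "1" else None
        if system == "Windows":
            # systeminfo prints "A hypervisor has been detected." when Hyper-V is active
            out = subprocess.run(["systeminfo"], capture_output=True, text=True).stdout
            return "Hyper-V" if "hypervisor has been detected" in out else None
        return None

    print(native_hypervisor())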
They still need to. You are glossing over the fact that you need to provide device access, USB access, graphics, and a lot of things that are not necessarily provided by the "native" hypervisor (HyperKit does not do even half of what Parallels does, for instance).
I didn't say they have no reason to exist. I indicated they are moving towards becoming UI shells around standard OS features and/or other commodity software, which they are. Look at UTM, for instance. Even VMware Workstation and VirtualBox on Windows use Hyper-V under the hood if you have the Hyper-V or WSL features enabled.
While everyone still seems to be busy disagreeing with me because of <insert favorite feature>, I'll mention that Hyper-V does have official support for transparent GPU paravirtualization with NVIDIA cards, and there are plenty of other open projects in the works that strive to "bleed through" graphics/GPU/other hardware acceleration APIs from host to guest on other platforms and hypervisors. With vendors finally settling around virtio as somewhat of a 'standard pipe' for this, expect rapid progress to continue.
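For anyone who wants to poke at GPU-P, the Hyper-V PowerShell module exposes it directly. A sketch, driven from Python to match the other snippets here ("my-guest-vm" is a placeholder for an existing VM, and cmdlet availability varies by Windows build):

    import subprocess

    # list GPUs that support partitioning, then hand a partition to a VM
    ps = 'Get-VMPartitionableGpu; Add-VMGpuPartitionAdapter -VMName "my-guest-vm"'
    subprocess.run(["powershell", "-NoProfile", "-Command", ps], check=True)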
> Even VMware Workstation and VirtualBox on Windows use Hyper-V under the hood if you have the Hyper-V or WSL features enabled.
VirtualBox is consistently (and significantly) slower when it uses Hyper-V as a backend than when it uses its original driver, and many features are not supported at all with Hyper-V. In fact, the GUI actually shows a "tortoise" icon in the status bar when running with the Hyper-V backend.
For a start, the list of operating systems Hyper-V supports is an order of magnitude smaller than what VirtualBox supports. Likewise for emulated hardware, like 3D, as mentioned a number of times here. The GUI is also much better in VirtualBox.
And Windows many times forces Hyper-V onto you, taking exclusive control of the CPU's virtualization features, thereby forcing VirtualBox to either use Hyper-V as a (terrible) backend... or not run at all.
The use case is mainly interoperability with VirtualBox; they can still keep their own disk/VM formats, guest tools, etc. and use Hyper-V as the 'virtualization engine'. Users who have workflows that call out to VirtualBox can continue to work; a lot of VM image tools (Vagrant, Packer) continue to work, etc.
But yes, of course you can also change your tools to use Hyper-V directly.
"Having to use HyperV" is not actually anything nefarious as the other comment seems to imply. You can't have two type 1 hypervisors running cooperatively on the bare metal and you cant implement your type 2 hypervisor hooks if you have a type 1 hypervisor running. So if you have enabled HyperV directly or indirectly by using WSL2 or installing any of the container runtime platforms (Docker Desktop et al) that use it, then you have to use HyperV as your hypervisor.
Note this is different than nested virtualization (ESXi on HyperV, etc.) which is supported but a completely different beast.
For the same reason you cannot run Xen and KVM VM's simultaneously on Linux (excepting nested virtualization).
> "Having to use HyperV" is not actually anything nefarious as the other comment seems to imply. You can't have two type 1 hypervisors running cooperatively on the bare metal and you cant implement your type 2 hypervisor hooks if you have a type 1 hypervisor running.
The nefarious part is that Windows enables Hyper-V even if you don't actually use Hyper-V VMs and never will. KVM doesn't take exclusive control of VMX until you _actually_ run a KVM VM.
By the way, the distinction between type 1 / type 2 is purely academic at this point: there is no definition where KVM is a type 1 hypervisor and VirtualBox isn't, as they are _literally_ the same conceptually: both are a kernel module that implements a VMX manager/root. Same on Windows. The only remaining type 2 hypervisor these days is kqemu, which can still work in binary translation mode (and therefore can work even without access to VMX).
> The nefarious part is that Windows enables Hyper-V even if you don't actually use Hyper-V VMs and never will.
It does not actually enable it by default, but there are many settings or apps that can cause it to become enabled: virtualization-based security, WSL, container tools, etc. Providing a hypervisor and related functionality is part of what a modern OS kernel should do! It's not nefarious!
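If you're unsure what flipped the switch on your machine, a sketch of checking the usual suspects (these are the documented DISM feature names; run from an elevated prompt):

    import subprocess

    # each of these, when enabled, puts the Hyper-V hypervisor under your host
    for feature in ("Microsoft-Hyper-V-Hypervisor",        # Hyper-V itself
                    "VirtualMachinePlatform",              # required by WSL2
                    "Microsoft-Windows-Subsystem-Linux"):  # WSL
        subprocess.run(["dism", "/online", "/get-featureinfo",
                        f"/featurename:{feature}"])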
In truth it's not; however, if the software has to do extra things like, for instance, translate I/O calls to a VirtualBox disk format that Hyper-V cannot natively support, or do an extra memcpy on the video framebuffer to get its UI to work, then there will be necessary performance impacts. How fast a guest OS "feels" is mostly down to the performance of the virtualized devices and not necessarily the overhead of virtualization itself.
Yes, it is; even the official documentation mentions it (and recommends you disable Hyper-V), and it is FAQ #1 on the support website.
One of the reasons mentioned is that VirtualBox runs (some) emulated devices in kernel space but is not allowed to when running with Hyper-V. The official API forces custom devices to strictly be user space; only some basic hardcoded devices are emulated from kernel space.
The "secret sauce" of a desktop virtualizer is in part in the selection of devices it emulates, so this severely cripples VirtualBox.
GPU-P is a PITA to keep updated and is half-baked. So many things just don't see or use the NVIDIA drivers properly. If you want 60 fps, Parsec is the only solution I found for desktop/mobile NVIDIA graphics.
Once I discovered that, I haven't looked at Parsec. Moonlight/Sunshine (whatever the pair is) is... terrible. And when I was looking, YUV444 wasn't a feature. Or at least not one anybody actually knew how to use.
Today this is mostly implemented by having a guest driver pass calls through to a layer on the host that does the actual rendering. While I agree that there is a lot of magic in making such an arrangement work, it's a terrible awful idea to suggest that relying on a vendor's emulation layer is how things should be done today.
Proper GPU virtualization and/or partitioning is the right way to do it, and the vendors need to get their heads out of their ass and stop restricting its use on consumer hardware. Intel already does; you can use GVT-g to get a guest GPU on any platform that wants to implement it.
So you say having a decoupled arrangement in software (which happens to be a de facto open standard) is a "terrible awful idea" and that instead you should just rely on whatever your proprietary hardware graphics vendor proposes to you? Why?
And that's assuming they propose anything at all.
Even GVT-g breaks every other Linux release, is at risk of being abandoned by Intel (e.g. how they already abandoned the Xen version) or limited to specific CPU market segments, and already has ridiculous limitations, such as a limit on the number of concurrent framebuffers AND framebuffer sizes (why? VMware Workstation offers you an infinitely resizable window, does it with 3D acceleration just fine, and I have never been able to tell if they have a limit on the number of simultaneous VMs...).
Meanwhile, "software-based GPU virtualization" allows me to share GPUs on a host that will never have hardware-based partitioning support (e.g. ANY consumer AMD card), and allows guests to have working 3D by implementing only one interface (e.g. https://github.com/JHRobotics/softgpu for retro Windows) instead of having to implement drivers for every GPU in existence.
> So you say having a decoupled arrangement in software (which happens to be a de facto open standard) is a "terrible awful idea" and that instead you should just rely on whatever your proprietary hardware graphics vendor proposes to you? Why?
Sandboxing, and resource quotas / allocations / reservations.
By itself, a paravirtualized GPU just treats the userland workloads launched onto the GPU by any given guest as all being siblings — exactly as if there were no virtualization and you were just running multiple workloads on one host.
And so, just like multiple GPU-using apps on a single non-virtualized host, these workloads will get "thin-provisioned" the resources they need, as they ask for them, with no advance reservation; and workloads may very well end up fighting over those resources, if they attempt to use a lot of them. You're just not supposed to run two things that attempt to use "as much VRAM as possible" at once.
This means that, on a multi-tenant hypervisor host (e.g. the "with GPU" compute machines in most clouds), a paravirtualized GPU would give no protection at all from one tenant using all of a host GPU's resources, leaving none left over for the other guests sharing that host GPU. The cloud vendor would have guaranteed each tenant so much GPU capacity — but that guarantee would be empty!
To enforce multi-tenant QoS, you need hardware-supported virtualization — i.e. the ability to make "all of the GPU" actually mean "some of the GPU", defining how much GPU that is on a per-guest basis.
(And even in PC use-cases, you don't want a guest to be able to starve the host! Especially if you might be running untrusted workloads inside the guest, for e.g. forensic analysis!)
Why does multi-tenant QoS require hardware-supported virtualisation?
An operating system doesn't require virtualisation to manage application resource usage of CPU time, system memory, disk storage, etc – although the details differ from OS to OS, most operating systems have quota and/or prioritisation mechanisms for these – why not for the GPU too?
There is no reason in principle why you can't do that for the GPU too. In fact, there have been a series of Linux cgroup patches going back several years now, to add GPU quotas to Linux cgroups, so you can setup per-app quotas on GPU time and GPU memory – https://lwn.net/ml/cgroups/20231024160727.282960-1-tvrtko.ur... is the most recent I could find (from 6-7 months back), but there were earlier iterations broader in scope, e.g. https://lwn.net/ml/cgroups/20210126214626.16260-1-brian.welt... (from 3+ years ago). For whatever reason none of these have yet been merged to the mainline Linux kernel, but I expect it is going to happen eventually (especially with all the current focus on GPUs for AI applications). Once you have cgroups support for GPUs, why couldn't a paravirtualised GPU driver on a Linux host use that to provide GPU resource management?
And I don't see why it has to wait for GPU cgroups to be upstreamed in the Linux kernel – if all you care about is VMs and not any non-virtualised apps on the same hardware, why couldn't the hypervisor implement the same logic inside a paravirtualised GPU driver?
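For a flavour of what that could look like: the existing cgroup v2 controllers already work exactly this way, and the GPU patch series follows the same file-per-knob pattern. A sketch (needs root and a cgroup v2 mount; the GPU knob name is hypothetical, since the proposed interface has changed between patch revisions):

    import os

    cg = "/sys/fs/cgroup/gpu-tenant"
    os.makedirs(cg, exist_ok=True)

    # existing controllers: cap RAM and CPU time for everything in this group
    with open(os.path.join(cg, "memory.max"), "w") as f:
        f.write("8G")
    with open(os.path.join(cg, "cpu.max"), "w") as f:
        f.write("200000 100000")   # quota/period: two CPUs' worth of time

    # hypothetical GPU equivalent of the kind the patch series aims to add:
    # with open(os.path.join(cg, "drm.memory.max"), "w") as f:
    #     f.write("4G")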
> Sandboxing, and resource quotas / allocations / reservations.
But "sandboxing" is not a property of hardware-based virtualization. Hardware-based virtualization may even increase your surface attack, not decrease it, as now the guest directly accesses the GPU in some way software does not fully control (and, for many vendors, is completely proprietary). Likewise, resource quotas can be implemented purely in a software manner. Surely an arbitrary program being able to starve the rest of the system UI is a solved problem in platforms these days, otherwise Android/iOS would be unusable... Assuming the GPU's static partitioning is going to prevent this is assuming too much from the quality of most hardware.
And there is an even bigger elephant in the room: most users of desktop virtualization would consider static allocation of _anything_ a bug, not a feature. That's the reason most desktop virtualization precisely wants to do thin-provisioning of resources even when it is difficult to do so (e.g. memory). I.e. we are still seeing this from the point of view of server virtualization, which just shows how desktop virtualization and server virtualization have almost diametrically opposed goals.
A soft-GPU driver backed by real hardware "somewhere else" is a beautiful piece of software! While it certainly has applications in virtual machines, and may even be "optimal" for some use cases like desktop gaming, it ultimately doesn't fit the modern definition of "virtualization" --
I am talking about virtualization in the sense of being able to divide the hardware resources of a system into isolated domains and give control of those resources to guest operating systems. Passing API calls from guest to host for execution inside of the host domain is not that. A GPU providing a bunch of PCIe virtual functions which are individually mapped to guests interacting directly with the hardware is that.
GPU virtualization should be the base implementation and paravirtualization/HLE/api-passthrough can still sit on top as a fast-path when the compromises of doing it that way can be justified.
I would say the complete opposite. The only reason one may have to use a real GPU driver backed by a partitioned GPU is precisely desktop gaming, as there you are more interested in performance than anything else and the arbitrary limits set by your GPU vendor (e.g. 1 partition only) may not impact you at all.
If you want to really divide hardware resources, then as I argue in the other thread doing it in software is clearly a much more sensible way to go. You are not subject to the whims of the GPU vendor and the OS, rather than the firmware, control the partition boundaries. Same as what has been done in practically every other virtualized device (CPUs, memory, etc.). We never expected the hardware to need to partition itself; I'd even have a hard time calling that "virtualization" at all. Plus, the way hardware is designed these days, it is highly unlikely that the PCI virtual functions of a GPU function as an effective security boundary. If it wasn't for performance, using hardware partitioning would never be a worthwhile tradeoff.
Yeah, if you care about 3D acceleration on a Windows guest and aren't doing PCIe passthrough, then KVM sure isn't going to do it. There is a driver in the works, but it's not there yet.
edit: I made a mistake and got confused in my head between QEMU and the lack of paravirtualized support. (It does have a PV 3D Linux driver, though.)
KVM will happily work with real virtual GPU support from every vendor; it's the vendors (except for Intel) that feel the need to artificially limit who is allowed to use these features.
I guess my comments make it sound like I don't appreciate this type of work; I absolutely do. An old friend of mine[1] was responsible for the first 3D support in the VMware SVGA driver, so this is a space I have been following for literally decades at this point.
I just think it should be the objective of vendors to offer actual GPU virtualization first and to support paravirtualization as an optimization in the cases where it is useful or superior and the tradeoffs are acceptable.
Pretty much all of them do, though the platform support varies by hypervisor/guest OS. Paravirtualized (aka non-passthrough) 3D acceleration has been implemented for well over a decade.
However NVIDIA limits it to datacenter GPUs. And you might need an additional license, not sure about that. In their view it's a product for Citrix and other virtual desktops, not something a normal consumer needs.
Yes and no; you can use GPU partitioning in Hyper-V with consumer cards and Windows 10/11 client on both sides, it’s just annoying to set up, and even then there’s hoops to jump through to get decent performance.
If you don’t need vendor-specific features/drivers, then VMware Workstation (even with Hyper-V enabled) supports proper guest 3D acceleration with some light GPU virtualization, up to DX11 IIRC. It doesn’t see the host’s NVIDIA/AMD/Intel card and doesn’t use that vendor’s drivers, so there’s no datacenter SKU restrictions. (But you are limited to pure DX11 & OpenGL usage, no CUDA etc.)
Am I the only one who explicitly does not want a type 1 hypervisor on my desktop? Am I outdated?
I like workstation and virtualbox because they're controllable and minimally impactful when I'm not using them.
Installing Hyper-V (and historically even WSL - not sure if it's still the case, but it was never sufficiently explicit) now makes my primary OS a guest, with potential impact on my gaming, multimedia, and other performance (and occasional flaky issues with drivers and whatnot).
I used to worry about this overhead too, but it appears to be nothing on modern CPUs. I had minuscule differences here and there on Intel 9th gen (9900K), but my current Intel 13th gen (13900K) has zero performance decrease with HV enabled. (At least on any perceptible level.)
Thanks - can you share your usage patterns? What kind of usage, and does it include heavy gaming and media?
Note, I'm less worried about percentage performance than about some things just not working well at all, because of assumptions of direct hardware access vs the reality of running under Hyper-V. I.e. are ALL hardware calls and capabilities 100% absolutely completely available once your main Windows install is running as a VM? Not "most" or "the majority should be good", but actually, seamlessly, all? My understanding was no, but things may have changed for the better.
WSL2 seems to virtualize the GPU pretty well; I had an easier time getting my GPU to work for machine learning inside WSL2 than I have had with plain Windows and Linux in the past.
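The smoke test is pleasantly boring; assuming a CUDA-enabled PyTorch install inside WSL2, something like:

    import torch

    # True when the paravirtualized GPU (/dev/dxg) and the CUDA stack line up
    print(torch.cuda.is_available())
    if torch.cuda.is_available():
        # reports the *host* GPU's name as seen from inside the guest
        print(torch.cuda.get_device_name(0))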
It does, but it’s a whole rabbit hole of specialized settings from what I can tell. Toying around with GPU-PV in Hyper-V with a Windows 11 guest was complicated and ultimately had performance & compatibility problems. (With my previous PC even deadlocking when I used the onboard video encoder from within the VM)
Hyper-V does not have PCI passthrough while ESXi does, and with that you lost me. Also, I want to test my multiplatform software on all major OSes (macOS included); ESXi is then the only one that can run Darwin in parallel with the rest.
My point is that at least their paid support fixed the things I asked them to fix; I cannot say the same of VMware, where support was already non-existent a couple of years ago (I stopped using them the moment someone here on HN said the entire Workstation staff had been fired and replaced with a skeleton overseas crew, and this was way before Broadcom).
Yes, when it was Sun VirtualBox I remember it was a favorite for testing out other operating systems for free with a simple UI. It wasn't the most powerful or flexible, but it's what was recommended if you wanted to (for example) try Ubuntu on your Windows host without dual boot or using another disk, etc.
Pretty much all desktop virtualization/VDI/etc. products have been de-emphasized by essentially everybody, except to the degree that they're a largely free byproduct of server virtualization. I doubt any company is devoting more than a minimal number of resources to these products--maybe Apple more than others. Red Hat, for example, even sunset its "traditional" enterprise virtualization product in favor of KubeVirt on OpenShift. And its VDI product was pretty much abandoned years ago.
I don’t know anything about high-end workstations really. But I wonder if the whole ecosystem is in a rough spot generally? Seems like cloud tooling is always getting easier.
Shame really, people do fun stuff with excess compute.
Worthless for some use cases but there are reasons to run Mac-on-Mac vms, including testing, development, and security (isolation). The first two also apply to some folks (maybe not many) for Linux VMs.
The OS still needs to be ARM, as far as I know, but you can then use Rosetta to speed up x86_64 Linux binaries.
Docker Desktop also uses this to run x86_64 Docker images, and in many cases performance is quite close to the native ARM binaries, but this heavily depends on the workload.
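You can watch the trick work with nothing but standard docker CLI flags (Rosetta has to be enabled in Docker Desktop's settings); a sketch:

    import subprocess

    # force an amd64 image on an arm64 host; uname reports the emulated arch
    out = subprocess.run(
        ["docker", "run", "--rm", "--platform", "linux/amd64",
         "alpine", "uname", "-m"],
        capture_output=True, text=True,
    )
    print(out.stdout.strip())   # "x86_64", even though the Mac is arm64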
In terms of people who might consider Fusion you have:
- People who only use Windows
- People who only use macOS
- People who only use Linux
- People who virtualize Windows on macOS
- People who virtualize Linux on macOS
- People who run FreeBSD or similar on their computers
- People who virtualize FreeBSD or similar on macOS
- People who virtualize various operating systems on Windows
- People who virtualize various operating systems on Linux
- People who virtualize various operating systems on FreeBSD or similar
And I would guess that the largest group of people that use Fusion use it for running Windows in a VM on macOS.
I would guess that the people who develop for Linux servers would mainly use Docker if they run macOS, and that also relies on a VM, but not using Fusion.
What about people who virtualize various operating systems on macOS? That was my entire team at a prior engagement (at Microsoft, as it happens…). I suspect it’s a large number, developers tend to like macOS, so if you’re making a cross platform application and want to be able to test anything at all, you need a VM.
> I would guess that the people who develop for Linux servers would mainly use Docker if they run macOS, and that also relies on a VM, but not using Fusion.
x86 Docker on an ARM Mac is an insanely complex setup - it runs an ARM Linux VM inside Hypervisor.framework that then uses a Rosetta client via binfmt that <somehow> communicates with the host macOS to set up all the Rosetta-specific stuff and prepare the client process to use TSO for fast x86 memory access.
Unfortunately, Apple heavily gates anything Rosetta; I'm amazed Docker got enough coordination done with them - because QEMU didn't, they don't support anything Apple ARM-specific as a result, and don't plan to unless Apple significantly opens up access and documentation; TSO, for example, is gated behind private entitlements.
Yeah, that's a "how to use it in the simple case"; it's not a "here is how this shit works under the hood so you can use it for more than just running userland processes", and it also doesn't state the limitations (e.g. what instructions are supported and which are not).
I had Fusion and ran Windows with it early on (it could even play some games!) and since I had it, I used it for Linux and some other things.
Those are now done with an old ESXi box or other forms of VMs. Maybe I should look into the various VM options still, but I don't have any pressing needs.
The argument given was that VMware became useless because of the switch to Arm.
There are more Hypervisor managers available on macOS now than there have ever been before - largely because Apple provides the underlying framework to do most of the hard work... but there is clearly significant demand to run VMs on Arm Macs still, regardless of whether that includes running Windows (which does exist for Arm too)
Well, I use Parallels to run a Windows VM for work (on ARM). It's its own little bubble universe, completely isolated from my Mac desktop, but available at a swipe.
I do use Fusion as well (on my laptop), and have a Windows VM there as well, but solely to run older games. Works fine.
My guess here: the product isn't valuable enough to sell off (see Horizon), the customers who buy the product aren't in their list of 600 accounts that they want to focus on, and the invoice price is a rounding error next to their enterprise offerings.
Broadcom is bloodthirsty, and I'd suggest they're doing this out of the goodness of their hearts, but there is little evidence that they have one.
When they say "Free", what they mean is that these products are now the walking dead. They are on maintenance-only support until any existing commercial contracts expire, at which point they will be cancelled.
In this case, I admit that I think it's the right thing to do. These products don't really need to exist as commercial offerings except for a few very niche cases.
It doesn't sound like they're trying to kill them, since they're moving commercial usage to a subscription license.
I'm not sure killing them actually makes a lot of sense - the products apparently share a lot of code with ESXi, so it's two products for the R&D of one.
I spent like 15 minutes trying to find an official download link, even registered a Broadcom account, but that was a waste of time as well. Ended up finding a working download link in some Reddit comment. It seems to contain all versions of Fusion, Player, Workstation and Remote Console.
I had an expired trial license; after updating, it now prompts me that my license is expired, but then seems to work just fine instead of closing haha. Thanks for the link!
They're very late to do this IMO, but better late than never. I am certain that VMware has not been selling many Workstation licenses to personal users (costing ~$300 each) and making the products free gives free advertising and mindshare to VMware. Visual Studio is a good example of this: Microsoft making Visual Studio free for personal use in 2014 provided a huge boost to a platform that a lot of people had written off as dead, irrelevant, gray corporate software.
Except that VMware is owned by Broadcom, which is known for only being interested in the Fortune 500. That doesn't at all apply to Microsoft. No sane person will buy into VMware anymore if $500k in cash is not a rounding error in their budget.
> VMware has not been selling many Workstation licenses to personal users (costing ~$300 each) and making the products free gives free advertising
Not really. VMware Workstation Player had the same engine (but less management functionality), so personal users could actually use a VMware virtualization product. For basic usage (including snapshotting), which fits a non-commercial user, it was a fitting choice.
Therefore, it's good that they're essentially giving more functionality away for free, but they did have a free offering before (for non-commercial users).
Primary IDE for Unity and Unreal. Microsoft has been extending beyond Microsoft platforms, so I imagine a decent chunk of Visual Studio use is for cross-platform development.
It's probably the best C++ IDE out there, with a great debugger. For C# a lot of people prefer Rider, but in terms of free options VS is much better than VS Code.
I've mixed feelings on this. On one side, I love that it's free; the annual pricing seems reasonable compared to what it was. On the other, I've been so burned by their pricing for everything else with clients that I'm reticent to be thankful.
I suppose it's a step in the right direction; bringing back ESXi for homelab users would be a good step too.
Yeah. Once the Proxmox team fixed the recent kernel bug that caused migration hangs with Ryzen/EPYC hardware (~20% of migrations just hung), things have been pretty great.
Presently trialling an HA cluster with it, and likely to deploy that to a local data centre in the next few weeks.
I moved the rest of my machines to Proxmox as well. I had left one ESXi box just for the hell of it, but I just can't leave it running, even with the free Workstation Pro (I'll still use it on my workstations).
> bringing back ESXi for homelab users would be a good step too.
I agree, and frankly I think it was smart that VMware had a free tier for homelab users. It produces new users who can more easily enter the workforce with ESXi experience they might not otherwise have.
By locking it down and jacking up prices they'll squeeze out more money now, but eventually the market will shift to whatever everyone has the most experience with, which might end up being Proxmox.
I suspect they knew (and, to be fair, were probably correct) that a decent number of small businesses (and maybe even larger ones) were using ESXi, and they just decided to shut that down in a push to get more licenses.
If my theory is correct, in about two years (if they haven't killed it entirely by then) they'll introduce a "free for homelab use" variant - maybe.
But Workstation and Fusion were more used by individuals and as a support tool FOR professionals, so they needed to keep those going, but charging $79 for them just wasn't worth the hassle. Notice they're not even selling ANY licenses directly anymore; you have to go through someone else. VMware used to sell directly.
Not just not worth the hassle. The product being free now means that if someone files a bug report or request for enhancement, they can more easily just shrug and say "won't fix."
You could use Workstation Pro to directly access VMs running on an ESXi server, so maybe the idea is to make companies more dependent upon the central infrastructure, where they can squeeze.
You could actually cajole the free Player into doing this with ESXi, but it was definitely not licence-kosher.
It's almost impossible to create an account with all of the delays. Even then, I got stuck in a loop in "Trade Compliance Verification" which does not proceed.
The download doesn't seem to work. The VMware download area closed in late April, and I can't seem to download Fusion now. The replacement store doesn't seem to be available yet.
I'm using Parallels and it is great on an Apple Silicon Mac, but I'm a long-time VMware Workstation and Fusion user, so I'd like to try it again.
As a paying Workstation customer, I had to install it from scratch the other day and couldn't find a binary anywhere. I eventually found an old installer on archive.org (!) and settled for that. Grateful to whoever had the foresight to point the Wayback Machine at VMware's CDN before it was too late.
My long-term homelab of ESXi 6.x on an Intel NUC with an unlimited-license hack recently blew up (repeated power surges, no UPS). Went with an Intel NUC again, but Proxmox instead, for the rebuild, and it's been a dream so far. I don't miss ESXi or VMware in general one bit.
This is my setup: NUC + Proxmox + K3s. My goal is to set up a Kubernetes cluster spanning 2 NUCs with K3s to build my playground for any application I decide to develop.
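For reference, the documented get.k3s.io installer covers both roles; a sketch (the hostname and the token placeholder are mine):

    import subprocess

    # server on the first NUC
    subprocess.run("curl -sfL https://get.k3s.io | sh -", shell=True)

    # agent on the second NUC; the token lives at
    # /var/lib/rancher/k3s/server/node-token on the server
    subprocess.run(
        "curl -sfL https://get.k3s.io | "
        "K3S_URL=https://nuc1:6443 K3S_TOKEN=<token> sh -",
        shell=True,
    )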
Pro is free for now, or Pro is free abandonware. Both cases discourage adoption.
Open-sourcing it would help if there's any desire to keep it alive; otherwise this is a nice gesture, but I would read it as a signal that I should stay away from it because it's dead or a trap.
We have a whole bunch of automation built around VMware Fusion for creating and deploying macOS VMs (for CI testing). The VMs aren't long-lived, but the code using VMware Fusion certainly is, and it's a nontrivial project to migrate to a different virtualization system. Thankfully for us, we were already planning to do that migration before the acquisition.
As a data point for anyone else running it on Linux, this repo is probably what you should keep an eye on for updated VMware modules that work with newer kernels:
What timing. Literally yesterday I was trying to set this up on my Mac and went with UTM instead. It worked excellently for getting Kali Linux up and doing a WLAN USB passthrough.
Performant 3D acceleration in the guest OS is still quite difficult to find an open-source solution for, and Linux these days relies heavily on it for window management. Mac hosts at least have ParavirtualizedGraphics, even though I don't think the popular open-source clients have support for it yet.
What open-source options would you recommend for running Linux and Windows VMs on Windows? I've been unhappy with VirtualBox because the audio quality is abysmal, which is an issue since I use screen-reading software. I'm interested to try out VMware Workstation since its audio support was pretty good many years ago when I used it at a prior job.
Hyper-V? I don't run VMs on Windows, only on Mac and Linux. I'd imagine a first-class hypervisor like Hyper-V is the way to go on that platform. AFAIK it is included with Pro versions of Windows.
I used to have a (paid, personal) VMware Workstation licence, but switched to VirtualBox after they stopped updating Workstation and a Windows update stopped it working. I thought it was a decent product.
I'd rather not use an Oracle product (VirtualBox), but are there any advantages in switching back? Main use is running Ubuntu VMs on Windows.
I'm the opposite, I need these desktop hypervisors because Hyper-V is trash for anything but a WSL shell or server VM.
I upgraded to Windows 11 for WSLg (figuring it would replace my Linux desktop), and it was buggy trash. You can't even get a high-resolution Ubuntu desktop (from Microsoft themselves, their own quickbox!) without jumping through hoops, searching all over reddit for knowledge obsoleted by the next update, tweaking arcane settings and running misc Powershell scripts. To say nothing of the occasional freezes.
By enabling WSL2/WSLg, your Windows host is now a privileged guest running under Hyper-V as the hypervisor. Which means lightweight desktop hypervisors like VirtualBox run like trash.
I ended up removing WSLg/turning Hyper-V off, using VirtualBox for desktop Linux, and using WSL1 (not 2) to have a quick Linux shell without enabling Hyper-V.
I'm now considering Workstation due to the superior graphics in the guest over VirtualBox.
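For anyone wanting to do the same rollback, these are the two documented commands involved (the distro name is an example; bcdedit wants an elevated prompt and a reboot to take effect):

    import subprocess

    # convert a WSL2 distro back to WSL1 (no Hyper-V requirement)
    subprocess.run(["wsl", "--set-version", "Ubuntu", "1"])

    # stop the Hyper-V hypervisor loading at boot
    subprocess.run(["bcdedit", "/set", "hypervisorlaunchtype", "off"])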
If you are running Windows 10 with the secure kernel, Device Guard, among others, these features require Hyper-V.
Secondly, Windows 11 doubles down even more on having Hyper-V running for even more security capabilities.
I also think the future is type 1 hypervisors, and in regards to performance, my computers are beefy enough to hardly notice any major impact.
As for Linux configuration problems: business as usual, there is always something that needs hand-holding, and I have been using distributions since Slackware 2.0 in the summer of 1995.
I also mostly used VirtualBox only when not allowed to use VMware products, due to cheap project delivery conditions.
I have been using Hyper-V since the early days, and run it on both my development iron (Win11 23H2, heavily castrated) as well as my personal (non-commercial) Win2k22 21H2 Datacentre servers.
Separate note: what do folks use to virtualize Windows on Linux? Is anything good enough to run older games in Windows in a VM? Think Dota 2 (I know it's available for Linux, just using it as a perf reference).
Proxmox can do this. It is free software. It will work best if you pass through your graphics card. I also passed through a USB controller and used my DAC for sound. You can run most versions of Windows in a VM, also macOS and Linux.
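Under the hood that's QEMU/KVM with VFIO; roughly the invocation Proxmox generates, as a sketch (the PCI address and disk image are examples, and the GPU must already be bound to vfio-pci):

    import subprocess

    subprocess.run([
        "qemu-system-x86_64",
        "-enable-kvm",                        # use the in-kernel hypervisor
        "-cpu", "host", "-m", "8G",
        "-device", "vfio-pci,host=01:00.0",   # hand the GPU to the guest
        "-drive", "file=win10.qcow2,format=qcow2",
    ])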
There is Hyper-V Server 2019 [0] too, which was also free and a standalone OS, unlike the current version. I use that on a 2nd PC; you can also install a full GUI [1] on top of the web-admin interface, so it's pretty good actually.
Anyone reading this, don't expect a smooth experience for desktop Linux under Hyper-V.
Hyper-V's team only cares about supporting servers. You're not gonna run a full-screen Ubuntu VM without a lot of banging your head against the wall, unless you spend days trawling random GitHub comments and Reddit posts and fixing it whenever it breaks.
Yeah, Windows Sandbox is pretty great. I use it to test sketchy software when sailing the high seas. And the option to add shared read-only folders from the host OS is nice too.
OrbStack now serves 100% of my virtualization and Docker needs on macOS. Hopefully I'll never feel the need to install VirtualBox or Fusion or Parallels ever again.
OrbStack is really nice on an M1. But occasionally I run into the need for a GUI application, and then I'm stuck. Is it possible to use OrbStack for that?
A provider once gave me only a Java JAR graphical UI for an old Cisco VPN thing. I was on a Mac and it would only really work on Ubuntu. I used a different Ubuntu machine to get it going but would have liked to have done it on a virtualized Ubuntu. It did actually work on virtualized Ubuntu but I had to use VirtualBox and it was slow.
Really good snapshot capabilities, if that's your kind of thing.
Workstation (and Fusion too, I think) also provides DirectX 10 + 11 support in Windows VMs.
So if you're wanting to run software that uses that in a Windows VM, you'll need something like Workstation or Fusion, since virt-manager can't (yet) do that.
MEH!!! QEMU for the win! "Hardware Virtualization Support: Qemu is capable of running virtual machines without hardware virtualization support, also known as software virtualization. On the other hand, VMware Fusion requires hardware-assisted virtualization to run virtual machines efficiently."
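The distinction in that quote maps to QEMU's accelerator flag; a sketch (the ISO path is an example):

    import subprocess

    base = ["qemu-system-x86_64", "-m", "2G", "-cdrom", "linux.iso"]

    # pure software emulation (TCG): no VT-x/AMD-V needed, runs anywhere, slowly
    subprocess.run(base + ["-accel", "tcg"])

    # hardware-assisted: needs /dev/kvm on the host
    # subprocess.run(base + ["-accel", "kvm"])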
It's probably useful for the HomeLab crowd, but when you get an idea and want to scale it for business purposes, you get screwed by their recent commercial market moves.
There are the beginnings of real FLOSS virtualization projects out there. Broadcom will make some money off of the acquisition as measured by quarterly statements, but it's not sustainable over the long run. It's not 2005 anymore. Step on enough toes and the nerds will build their own and give it away for free.
I'm a casual Docker user, ran maybe 30 images my whole life. I've never used any of these flags and didn't know most of them even existed.
Are these serious threats? I mean it seems like common sense that if you give a malicious container elevated privileges, it can do bad stuff.
Is a VM any different? If you create a VM and add your host's / directory as a share with write permissions (allowing the VM to modify your host filesystem/binaries), does that mean VMs are bad at isolation and shouldn't be used? Because that's what these "7 ways to escape a container" ways look like to me.
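To make the analogy concrete, here's the canonical over-privileged container, using standard docker flags (don't run this anywhere you care about):

    import subprocess

    subprocess.run([
        "docker", "run", "--rm", "-it",
        "--privileged",                      # full capability/device access
        "-v", "/:/host",                     # host root filesystem, writable
        "alpine", "chroot", "/host", "sh",   # a root shell on the *host*
    ])

Hand a VM the same thing - your whole / as a writable share - and you get the same blast radius; the flags are the story, not the runtime.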
Thanks, that link made me much more confident in using Docker.
I mean come on: "Attackers could try to exploit this issue by causing the user to build two malicious images at the same time, which can be done by poisoning the registry, typosquatting or other methods"
So basically ridiculous CVEs that will never affect people not in the habit of building random Dockerfiles off GitHub with 2 stars. Good to know. Only the 1st one isn't dismissable out of hand; I can't tell if it's bogus like the rest.
The ransomware epidemics targeting ESXi vulnerabilities probably triggered an exodus to other hypervisors and this could be an attempt to hang on to some users.