Having worked with VMware in the past and spent literally millions of dollars on licenses, I had the opportunity to meet lots of VMware employees. My observation is that the company has some very technically brilliant people, but also a lot of people who epitomize the super smart but naïve MBA: the kind who understands high-level business models and financial cleverness, yet is a terrible businessperson because they are completely oblivious to the one skill all great businesspeople have, which is empathy for your customer. This move is ridiculous. I'm glad I'm no longer in a position where I'm their customer.
Since VMware is public, quarterly results are vital to the naive MBA. Said MBA might decide to improve profitability by pushing small customers to cloud services (Amazon) that are already customers of VMware.
This also has the advantage of eliminating VMware as a competitor to some of VMware's largest customers.
ESXi is already free with limitations. This will definitely be the reasoning if we see VMware make Workstation free too, thereby bringing even more potential users in.
Lots of FUD coming from the community. At first glance it sounds bad, but a lot of us should do the math before getting the pitchforks ready. (Script to help "do the math": http://www.lucd.info/2011/07/13/query-vram/)
We will actually be saving money with the vRAM licensing changes.
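If you don't run PowerCLI, the same back-of-the-envelope check is easy to script yourself. Here's a rough Python sketch; the 48GB entitlement, license count, and VM list are placeholder assumptions, so plug in your own edition's figures and your actual configured memory:

    # Rough vRAM pool check. The entitlement figure below is an assumption;
    # substitute the number for your actual vSphere edition.
    import math

    VRAM_PER_LICENSE_GB = 48      # assumed Enterprise Plus entitlement
    LICENSES_OWNED = 6            # e.g. one per physical CPU socket

    # Configured memory (GB) of each powered-on VM; replace with your own data
    # (this is what the linked PowerCLI script collects for you).
    provisioned_gb = [8, 8, 4, 16, 32, 8, 4, 4]

    pool_gb = VRAM_PER_LICENSE_GB * LICENSES_OWNED
    used_gb = sum(provisioned_gb)

    print(f"vRAM pool: {pool_gb} GB, provisioned: {used_gb} GB")
    if used_gb > pool_gb:
        extra = math.ceil((used_gb - pool_gb) / VRAM_PER_LICENSE_GB)
        print(f"Short by {used_gb - pool_gb} GB, roughly {extra} more license(s)")
    else:
        print("Existing licenses already cover the provisioned vRAM")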
It looks like, by changing the pricing to reflect provisioning more closely, they're encouraging people to put more CPUs on each guest. I guess the more heavily people are overloading their hardware at the moment, the bigger the license price jump.
Red Hat and Oracle should start making some sales calls right about now. They should capitalize on this opportunity to convert some disaffected VMware customers and fund software development for migrating off the VMware stack.
I've been involved in a project where we actually had to change our technology direction, as well as the deployment schedule, to avoid incurring a massive (think over $100 million) license upcharge.
Many large enterprises are exceptionally bad at managing physical hardware. And if they are ok at it, they spend obscene amounts of money to do so.
One of the key advantages of VMware from an operational point of view is the management capability, which is either much better than what most people have for physical hardware or much cheaper. Even with this new licensing model, VMware still offers a positive ROI for a large customer, since managing the equivalent physical estate would also mean buying more licenses for products like IBM Tivoli, CA Unicenter, BMC, etc. Those products are mega-bucks, and enterprise customers are/were realizing cost savings by getting rid of them.
It's hard to see this from a small/mid-size enterprise perspective. Imagine $2M in recurring licensing charges and $750k in annual consulting expenses for a product functionally similar or inferior to Nagios. Or spending $20M annually on maintenance on software that you don't use. This happens every day in the Fortune 500 and government spaces.
Agreed. I work at a small company and IT is just me and another guy. We have 2 physical servers, but probably 6 virtual servers, and growing. If we had 6 physical servers, even if it were cheaper, we'd need to hire another 'guy' to manage them. A 'guy' costs the same as 5-10 physical servers, per year...
I'm sorry to be blunt but a 1:3 ratio of sysadmins:servers means you are doing it very, very wrong. 1:300 is not uncommon these days with the right tools.
That really depends on how many of the additional "IT guy" hours are going into desktop support of the (presumably) more complex configuration.
I'd say a 1:3 ratio of sysadmins to servers is actually pretty common -if- the sysadmins also do desktop support for the organization. At one of my ISP jobs the customer/production servers had a 1:50 or so ratio (but a lot of time was spent on new products/features, maintenance could easily have been 1:200 and our automation was mediocre) but the internal IT dept was around the 1:3 mark.
And he said "IT is just me and another guy", not "systems is just me and another guy", so I suspect that's the case for him.
yeah, but the point is that the marginal extra work going from, say, 2 physical servers to 6 physical servers is not very many hours at all. Sure, supporting the users on those other servers will be substantial; but the hardware itself? not a big difference.
I mean, going from 0 physical servers to 1 physical server is a pretty big marginal jump; you need someone who knows how to replace drives and deal with other hardware problems, and you need that person on pager.
But that person is going to spend a few days getting the thing set up, then maybe they will touch the hardware once a year ongoing. (as you scale up you can lower the setup time to a few minutes per server by using cobbler or another auto-provisioning setup, which is probably going to take a few days to set up in and of itself. After that, you can bank on spending some significant time every time you buy a different hardware configuration, but either way, most of the time spent on hardware will be when setting up.)
Adding more servers just means you have to run down and replace those drives more often; but like I said, if you have to physically touch each server more than once a year or so, something is seriously wrong.
Sure, that's true, but small companies don't have 300 servers and still need a sysadmin. That's why platform and infrastructure services that don't require a sysadmin are so compelling and are seeing so much innovation. There is huge value there.
The constraints on operating a large datacentre are a) power b) cooling c) floor loading (really!)
If you run out of one of these your choices are a) consolidate (which may imply virtualization) or b) relocate (which is astronomically expensive to do with no downtime).
Cheaper? Maybe -- but people like VMware for the management interface and often the other add-ons like DRS/HA, vMotion, and vStorage.
Certainly smaller hosts with DR capability can be done for lower capital spend, but you need more capable admins to manage it. VMWare does that for you out of the box.
My experience with VMWare is that they do a very good job estimating the TCO of N physical servers and then they charge you such that it's ~90% of that cost to virtualize them.
I.e., VMWare is usually a better deal, but just barely. Drives me insane.
I know this is getting old, but here's another nice example of why you should be using Free Software for everything that's really important. KVM works very well; I know it misses some of the nice, pretty interfaces but at least it won't stab you in the back at the next upgrade.
We were faced with a very similar dilemma when we set up our virtualized cluster... Do we go with a free open-source solution like KVM, forgoing an easy-to-use GUI, or do we go with a more expensive solution such as VMware?
We eventually decided to build a KVM-based cluster, and while we were already extremely glad we did before the vSphere 5 licensing change, this latest development only serves to confirm the wisdom of our choice. We have enterprise-level support if we need it, the virsh command-line interface is very straightforward and easy to pick up, and we have not shackled our fate to an organization that can yank the rug out from under us whenever they like. Moreover, for Linux-based folks who find themselves pondering their options, consider that VMware requires you to run Windows in your cluster if you want the full advantages of vSphere (e.g., live migration). Because of the added maintenance and security concerns, we were quite loath to introduce any Windows operating systems into our cluster environment, and the unfortunate state of the VMware world is that it's extremely Windows-centric.
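To give a sense of how approachable the libvirt side is, here is a minimal live-migration sketch using the libvirt Python bindings. The connection URIs, hostname, and guest name are made-up examples, and the exact flags you want will depend on your storage setup:

    import libvirt

    # Connect to the source host and to a destination host over SSH
    # (hostnames here are hypothetical).
    src = libvirt.open("qemu:///system")
    dst = libvirt.open("qemu+ssh://kvm-node2.example.com/system")

    # Look up a running guest by name and live-migrate it.
    dom = src.lookupByName("web01")
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    print("web01 is now running on kvm-node2")
    dst.close()
    src.close()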
The best part? Not only do we have a fast, rock-solid virtualization solution in place, but the easy-to-use GUI we wanted is also on the horizon. Take one look at the Archipel Project at http://archipelproject.org/, and I think you'll agree its interface puts vSphere to shame. Archipel is not yet ready for critical production environments and (last I checked) currently lacks full support for libvirt-based storage APIs (e.g., for LVM-backed virtual storage pools), but development is progressing steadily. I, for one, am looking forward to getting the best of both worlds (liberated + easy-to-administer software) in our data center when the time is right. If that interests you, head on over to https://github.com/primalmotion/archipel and fork away!
What do you use to manage your KVM cluster? I'm not aware of any package that does that, although I haven't looked that deeply. I have been meaning to try out Eucalyptus/Ubuntu Enterprise Cloud (which sounds like similar functionality to a KVM cloud) but haven't done so yet.
If you have a blog, would you write something up about Archipel? That looks pretty damned nice, and I'd like to hear the perspective of an implementer (alongside reading docs).
I'm all about getting FOSS some visibility in my company.
Absolutely. I'm already half-way done writing an article on Archipel, and I'll be sure it gets posted to HN when it's ready.
As an aside, Antoine (the author of Archipel) just told me that beta 3 will be released next week, after which he'll be focusing on expanding the VM storage options.
I have two iSCSI Targets. I want to pull a LUN from each, and mirror on the KVM Host system, exposing the mirrored disks for my guests.
Compared to using ZFS on OpenSolaris with Xen 3.1, this process is incredibly cumbersome and unreliable.
Plus I just don't get the need for libvirt. It seems an incredibly complex and useless abstraction over a simple, documented configuration file for guests and a tool to start/stop/add/remove them.
I'm sure a big part of that may just be that I'm not familiar enough with KVM. Still, it's 2011. Tying all your data to a single host with limited redundancy and depending on live-migration seems like a fundamentally flawed approach to me with a lot of needless complexity.
Libvirt is not needed to use KVM, but its purpose has been to provide stability (and an abstraction, as you mentioned) over the changes QEMU has had across its various releases.
Now, the version of KVM I'm using is really, really old, but as far as I can tell KVM by itself does not do any locking. It's pretty easy to start a single guest twice when using KVM by itself, which will irreparably corrupt your instance.
So yeah, I'd strongly recommend that you use libvirt (or some other wrapper that handles things like locking the block devices) if you use KVM.
With Xen, on the other hand, that level of locking is handled for you out of the box, so personally I see no reason to use libvirt there. The libvirt devs seem pretty focused on KVM anyhow; Xen support, at least in the past, was pretty poor, so personally I use the native Xen tools for Xen.
(I'm not saying this is a reason to use Xen instead of KVM; I'm just saying that if you do use KVM, you should also use libvirt.)
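As a quick illustration of the difference, a guest defined in libvirt simply can't be started twice: calling start on a running domain raises an error instead of launching a second QEMU process against the same disk. A rough sketch with the Python bindings (the domain name is hypothetical):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("guest01")   # a domain already defined in libvirt

    if dom.isActive():
        # Calling dom.create() here would raise libvirt.libvirtError rather
        # than booting a second copy against the same block device.
        print("guest01 is already running; refusing to start it again")
    else:
        dom.create()                     # boots the defined domain exactly once
    conn.close()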
> I have two iSCSI Targets. I want to pull a LUN from each, and mirror on the KVM Host system, exposing the mirrored disks for my guests.
> Compared to using ZFS on OpenSolaris with Xen 3.1, this process is incredibly cumbersome and unreliable.
I don't see the relation with KVM. There is nothing special in the way it uses open-iscsi or mdadm to access iSCSI targets and set up mirrors.
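For illustration, the host-side plumbing is just open-iscsi plus mdadm, and KVM never enters the picture. The target IQNs, portal addresses, and device names below are made up, so check what actually appears on your host before creating the array:

    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    # Log in to one LUN on each target (IQNs/portals are placeholders).
    run(["iscsiadm", "-m", "node", "-T", "iqn.2011-07.com.example:storage1",
         "-p", "10.0.0.11", "--login"])
    run(["iscsiadm", "-m", "node", "-T", "iqn.2011-07.com.example:storage2",
         "-p", "10.0.0.12", "--login"])

    # Mirror the two LUNs on the KVM host (device names depend on your system).
    run(["mdadm", "--create", "/dev/md0", "--level=1", "--raid-devices=2",
         "/dev/sdb", "/dev/sdc"])

    # /dev/md0 can now be carved up (e.g. with LVM) and exposed to guests.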
Maybe you're unfamiliar with these Linux-specific tools and mistakenly attribute to KVM your difficulties which are really Linux problems.
> Plus I just don't get the need for libvirt.
I don't use libvirt; I wrote a couple of scripts that allow me to do what I need with KVM. libvirt is useful only if you need to manage a whole lot of VMs.
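To give an idea of what I mean by "a couple of scripts", something like this sketch is most of it; the disk path, memory size, and VNC display are placeholders:

    import subprocess

    # Minimal KVM guest launcher; every value here is an example, adjust to taste.
    cmd = [
        "qemu-system-x86_64",
        "-enable-kvm",                      # use hardware virtualization
        "-m", "2048",                       # 2 GB of guest RAM
        "-smp", "2",                        # 2 virtual CPUs
        "-drive", "file=/var/lib/vm/guest01.img,if=virtio",
        "-net", "nic,model=virtio", "-net", "user",
        "-vnc", ":1",                       # console reachable on VNC display :1
        "-daemonize",
    ]
    subprocess.check_call(cmd)
    print("guest started; connect with a VNC client to display :1")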
No, I'm aware they're Linux problems. It just seems the mind-share is on KVM these days so I generally consider Linux Host Virtualization == KVM.
You're absolutely correct though. It's not really KVM's fault. It's the Linux iSCSI, software RAID, and volume management capabilities that are so lacking.
It just happens that that makes using KVM much less appealing.
I want:
1. Reliable Snapshots and Replication
2. Simple Volume Management
3. Human readable device names
4. Consistency of volumes/devices between reboots
5. An "uncorruptable" FS backing the guests
6. Reliable Virtualization
OpenSolaris is the only platform I know of that addresses these concerns. Linux doesn't come close, despite it feeling like its virtualization options are more mature.
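To make points 1 and 2 above concrete, here is roughly what a snapshot-and-replicate cycle looks like on ZFS; the pool, dataset, and destination host names are invented:

    import subprocess

    DATASET = "tank/guests/web01"           # hypothetical dataset backing a guest
    SNAP = DATASET + "@nightly-2011-07-13"
    DEST = "backup-host"                    # hypothetical receiving machine

    # 1. Take an atomic, consistent snapshot of the guest's storage.
    subprocess.check_call(["zfs", "snapshot", SNAP])

    # 2. Stream it to another box over SSH and receive it into a backup pool.
    send = subprocess.Popen(["zfs", "send", SNAP], stdout=subprocess.PIPE)
    subprocess.check_call(["ssh", DEST, "zfs", "recv", "backup/web01"],
                          stdin=send.stdout)
    send.stdout.close()
    send.wait()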
FreeBSD may be a good fit with VirtualBox, but VBox on OpenSolaris and OpenIndiana was unstable for me at least. I'd like to give FreeBSD a try sometime.
On libvirt, that's about what I expected. I guess I just wanted to complain about it in general. ;-)
"why you should be using Free Software for everything that's really important"
I'm sorry, but having regularly been in the situation of trying to get sign-off on a large tech project, the first question is always "what is the support package like?" The question that never gets asked, because caring about that kind of thing is below their pay grade, is "but is this software free and open?"
That's because organizations like the one you work for have governance models that aren't actually focused on delivering IT services.
If you use open-source software, you don't need to procure anything, right? At one level, that's great, but to people who run the contracts unit or procurement team, that doesn't compute. In their world, you hassle people for a discount and fight over contract terms.
Why do you think that Red Hat sells support contracts in the guise of a software license?
Answer: because the processes that enable big companies/government to spend money on services are vastly different from those for software. On a services contract, you need to negotiate statements of work, etc. For a software contract, you just need to buy a software SKU and a maintenance/support SKU.
A small detail about myself: I helped create a unified driver package for ESXi and wrote a couple of custom *nix net drivers for hardware VMware wouldn't support (cheap commodity hw). So I agree with you: after I ditched VMware for KVM I was in heaven. Shout-out to the people who used my stuff, and give KVM a try!
This is really the primary value proposition of Free and Open Source Software. No matter what happens down the road, no one can suddenly jack up licensing fees. Even if one company decides to try to change the direction of a project (Oracle; OpenOffice), another group can fork it to keep it going (LibreOffice).
Alternatives to VMware like KVM aren't just a better version of VMware that happens to be free; they're fundamentally better because they are free (as in speech, of course).
Xen is a hypervisor. XenServer is the product name from Citrix, which uses Xen underneath. Xen Cloud Platform is the name of the open-source XenServer.
You wouldn't use just Xen as a hypervisor, in most cases. Amazon uses Xen plus their own stuff on top.
To make it more complicated, there's also XenClient, which also builds on Xen. It's also mostly open source.
Why use proprietary (less tested, less stable) solutions when there are community-tested and community-supported ones? ^_^
People are still stuck with the stereotype that the most brilliant programmers work for corporations. This is, obviously, not true. Most corporations outsource their R&D and QA and spend on marketing instead. That is a very common strategy.
Now tell me: how does this strategy correlate with the quality of the code or services? ^_^
Oracle vs. MySQL is a very good example: high-quality community code is usually better written and better tested. (Hint: it is about comparing code quality, not feature lists.)
Being attached and dependent (that is exactly what their marketing department is for) or not is your own choice. In some cases, like SAP, there are no community-supported alternatives, but in this case there is more than one.
Some people could say that we really need all those modern features, such as per-LUN iSCSI mirroring, etc. But it is exactly that code which is less tested and of lower quality.
One cannot compete with the Linux (Ubuntu/RHEL/CentOS) communities in matters of testing and code quality. No code is better tested than the code included in the mainline kernel or a popular distribution.
> One cannot compete with the Linux (Ubuntu/RHEL/CentOS) communities in matters of testing and code quality. No code is better tested than the code included in the mainline kernel or a popular distribution.
I know I'm feeding the troll here, but that is just naive.
There are many great open-source projects with terrific code quality, like parts of (and certainly not the whole of) the Linux kernel. There are also many commercial solutions which are far ahead of anything available in the open source world. In many spaces it makes great sense to use a proprietary solution over an open source one.
> Oracle vs. MySQL is a very good example: high-quality community code is usually better written and better tested. (Hint: it is about comparing code quality, not feature lists.)
Can you provide evidence that MySQL has fewer important bugs than Oracle? I have used both fairly extensively and have noticed more bugs with MySQL, but that's just one anecdote.
> Most corporations outsource their R&D and QA and spend on marketing instead.
Do you have evidence that VMware development is outsourced? Do you have evidence that outsourcing leads to a lower-quality product? Most studies in this area have shown quality to be lower when there is a push to lower costs, with no correlation to outsourcing.
Yes, there are exceptions here and there, but in general the open-source model works precisely because it is community driven. That means that if your project, or some critical part of it (the scheduler, or the TCP implementation), is really important to users, it will receive almost constant code review and quality improvements. This was the story behind nginx, OpenSSH, and thousands more.
Of course, some parts of a project might be considered less relevant, and there users are satisfied with code that just works.
The only way to achieve code quality matching what fanatics, nerds, or very experienced programmers produce is to hire such people to do what they love doing. Mediocre wage-workers can't produce anything near it. Look at the nginx, OpenBSD, or Postfix sources.
Please try to imagine the number of installations of both products. MySQL runs on almost every crappy hosting provider in the world. And on Facebook. ^_^
I don't want to say that MySQL is a great product. What I want to say is that it is good enough and stable enough. Otherwise nobody would be using it.
BTW, outsourcing is all about cost reduction, not quality improvement. ^_^
Has anyone had any experience with VMware's products being untested or unstable? I've heard anecdotally that VMware's hypervisor is miles ahead of the others in terms of performance and useful features. Does anyone have any info on this?
I have seen crashed ESX servers with unreadable error messages, and no information or support available except PR and forums full of new users trying to convince themselves they made the right choice (post-purchase rationalization).
I've found that VMware's vSphere is much better than KVM. I'm not sure about the performance, but in terms of useful features and a nice interface VMware is much better.
I think the conceptual switch in licensing model is fair... from CPU cores to vRAM entitlement... but the vRAM allocations per license are not right. It puts sysadmins in a real bind... having to report bad news to mgmt.
They need to do the right thing and adjust the vRAM entitlements. The sad thing is... people are so locked in to VMware infrastructure that they'll likely make money short term, at the expense of pissing off customers. Oracle plays this game too...
Their marketing materials say this is much easier, but it doesn't seem easier to me. Right now, we have a three-node, six-CPU Enterprise Plus cluster. That's six licenses, period.
Now I have to track vRAM usage and decide whether to purchase EP licenses for all available RAM or just go with usage plus growth for the year... or something. I'm not sure how that's easier.
It's easy to do this when you have no strong competition and don't care if you piss off your existing customers.
The irony is that they want to ride the new wave of public cloud computing, going so far as to sponsor development of an in-memory database (Redis), and then they go and do something regressive like slapping a 24GB-per-CPU memory limit on their core product without a price decrease.
This is an example of VMware getting up on their high horse again. Back in the early 00's, I was working at a large university, and thanks to VMware's $49 academic/hobby Workstation license, I got my entire department onto it (we all used Linux on the desktop), and I bought myself a personal home license. This was the v1.x days.
I forget if it was the 2.x -> 3.x or 3.x -> 4.x upgrade, but they eventually dropped the cheap hobby/academic licensing and the price jumped to something like $150 per seat. I remember calling our sales contact and complaining about how much the jump sucked, and when they wouldn't budge, I essentially told them no thanks and that I'd wait for the open-source options to mature. He, of course, chuckled at this quaint notion.
A few years later I had to chuckle back when they started giving away the basic versions of VMWare, since open source options and Microsoft were by then eating their lunch on the desktop.
Funny how history repeats itself. I hope VMWare lives to regret this latest trend in greed of theirs.
P.S. -- I also had a similar exchange with the Accelerated X folks, back when XFree86 didn't do multi-head very well (or at all). I accused them of gouging their faithful customers and told them open source would catch up and they'd be toast. Three cheers for xorg!
Would open source have been way slower to catch up if not for the gouging? If not, then it seems very likely that the "gouging" maximized shareholder value by extracting maximum profits during the period in which they were the sole supplier of a valuable-to-you service. You get (during the period in which they are the only supplier) usage of a product that no one else in the entire world is supplying and they get a few extra bucks. Hard for me to get outraged.
There's also something called "killing the golden goose." If they had more reasonable pricing, they might have been able to extract money from more clients for a longer period. I'm not sure what their current profits look like, but gouging your customers for short-term gain while destroying long-term profits isn't generally considered a good way to maximize shareholder value.
The other scenario is that keeping prices reasonable means there isn't as much demand for an open source version. Developers who balked at paying $150 a seat wouldn't be sufficiently motivated if the cost was a more "reasonable" $20 say.
I just remembered a pretty good example of this in the story of BitKeeper and Git; you can read http://kerneltrap.org/node/4966 for more information. It was losing the cheap-enough/free option that catalyzed the work on Git.
But the new wave of public cloud computing is not generally running VMware's product, as it is too expensive; e.g., EC2 runs on Xen. They are doing better with the private cloud people, who are less concerned about using commodity open-source software.
Your assertion that VMware is too expensive is just plain false. The company I work for (Virtacore) offers vCloud Express and we have come to great terms with VMware and our pricing reflects that. While we don't have all the features EC2 offers (because we are only a couple months old and still building out the product) we feel they are extremely expensive. IMO they like to nickel and dime users while we keep it very simple.
In our experience, vSphere is indeed too expensive relative to the value it provides. We ended up much better off implementing a KVM-virtualized cluster that cost us nothing in licensing, allows us to keep Windows out of our cluster entirely, and frees us from fiscally onerous license changes such as the sudden vSphere 5 price increase that spawned this thread.
Memory is cheap these days - you can get 24GB for a couple hundred bucks depending on speed/ECC requirements. I always try to max out the memory based on the median cost per GB.
Those "ECC requirements" really make a difference though. And whoever is running servers without ECC memory in production should be shot dead (just wait until you have data corruption because of a faulty memory module to see what pain is).
Yep - we just bought 54 x 8GB DDR3-1333 Registered DIMMs for about 10K, and we can sell the 54 x 4GB modules we are replacing for about 2-3K on eBay. But since this is a VMware cluster, now I have to scramble to see what this is going to cost me in additional licensing before I upgrade. :(
Really? The standard host hardware for the place I work was 128gb/16 core for the longest time.
Prior to that, it was 64GB/8-core, which did suffer from memory overcommit for a while. The 128/16 balances quite nicely: it leaves plenty of room for over a dozen reasonably sized VMs (8GB/2 vCPU) without even approaching oversubscription of memory. Sure, if you want to go crazy and stack more than 15-20 VMs on a host you might need more memory, but I find that for most app server workloads you end up overloading your storage I/O first (even 4Gbps HBAs have their limits).
To be clear, I was talking about physical CPUs, not cores.
But honestly, that's not the right way to think about the licensing. Sure, you need a license for each physical CPU, but beyond that you're just licensing for vRAM. So the real question is: is 24GB of vRAM per license too low?
By my reading of this PDF, each physical CPU requires a license, and each license allows for 24-48GB of guest RAM. It seems they're specifically trying to license by vRAM rather than by CPU because there are more cores per CPU now.
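If that reading is right, the license count works out to something like the little calculation below; the 48GB entitlement and the example numbers are assumptions for illustration, not official figures:

    import math

    physical_cpus = 2           # sockets in the host (one license minimum each)
    vram_per_license_gb = 48    # assumed per-license vRAM entitlement
    provisioned_vram_gb = 256   # total configured RAM of powered-on guests

    licenses_needed = max(physical_cpus,
                          math.ceil(provisioned_vram_gb / vram_per_license_gb))
    print(licenses_needed)      # -> 6: here vRAM, not socket count, drives the bill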
Agreed... the only real enterprise alternative is Hyper-V. I have used it quite a bit and do like it, but I just don't feel as happy using it as I do with ESX.
Just as Windows is dominant on the desktop, ESX pretty much owns the (paid virtualisation) ecosystem through many third-party programs.
Microsoft is getting there, just much more slowly, but a price increase like this could be (though I doubt it) VMware's undoing.
Microsoft is good at making products that are good enough and staying in the game for a long time.
In 1990, high-end engineering users were very happy with their Unix workstations. Windows NT in comparison kind of sucked -- but over time the suckage was not enough to justify paying $20,000 for a Unix workstation versus $8k for an NT workstation.
Ditto SQL Server. Back when I was a newbie Informix DBA, my older colleagues laughed at people messing around with SQL Server. Informix was an awesome product in many ways, but it cost something like $40,000/CPU when SQL Server cost a lot less. Who's laughing now?
Yeah, but it's not surprising that converting from a .NET codebase to a Java codebase takes a few years... they should have just rewritten it from scratch. Once RHEV 3.0 is released, though, RHAT will open-source it, so watch the product grow at that point, just like KVM has been growing by leaps and bounds recently.
I say that as someone who was part of the RHEV support team at launch... RHEV 2 was a mistake. Red Hat should never have sold a product that depends on Windows. They just don't know how to support it.
Would you believe I heard something being bandied around about a $200 piece of software that converted C# code to Java... and then having the devs fix the 20% that didn't convert... I kid you not...
There was also talk about bringing in parts of the JBoss stack... and I am not exactly sure why... but synergy might have something to do with it.
RHEV 3 will use more oVirt-developed technologies (libvirt, etc.), but now they have enterprise customers who need compatibility for an extended period... so it's going to be the ugly stepchild of RHEV 2 (aka Qumranet's product) and oVirt (Red Hat's R&D). I suspect that RHEV is going to be carrying that baggage around for a while.
What's wrong with VirtualBox? We switched from ESXi to VirtualBox this year and already liked it better, but now, with these changes at VMware, we REALLY like that decision.