See I told you VMware wasn’t evil

Me being right doesn’t happen often (just ask my wife), but when it does, I like to make sure I point it out to people 🙂  If you go back a few weeks to VMware’s big vSphere 5 announcement, you will recall that no matter how many awesome features and benefits they rolled out with vSphere 5, everyone got wrapped around the axle regarding the move to the vRAM entitlement.  If you read my “VMware is not Evil” blog post, I pointed out that this move was inevitable in order to keep up with changes in server technology as well as server positioning (Aaron Delp’s Scale Up vs Scale Out post gives a pretty good overview).

It seemed that everyone understood the need but was REALLY unhappy with the amount of vRAM each tier of software supported.  At least that’s what I heard from most customers.  In fact, most VMware customers felt like they were going to blow through the low amount of vRAM they were entitled to, based on their current vSphere 4 environment or their future plans to buy blades with tons of memory in them.  But once they were able to run the various scripts available to see exactly where they were at (real-world numbers), nine times out of ten they were not just below their potential entitlement; in most cases they were WAY WAY below their number.  In fact, I received a few e-mails from customers who simply chuckled at how far off their initial guesses were.

BUT, enough customers were concerned, and it appears VMware listened LOUD and CLEAR.  Based on the overwhelming response, VMware has revised their vRAM entitlement licensing, thus allowing me to say “See, I was right – VMware isn’t evil!!” :)

So, let’s compare and contrast what was and what is now:

New vRAM Entitlement Guide

A couple of things to point out beyond what is listed above — VMware has changed a few other things as well.

  • 2nd change is they are capping the amount of pooled vRAM counted per VM at 96GB.  THAT IS AWESOME !!!  So if you are contemplating the “Monster VM” scenario with 1TB of memory assigned to it, you won’t have to break the bank buying tons of licenses to support it. 
  • 3rd change is they will be implementing a 12-month running average of configured vRAM when determining vRAM entitlement compliance.  The old way (old being a relative term 🙂 ) was based on a “high-water mark” of vRAM configured across all powered-on VMs.  This means that if you temporarily spike over your license, you won’t have to worry about being dinged for it if you ever get audited. 
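To make the counting mechanics concrete, here is a rough Python sketch of how the two changes above interact. The function names, sample VM sizes, and the simplified month-by-month averaging are mine for illustration, not VMware’s actual audit logic — treat this as a back-of-the-napkin model, not the official calculation.

```python
# Sketch of vSphere 5's revised vRAM counting (illustrative only).
# Two rules from the post: each powered-on VM counts at most 96GB
# toward the pool, and compliance is judged on a trailing 12-month
# average rather than a high-water mark.

PER_VM_CAP_GB = 96  # per-VM cap on pooled vRAM counted

def counted_vram_gb(configured_vram_gb):
    """vRAM a single powered-on VM contributes to the pool (capped at 96GB)."""
    return min(configured_vram_gb, PER_VM_CAP_GB)

def pool_usage_gb(powered_on_vms_gb):
    """Total pooled vRAM counted across all powered-on VMs."""
    return sum(counted_vram_gb(v) for v in powered_on_vms_gb)

def twelve_month_average(monthly_usage_gb):
    """Trailing 12-month average of pooled usage -- a short spike in one
    month no longer blows compliance the way a high-water mark would."""
    window = monthly_usage_gb[-12:]
    return sum(window) / len(window)

# Example: a 1TB "Monster VM" counts as only 96GB toward the pool.
vms = [1024, 48, 32, 16]      # configured vRAM per powered-on VM, in GB
print(pool_usage_gb(vms))     # 96 + 48 + 32 + 16 = 192
```

The takeaway: under the old high-water-mark rule the 1TB VM alone would have demanded licenses covering 1024GB of vRAM; under the revised rules it contributes a flat 96GB, and a one-month spike gets diluted across the 12-month average.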

I should also mention (just in case you were not aware) that vSphere Desktop Edition doesn’t have these vRAM entitlements to worry about.  Just make sure you buy those bundles if/when you roll out VDI. 


When in doubt, ALWAYS read the VMware vSphere 5.0 Licensing, Pricing and Packaging whitepaper.  It is the end-all, be-all for answering all of your questions or double-checking my work 🙂

Finally, if you are still worried about these new changes based on what you feel is your current situation, I would HIGHLY recommend running Alan’s script against your vCenter instance.  It produces a really nice HTML file that shows where you sit today vs the potential move to vSphere 5’s new licensing model.

So, mark this day down in your calendar !!  I was right 🙂

@vTexan


  1. Mike Stanley (@mikestanley) on August 5, 2011 - 11:24 pm

    You forgot the most important change for geeks who want to run test labs at home. Bumping the free version of ESXi from 8GB vRAM allotment to a 32GB physical limit means I’ll be able to keep the free ESXi in my home lab.
