Reducing the amount of RAM for vCenter Orchestrator

***** Disclaimer *****
This is not recommended by VMware, and I am fairly sure it is not supported either, so if you make this change, do so at your own risk.

The vCenter Orchestrator Installation and Configuration Guide states the following hardware requirements.

Hardware requirements

A few pages later this statement is also made.


I actually find this quite strange, because Orchestrator is installed by default on each and every instance of vCenter Server. It can also be installed separately from the following location in the vCenter package.


It can be removed from the vCenter Server after the default installation, if you like.


But in a lab, putting up another VM just to run Orchestrator is a waste of resources. One of the problems you will run into, though, is memory.

Running a vCenter Server can demand a lot of resources, as per the ESX and vCenter Server Installation Guide:

Minimum Requirements:

vCenter Minimum

Medium Size:

vCenter Medium

And all of this assumes you are not running the database server locally on the vCenter Server.

But what happens when you start up vCenter Orchestrator?

Here is a screenshot of my vCenter Server (Database is located on a separate server) before I started Orchestrator, utilizing just under 50% of the RAM.

Before Orchestrator

Starting Orchestrator brought the RAM up to ~75%

Started Orchestrator

But what I found worrisome was actually the amount of committed memory (in KB).

Resource Monitor

Where did this come from? That was actually pretty simple to find. The command that started the Orchestrator process is shown below.

Command path

Those numbers looked pretty similar. But what are -Xms and -Xmx? Google led me here:

-Xms initial java heap size
-Xmx maximum java heap size

For Orchestrator these were defined as 2048m (2GB) for each.

As per the Enterprise Java Applications on VMware - Best Practices Guide, the right amount of RAM for a VM running Java applications is sized as follows:

VM Memory (needed) = guest OS memory + JVM Memory,
JVM Memory = JVM Max Heap (-Xmx value) + Perm Gen (-XX:MaxPermSize) + NumberOfConcurrentThreads * (-Xss)

In vOrchestrator's case :

VM Memory (needed) = 4GB (guest OS) + 2GB (-Xmx) + 256MB (MaxPermSize) + NumberOfConcurrentThreads × 256KB (-Xss) ≈ 6.25GB

Which is a bit steep - IMHO.
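The arithmetic above can be sanity-checked with a quick sketch of the sizing formula. This is just an illustration (the thread count is a hypothetical value; in vCO's case the per-thread stack contribution is negligible compared to the heap):

```python
# Sketch of the JVM VM-sizing formula from the Best Practices Guide.
# All arguments in MB except the per-thread stack size (-Xss), in KB.
def vm_memory_mb(guest_os_mb, xmx_mb, max_perm_mb, threads=0, xss_kb=256):
    """VM memory = guest OS memory + (max heap + perm gen + threads * stack)."""
    jvm_mb = xmx_mb + max_perm_mb + threads * xss_kb / 1024.0
    return guest_os_mb + jvm_mb

# vCO defaults: 4GB guest OS, 2GB heap (-Xmx), 256MB MaxPermSize
print(vm_memory_mb(4096, 2048, 256) / 1024)  # 6.25 (GB)
```

Halving the heap to 1024MB brings the total down to 5.25GB by the same formula, which is exactly the 1GB saving shown later in this post.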

The heap size is set by default to 2GB. Is this necessary? For a fully production environment - perhaps, for a small lab - I doubt it.

And where was this setting applied? The configuration is located by default here:

C:\Program Files\VMware\Infrastructure\Orchestrator\app-server\bin\wrapper.conf

To change the amount of RAM for the process:

Stop Orchestrator, edit the above file, and make the following changes:

# Initial Java Heap Size (in MB)
wrapper.java.initmemory=1024    (changed from 2048)

# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=1024    (changed from 2048)

Restart Orchestrator.
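If you find yourself making this change often, the edit can be scripted. Here is a minimal sketch, assuming the default key names shown above (back up wrapper.conf first, and stop the service before writing):

```python
import re

def set_wrapper_heap(conf_text, init_mb=1024, max_mb=1024):
    """Rewrite the Java Service Wrapper heap settings in wrapper.conf text."""
    conf_text = re.sub(r"(?m)^(wrapper\.java\.initmemory)=\d+",
                       rf"\g<1>={init_mb}", conf_text)
    conf_text = re.sub(r"(?m)^(wrapper\.java\.maxmemory)=\d+",
                       rf"\g<1>={max_mb}", conf_text)
    return conf_text

# Example usage (stop Orchestrator first, restart after):
# path = r"C:\Program Files\VMware\Infrastructure\Orchestrator\app-server\bin\wrapper.conf"
# text = open(path).read()
# open(path, "w").write(set_wrapper_heap(text))
```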

And this is what the heap size looked like afterwards:

After modification

1GB less and fully functional.

One more time: this is not supported, but it can be used in your lab environment if you are short on RAM.

During all the experiments, I got Orchestrator running with under 768MB of RAM. Just because I could.


What if you Start Over and Build it Again??

There are quite a few environments whose initial infrastructure was not designed with virtualization in mind. When they started out, virtualization was something you were asked to fit into the existing infrastructure - to adapt your current processes and procedures to this new, emerging technology.

So most things worked, some didn't, and we all learned along the way what the pitfalls were and how things had to change. But the essence here is that the infrastructure was not designed from the ground up with virtualization in mind. Some added a new storage array to deal with the IO demand. Some upgraded to 10Gb networks to deal with the network load. Some added 3rd-party tools to deal with backup, reporting, security, or compliance.

Some datacenters are built from patches on top of patches on top of patches.
And eventually this will bite you in the butt, BIG TIME!!

This is why it is essential to regroup and look at things from a different perspective, with a fresh mindset - this opens the possibility of a better solution in the long term.

I gathered several members of our IT department and started a "think-tank". This group includes people from our storage, network, helpdesk, and infrastructure groups. The main purpose is to provoke thought and discussion:

If we were to start fresh - how would we envision the infrastructure? If we could wish and get whatever we want - what would our environment look like?

I would like to share with you our journey (we are at the beginning) - because I think it would be beneficial for anyone to take a step back - and to look at the environment from a different view.

Several prerequisites were established for the first stage of these discussions:

  1. We start with a completely clean slate. No vendor lock-in, for anything.
  2. $$$$ are irrelevant - it can be as cheap or expensive as we want.
  3. Once we have the vision we would like to go with, we adapt and fit it into our corporate policies, politics, and budget.

This does not mean we will be able to implement all of our wishes, this of course has to be aligned with the business, its needs and its resources.

We started to identify which components are part of the infrastructure, and below is the initial list we identified (I am sure it will grow during the discussions).


Let's go through each area and the questions we would like to address in it.

Network
  • What do we use for the core infrastructure? Vendor? Technology?
  • What kind of bandwidth are we going to be designing for?
  • How do we incorporate BCP/DR into the network design? (Replication, VLAN design, etc.)
  • Planning for IO convergence. Technologies - options - how to manage all this IO?

Storage
  • What kind of disks will we be using? How do we incorporate Tiers into the storage?
  • How do we incorporate BCP/DR into the Storage design? (Replication, LUNS, etc.)
  • Backup/Restore. Which tools? What method?

Compute
  • Which servers will we be using? Rack mounts / blades?
  • Which vendor(s)?
  • Which hypervisor fulfills our requirements?
  • Is there a place for more than one hypervisor?
  • Hypervisor provisioning?

Operations
  • How is a VM installed? What are the criteria for new VMs? Importing? Staging?
  • Reporting and Capacity - providing the proper insight to all levels in the company.
  • How to maximize security for all layers?
  • SLA - definition of what is supported, and how?

End-User Services
  • How do we present the infrastructure to the end user without exposing too much? Provisioning? Chargeback / Showback?

Facilities
  • Cooling / Racks / Management / Alerting / Power
  • Providing redundancy on-site and off-site
  • 3rd party facilities - including cloud (and its implications)

As I noted above - this is a partial list - and I am sure it will grow.

I actually look forward to working with my team on these discussions - it will be a good learning experience for us all and will help us decide which areas we should focus on in the near future.

If you have any comments on the components or on the process - please feel free to add them below.


Where Does it All Come From?

Today I received an email with something that I found very moving - and I was also able to project onto the subject of virtualization. Here is the original text (slightly altered).

The Scope of Gratitude

Someone I know described a self-improvement group in which he participated. In order to improve their sense of gratitude, everyone in the group was to select one thing that they do frequently - and then think for 10 minutes about its ramifications.

My friend drank one cup of coffee every morning, and he chose this cup of coffee as his subject. He felt it would be easier to work on the assignment if he wrote his thoughts on paper. To his surprise, the 10 minutes quickly turned into 35. He wrote about how the coffee beans grew in Brazil. Someone planted the trees and took care of them until the coffee reached maturity. Then workers picked the beans from the trees. The beans were roasted and ground, and packed for shipping. He described all the work involved in the shipping industry which allowed the coffee to reach the United States. This alone required hundreds of people. Finally, the coffee arrived at the port in Haifa, from where it was taken to his grocery store in Jerusalem.

He wrote about the gas range that boiled the water, and the match he used. (And how much easier it is to use a match rather than have to rub two sticks together!) He wrote about how the gas reached his home and what was necessary to build his stove. He wrote about the water kettle that whistled to let him know that the water had boiled. The milk he added required the work of many people from the time it left the cow until it reached his coffee cup.

At the end of 35 minutes, he saw he had not even begun to write about the actual cup, saucer, or teaspoon nor the table he placed it on, or the chair he sat on!!

Through this exercise, he became aware of so many things he'd been taking for granted.

Would you like to have a similar experience? Try it today: Pick something that you enjoy doing, and write as much as you can about what there is to appreciate.

So how is this connected to virtualization?

Someone requests a virtual machine, and with today's technology it can be deployed within a matter of hours, if not minutes. But do people actually think about what goes into this magical ability to have their request fulfilled?

  • The ability to request something without even having to speak with someone - someone who is perhaps on the other side of the globe.
  • That the process can be almost instantaneous - not so many years ago, in order to communicate, you needed to send a letter by sea, which took months.
  • With the click of a button we can deploy a server, install all the software, and configure the application and its settings. This used to be a tedious task no more than 4 years ago.
  • The underlying platform provides the storage, the network, and the resilience, without your even having to worry about "all that jazz".
  • The vendors who have provided the software and the technology that allow us to provide this service.
  • Someone spent a large number of hours designing the environment on which this all magically works.
  • Many people spend countless hours maintaining this environment - to keep it functional and working properly.

I could go on and on for a lot longer than 35 minutes.

We have a whole lot of things to be thankful for. Every now and again it is good to look back and reflect on what things used to be like, how they are now and how far we have come.

That makes you appreciate it all even more.

Man I love virtualization!!


Israel VMUG Meeting

For those of you in Israel (Yes I know you are all dying to be here…), next week
the local Israel VMUG meeting will be held.

When: Tuesday, June 14, 2011 - 08:30 - 13:30
Where: Habayit Hayarok (The Green Villa), Tel Aviv
How to attend: Register here 

The Agenda:

08:30-09:00 Welcome & Registration
09:00-09:10 Opening by VMware Israel Country Manager
09:10-09:55 VMware vCloud Director – Technical Overview
09:55-10:40 Backup & DR for Virtual Environments
10:40-11:25 vCenter Operations – Real Time Performance Management (Including Demo)
11:25-11:50 Coffee break
11:50-12:50 End User Computing: View 4.6 (and iPad client) ThinApp Horizon AppManager
12:50-13:30 VMware vShield – Next Generation Security Architecture
13:30 Lunch

Hope to see you there!!!

My VMworld Session Not Accepted

Unfortunately the session submitted by Forbes Guthrie, Scott Lowe, Tom Howarth and myself was not accepted.

Thank you for your interest in speaking at VMworld 2011. We received a record number of submissions this year and were only able to accept ~15%.  Unfortunately, we are not able to accept your session proposal, but we greatly appreciate and value the time and effort you took to submit a session proposal, and we hope that you will participate in the VMworld 2012 Call for Papers.

The session catalog is now live - and it seems there are a great number of amazing sessions that will be presented at VMworld 2011.

Hopefully next time.


NetApp Virtual Storage Console 2.1 released

NetApp has quietly released an updated version of their Virtual Storage Console Software (you will need a NetApp Now account to access).

Here is the announcement:

The Virtual Storage Console software is a single vCenter Server plug-in that provides end-to-end virtual machine lifecycle management for VMware environments running NetApp storage. The plug-in provides the following capabilities:

  • Storage configuration and monitoring using the Monitoring and Host Configuration capability (previously called the Virtual Storage Console capability)
  • Datastore provisioning and virtual machine cloning using the Provisioning and Cloning capability
  • Backup and recovery of virtual machines and datastores using Backup and Recovery capability

Note: Use of various capabilities in VSC requires the purchase of one or more NetApp software licenses. For more information on required software licenses, see the NetApp Virtual Storage Console 2.1 for VMware vSphere Installation and Administration Guide (HTML, PDF).

New Features:

Virtual Storage Console 2.1 includes the following enhancements:

  • The Virtual Storage Console capability is renamed Monitoring and Host Configuration to better describe what it is used for.
  • Monitoring and Host Configuration offers faster discovery, and the discovery now runs as a background task.
  • An updated version of the mbralign program that corrects problems with the mbralign version in Virtual Storage Console 2.0.1.
  • Monitoring and Host Configuration panels display only the resources associated with the selection in the vCenter Client Inventory panel.
  • Support for VMware View 4.6 in the Provisioning and Cloning capability.
  • Support for saving BIOS settings when cloning from template in the Provisioning and Cloning capability.
  • Enabled sense key codes and added domain user support in the Provisioning and Cloning capability.
  • Bug fixes for all VSC capabilities; see the Release Notes (HTML, PDF) for details.

I would like to thank Vaughn Stewart for the heads-up on the new release. If you would like more in-depth details of the release you can find it on his blog.