2010-01-06

Travelling Down 5th Ave. - at 180 KMH

Today I got an urgent call from an administrator saying that one of his machines had started to crawl. "It was working fine yesterday!!!"

So I pulled out my VI client.

The first thing to look at is what resources are allocated to the VM.

Lo and behold - a Windows 2003 64-bit server with 8 GB RAM and 4 vCPUs.

The machine was not moving, not... at... all.

On to the ESX host. I fired up esxtop and saw %WAIT times for this specific VM that were 14x higher than for any of the others.

The machine was using 20% of one CPU - only 20%.
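For those who have not used it this way: esxtop can also dump its counters in batch mode (esxtop -b) as CSV, which makes spotting an outlier like this easy to script. Here is a minimal sketch of that idea - the column names and sample rows are made up for illustration, not real esxtop output:

```python
# Sketch: finding the %WAIT outlier in a CSV snapshot of per-VM stats.
# Assumption: the CSV below is a simplified stand-in for "esxtop -b" output;
# real esxtop batch CSV has different (much longer) column names.
import csv
import io

sample = """\
vm,pct_used,pct_wait
web01,55.0,12.0
db01,60.0,18.0
crawler,20.0,210.0
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Pick the VM with the highest %WAIT and compare it to the average of the rest.
worst = max(rows, key=lambda r: float(r["pct_wait"]))
avg_others = sum(float(r["pct_wait"]) for r in rows if r is not worst) / (len(rows) - 1)

print(worst["vm"], float(worst["pct_wait"]) / avg_others)  # crawler, 14x the others
```

With the sample numbers above, the "crawler" VM sits at 14x the average of its neighbours - exactly the kind of ratio that jumped out at me in esxtop.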

Of course, powering down the machine and removing the excess vCPUs got the machine back up and running like a jackrabbit in no time at all.

What I did want to share with you all is the email I sent to the admin after receiving the two questions below:

1. It is not logical and not true that more CPUs will make a VM work slower.

2. Why did the same computer work just fine yesterday with the same resources, while today it was crawling to an extreme?

My answer:

1. As always, the answer is: it depends.

Just as a matter of methodology:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005362 – What to do when you suffer from bad performance on a VM

Opinions in the VMware world about multiple vCPUs:

Multiple vCPU's (SMP); convincing the business (thanks, Edward!): http://communities.vmware.com/message/1003293#1003293

Absolutely no performance increase but there could be a performance decrease. Basically that extra vCPU is sitting idle and just using up resources.

Even with co-scheduling, there may be a performance hit. In essence, unless the app is seriously multithreaded/multi-programmed, such as Exchange, SQL, or Terminal Services, there is no benefit to having more than one vCPU, and often a penalty.

Performance Tuning Best Practices for ESX Server 3 - Page 3 http://www.vmware.com/pdf/vi_performance_tuning.pdf

Use as few virtual CPUs (VCPUs) as possible. Do not use virtual SMP if your application is single threaded and does not benefit from the additional VCPUs, for example.

Having virtual machines configured with virtual CPUs that are not used still imposes resource requirements on the ESX Server. In some guest operating systems, the unused virtual CPU still consumes timer interrupts and executes the idle loop of the guest operating system which translates to real CPU consumption from the point of view of the ESX Server. See “Related Publications” on page 22, KB articles 1077 and 1730.

In ESX we try to co-schedule the multiple VCPUs of an SMP virtual machine. That is, we try to run them together in parallel as much as possible. Having unused VCPUs imposes scheduling constraints on the VCPU that is actually being used and can degrade its performance.

2. This could very well be. Yesterday all the other VMs on ESX1 were not busy at all, and this morning they were. Since the 4-vCPU VM had to wait in the queue for a longer period to get its CPU time, the machine started to crawl to an extreme.
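To make the co-scheduling point concrete, here is a toy model of the effect. It assumes strict co-scheduling - the VM can only run in a time slot where all of its vCPUs can get a physical core at once - which is a simplification of how ESX actually schedules SMP VMs, and the per-slot core availability numbers are invented:

```python
# Toy model: a VM with n vCPUs can only be scheduled in a time slot
# where at least n physical cores are simultaneously free (strict
# co-scheduling, a simplified version of the ESX SMP scheduler).
def slots_runnable(free_cores_per_slot, vcpus):
    """Count the time slots in which the VM could actually run."""
    return sum(1 for free in free_cores_per_slot if free >= vcpus)

# Made-up host load: free cores in each of 8 scheduler time slots.
quiet_host = [4, 4, 3, 4, 4, 4, 3, 4]   # yesterday: hardly any contention
busy_host  = [1, 2, 0, 1, 3, 1, 4, 2]   # today: 20 other VMs fighting for CPU

for vcpus in (1, 4):
    print(vcpus, "vCPUs:",
          slots_runnable(quiet_host, vcpus), "slots on the quiet host,",
          slots_runnable(busy_host, vcpus), "slots on the busy host")
```

In this toy run, the 1-vCPU VM still gets scheduled in 7 of 8 slots on the busy host, while the 4-vCPU VM drops from 6 runnable slots to just 1 - the same machine, the same resources, crawling only because its neighbours got busy.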

True, the product allows you to add 4 vCPUs to a VM - and I am thankful for that. But just because you can, does that mean you should?

To make a small analogy - your car (and no Mustang jokes this time) can drive at 180 KMH, and if you are really lucky, faster - but the question is: should you? Travelling at such high speeds has its risks. If you mitigate them - put yourself on a closed racetrack, with no other cars to surprise you and good conditions - then why not, travel at the speed of light if you would like. But try going down 5th Avenue in Manhattan at 180 KMH and see how far you get.

The same goes for multiple vCPUs - run your OS with 4 vCPUs if you have optimal conditions on your ESX host, with nothing to "surprise you" and contend for CPU time. It is all a question of planning.
Running a 4-vCPU machine on an ESX host that already has 20 other VMs on it is like trying to go down 5th Avenue at 180.

I have a dream, my friends... that one day people will learn that your VM does not always need all those vCPUs.


Till the next multiple vCPU story.