We continue our saga. Part 2 ended with configuring our cluster, and now we move on to shared storage and vMotion.
If you looked at the previous topology of my lab, you will notice that there was no interface configured for shared storage. That was a small oversight on my part, which I corrected by adding an additional NIC. By the way, that is why I love working with virtual machines as a lab - hardware does not cost anything!
I added a fourth NIC to each ESX host, with a VMkernel port connected to the 1.1.2.x network, named NFS.
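For reference, the same VMkernel port can also be created from the service console instead of the VI client. A rough sketch of what that looks like - the vSwitch name, the uplink (vmnic3) and the IP address below are placeholders, not necessarily my lab values:

# create a vSwitch for NFS traffic and attach the new uplink
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic3 vSwitch3
# add the NFS port group and a VMkernel interface on the 1.1.2.x network
esxcfg-vswitch -A NFS vSwitch3
esxcfg-vmknic -a -i 1.1.2.11 -n 255.255.255.0 NFS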
So after adding a new NIC to each of my ESX hosts, the topology looks like this:
We will now add a shared NFS volume to each server. I know that NFS is not the most popular choice for shared storage out there, but I have to say that I am extremely pleased with its performance, and in my production environment the ease of use, faster backup times and simpler administration have made NFS the de facto choice for all our ESX deployments.
The NFS share is hosted on an Openfiler server (well, I am exaggerating a bit - it is actually a desktop with a large disk). It is extremely stable, as you can see from the screenshot below.
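On the Openfiler side the share is just a standard NFS export. Behind the web UI, the export entry looks something like the line below - the path and network are placeholders for my setup, and ESX needs root access to the share, which is why no_root_squash is there:

/mnt/vg0/nfs/esx 1.1.2.0/255.255.255.0(rw,sync,no_root_squash)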
"There is more than one way to skin a cat" - so go the saying, and there is more than one way to add an NFS volume.
We can do it through the VI client.
Or we can do it from the command-line on the ESX host.
esxcfg-nas -a <volume name> -o <hostname/ip> -s <share name>
Or, in my case:
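The original screenshot showed the exact command I ran; it follows the same form as above - the datastore label, Openfiler address and share path here are placeholders, not my actual values:

esxcfg-nas -a NFS01 -o 1.1.2.250 -s /mnt/vg0/nfs/esx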
And now both servers see the same shared storage.
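A quick way to confirm the mount on each host is to list the NFS datastores from the service console:

# list all NFS datastores known to this host
esxcfg-nas -l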
And this is a diagram of my environment:
I then fired up a Windows Server machine to test vMotion.
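Before kicking off the migration, it is worth checking that the VMkernel interfaces on the two hosts can actually reach each other, since vMotion traffic goes over them. From the service console of one host - the address below stands in for the other host's vMotion VMkernel IP in your own lab:

# ping the other host's VMkernel interface over the VMkernel network stack
vmkping 1.1.1.12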
Next up on the menu: Fault Tolerance.