The topic we will be dealing with today is converged networking.
Why is it necessary?

A few years ago, when I was starting out with virtualization, I began with rack-mount servers. From the start I knew that I would be using network-attached storage, and that the minimum number of network cards I would need for these ESX servers was six 1 Gb NICs: two for management and vMotion, two for virtual machine network traffic, and two more for iSCSI / NFS. It became apparent very quickly that this does not scale, for a number of reasons.
- Connecting each ESX host to two redundant switches becomes a cumbersome process, one that takes up a considerable amount of time for both the networking team and the server team: patching the ports to the correct switches, making sure that the VLANs are set on each network port, and so on.
- It became evident that using six ports for each ESX server would leave no free ports for the rest of the servers in that rack. Each patch panel has 16 ports by default, so two ESX servers per rack eat up almost all of them immediately. That means either running more than 16/24 ports to each rack, or limiting how many ESX servers I can install in each rack.
- In short, this is not an easy process.
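The port arithmetic above can be sketched in a few lines. This is a minimal illustration using the figures from my racks (six 1 Gb NICs per ESX host, 16 patch-panel ports); adjust the constants for your own environment:

```python
# Rough patch-panel port arithmetic for one rack.
# Figures are the ones quoted above; they are examples, not universal defaults.
NICS_PER_HOST = 6      # 2 mgmt/vMotion + 2 VM traffic + 2 iSCSI/NFS
PORTS_PER_PANEL = 16   # default patch-panel size in my racks

def ports_left(hosts: int, panels: int = 1) -> int:
    """Patch-panel ports remaining after cabling `hosts` ESX servers."""
    return panels * PORTS_PER_PANEL - hosts * NICS_PER_HOST

print(ports_left(2))   # two hosts use 12 of 16 ports, leaving 4
```

With two hosts there are only four ports left for everything else in the rack, which is exactly why two ESX servers per rack consume almost the whole panel.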
Unfortunately though, even solving the cabling problem does not solve all of your problems. What happens if you also need connectivity to a Fibre Channel array? That means more cables coming out of your servers and more ports being used (be they Ethernet or SAN fabric).
And all of the above is, of course, just as relevant for the storage stack.
What are the solutions out there?

If someone were to ask me who the two major players in the converged networking game are today, I would instinctively say HP and Cisco. The solutions they provide are similar in some ways, but very different in others.
I would like to stress this is my own understanding of both of these solutions - I could be mistaken in some of the details, and as always would be happy to hear your feedback if there are any errors in my description.
Cisco control the network stack. I think that this is pretty much agreed upon by almost everyone. They have also gained a large share of the converged systems market in the past two years. The continued UCS growth is something that HP cannot afford to give up on, because it is taking a good percentage of their market share.
HP have Virtual Connect and their Flex-10 technology for the HP BladeSystem, which allow you to converge both your Ethernet and storage traffic over the same network card.
Cisco have their CNA cards, which allow you to pass Ethernet and FCoE over the same network card.
HP keep intra-chassis traffic internal, so VM / vMotion traffic between blades stays within the chassis; UCS, by contrast, sends all the traffic up to the top of the rack, regardless.
It is extremely difficult to explain in such a short post which solution has more benefit, or which one is better. The answer to that question is actually very simple: the vendor whose solution is better suited to your needs has the best solution.
Summing Up

This is the last post of a series of three that I (and several other bloggers) have written as part of the Bloggers Reality contest.
We touched on topics related to HP's solutions and the products we were exposed to. Some of us are very familiar with their products; some of us did not know anything about them at all. Despite that, we all had some great articles published on each of these subjects. We all learned new things, and most of all (I think I can speak for all of the contestants) we all had a good time!
I would appreciate your comments on this post: what you thought about this bloggers' contest, what you would have liked to see more of, and of course what you would have liked to see less of.
Please remember this is a contest and your vote for this post is needed (and your comments as well).
One last note - KISS: Keep It Simple, Stupid (for all of you who did not know what that stood for).