Sunday, May 15, 2011

Moving VMware vCenter from one ESXi host to another when using the Nexus 1000v or Distributed Virtual Switches (DVS)

I’ve long had a love-hate relationship with VMware’s distributed virtual switches ever since they were made available.  Those who have come across one of my posts from last year:

UCS & vCenter’s dvSwitches: One issue leads to another
http://terenceluk.blogspot.com/2010/07/ucs-vcenters-dvswitches-one-issue-leads.html

… would know that I had quite a bad experience with them during one of my deployments.  Just a month ago, during a VMware and Cisco UCS deployment, I had to move vCenter from a Cisco UCS blade that was going to be removed from the chassis over to one of the blades that was staying.  The challenge I faced was that my practice lead had already deployed the Nexus 1000v on all of the other hosts with the VEMs installed, while the blade running vCenter only had a regular vSwitch and kept vCenter on local storage.  The best option would have been to push the VEM over to this temporary host, flip the physical NICs over to the Nexus 1000v and then copy the virtual machine over.  However, I didn’t really think this through at the beginning and had already copied the virtual machine over to the shared storage, ready to mount and bring up.  Given the tight timeline for my list of tasks, I didn’t want to go back, push the VEM and start the copy again.  In hindsight, I probably could have edited the VMX file to manually enter a Nexus 1000v port group or, if additional NICs had been available, created a regular vSwitch, but I only had a few minutes to get this up so I had to think fast.  While the following workaround isn’t exactly the best solution had I thought it through, it does serve as a way of getting around this in situations such as mine, or if there are other reasons why you can’t push the VEM out.
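
For what it’s worth, the VMX-editing alternative I mention above would have looked roughly like the following.  This is a sketch from memory rather than from that environment, and every value shown (port group ID, switch UUID, port and connection IDs) is a placeholder; on a real dvSwitch or Nexus 1000v VM these entries get written by vCenter when the NIC is assigned.  On a standard vSwitch the NIC line in the .vmx is just a network name:

    ethernet0.networkName = "VM Network"

whereas on a Nexus 1000v / dvSwitch the same NIC instead carries dvPort entries along the lines of:

    ethernet0.dvs.switchId = "aa bb cc dd ee ff 00 11-22 33 44 55 66 77 88 99"
    ethernet0.dvs.portgroupId = "dvportgroup-101"
    ethernet0.dvs.portId = "16"
    ethernet0.dvs.connectionId = "123456789"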

Problem

You’ve successfully copied vCenter from a server’s local storage over to shared storage and registered the virtual machine on the new server, which has the Nexus 1000v VEM installed and no regular vSwitches.  However, as soon as you try to configure the network interface’s port group, you find that you cannot select any of the Nexus 1000v port groups because your vCenter is down.  Distributed switch port group assignments are brokered by vCenter, so a host you connect to directly with the vSphere Client only offers the port groups on its standard vSwitches, and this host has none.

Solution

What I ended up doing was renaming the newly registered copy of vCenter with a “-1” at the end so that there wouldn’t be a name conflict with the original.

Next, I powered up the old vCenter on the old host.

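Since vCenter itself is the VM being powered on, this has to be done while connected directly to the ESXi host.  For anyone who would rather script it than click through the vSphere Client, a minimal pyVmomi sketch of the same action would look something like this; the host name, credentials and VM name are placeholders, not values from this deployment:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect straight to the ESXi host - vCenter is down, so it can't broker this
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="esx-old.example.local", user="root",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the original vCenter VM in the host's inventory and power it on
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    old_vcenter = next(vm for vm in view.view if vm.name == "vCenter")
    old_vcenter.PowerOnVM_Task()
    Disconnect(si)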

Now that the old vCenter was up, I could choose the port groups provided by the Nexus 1000v for the copy of vCenter that was registered on the new host and shared storage.

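Once vCenter is reachable, this step can also be scripted.  The sketch below is a rough pyVmomi equivalent of what the GUI is doing here, reassigning the copied VM’s first NIC to a dvSwitch/Nexus 1000v port group; the vCenter address, credentials, VM name (“vCenter-1”) and port group name (“N1KV-VM-Data”) are assumptions for illustration only:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.local", user="administrator",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        # Return the first inventory object of the given type with this name
        view = content.viewManager.CreateContainerView(content.rootFolder,
                                                       [vimtype], True)
        return next(obj for obj in view.view if obj.name == name)

    vm = find_by_name(vim.VirtualMachine, "vCenter-1")                      # the copied VM
    pg = find_by_name(vim.dvs.DistributedVirtualPortgroup, "N1KV-VM-Data")  # the N1KV port group

    # Point the VM's first vNIC at the dvSwitch port group instead of a standard vSwitch
    nic = next(dev for dev in vm.config.hardware.device
               if isinstance(dev, vim.vm.device.VirtualEthernetCard))
    nic.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo(
        port=vim.dvs.PortConnection(
            portgroupKey=pg.key,
            switchUuid=pg.config.distributedVirtualSwitch.uuid))

    spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)])
    vm.ReconfigVM_Task(spec=spec)
    Disconnect(si)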

Now that we have the right Nexus 1000v port group selected, we can proceed with the following steps (a scripted equivalent is sketched after the list):

1. Connect to the ESXi host that has the old vCenter registered.

2. Shut down the old vCenter.

3. Remove the old vCenter from inventory.

4. Connect directly to the new host that has the new vCenter on shared storage.

5. Rename the vCenter logical name to remove the “-1”.

6. Power up the new vCenter.

7. Connect to the new vCenter via the vSphere Client.

8. Rename the logical name once again, as it will have a “(1)” appended to it due to the name conflict.

9. Remove the orphaned entry of the old vCenter.

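For anyone who prefers scripting over clicking, here is a rough pyVmomi sketch of steps 1 through 6 above (the final rename and orphan cleanup in steps 7 to 9 happen once the new vCenter is back up).  The host names, credentials and VM names are placeholders, and the sketch assumes VMware Tools is running in the old vCenter so it can be shut down gracefully:

    import ssl
    import time
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def connect(host):
        # Connect directly to an ESXi host - vCenter is the VM being cut over
        ctx = ssl._create_unverified_context()
        return SmartConnect(host=host, user="root", pwd="password", sslContext=ctx)

    def find_vm(si, name):
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder,
                                                       [vim.VirtualMachine], True)
        return next(vm for vm in view.view if vm.name == name)

    # Steps 1-3: shut down the old vCenter and drop it from the old host's inventory
    old_host = connect("esx-old.example.local")
    old_vc = find_vm(old_host, "vCenter")
    old_vc.ShutdownGuest()          # graceful shutdown via VMware Tools
    while old_vc.runtime.powerState != vim.VirtualMachine.PowerState.poweredOff:
        time.sleep(5)
    old_vc.UnregisterVM()
    Disconnect(old_host)

    # Steps 4-6: on the new host, drop the "-1" suffix and power the copy on
    new_host = connect("esx-new.example.local")
    new_vc = find_vm(new_host, "vCenter-1")
    new_vc.Rename_Task(newName="vCenter")
    new_vc.PowerOnVM_Task()
    Disconnect(new_host)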

… and we’re done.  Again, this is probably not the best solution, nor is it the only way to accomplish the goal, but if you’re pressed for time and need to get vCenter up immediately, it’s a viable workaround.
