Host Spanning not working for VMs that run on different hosts within a vApp using a vCDNI network.

By default, when a vApp is created, vCloud Director creates a new port group on the associated vDS.
From testing (and learning the hard way), it seems the first uplink listed in the vDS is always assigned as the active uplink in the vApp's port group, with load balancing set to “Route based on the originating virtual port ID”.
This of course means you cannot set up teaming/EtherChannel on the physical uplink ports, and whichever uplink is assigned needs to carry the same VLAN ID as is configured for vCDNI.
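
As a quick way to confirm this behavior, you can read the teaming policy of the vCD-created port groups straight from the vSphere API. Below is a minimal pyVmomi sketch, assuming a reachable vCenter; the host name, credentials, and the lab-only SSL bypass are placeholders, not a recommendation:

    # Minimal pyVmomi sketch: print the load-balancing policy and active
    # uplink(s) of every distributed port group (including the ones vCD
    # creates per vApp). Host name and credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only: skips certificate checks
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    for pg in view.view:
        teaming = getattr(pg.config.defaultPortConfig, "uplinkTeamingPolicy", None)
        if not teaming:
            continue  # skip port groups without a teaming policy
        # "loadbalance_srcid" is "Route based on the originating virtual port ID"
        print(pg.name, teaming.policy.value,
              teaming.uplinkPortOrder.activeUplinkPort)
    Disconnect(si)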

Debugging the problem:

In my situation the vCloud environment started with a single host and direct-attached storage, so there was only a single vDS, which held the port groups for management, vMotion, and the external networks, and to which vCDNI was also associated.

As a result, our management uplink was always selected as the active uplink for newly created vApp port groups, since it was the first uplink listed on the vDS. We did not, however, want to assign the vCDNI VLAN there and have that traffic flow over the physical management ports; physical separation is always best in my opinion.
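
The uplink ordering that drives this selection is easy to check as well; the first name in each vDS's uplink list is the one the vApp port groups latch onto. Another short sketch (same placeholder connection details as above):

    # Sketch: list each vDS and its uplink names in order. The first entry
    # is the uplink that newly created vApp port groups end up using.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in view.view:
        print(dvs.name, dvs.config.uplinkPortPolicy.uplinkPortName)
    Disconnect(si)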

Solution:

I created a separate vDS to which I migrated the management and vMotion port groups (virtual adapters), as well as another for my external networks. This can be accomplished without downtime when you have two or more uplinks associated with the vmkernel traffic.
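
For reference, creating the replacement vDS can also be scripted. A hedged sketch; the datacenter name ("vcloud-dc"), switch name ("dvs-mgmt"), and uplink names are all placeholders for whatever your environment uses:

    # Hypothetical sketch: create a replacement vDS with four named uplinks
    # (two for management, two for vMotion). All names are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    dc = next(d for d in content.rootFolder.childEntity
              if isinstance(d, vim.Datacenter) and d.name == "vcloud-dc")

    spec = vim.DistributedVirtualSwitch.CreateSpec(
        configSpec=vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
            name="dvs-mgmt",
            uplinkPortPolicy=vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
                uplinkPortName=["mgmt-up1", "mgmt-up2",
                                "vmotion-up1", "vmotion-up2"])))
    task = dc.networkFolder.CreateDVS_Task(spec)  # vim.Task; wait on it as needed
    Disconnect(si)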

On the vDS associated with vCDNI, I removed all the uplinks.
On each uplink, the associated vmnic has to be removed before you can delete the uplink from the vDS. This is accomplished as follows (a scripted equivalent is sketched after this list):
  • Select the host.
  • Select the Configuration tab.
  • Select Networking.
  • Select vSphere Distributed Switch.
  • Select Manage Physical Adapters.
  • Click Remove for the vmnic under the uplink name.
I then set up two uplinks on the vDS associated with vCDNI and assigned the same VLAN ID on the physical ports of both uplinks.
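
If you would rather script the vmnic removal, the API equivalent of the steps above is an UpdateNetworkConfig call against the host's network system. A hedged sketch; the host name ("esx01.example.com"), switch name ("dvs-vcdni"), and "vmnic0" are placeholders:

    # Sketch: detach vmnic0 from this host's proxy switch for the vCDNI
    # vDS, i.e. the scripted form of Manage Physical Adapters -> Remove.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esx01.example.com")

    net_sys = host.configManager.networkSystem
    pxy = next(p for p in net_sys.networkInfo.proxySwitch
               if p.dvsName == "dvs-vcdni")

    # Rebuild the physical NIC backing for that proxy switch without vmnic0.
    keep = [s for s in pxy.spec.backing.pnicSpec if s.pnicDevice != "vmnic0"]
    cfg = vim.host.NetworkConfig(proxySwitch=[
        vim.host.ProxySwitch.Config(
            changeOperation="edit", uuid=pxy.dvsUuid,
            spec=vim.host.ProxySwitch.Specification(
                backing=vim.dvs.HostMember.PnicBacking(pnicSpec=keep)))])
    net_sys.UpdateNetworkConfig(cfg, "modify")
    Disconnect(si)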

Migrating the management and vMotion virtual adapters (vmk0, vmk1) to the new distributed virtual switch (vDS) without downtime:

  1. In vCenter Server, select Networking.
  2. Create a new vDS in the vCloud datacenter.
  3. Set the number of uplinks needed and name them appropriately. In my case we have two uplinks each for vMotion and management, so four in total. Use the same uplink names as on the original vDS.
  4. Create new management and vMotion port groups (the names must differ from the originals) and remember to set your VLAN and load balancing/teaming policies; most importantly, set the active uplinks to the newly created uplinks. (In the upcoming steps we will assign the physical adapters to those active uplinks.)
  5. Now go to Hosts and Clusters.
  6. Select the ESXi host and select Configuration -> Networking.
  7. Select vSphere Distributed Switch.
  8. You will now see both vDSs.
  9. A simplified procedure for steps 10–13 is in the UPDATE below; the original steps still work as well.
  10. On the original vDS, select Manage Physical Adapters.
  11. Remove the physical adapter from the second management and vMotion uplinks, keeping the active primary uplinks in place.
  12. Once removed, select Manage Physical Adapters on the new vDS.
  13. Add the removed physical adapters to the new uplinks.
  14. On the new vDS, select Manage Virtual Adapters.
  15. Click Add.
  16. Select Migrate existing virtual adapters.
  17. Select the virtual adapters (vmk0, vmk1) on the old vDS and choose the new port groups on the new vDS to associate them with during the move (a scripted equivalent is sketched after this list).
  18. After the migration completes, repeat steps 10–13 to move the remaining physical adapters from the original vDS uplinks to the new vDS uplinks.
  19. Done.
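
The virtual-adapter migration (steps 14–17) can be scripted too. A hedged sketch; the host, vDS, and port group names are placeholders, and only vmk0 is shown (repeat for vmk1):

    # Sketch: move vmk0 onto a port group on the new vDS, the scripted
    # form of "Migrate existing virtual adapters". Names are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    def find(vimtype, name):
        """First inventory object of the given type with the given name."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        return next(o for o in view.view if o.name == name)

    host = find(vim.HostSystem, "esx01.example.com")
    new_dvs = find(vim.DistributedVirtualSwitch, "dvs-mgmt")
    pg = next(p for p in new_dvs.portgroup if p.name == "mgmt-pg-new")

    # Point the existing vmkernel NIC at a port in the new port group;
    # the IP configuration on vmk0 is left untouched, so no downtime.
    spec = vim.host.VirtualNic.Specification(
        distributedVirtualPort=vim.dvs.PortConnection(
            switchUuid=new_dvs.uuid, portgroupKey=pg.key))
    host.configManager.networkSystem.UpdateVirtualNic("vmk0", spec)
    Disconnect(si)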

UPDATE: I found a shortcut for steps 10–13 (a scripted equivalent is sketched after these bullets):
  • On the new vDS, select Manage Physical Adapters.
  • Under the uplink name, select “Click to Add NIC”.
  • Select the corresponding physical adapter on the original vDS.
  • You will be prompted to confirm moving the physical adapter from the original vDS to the new one.
  • Voilà!
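
For the scripted route, the shortcut corresponds to a single UpdateNetworkConfig call that edits both proxy switches at once: drop the vmnic from the old vDS and claim it on the new one. A hedged sketch with placeholder names ("dvs-old", "dvs-mgmt", "vmnic1"):

    # Sketch: move vmnic1 from the old vDS to the new one in one call,
    # mirroring the GUI's "move physical adapter" prompt.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esx01.example.com")

    net_sys = host.configManager.networkSystem
    proxies = {p.dvsName: p for p in net_sys.networkInfo.proxySwitch}
    old, new = proxies["dvs-old"], proxies["dvs-mgmt"]

    keep = [s for s in old.spec.backing.pnicSpec if s.pnicDevice != "vmnic1"]
    take = list(new.spec.backing.pnicSpec)
    take.append(vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic1"))  # uplink auto-assigned

    cfg = vim.host.NetworkConfig(proxySwitch=[
        vim.host.ProxySwitch.Config(         # release vmnic1 from the old vDS
            changeOperation="edit", uuid=old.dvsUuid,
            spec=vim.host.ProxySwitch.Specification(
                backing=vim.dvs.HostMember.PnicBacking(pnicSpec=keep))),
        vim.host.ProxySwitch.Config(         # claim vmnic1 on the new vDS
            changeOperation="edit", uuid=new.dvsUuid,
            spec=vim.host.ProxySwitch.Specification(
                backing=vim.dvs.HostMember.PnicBacking(pnicSpec=take)))])
    net_sys.UpdateNetworkConfig(cfg, "modify")
    Disconnect(si)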