vRealize Automation 7 Org vCD Network Selection for vCloud Director Endpoints

When leveraging vRealize Automation to provision to multi-tenant, vCD-based vCloud Air Network platforms, the business user may need the option to select which organization vDC network to attach the VMs to at request time.

This can be achieved with the VirtualMachine.NetworkN.Name custom property (where N is the vNIC number) in vRealize Automation, configured for selection within your blueprint.

Step 1 – Select the appropriate networks within the vCD reservation

Select Networks in Reservation

Step 2 – Create a property definition in the property dictionary (now located under the Administration tab)

The property definition must be named VirtualMachine.Network0.Name, which assigns the first NIC of the cloud machine to the specified network. If you want to multi-home the machine, create additional properties such as VirtualMachine.Network1.Name, and so on.
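To illustrate the naming convention, here is a minimal Python sketch (not a vRA API call; the network names are hypothetical) showing how vNIC index N maps to a property key:

```python
# Sketch only: builds the VirtualMachine.NetworkN.Name custom-property keys
# for a machine with one entry per vNIC. Network names are hypothetical.
def network_properties(networks):
    """Return the custom-property dict, one entry per vNIC index."""
    return {
        f"VirtualMachine.Network{n}.Name": name
        for n, name in enumerate(networks)
    }

# First NIC on a web network, second NIC on an app network (multi-homed):
props = network_properties(["Org-Web-Network", "Org-App-Network"])
for key, value in props.items():
    print(f"{key} = {value}")
# VirtualMachine.Network0.Name = Org-Web-Network
# VirtualMachine.Network1.Name = Org-App-Network
```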

create property definition

Set the property to “Required” so that the user must pick something from the list of available networks, and set the display advice to “Dropdown” to present the available org networks as a dropdown list.

Specify the Property Label as anything you want (this is what will be visible beside the dropdown).

Specify a label for each network and, as its value, the network name from the reservation in Step 1 (the value must exactly match the name collected from the reservation).
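Because provisioning will fail if a dropdown value does not match a reservation network name, it is worth sanity-checking the static list. A hedged sketch in plain Python, using hypothetical network names:

```python
# Sketch: verify that every dropdown value matches an org vDC network
# name collected from the reservation. All names here are hypothetical.
reservation_networks = {"Org-Web-Network", "Org-App-Network"}

dropdown_values = [
    ("Web tier", "Org-Web-Network"),
    ("App tier", "Org-App-Network"),
    ("DB tier", "Org-DB-Network"),  # not in the reservation: would fail
]

mismatches = [value for _label, value in dropdown_values
              if value not in reservation_networks]
print(mismatches)  # ['Org-DB-Network'] -> fix this entry before publishing
```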

Step 3 – Optionally create a property group – this will make it easier to assign properties to blueprints

property group

Specify that the property should be “Shown in Request” to ensure the form is presented at request time.

Step 4 – Assign the property group to the blueprint

In the Design tab, select the vCD blueprint, select the virtual machine on the canvas, open Properties, and add the new property group that you defined in Step 3.

assign property group

Now when you submit a request, you should see the new dropdown list when selecting the virtual machine within the request form.

request stage

Validate that the VM is attached to the correct network and the IP address has been allocated by vCloud Director.



Routing between nested and non-nested workloads in vCloud Air

Recently I built a complex nested lab in vCloud Air that allows me to quickly jump into a VMware environment from anywhere in the world. This is great for grabbing a few screenshots, testing something quickly, or installing the latest and greatest features.

I really wanted this environment to be long-lived and something I could upgrade and mature over time, so I wanted to make sure it wasn’t simply an isolated pod in a nested world with no access to the outside. Tomas Fojta from my team wrote a fantastic blog defining a complex nested lab in vCloud Air (I highly recommend you read it), so I followed it as a starting point: Complex Nested Lab in vCloud Air

The first thing I needed to do was plan the deployment. I’m an architect, so a little design work went into this initially – not 100 pages of architecture blueprints, but a small diagram to highlight my proposed architecture:

Lab Design

The proposed architecture had three vApps within vCloud Air, all with specific org vDC networks routed via the vCloud Edge Services Gateway (the spine, if you will). The Resource cluster has three virtual ESXi hosts, and the Edge cluster has the same. Each vmnic presented to the vESXi hosts has its own dvSwitch, as they are connected to separate vCloud Director networks. I planned to leverage vSAN for the shared storage solution, for which I had to create emulated SSDs. For that I leveraged William Lam’s post: How to Trick ESXi 5 in seeing an SSD Datastore.

The next challenge was the deployment of the NSX controllers: they have to be nested, as the NSX Manager deploys them directly into the nested infrastructure. The challenge is how to communicate externally to the management ecosystem from a nested virtual machine on a flat port group, with MAC addresses the vCloud Air infrastructure doesn’t know about. In a physical environment this is easy: you can enable promiscuous mode on the dvSwitch, which allows the virtual MACs to be presented through the virtual hosts and into vCloud Air. We cannot enable this in vCloud Air because we don’t have access to the virtual infrastructure layer.

To solve this challenge you need to deploy a nested Edge Services Gateway and set its uplink MAC address to match the MAC address assigned to the virtual ESXi host’s NIC. Verify the NIC’s MAC address in vCloud Air (vCloud Director); when you deploy the ESG, you can specify the MAC address for the uplink interface. Now traffic can get out of the nested router upstream to the vCloud Air environment, but how does return traffic know to come back into the nested environment? You need to create a static route on the vCloud Air Edge Services Gateway for the nested network, pointing back to the uplink interface of the nested ESG. Now when you deploy your nested NSX controllers, they have upstream connectivity to the NSX Manager, and the NSX Manager can apply the correct configuration to them. The following diagram highlights this in my nested environment:
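The return-path logic above can be sketched as a static-route lookup (plain Python using the standard `ipaddress` module; all prefixes and next hops here are hypothetical, not the actual lab addressing):

```python
# Sketch of the return path: the vCloud Air ESG needs a static route for the
# nested network pointing at the nested ESG's uplink IP; without it, reply
# traffic to the nested controllers has no route back in. Hypothetical IPs.
import ipaddress

# Static routes on the vCloud Air ESG: destination prefix -> next hop.
static_routes = {
    ipaddress.ip_network("192.168.110.0/24"): "10.10.10.5",  # nested mgmt net
}

def next_hop(dst):
    """Longest-prefix match over the static routes; None means no route back."""
    matches = [(net, hop) for net, hop in static_routes.items()
               if ipaddress.ip_address(dst) in net]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("192.168.110.31"))  # 10.10.10.5 -> reply reaches a controller
print(next_hop("192.168.120.7"))   # None -> no static route, traffic dropped
```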

Lab Design 1

This solution allows me to leverage all the NSX services in my nested lab for testing and demo purposes, and I can access it from anywhere in the world.
