Automating VMware Cloud Director deployment on Azure

Overview

This story is about automating the deployment of VMware Cloud Director on Azure. It was done purely as an exercise, but there are quite a few reusable components here.

The setup lives in GitHub and the overall orchestration is done by GitHub Actions. Everything is in code: each time there is a new push, the previous setup is first deleted and then re-created. Everything, that is, except the binaries and certificates, which are stored in Azure blob storage and fetched from there using keys stored in GitHub secrets.

The deployment consists of the following components:

  • VM running VMware Cloud Director
  • Database in Azure Postgres PaaS
  • Application gateway for portal access "load balancing"
  • DNS in AWS Route 53
Yes, the title says Azure, but there is also a bit of AWS included in the form of public DNS hosting.

The details include a bit of reverse engineering around undocumented API calls and other interesting bits.

The actual code and pipeline are here: azure-vcd
There is also another pipeline that applies branding on top with the same logic: update the code and commit, and the theme is automatically updated: vcd-config-demo

Workflow steps in detail

The main magic in the whole flow is in the GitHub Actions workflow, which has the following steps, with short descriptions of a few of them further below. All passwords and URLs are stored in GitHub secrets for privacy.
  • Delete resource group if exists 
  • Create resource group 
  • Create network 
  • Create VM 
  • Create database 
  • Configure VM - Install packages 
  • Configure VM - Config DB 
  • Configure VM - Update OS 
  • Configure VM - Install vCD 
  • Configure VM - Reboot 
  • Create application gateway 
  • Configure VM - Set public URL 
  • Configure DNS

Create network 

Creates one /24 vNET and three /29 subnets in it: one each for the portal, the console and the application gateway. Public IPs are also provisioned, one for portal access (assigned to the gateway) and one for console access (assigned to the VM). HTTPS and SSH are allowed in on the console network security group.
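
A rough sketch of this step with the Azure CLI; the resource names, address prefixes and the $RG resource group variable are illustrative, not the exact values used in the pipeline:

# one /24 vNET with three /29 subnets: portal, console and application gateway
az network vnet create -g "$RG" -n vcd-vnet --address-prefix 10.0.0.0/24
az network vnet subnet create -g "$RG" --vnet-name vcd-vnet -n portal --address-prefixes 10.0.0.0/29
az network vnet subnet create -g "$RG" --vnet-name vcd-vnet -n console --address-prefixes 10.0.0.8/29
az network vnet subnet create -g "$RG" --vnet-name vcd-vnet -n appgw --address-prefixes 10.0.0.16/29
# (the pipeline also provisions the two VM NICs at this point; omitted from this sketch)

# public IPs: one for the application gateway (portal) and one for the VM (console)
az network public-ip create -g "$RG" -n vcd-portal-pip --sku Standard
az network public-ip create -g "$RG" -n vcd-console-pip --sku Standard

# console NSG allowing HTTPS and SSH in
az network nsg create -g "$RG" -n vcd-console-nsg
az network nsg rule create -g "$RG" --nsg-name vcd-console-nsg -n allow-https-ssh \
  --priority 100 --direction Inbound --access Allow --protocol Tcp \
  --destination-port-ranges 443 22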

Create VM 

Nothing special: just an SSH key assignment and the two NICs provisioned in the previous step attached. The image is OpenLogic:CentOS:7.7:latest.
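
Roughly, with the Azure CLI (the NIC names are placeholders for the NICs created in the network step):

# attach the portal and console NICs and log in with an SSH key only
az vm create -g "$RG" -n vcd-vm \
  --image OpenLogic:CentOS:7.7:latest \
  --nics vcd-portal-nic vcd-console-nic \
  --admin-username vcdadmin \
  --ssh-key-values ~/.ssh/id_rsa.pub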

Create database

Creates a PostgreSQL database, version 10, and connects it to the portal subnet for VM access.
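
Roughly what the step does (server name, SKU and credentials are illustrative):

# Azure Database for PostgreSQL, version 10
az postgres server create -g "$RG" -n vcd-pgsql \
  --sku-name GP_Gen5_2 --version 10 \
  --admin-user vcdadmin --admin-password "$DB_ADMIN_PASSWORD"

# allow the VM to reach the server from the portal subnet
# (the subnet needs the Microsoft.Sql service endpoint enabled)
az postgres server vnet-rule create -g "$RG" -n vcd-db-rule \
  --server-name vcd-pgsql --vnet-name vcd-vnet --subnet portal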

Configure VM - Install packages

Installs the prerequisite packages using yum:
libICE libSM libXdmcp libXext libXi libXt libXtst redhat-lsb postgresql
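
Steps like this run on the VM through az vm run-command; a minimal sketch:

az vm run-command invoke -g "$RG" -n vcd-vm \
  --command-id RunShellScript \
  --scripts "yum install -y libICE libSM libXdmcp libXext libXi libXt libXtst redhat-lsb postgresql"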

Configure VM - Config DB

Creates a dedicated user and a database named vcloud on the database server, and makes the user the owner of the database.
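
A sketch of the SQL side, run from the VM against the PaaS server (host name, admin user and passwords are placeholders):

PGPASSWORD="$DB_ADMIN_PASSWORD" psql \
  "host=vcd-pgsql.postgres.database.azure.com dbname=postgres user=vcdadmin@vcd-pgsql sslmode=require" <<'SQL'
CREATE USER vcloud WITH PASSWORD 'changeme';
CREATE DATABASE vcloud OWNER vcloud;
SQL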

Configure VM - Install vCD

Downloads the installation media and a pre-created certificate keystore from Azure blob storage with curl, using shared access keys stored in GitHub secrets.
After this, an unattended install and configuration is run. One reason for adding a reboot to the process is to start the service afterwards, since that doesn't happen automatically.
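
A sketch of what runs on the VM; the blob URLs and SAS token variables are placeholders, and the configure flags are from memory of the vCD unattended options, so verify them against the install guide for your version:

# fetch installer and keystore from blob storage using SAS tokens kept in GitHub secrets
curl -fSL -o /tmp/vcd-installer.bin "https://<account>.blob.core.windows.net/media/vmware-vcloud-director.bin?${MEDIA_SAS}"
curl -fSL -o /tmp/certificates.ks "https://<account>.blob.core.windows.net/media/certificates.ks?${CERT_SAS}"

# install the packages, then run the unattended configuration separately
chmod +x /tmp/vcd-installer.bin
echo n | /tmp/vcd-installer.bin
/opt/vmware/vcloud-director/bin/configure -unattended \
  -ip "$PORTAL_IP" -cons "$CONSOLE_IP" \
  -dbtype postgres -dbhost "$DB_HOST" -dbname vcloud -dbuser vcloud -dbpassword "$DB_PASSWORD" \
  -k /tmp/certificates.ks -w "$KEYSTORE_PASSWORD"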

Create application gateway

Downloads the public certificate from Azure blob storage and creates an application gateway that would load balance across all IPs in the portal segment if they were active. Right now there is only one VM, so this doesn't really provide value, but it's there for the sake of architectural consistency. The key thing to note is that this example workflow only works with a valid public certificate, as application gateway load balancing fails if the backend certificate is not valid. SSL is used all the way and is terminated both at the application gateway and at the VM.
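
Roughly, with the Azure CLI (backend IP, names and SKU are illustrative; cert.pfx is the certificate downloaded from blob storage):

# SSL terminates at the gateway and the backend connection is HTTPS again,
# which is why the VM also needs a valid certificate
az network application-gateway create -g "$RG" -n vcd-appgw \
  --sku Standard_v2 --capacity 1 \
  --vnet-name vcd-vnet --subnet appgw \
  --public-ip-address vcd-portal-pip \
  --frontend-port 443 \
  --http-settings-port 443 --http-settings-protocol Https \
  --servers 10.0.0.4 \
  --cert-file cert.pfx --cert-password "$CERT_PASSWORD"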

Configure VM - Set public URL

Now this was the only tricky part in the process: for access to Cloud Director to work, it needs to be configured with the public URLs that are used in connections.
This is easily configurable in a browser when logging in with the IP address from the local network, not through the load balancer.
So it was a problem for two reasons: because there was a load balancer in front, access didn't work at all, and there is no documented way of changing the URLs during install or through the API.
But since VMware has said that everything is possible through APIs and that the HTML5 portal is essentially just an API consumer, I did a bit of digging. I deployed a jump box in the portal subnet, installed Chrome on it and logged directly into the Cloud Director portal. Then I enabled developer tools and started looking at the network traffic.
I was able to capture the API call that was made when the public URLs were configured in the portal, a super messy JSON bit, but I gave it a try with curl.
By replicating the portal call's headers and JSON, the curl call succeeded and I got back a very nicely formatted JSON with the settings. The returned JSON contains a lot more settings than just the URLs; I tried cleaning it up as much as possible, but there are quite a few other things that must be posted at the same time for the call to succeed.
This is what the minimum JSON looks like for configuring the URLs:
{
  "type" : "application/vnd.vmware.admin.generalSettings+json",
  "absoluteSessionTimeoutMinutes" : 1440,
  "consoleProxyExternalAddress" : "###CONSOLE###",
  "hostCheckDelayInSeconds" : 300,
  "hostCheckTimeoutSeconds" : 30,
  "syslogServerSettings" : {
    "syslogServerIp1" : null,
    "syslogServerIp2" : null
  },
  "restApiBaseHttpUri" : "http://###PORTAL###",
  "restApiBaseUri" : "https://###PORTAL###",
  "sessionTimeoutMinutes" : 30,
  "syncIntervalInHours" : 24,
  "tenantPortalExternalHttpAddress" : "http://###PORTAL###",
  "tenantPortalExternalAddress" : "https://###PORTAL###"
}

The ###PORTAL### and ###CONSOLE### parts are replaced with the proper FQDNs before posting. This is the human-readable version of the JSON; in the pipeline it is condensed into a one-liner so that the JSON file can be generated by the same script that makes the curl calls. That way a single file is passed in through the az vm run-command parameter and there is no need to download anything else.
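
A sketch of the curl calls the script makes; the /api/sessions login is standard vCD API usage, but the settings path and the API version below are what the captured portal traffic looked like to me, so treat them as assumptions:

# log in against the VM's local address (not through the load balancer) and grab the auth token
TOKEN=$(curl -ksi -X POST -u "administrator@system:${VCD_PASSWORD}" \
  -H "Accept: application/*+json;version=33.0" \
  "https://${VCD_IP}/api/sessions" \
  | awk -F': ' 'tolower($1)=="x-vcloud-authorization" {print $2}' | tr -d '\r')

# PUT the minimum generalSettings JSON shown above, placeholders already replaced
curl -ks -X PUT \
  -H "x-vcloud-authorization: ${TOKEN}" \
  -H "Accept: application/*+json;version=33.0" \
  -H "Content-Type: application/vnd.vmware.admin.generalSettings+json" \
  --data @general-settings.json \
  "https://${VCD_IP}/api/admin/extension/settings/general"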

Configure DNS

Basic AWS CLI calls to register the gateway and VM addresses in Route 53.
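
A minimal sketch (the zone ID, record names and IP variables are placeholders):

aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" --change-batch "{
  \"Changes\": [
    {\"Action\": \"UPSERT\", \"ResourceRecordSet\": {\"Name\": \"vcd.example.com\", \"Type\": \"A\", \"TTL\": 300,
      \"ResourceRecords\": [{\"Value\": \"${PORTAL_IP}\"}]}},
    {\"Action\": \"UPSERT\", \"ResourceRecordSet\": {\"Name\": \"vcd-console.example.com\", \"Type\": \"A\", \"TTL\": 300,
      \"ResourceRecords\": [{\"Value\": \"${CONSOLE_IP}\"}]}}
  ]
}"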

Branding

Then we finally get to the good stuff: post-deployment customisation. As mentioned in the beginning, I created a separate pipeline for the branding. It is a simple flow: the branding items, pictures and JSON, are kept in GitHub and an Actions workflow pushes them to the selected API target.
The branding flow itself is built using Ansible and also works on public GitHub Actions runners.
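
The Ansible tasks essentially wrap API calls like the following curl sketch; the /cloudapi/branding paths are how I recall the branding endpoints, so treat them as assumptions and check the API reference for your version:

# push the branding JSON (portal name, colours, theme) and the logo
curl -ks -X PUT \
  -H "Authorization: Bearer ${VCD_TOKEN}" \
  -H "Content-Type: application/json" \
  --data @branding.json \
  "https://${VCD_FQDN}/cloudapi/branding"

curl -ks -X PUT \
  -H "Authorization: Bearer ${VCD_TOKEN}" \
  -H "Content-Type: image/png" \
  --data-binary @logo.png \
  "https://${VCD_FQDN}/cloudapi/branding/logo"
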
I also created a pipeline that builds a Docker image for running GitHub Actions runners locally. The image has Ansible built in. I'm personally running it on a Raspberry Pi (armhf), but the GitHub Actions flow that builds the image is multi-arch, so it's also available for x86.

Docker image for GitHub Actions with Ansible: github-runner
