A key component of DevSecOps is infrastructure-as-code, and if you are using Azure there are multiple ways to specify what you want.

Microsoft provides Azure PowerShell, the Azure CLI, and both Azure Resource Manager (ARM) templates and the newer Bicep templates. There are also third-party (and cross-cloud) solutions such as Terraform and Pulumi.

In the past I have leaned towards the Azure CLI, as I found ARM templates a bit cumbersome, and my previous experience with database deployments has made me favour migrations over desired state. With Bicep being promoted as a lighter-weight alternative, I thought I would compare the Microsoft options.

Having now revisited the options, I still prefer scripting, but think I will switch more to PowerShell, particularly as it makes it easier to follow the naming and tagging guidelines.

My recommendations:

  • For incremental development or changing environments, use Azure PowerShell scripts. They allow easy manipulation of parameters, and a migration/scripted approach can handle changes that a desired state/template approach cannot.
    • If you are already heavily invested in an alternative scripting system, e.g. Bash, then Azure CLI would be easier to use.
  • If you have relatively stable infrastructure, such as a preset development environment or sample/demo code, that you want to repeatedly tear down and recreate identically, then Bicep offers a nicer syntax than raw ARM templates. The deployments are viewable in the Azure portal, but templates do have some limitations compared to scripting.
  • In either case, follow the Azure Cloud Adoption Framework naming guidelines, allowing for unique global resources, as well as the associated tagging guidelines.

Example code is available on GitHub at https://github.com/sgryphon/azure-deployment-examples

Example scenario

This sample scenario installs two resources, an IoT Hub and a Digital Twins Instance, into a demo resource group.

The examples follow the recommended Cloud Adoption Framework naming guidelines, with an additional organisation / subscription identifier used in global scope names to make them unique.

Asset (Scope) | Format | Example name
Resource group (Subscription) | rg-<app or service name>-<subscription type>-<###> | rg-codefirsttwins-demo-001
IoT Hub (Global) | iot-<app name>-<org id>-<environment> | iot-codefirsttwins-0xacc5-demo
Digital Twins Instance (Global) | dt-<app name>-<org id>-<environment> | dt-codefirsttwins-0xacc5-demo

The examples also follow the Cloud Adoption Framework tagging guidelines, which gives a slightly more realistic complexity than typical examples.

Scripting solutions

Azure PowerShell

Azure PowerShell has been around for a while, with the newer Az module replacing the original AzureRm module.

To use these scripts you need to have PowerShell installed, which is available cross platform. In older versions you may need to manually uninstall AzureRm and install the newer Az.

To run the script you will need to ensure you have the required modules, connect to Azure, set the subscription context, and then run the script. As the Digital Twins service is currently in preview, the relevant module needs to be installed, and the resource provider registered, separately:

Install-Module -Name Az -Scope CurrentUser -Force
Install-Module -Name Az.DigitalTwins -Scope CurrentUser -Force
Register-AzResourceProvider -ProviderNamespace Microsoft.DigitalTwins
 
Connect-AzAccount
Set-AzContext -SubscriptionId $SubscriptionId
 
./deploy-infrastructure.ps1

The actual script file, including parameters:

deploy-infrastructure.ps1

#!/usr/bin/env pwsh

[CmdletBinding()]
param (
    [string]$AppName = 'codefirsttwins',
    [string]$OrgId = "0x$((Get-AzContext).Subscription.Id.Substring(0,4))",
    [string]$Environment = 'Dev',
    [string]$Location = 'australiaeast'
)

$ErrorActionPreference="Stop"

$SubscriptionId = (Get-AzContext).Subscription.Id
Write-Verbose "Using context subscription ID $SubscriptionId"

# Names follow the Cloud Adoption Framework naming conventions, with an
# organisation / subscription identifier included in global names to make them unique
$ResourceGroupName = "rg-$AppName-$Environment-001".ToLowerInvariant()
$DigitalTwinsName = "dt-$AppName-$OrgId-$Environment".ToLowerInvariant()
$IotHubName = "iot-$AppName-$OrgId-$Environment".ToLowerInvariant()

# Tags follow the Cloud Adoption Framework tagging conventions
$Tags = @{ WorkloadName = 'codefirsttwins'; DataClassification = 'Non-business'; Criticality = 'Low';
  BusinessUnit = 'Demo'; ApplicationName = $AppName; Env = $Environment }

# Create
New-AzResourceGroup -Name $ResourceGroupName -Location $Location -Tag $Tags -Force
New-AzDigitalTwinsInstance -ResourceGroupName $ResourceGroupName -ResourceName $DigitalTwinsName -Location $Location -Tag $Tags
New-AzIotHub -ResourceGroupName $ResourceGroupName -Name $IotHubName -SkuName S1 -Units 1 -Location $Location -Tag $Tags

# Output

(Get-AzDigitalTwinsInstance -ResourceGroupName $ResourceGroupName -ResourceName $DigitalTwinsName).HostName
(Get-AzIotHub $ResourceGroupName).Properties.HostName

Azure PowerShell strengths:

  • You may already be using PowerShell for scripting anyway.
  • PowerShell is available cross platform.
  • PowerShell operates on objects, for example the hashtable of tags is passed directly to the Azure cmdlets, and values can be easily extracted from result objects (see the sketch after this list).
  • Operations are (mostly) idempotent, so you can re-run the script without needing guard clauses.
  • Relatively short (one, or a few, lines per resource).
  • Can be written/run interactively, making it easy to debug or troubleshoot.
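
As an example of that object-based output, here is a minimal sketch (using the variable names from the deploy script above) that reads properties straight from the returned objects:

# Results are objects, so properties can be read directly, with no text parsing needed
$dt = Get-AzDigitalTwinsInstance -ResourceGroupName $ResourceGroupName -ResourceName $DigitalTwinsName
$hub = Get-AzIotHub -ResourceGroupName $ResourceGroupName -Name $IotHubName
Write-Host "Digital Twins endpoint: https://$($dt.HostName)"
Write-Host "IoT Hub host name: $($hub.Properties.HostName)"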

Note that scripts don’t get any special handling in Azure; they are each just a separate command.

remove-infrastructure.ps1

When you are finished with development or demonstration assets you may want to have a script to easily clean them up as well:

#!/usr/bin/env pwsh
[CmdletBinding()]
param (
    [string]$AppName = 'codefirsttwins',
    [string]$Environment = 'Dev'
)
 
$ErrorActionPreference="Stop"
 
$SubscriptionId = (Get-AzContext).Subscription.Id
Write-Verbose "Removing from context subscription ID $SubscriptionId"
 
$ResourceGroupName = "rg-$AppName-$Environment-001".ToLowerInvariant()
 
Remove-AzResourceGroup -Name $ResourceGroupName
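
Remove-AzResourceGroup prompts for confirmation by default; if you want the clean-up to run unattended (for example from a pipeline), the -Force switch suppresses the prompt:

Remove-AzResourceGroup -Name $ResourceGroupName -Force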

Azure CLI

Azure CLI is very similar to PowerShell, possibly even shorter. This is a single cross-platform install, which can then add any needed extensions. You will still need some kind of scripting; this example uses bash.

To run you will need to ensure the required extensions are installed, login, set the subscription, and then run the deployment script:

az extension add --name azure-iot
az login
az account set --subscription <subscription id>
sh deploy-infrastructure.sh

The script file used:

deploy-infrastructure.sh

#!/bin/bash
 
subscription_id=$(az account show --query id --output tsv)
echo "Using context subscription ID $subscription_id"
 
# Arguments (override with any flags)
 
app_name=codefirsttwins
org_id="0x$(echo $subscription_id | awk '{print substr($1,1,4)}')"
environment=Dev
location=australiaeast
 
while getopts a:o:e:l: flag
do
  case "${flag}" in
    a) app_name=${OPTARG};;
    o) org_id=${OPTARG};;
    e) environment=${OPTARG};;
    l) location=${OPTARG};;
  esac
done
 
# Following standard naming conventions from Azure Cloud Adoption Framework
# https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/resource-naming
 
# Include a subscription or organisation identifier (after the app name) in global names to make them unique
rg_name=$(echo "rg-$app_name-$environment-001" | tr '[:upper:]' '[:lower:]')
digital_twins_name=$(echo "dt-$app_name-$org_id-$environment" | tr '[:upper:]' '[:lower:]')
iot_hub_name=$(echo "iot-$app_name-$org_id-$environment" | tr '[:upper:]' '[:lower:]')
 
# Following standard tagging conventions from Azure Cloud Adoption Framework
# https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/resource-tagging
 
tags="WorkloadName=codefirsttwins DataClassification=Non-business Criticality=Low BusinessUnit=Demo ApplicationName=$app_name Env=$environment"
 
# Create
 
echo "--dt-name $digital_twins_name --resource-group $rg_name -l $location --tags $tags"
 
echo "Creating $rg_name"
az group create -g $rg_name -l $location --tags $tags
 
echo "Creating $digital_twins_name"
az dt create --dt-name $digital_twins_name --resource-group $rg_name -l $location --tags $tags
 
echo "Creating $iot_hub_name"
az iot hub create --name $iot_hub_name --resource-group $rg_name --sku S1 -l $location --tags $tags
 
# Output
 
az dt show --dt-name $digital_twins_name --query hostName --output tsv
az iot hub show --name $iot_hub_name --query properties.hostName --output tsv

Azure CLI strengths:

  • Only one tool to install, the Azure CLI (it then takes care of extensions), and it is cross-platform.
  • Can be easily called from any script, e.g. bash. If you are already invested in a non-PowerShell solution, then Azure CLI is a good fit. You can also, of course, call it from PowerShell (see the sketch after this list).
  • The most compact syntax, similar to PowerShell with one line per resource, although manipulation (like setting variables) is more complex.
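
If you do call the Azure CLI from PowerShell, its JSON output can easily be converted back into objects. A minimal sketch, assuming the variable names from the Azure PowerShell example earlier:

# The Azure CLI outputs JSON text; join it and convert it back into an object
$hub = az iot hub show --name $IotHubName --resource-group $ResourceGroupName | Out-String | ConvertFrom-Json
Write-Host "IoT Hub host name: $($hub.properties.hostName)"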

Templating solutions

Bicep

The new kid on the block from Microsoft. You need to use a tool to deploy the template, e.g. the Azure CLI (which can install the Bicep tooling for you). You can also use Azure PowerShell, but then you need to manually install the Bicep CLI yourself. Bicep files are transpiled to ARM templates that are sent to Azure Resource Manager.

To deploy the template using the Azure CLI tool, use the following:

az login
az account set --subscription <subscription id>
az deployment sub create -l australiaeast -f infrastructure/main.bicep

The Bicep templates for this example require two files, as you cannot mix target scopes. The main file is scoped at the subscription level and creates the resource group. It then includes a module (the second file) that deploys assets at the resource group level (the default scope).

main.bicep

// Main template
 
targetScope = 'subscription'
 
param appName string = 'codefirsttwins'
param orgId string = '0x${substring(subscription().subscriptionId, 0, 4)}'
param environment string = 'Dev'
 
var tags = {
  WorkloadName: 'codefirsttwins'
  DataClassification: 'Non-business'
  Criticality: 'Low'
  BusinessUnit: 'Demo'
  ApplicationName: appName
  Env: environment
}
var location = deployment().location
var rgName = toLower('rg-${appName}-${environment}-001')
 
resource rgDemoDeployment 'Microsoft.Resources/resourceGroups@2021-04-01' = {
  name: rgName
  location: location
  tags: tags
  properties: {}
}
 
module demoDeployment './demoDeployment-module.bicep' = {
  name: 'demoDeployment'
  scope: rgDemoDeployment
  params: {
    appName: appName
    orgId: orgId
    environment: environment
    tags: tags
  }
  dependsOn: [
    rgDemoDeployment
  ]
}

demoDeployment-module.bicep

// Module for deploying resource group items (default target scope)
 
param appName string
param orgId string
param environment string
param tags object
 
var location = resourceGroup().location
var iotHubName = toLower('iot-${appName}-${orgId}-${environment}')
var digitalTwinsName = toLower('dt-${appName}-${orgId}-${environment}')
 
resource iot_appName_orgId_environment 'Microsoft.Devices/IotHubs@2021-07-01' = {
  name: iotHubName
  location: location
  tags: tags
  sku: {
    name: 'S1'
    capacity: 1
  }
}
 
resource dt_appName_orgId_environment 'Microsoft.DigitalTwins/digitalTwinsInstances@2020-12-01' = {
  name: digitalTwinsName
  location: location
  tags: tags
}

Bicep strengths:

  • Has a simpler syntax than ARM templates (but still about 50% longer than a scripted solution).
  • Multiple file support (modules) is easy (but each file can only target one scope level, so you need separate files for resource groups and contents).
  • Can create multiple dependent objects and let the resource manager sort out deployment order, including parallel deployments.
  • Useful for large complex setups with multiple dependent objects and modules, e.g. spread across multiples files for an entire environment.
  • Editors may have syntax support for the declarative files.
  • Potentially faster: the PowerShell script takes 4 minutes, while the Bicep deployment, which creates resources in parallel, takes only 3 minutes.
  • Finally, deployments are a first-class entity in Azure, viewable in the portal (and queryable from the command line, as sketched below):

Bicep deploy result
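
The deployments can also be listed from the command line; for example, a minimal Azure PowerShell sketch (assuming you are connected to the subscription that was deployed to):

# Subscription-level deployments, such as the one created from main.bicep
Get-AzDeployment | Format-Table DeploymentName, ProvisioningState, Timestamp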

Bicep is currently only in beta (version 0.4 at the moment), but is well supported. Also, running it via PowerShell (rather than the Azure CLI) requires a more complex setup.

There are some situations that desired state solutions (e.g. templates) cannot handle, such as deletes and transformations, that migration scripts can. See the section on migrations vs desired states below.

Azure Resource Manager

You still need to use a tool, e.g. either Azure PowerShell or Azure CLI, to deploy the template.

Connect-AzAccount
Set-AzContext -Subscription <subscription id>
New-AzDeployment -Location 'australiaeast' -TemplateFile 'demoDeployment.json'

The actual template file being deployed:

demoDeployment.json

{
  "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "appName": {
      "type": "String",
      "defaultValue": "codefirsttwins"
    },
    "environment": {
      "type": "String",
      "defaultValue": "Dev"
    },
    "location": {
      "type": "String",
      "defaultValue": "[deployment().location]"
    },
    "orgId": {
      "type": "String",
      "defaultValue": "[concat('0x',substring(subscription().subscriptionId, 0, 4))]"
    }
  },
  "variables": {
    "tags": {
      "WorkloadName": "codefirsttwins",
      "DataClassification": "Non-business",
      "Criticality": "Low",
      "BusinessUnit": "Demo",
      "ApplicationName": "[parameters('appName')]",
      "Env": "[parameters('environment')]"
    },
    "rgName": "[toLower(concat('rg-',parameters('appName'),'-',parameters('environment'),'-001'))]"
  },
  "resources": [
    {
      "type": "Microsoft.Resources/resourceGroups",
      "apiVersion": "2021-04-01",
      "name": "[variables('rgName')]",
      "location": "[parameters('location')]",
      "tags": "[variables('tags')]",
      "properties": {}
    },
    {
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2021-04-01",
      "name": "demoDeployment",
      "resourceGroup": "[variables('rgName')]",
      "dependsOn": [
        "[resourceId('Microsoft.Resources/resourceGroups/', variables('rgName'))]"
      ],
      "properties": {
        "mode": "Incremental",
        "template": {
          "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "parameters": {},
          "variables": {},
          "resources": [
            {
              "type": "Microsoft.Devices/IotHubs",
              "apiVersion": "2021-07-01",
              "name": "[toLower(concat('iot-',parameters('appName'),'-',parameters('orgId'),'-',parameters('environment')))]",
              "location": "[parameters('location')]",
              "tags": "[variables('tags')]",
              "sku": {
                "name": "S1",
                "tier": "Standard",
                "capacity": 1
              }
            },
            {
              "type": "Microsoft.DigitalTwins/digitalTwinsInstances",
              "apiVersion": "2020-12-01",
              "name": "[toLower(concat('dt-',parameters('appName'),'-',parameters('orgId'),'-',parameters('environment')))]",
              "location": "[parameters('location')]",
              "tags": "[variables('tags')]"
            }
          ]
        }
      }
    }
  ]
}

ARM Template strengths:

  • The native format of Azure, but about 30% longer than the equivalent Bicep file.
  • Deployments, and ARM templates, are first class entities in Azure.
  • You can export existing resources to create an ARM template.
  • The best supported; new features may appear in ARM before other solutions.

Migrations vs desired states

Using scripts (migrations) to deploy infrastructure allows for incremental change. This can be important where a state-based solution cannot handle the desired change.

A similar situation exists with database deployments: desired-state tools exist, but their limitations have led to migration-based tools such as DbUp.

State-based solutions, like Bicep or ARM templates, take the current state of the infrastructure, compare it to the desired state (in the template file), work out the differences, and then attempt to make those changes.

But this is not always possible.

For example, suppose you want to split one data blob into two new ones (and remove the old); you cannot do this in a single template, as deploying in complete mode will remove the old blob before the data is copied. A single script can perform the create, copy, and then delete operations in order.
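
As a rough illustration, a migration-style script can sequence those steps explicitly. This sketch uses storage containers to stand in for the data being reorganised; the storage account and container names are purely hypothetical, not part of the example project:

# Hypothetical migration step: create the replacement containers, copy/split the data,
# and only then remove the original (an ordering a single desired-state template cannot express)
$ctx = (Get-AzStorageAccount -ResourceGroupName $ResourceGroupName -Name 'stcodefirsttwinsdemo').Context
New-AzStorageContainer -Name 'telemetry-hot' -Context $ctx
New-AzStorageContainer -Name 'telemetry-archive' -Context $ctx
# ... application-specific copy/split of the existing data goes here ...
Remove-AzStorageContainer -Name 'telemetry' -Context $ctx -Force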

There are also some properties that can only be set at creation, so changing them in a template has no effect (or generates an error). A script could create a brand new resource, copy across all the settings, change any references, and then remove the old.

Both Azure PowerShell and Azure CLI commands are (mostly) idempotent, so they can be re-run as needed. However, a real migration solution usually needs some kind of journaling capability, so that it knows which scripts have already run in a target environment and only runs the new ones.

For database deployments this is usually some kind of journal table, although I have seen other solutions (e.g. site attributes in SharePoint). For Azure infrastructure, resource group tags could perhaps be used to record which migration the environment is up to.
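
A minimal sketch of that idea (the tag name and numbering scheme are assumptions, not an established convention):

# Read the journal tag from the resource group (if any), run the outstanding scripts, then update it
$rg = Get-AzResourceGroup -Name $ResourceGroupName
$tags = if ($rg.Tags) { $rg.Tags } else { @{} }
$lastApplied = $tags['LastMigration']    # e.g. '002', or $null for a fresh environment
# ... run only the migration scripts numbered after $lastApplied, then record progress:
$tags['LastMigration'] = '003'
Set-AzResourceGroup -Name $ResourceGroupName -Tag $tags | Out-Null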

Hybrid solutions

Template solutions work best when deploying from scratch (not incrementally), and offer some benefits: they are reported in the Azure portal, and independent components can be deployed in parallel.

There are also a lot of pre-written example templates available from Microsoft.

One option is a hybrid approach, where the primary system is scripting (e.g. PowerShell), but the initial deployment of resources is done as a series of templates.

The scope of each script can be kept small, covering one resource or a small group of resources, with the initial deployment done via a template. If using a migrations journal, this template will only be deployed once per environment.

Instead of changing the template, subsequent scripts use direct PowerShell to manipulate the resource (or deploy a new template over the top).
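
A minimal sketch of such a hybrid step, where a migration script simply deploys a template (the file name and parameters are illustrative, and deploying .bicep files from Azure PowerShell requires the Bicep CLI to be installed):

# The first migration script for a component just deploys its initial template
$deployment = @{
    ResourceGroupName       = $ResourceGroupName
    TemplateFile            = './templates/iothub.bicep'
    TemplateParameterObject = @{ appName = $AppName; environment = $Environment }
}
New-AzResourceGroupDeployment @deployment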

Recommendations

Scripted migrations still seem the easiest approach to me, particularly because, while templates seem easy at first (like database desired state), they eventually run into a situation too complex to handle, and are very difficult to debug as you don't know the internal state.

Templates do have benefits — syntax support, visibility in the portal, parallel deployment, dependency management, and easy repeatable deployment from an empty state — but without a way to overcome eventual problems I think using scripts from the beginning is safer.

For a small project (maybe some example code) a single script that simply does all the steps is sufficient.

For a more complex project, you should consider a generic deployment script that simply runs all the scripts in a sub-folder. This is a bit nicer with multiple developers as they are adding independent scripts, rather than all trying to edit the same one, which makes merging easier.
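
A minimal sketch of such a runner, assuming the individual scripts live in a migrations sub-folder and sort correctly by name (this could be combined with the tag-based journal idea above):

# Run each migration script in order, e.g. 001-resource-group.ps1, 002-iot-hub.ps1, ...
Get-ChildItem ./migrations -Filter '*.ps1' | Sort-Object Name | ForEach-Object {
    Write-Host "Running $($_.Name)"
    & $_.FullName
}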

It also makes code reviews easier if you follow best practice for migrations: only ever add new migrations, and never change old ones (which will already have been run as-is in other environments). This is a similar approach to that used for database migrations.

A scripted migrations approach can even incorporate hybrid elements in some cases, gaining some of the benefits of editor syntax support, parallel resource creation, and visibility in the portal.