Speeding up Azure Development by not Handcrafting Terraform


We recently built a basic e-commerce site to demonstrate how to deploy to Microsoft Azure. We compared what the process looked like using Nitric vs. Terraform for the infrastructure.

The video below shows how we built and deployed the application. Here, we'll walk through the specifics of the infrastructure and application code when building with Nitric and then with Terraform.

The application code is kept relatively simple, as we're mainly focused on the differences in infrastructure code. The project has two services: one for handling user-submitted orders and one for generating invoices for those orders. A topic binds the two services together, and a bucket stores the generated invoices.

To get this running in Azure we will need to create the following resources:

  • Resource Group - for logical grouping
  • 2 Container Apps
  • Container Registry - to store the container images
  • Container App Environment - to run the container apps
  • EventGrid Topic
  • EventGrid Subscription
  • Storage Account - to configure the storage container
  • Storage Container
  • IAM rules - to implement least-privilege
Diagram describing the architecture of the application

For an application built using traditional Infrastructure as Code (IaC) tooling like Terraform, each of these resources needs to be individually defined, configured, and bound to the application code. Using an Infrastructure from Code (IfC) approach like Nitric, the resources are defined in your application code. We will first take a look at writing and deploying this application with Nitric, and then compare this with the same application written in HCL.

Building with Nitric

To start, ensure you have the Nitric CLI installed. We can then create our Nitric project using the following command.

nitric new bookstore "official/TypeScript - Starter"

Then open the project in your preferred TypeScript editor. We'll start by deleting the files in the functions folder, replacing them with two new files named order.ts and invoices.ts. Open up order.ts and add the following code.

import { api, topic } from '@nitric/sdk'
const ordersApi = api('orders')
const ordersTopic = topic('order-updates').for('publishing')

This code imports the api and topic resources from the Nitric TypeScript SDK and creates two new resources: an API called orders and a topic called order-updates. We give our service permission to publish events to the topic by specifying .for('publishing'). The next step is adding a route to our API so that we can publish orders. This will be a POST route on /order which takes the request payload and publishes it to the order-updates topic. It returns a 200 response with the order and a confirmation that the order was received.

ordersApi.post('/order', async (ctx) => {
  const order = ctx.req.json()
  await ordersTopic.publish(order)
  return ctx.res.json({
    message: 'Order received',
    order,
  })
})

The next step is to create the handler for our order notifications. This handler uses an external API to generate a PDF invoice for the order. The PDF generation logic was extracted into a separate API shared by both the Nitric and Terraform applications, so as not to overcomplicate the comparison. The source code for the API is available here. We'll start by getting the environment variables required to access this API.

const INVOICE_API_URL = process.env.INVOICE_API_URL
const INVOICE_API_KEY = process.env.INVOICE_API_KEY

Next, we will create the invoices bucket and create a subscription for the order-updates topic.

import { topic, bucket } from '@nitric/sdk'
...
const invoiceBucket = bucket('invoices').for('writing')
topic('order-updates').subscribe(async (ctx) => {})

We can then start adding our handler code for the subscription. This first extracts the order payload from the event, then forwards it to the invoice creation API.

topic('order-updates').subscribe(async (ctx) => {
  // Extract the order payload
  const { payload: order } = ctx.req.json()
  // Send a request to the external API to get a PDF generated
  const response = await fetch(`${INVOICE_API_URL}/invoices`, {
    method: 'POST',
    headers: {
      'x-api-key': INVOICE_API_KEY,
    },
    body: JSON.stringify(order),
  })
})

Once we have the PDF returned from the response we can add it to the bucket.

topic('order-updates').subscribe(async (ctx) => {
  ...
  // Check that the PDF was generated correctly
  if (!response.ok) {
    console.log(
      `Failed to generate invoice for order ${order.orderNumber}, status code ${
        response.status
      }, ${await response.text()}`
    )
    throw new Error(`Failed to generate invoice for order ${order.orderNumber}`)
  }
  // Get the invoice in memory from the response.
  const invoicePdf = await response.arrayBuffer()
  // Write the invoice to the bucket
  await invoiceBucket
    .file(`${order.orderNumber}.pdf`)
    .write(new Uint8Array(invoicePdf))
})

Now that both services are done, we can deploy the application to the cloud. Before doing so, we must create a stack to describe our deployment environment. Run the following command and follow the prompts.

nitric stack new
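
The prompts will generate a stack file in the project root. As a rough sketch only — the exact fields depend on your CLI and provider version, and the values below are placeholders — an Azure stack file looks something like:

```yaml
# nitric.dev.yaml - illustrative example; field names vary by provider version
provider: nitric/azure@1.1.0
region: eastus
# Used by Azure API Management when deploying the APIs
org: example-org
adminemail: admin@example.com
```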

The invoice PDF API must be deployed separately so that it can be accessed by your application code.

We can then deploy to the cloud.

nitric up

Once deployed, we can use the following cURL request to create an order. You will need to update the URL in the request to match the API URL output by the deployment.

curl -X POST \
  'https://xxxxxxx/order' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "customer": "John Doe",
    "shippingAddress": {
      "line1": "123 Fake St",
      "city": "San Francisco",
      "state": "CA",
      "postalCode": "94105"
    },
    "orderNumber": "250-6880554-12345",
    "items": [
      {
        "name": "Widget",
        "quantity": 1,
        "unitPrice": 100
      },
      {
        "name": "Gadget",
        "quantity": 2,
        "unitPrice": 50
      }
    ]
  }'

To verify that it worked as expected, you can view the bucket in your cloud console and check for a new invoice PDF named 250-6880554-12345.pdf.
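
Alternatively, you can check from the terminal with the Azure CLI. This is a sketch only — the account and container names are placeholders, since Nitric generates the actual resource names at deploy time:

```shell
# List the blobs in the storage container backing the 'invoices' bucket.
# Replace the placeholders with the names from your deployment.
az storage blob list \
  --account-name <storage-account-name> \
  --container-name <container-name> \
  --query "[].name" \
  --output tsv
```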

Building with Terraform

With the Nitric version done, we can now look at how to create the same solution using Terraform. This approach separates the infrastructure and application code, so we first write the infrastructure definitions in HCL and then write our application code using Express.

Infrastructure Definitions

Before writing any application code, we must first define the following 11 resources. This section assumes that you have a basic understanding of how Terraform modules work.

  • Resource Group
  • 2 Container Apps
  • Container Registry
  • Container App Environment
  • EventGrid Topic
  • EventGrid Subscription
  • Storage Account
  • Storage Container
  • 2 IAM rules
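
This walkthrough assumes a project layout roughly like the following (the structure itself is our choice, but the module and Dockerfile paths match the references in the Terraform code):

```
.
├── main.tf
├── variables.tf
├── outputs.tf
├── modules/
│   └── containerapps/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
├── orders/
│   ├── Dockerfile
│   └── src/
└── invoices/
    ├── Dockerfile
    └── src/
```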

We'll start by defining our Azure providers, azurerm and azuread.

You'll notice that most of the defined resources require input variables. We will define them in variables.tf later.

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "= 3.84.0"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "= 2.46.0"
    }
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  client_id       = var.ARM_CLIENT_ID
  client_secret   = var.ARM_CLIENT_SECRET
  subscription_id = var.ARM_SUBSCRIPTION_ID
  tenant_id       = var.ARM_TENANT_ID

  features {}
}

# Configure the Azure Active Directory Provider
provider "azuread" {}

Next, create your resource group to logically bind all the created resources together.

# Create the resource group
resource "azurerm_resource_group" "example" {
  name     = var.resource_group_name
  location = var.resource_group_location
}

We will then create the storage account and the storage container, which will store our generated PDFs. The storage account must be created first so the storage container can be bound to the correct account.

# Create our storage account
resource "azurerm_storage_account" "storage" {
  name                     = "tfexamplestorage"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# Create the invoice_files storage container, where we will store our generated PDFs
resource "azurerm_storage_container" "invoice_files" {
  name                  = "invoicefiles"
  storage_account_name  = azurerm_storage_account.storage.name
  container_access_type = "private"
}

Before we create the container apps, we need to create the container registry and the container app environment. The container registry is required to store our images and the container app environment is required to manage and run each of the container apps.

# Create the Container Registry
resource "azurerm_container_registry" "acr" {
  name                = "tfexamplereg"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  sku                 = "Basic"
  admin_enabled       = true
}

# Create the Container App Environment
resource "azurerm_container_app_environment" "environment" {
  name                = "terraform-containers"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

We can now push our built images to the container registry. This can be automated using a terraform_data resource. We will set it to run on every deployment by triggering a replacement whenever the timestamp changes. The script logs in using the Azure CLI, builds the images, and then pushes them to the container registry.

As the container app definitions for each service will be almost identical, we can deduplicate them by creating a submodule and referencing it from our main module.

resource "terraform_data" "docker_image" {
  triggers_replace = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = <<-EOT
      az acr login --name ${var.acr_name} -u ${var.registry_username} -p ${var.registry_password}
      docker build --platform linux/amd64 -t ${var.registry_login_server}/${var.image_name} -f ${var.dockerfile} ${var.build_context}
      docker push ${var.registry_login_server}/${var.image_name}
    EOT
  }
}

We can then create the container app, which relies on our image. There are five main components to the container app resource.

  • identity: defines the type of managed identity to assign to the container
  • ingress: defines the ingress rules
  • registry: defines which registry the image is stored in and how to authenticate with it
  • secret: stores an input value as an encrypted secret
  • template: the configuration for the container, such as the container image, CPU, memory, and environment variables
resource "azurerm_container_app" "app" {
  name                         = var.container_app_name
  resource_group_name          = var.resource_group_name
  container_app_environment_id = var.container_app_environment_id
  revision_mode                = "Single"

  identity {
    type = "SystemAssigned"
  }

  ingress {
    external_enabled = true
    target_port      = 3000

    traffic_weight {
      percentage      = 100
      latest_revision = true
    }
  }

  # Point to the container registry server which stores our image
  registry {
    server = var.registry_login_server
    # References the secret which stores the registry password
    password_secret_name = "pwd"
    username             = var.registry_username
  }

  # Store the registry password as an encrypted secret
  secret {
    name  = "pwd"
    value = var.registry_password
  }

  template {
    container {
      name   = "app"
      image  = "${var.registry_login_server}/${var.image_name}"
      cpu    = 0.25
      memory = "0.5Gi"

      # Dynamically gather the environment variables
      dynamic "env" {
        for_each = var.env_vars
        content {
          name  = env.key
          value = env.value
        }
      }

      env {
        name  = "buildstamp"
        value = timestamp()
      }
    }
  }

  # Depends on our image being generated
  depends_on = [terraform_data.docker_image]
}

That's all the code required for the container apps module; we just need to define our input variables and outputs. There are quite a number of inputs for our container app, as we want most of the properties to be configurable.

variable "resource_group_name" {
  description = "The name of the resource group in which to create the Container App and ACR."
  type        = string
}

variable "location" {
  description = "The location/region where the Container App and ACR should be created."
  type        = string
}

variable "acr_name" {
  description = "The name of the Azure Container Registry."
  type        = string
}

variable "acr_sku" {
  description = "The SKU of the Azure Container Registry."
  type        = string
  default     = "Basic"
}

variable "container_app_name" {
  description = "The name of the Container App."
  type        = string
}

variable "image_name" {
  description = "Name of a local image to push to ACR."
  type        = string
}

variable "build_context" {
  description = "Path to the location of the application to build with Docker."
  type        = string
}

variable "dockerfile" {
  description = "Path to the application's Dockerfile."
  type        = string
}

variable "registry_login_server" {
  description = "URI of the ACR registry images should be pushed/pulled from."
  type        = string
}

variable "registry_username" {
  description = "Username for the ACR registry."
  type        = string
}

variable "registry_password" {
  description = "Password for the ACR registry."
  type        = string
}

variable "container_app_environment_id" {
  description = "ID of the Azure Container App environment to deploy to."
  type        = string
}

variable "env_vars" {
  description = "Environment variables for the container."
  type        = map(string)
  default     = {}
}

For our outputs, we will define the container app endpoint and the managed identity.

output "container_app_identity" {
  value       = azurerm_container_app.app.identity.0.principal_id
  description = "The managed identity of this container app"
}

output "container_app_endpoint" {
  value       = azurerm_container_app.app.ingress[0].fqdn
  description = "The application endpoint of this container app"
}

Now going back to the main module, we will implement the container apps using our submodule. We will start with the orders service, which references the EventGrid topic.

resource "azurerm_eventgrid_topic" "orders_topic" {
  name                = "terraform-order-updates"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

We can then create our container app using the container apps module, referencing it with the source property. We pass in information about the container registry and the resource group, as well as service-specific variables like the Dockerfile location and the EventGrid topic name.

module "orders_container_app" {
  source                       = "./modules/containerapps"
  acr_name                     = azurerm_container_registry.acr.name
  container_app_name           = "orders"
  container_app_environment_id = azurerm_container_app_environment.environment.id
  resource_group_name          = azurerm_resource_group.example.name
  location                     = azurerm_resource_group.example.location
  registry_login_server        = azurerm_container_registry.acr.login_server
  registry_username            = azurerm_container_registry.acr.admin_username
  registry_password            = azurerm_container_registry.acr.admin_password
  image_name                   = "orders:latest"
  build_context                = "."
  dockerfile                   = "./orders/Dockerfile"

  env_vars = {
    PORT         = "3000"
    AZURE_REGION = azurerm_resource_group.example.location
    AZURE_TOPIC  = azurerm_eventgrid_topic.orders_topic.name
  }
}

For the orders application to work, we need to give it permission to push events to the orders topic. This is a role assignment granting the container app the EventGrid Data Sender role, scoped to the topic.

resource "azurerm_role_assignment" "orders_topic_access" {
  scope                = azurerm_eventgrid_topic.orders_topic.id
  role_definition_name = "EventGrid Data Sender"
  principal_id         = module.orders_container_app.container_app_identity

  depends_on = [
    module.orders_container_app
  ]
}

We can then implement the invoices container app, which is much the same. You will notice in the environment variables that instead of referencing the EventGrid topic, we reference the storage account and container so our invoices can be stored, along with the details of the invoice generation API.

module "invoices_container_app" {
  source                       = "./modules/containerapps"
  acr_name                     = azurerm_container_registry.acr.name
  container_app_name           = "invoices"
  container_app_environment_id = azurerm_container_app_environment.environment.id
  resource_group_name          = azurerm_resource_group.example.name
  location                     = azurerm_resource_group.example.location
  registry_login_server        = azurerm_container_registry.acr.login_server
  registry_username            = azurerm_container_registry.acr.admin_username
  registry_password            = azurerm_container_registry.acr.admin_password
  image_name                   = "invoices:latest"
  build_context                = "."
  dockerfile                   = "./invoices/Dockerfile"

  env_vars = {
    PORT                            = "3000"
    AZURE_REGION                    = azurerm_resource_group.example.location
    AZURE_STORAGE_CONNECTION_STRING = azurerm_storage_account.storage.primary_connection_string
    AZURE_INVOICES_CONTAINER_NAME   = azurerm_storage_container.invoice_files.name
    INVOICE_API_KEY                 = var.invoice_api_key
    INVOICE_API_URL                 = var.invoice_api_url
  }
}

To push invoices to the storage container, we need to assign the Storage Blob Data Contributor role to the container app.

resource "azurerm_role_assignment" "invoices_storage_access" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = module.invoices_container_app.container_app_identity
}

Finally, we can bind our container app to events sent to the topic by setting up an EventGrid subscription.

resource "azurerm_eventgrid_event_subscription" "invoices_subscription" {
  name  = "example-eventgridsubscription-auth"
  scope = azurerm_eventgrid_topic.orders_topic.id

  webhook_endpoint {
    url                               = "https://${module.invoices_container_app.container_app_endpoint}/handle-orders"
    max_events_per_batch              = 1
    preferred_batch_size_in_kilobytes = 64
  }
}

The only thing left to do is define our inputs and outputs. The inputs mostly have defaults, but they also allow configuration and binding to your deployed invoice-generating API.

variable "resource_group_name" {
  description = "The name of the resource group."
  default     = "example-resources"
}

variable "resource_group_location" {
  description = "The location of the resource group."
  default     = "East US"
}

variable "storage_account_name" {
  description = "The name of the storage account."
  default     = "examplestoracc"
}

variable "container_name" {
  description = "The name of the storage container."
  default     = "examplecontainer"
}

variable "ARM_CLIENT_ID" {
  description = "Azure Client ID"
  type        = string
  default     = ""
}

variable "ARM_CLIENT_SECRET" {
  description = "Azure Client Secret"
  type        = string
  default     = ""
}

variable "ARM_SUBSCRIPTION_ID" {
  description = "Azure Subscription ID"
  type        = string
  default     = ""
}

variable "ARM_TENANT_ID" {
  description = "Azure Tenant ID"
  type        = string
  default     = ""
}

variable "invoice_api_url" {
  type = string
}

variable "invoice_api_key" {
  type = string
}

The only output we need to define is our resource group id. This will allow you to reference this resource group in the future.

output "resource_group_id" {
  description = "The ID of the resource group."
  value       = azurerm_resource_group.example.id
}

Application Code

With the infrastructure definitions done, we can start writing our application code.

We'll start with the orders service. This will look fairly similar to the orders service written with Nitric; however, it uses Express.js and the native Azure clients. It publishes events with the EventGridPublisherClient from @azure/eventgrid and pulls your default Azure credentials using DefaultAzureCredential from @azure/identity. We will use the body-parser middleware for Express to parse our requests as JSON. Add all of these as dependencies to the application using yarn or npm.

yarn add express body-parser @azure/identity @azure/eventgrid

With those dependencies installed, we can start by initialising our express application and the Azure client.

import express, { Request, Response } from 'express'
import { EventGridPublisherClient } from '@azure/eventgrid'
import { DefaultAzureCredential } from '@azure/identity'
import bodyParser from 'body-parser'

// Extract our constants from the environment variables, setting defaults if they aren't found
const PORT = process.env.PORT || 3000
const TOPIC = process.env.AZURE_TOPIC || 'terraform-order-updates'
const REGION = process.env.AZURE_REGION || 'eastus'

const app: express.Application = express()
app.use(bodyParser.json())

// Create the client to push events to the EventGrid topic
const client = new EventGridPublisherClient(
  `https://${TOPIC}.${REGION}-1.eventgrid.azure.net/api/events`,
  'EventGrid',
  new DefaultAzureCredential(),
)

Let's now write the /order route. This route receives an order payload and forwards it to the topic, returning 201 once the order is received.

app.post('/order', async (req: Request, res: Response) => {
  // Forward the request to the orders topic
  await client.send([
    {
      eventType: 'order.created',
      subject: req.body.orderNumber,
      dataVersion: '1.0',
      data: req.body,
    },
  ])

  // Return the request body
  return res.status(201).json({
    message: 'Order received',
    order: req.body,
  })
})

Finally, we'll start the express application on the port specified by the PORT environment variable, defaulting to 3000.

// Start the application
app.listen(PORT, () => {
  console.log(`Server running at http://localhost:${PORT}/`)
})

To deploy our application to Container Apps, we need to create a Dockerfile; we have already set up Terraform to automatically build and push our image. We'll start from the Node base image, copying the required files into the image.

# Builder
FROM node:18 AS builder
WORKDIR /app
COPY orders/package*.json /app/orders/
COPY orders/tsconfig*.json /app/orders/
COPY orders/src /app/orders/src

Then install the dependencies and build the application.

RUN cd /app/orders && yarn install && yarn run build

We will then create the base for our runner image and copy the built code into it. After that, we will install the production dependencies.

# Runner
FROM node:18
WORKDIR /app
COPY --from=builder /app/orders/dist /app/orders/dist
COPY --from=builder /app/orders/package*.json /app/orders/
RUN cd /app/orders && yarn install --production

We can then expose port 3000 and set our built application as the entrypoint for the container app starting.

EXPOSE 3000
CMD ["node", "./orders/dist/app.js"]

We can then create the invoices application. This uses the BlobServiceClient from @azure/storage-blob, which we will add as a dependency.

yarn add @azure/storage-blob

We can then initialise the Express application and the Azure storage client. We start by pulling out the environment variables, throwing an error if the variables required by the Azure client can't be found.

import express, { Request, Response } from 'express'
import { BlobServiceClient } from '@azure/storage-blob'
import bodyParser from 'body-parser'

// Extract our environment variables
const PORT = process.env.PORT || 3000
const INVOICE_API_URL = process.env.INVOICE_API_URL || ''
const INVOICE_API_KEY = process.env.INVOICE_API_KEY || ''

const app: express.Application = express()
app.use(bodyParser.json())

// Retrieve your Azure Storage account connection string from an environment variable
const STORAGE_CONN_STR = process.env.AZURE_STORAGE_CONNECTION_STRING
if (!STORAGE_CONN_STR) {
  throw Error('Azure Storage Connection string not found')
}

const INV_CONTAINER = process.env.AZURE_INVOICES_CONTAINER_NAME
if (!INV_CONTAINER) {
  throw Error('Azure Storage Container name not found')
}

const blobServiceClient =
  BlobServiceClient.fromConnectionString(STORAGE_CONN_STR)
const containerClient = blobServiceClient.getContainerClient(INV_CONTAINER)

We'll then write the handler that subscribes to the orders topic. It is served on the route /handle-orders, which was bound to the topic in the Terraform code.

app.post('/handle-orders', async (req: Request, res: Response) => {
  // Handle subscription validation from Azure Event Grid
  if (req.header('aeg-event-type') === 'SubscriptionValidation') {
    const validationCode = req.body[0].data.validationCode
    return res.status(200).send({ validationResponse: validationCode })
  }

  const orderEvents = req.body
  if (!Array.isArray(orderEvents)) {
    return res.status(400).send('expected array of order events')
  }

  await Promise.all(
    // Generate a new invoice for each order event
    orderEvents.map(async (orderEvent) => {
      const order = orderEvent.data

      const response = await fetch(`${INVOICE_API_URL}/invoices`, {
        method: 'POST',
        headers: {
          'x-api-key': INVOICE_API_KEY,
        },
        body: JSON.stringify(order),
      })
      if (!response.ok) {
        throw new Error(
          `Failed to generate invoice for order ${order.orderNumber}`,
        )
      }

      const invoiceFile = await response.arrayBuffer()
      const blockBlobClient = containerClient.getBlockBlobClient(
        `${order.orderNumber}.pdf`,
      )
      // Upload data to the blob
      await blockBlobClient.upload(invoiceFile, invoiceFile.byteLength)
    }),
  )

  return res.sendStatus(200)
})

// Start the application
app.listen(PORT, () => {
  console.log(`Server running at http://localhost:${PORT}/`)
})

We'll then create the invoices Dockerfile. This is identical to the orders Dockerfile, but references the invoices application instead.

# Builder
FROM node:18 AS builder
WORKDIR /app
# Invoices module
COPY invoices/package*.json /app/invoices/
COPY invoices/tsconfig*.json /app/invoices/
COPY invoices/src /app/invoices/src
RUN cd /app/invoices && yarn install && yarn run build
# Runner
FROM node:18
WORKDIR /app
# Invoices service
COPY --from=builder /app/invoices/dist /app/invoices/dist
COPY --from=builder /app/invoices/package*.json /app/invoices/
RUN cd /app/invoices && yarn install --production
EXPOSE 3000
CMD ["node", "./invoices/dist/app.js"]

To deploy our application we can use the Terraform CLI. Running the following command will first show a plan of the deployment and then, after confirmation, deploy your infrastructure.

terraform apply
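
Because invoice_api_url and invoice_api_key are defined without defaults, Terraform will prompt for them interactively unless you supply values up front. For example (the values below are placeholders for your own invoice API):

```shell
# Initialise providers on first run
terraform init

# Preview and deploy, supplying the variables that have no defaults
terraform apply \
  -var="invoice_api_url=https://<your-invoice-api>" \
  -var="invoice_api_key=<your-api-key>"
```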

Comparing Terraform and Nitric Approaches

The difference in these two approaches mainly comes from the separation of the infrastructure and application code. When using an Infrastructure as Code approach, such as Terraform, you write your infrastructure code and separately write your application code. Using Infrastructure from Code like Nitric means your infrastructure is inferred from your application code. Nitric removes the need for rewriting infrastructure boilerplate every time you want to write an application, and it's completely cloud portable. Beyond that, it removes possibilities of misconfiguration by automatically binding your infrastructure together and creating the least-privilege policies required for your application to run. If you need to customise your infrastructure, Nitric still has the option to extend the default providers.

IaC, on the other hand, can be practical if you want the ability to completely customise your infrastructure. However, Terraform can be difficult to maintain due to the size of the infrastructure code and the possibility of infrastructure drift. Infrastructure drift is when the state of the infrastructure in your cloud does not match what is defined in your infrastructure code. This can happen for a number of reasons, such as manual changes, differences between IaC environments, and human error. Whatever the cause, infrastructure drift can introduce unwanted errors and security weaknesses into your application. By unifying your application and infrastructure with IfC, you reduce the risk of drift, as infrastructure only changes when your application's requirements change. Because IaC and IfC have different strengths, using them alongside each other may be the right approach for many teams.

Read more about how Terraform and Nitric approaches differ and complement each other. If you want to learn more about Nitric or more of the benefits of Infrastructure from Code, come have a chat on our Discord.
