Nitric versus other IaC Tools


With a knowing nod of the head, we in the DevOps space can commiserate over the frustration around Terraform and the headaches that followed the IBM acquisition. After all, we only have to look at the recent VMware bill increases to understand why Terraform license enforcement could become a really bumpy business issue for companies, especially small-to-medium-sized startups. So, as the British might say, do we stick or twist? If we do change IaC tools, what are the alternatives? And are there options beyond jumping from one frying pan into another?

The obvious choice for companies with lots of legacy Terraform, or complex deployments, is to move to OpenTofu, the open-source fork of Terraform. The issue here is that we haven't really changed the original problem with Terraform: OpenTofu is still only as good as its contributors. I appreciate that there are projects people contribute to in their spare time, but large open-source projects are typically supported by big tech firms contributing engineers or money so developers can work part- or full-time on the code. This is how the Linux Foundation cements value into the projects under its umbrella. For example, Google is still the largest single contributor to the Kubernetes project. Google can do this because it makes money running services that rely on Kubernetes, and sells a managed offering in Google Kubernetes Engine. In the case of OpenTofu, most of the support still comes from smallish startups, all scrapping over the freemium cream on top of the project. So there is some uncertainty about OpenTofu's future support.

What is another alternative? Pulumi now covers most of the big cloud providers, with parallels for most of their IaC operations. There might be some long-term issues with Pulumi's monetization model being similar to Terraform's (see the previous paragraph), so that fear lives on. Add to the mix how discerning sysadmins and Ops folks view a high-level, interpreted, object-oriented programming language for infrastructure, and there is a real adoption hurdle to overcome with Pulumi, especially in legacy projects.

There are a few more options out there. Crossplane takes a Kubernetes-centric view of the world. Similar in theory to Google's Config Connector, Crossplane shows some merit if you are a company wedded to the scalability (and complexity) that comes along with K8s. Crossplane uses a 'provider-centric' approach similar to Terraform's, allowing easier extensibility to different platforms. The project is CNCF-backed, but the main contributor appears to be Upbound (along with a few other smallish companies), whose long-term financial sustainability is also uncertain.

There are probably other technologies available, but with each of these there is the concern about future flexibility and lock-in, even if the lock-in results solely from engineering momentum. What is really needed is a framework that makes infrastructure deployment truly independent of the language or platform. One with the community support to incorporate newer technologies as they come along, and the internal support to maintain integrations with the most common players. That's where Nitric comes in. Unique in the space, it allows the long-term flexibility of multi-cloud/local deployment with well-known languages and constructs for developers. It works with conventional development languages, letting app development keep a high pace, while its Terraform (or Pulumi) output lets the Ops people keep track of the infra and security all in one place. In a lot of ways it's a solid answer to the uncertainty of where infrastructure as code is headed: flexibility to move wherever the industry goes without needing to refactor existing code.

A common concern teams raise about moving to a backend framework like Nitric is lock-in. While there is some truth to that, it's not really any different from most legacy cloud code that now lives in Terraform. And Nitric does offer the ability to emit conventional HCL as an output prior to deployment. Migrating away from Nitric primarily involves extracting the generated IaC (Terraform or Pulumi) and rebuilding CI/CD pipelines. The core application logic remains portable and doesn't require rewriting, so the effort is kept to a minimum.

While the first part of this article lays out the reasons a solution like Nitric is worth a look, I'd like to show a short demo of how a basic chat server application might look. It's more "hello world" than a fully fledged app, but it highlights the explicit nature of Terraform configurations, giving an effective comparison with Nitric's declarative style. The separate Dockerfile and chat-server.js files underline the multi-file nature and increased complexity compared to Nitric's unified approach.

Here is the code for Nitric:

from nitric.resources import websocket, kv
from nitric.application import Nitric
from nitric.context import WebsocketContext
import time

# Nitric handles resource creation and permissions implicitly.
# No separate infrastructure files are needed.
connections = kv("connections").allow("get", "set", "delete")
chat_ws = websocket("chat")

# Connection management
@chat_ws.on("connect")
async def connect_handler(ctx: WebsocketContext):
    # Store the connection ID in the key-value store.
    # Nitric manages IAM permissions for accessing the store.
    await connections.set(ctx.req.connection_id, {"connected_at": time.time()})
    print(f"Client connected: {ctx.req.connection_id}")

@chat_ws.on("disconnect")
async def disconnect_handler(ctx: WebsocketContext):
    # Remove the connection ID from the key-value store.
    await connections.delete(ctx.req.connection_id)
    print(f"Client disconnected: {ctx.req.connection_id}")

@chat_ws.on("message")
async def message_handler(ctx: WebsocketContext):
    message = ctx.req.json()  # Assuming JSON payloads
    if not message or "content" not in message:
        return  # Ignore invalid messages

    # Iterate through connected clients and send the message.
    # Nitric handles fan-out and message delivery.
    try:
        async for key in connections.keys():
            await chat_ws.send(key, {"sender": ctx.req.connection_id, "content": message["content"]})
    except Exception as e:
        print(f"Error broadcasting message: {e}")

Nitric.run()
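Both the Nitric and Terraform versions apply the same payload check before broadcasting: parse the raw message and drop anything without a "content" field. Pulled out as a standalone helper, the validation looks roughly like this (a hypothetical sketch for illustration; `parse_chat_message` is not part of either listing):

```python
import json


def parse_chat_message(raw: str):
    """Return the parsed message dict if it carries a 'content' field, else None.

    Hypothetical helper mirroring the validation done in the handlers above.
    """
    try:
        message = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return None
    if not isinstance(message, dict) or "content" not in message:
        return None
    return message


# Valid payloads pass through; malformed or incomplete ones are dropped.
print(parse_chat_message('{"content": "hi"}'))  # {'content': 'hi'}
print(parse_chat_message('not json'))           # None
```

Keeping the check in one small function makes it easy to reject malformed frames before any fan-out work happens, whichever framework is underneath.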

And here is the equivalent built around Terraform, starting with chat-server.js (illustrative; adapt to your preferred WebSocket library):

const WebSocket = require('ws') // Example using the 'ws' library
const { initializeApp, cert } = require('firebase-admin/app')
const { getFirestore, Timestamp } = require('firebase-admin/firestore')

// Initialize the Firebase Admin SDK (replace with your actual service account credentials).
// Retrieve the service account key JSON string from the environment variable.
const serviceAccountKey = process.env.FIRESTORE_SERVICE_ACCOUNT_KEY || '{}'
const serviceAccount = JSON.parse(serviceAccountKey)

initializeApp({
  credential: cert(serviceAccount),
})

const db = getFirestore()
const wss = new WebSocket.Server({ port: 8080 }) // Use the port exposed by Cloud Run

// Track live sockets by connection ID.
// (wss.clients is a Set, so a Map is needed for lookups by ID.)
const clients = new Map()

wss.on('connection', (ws) => {
  const connectionId = generateConnectionId() // Replace with your connection ID generation logic
  clients.set(connectionId, ws)

  // Store connection information in Firestore
  db.collection('connections')
    .doc(connectionId)
    .set({
      connected_at: Timestamp.now(),
      // ... other connection metadata
    })
    .then(() => {
      console.log(`Connection ${connectionId} stored in Firestore`)
      // Send a welcome message (or any initial data)
      ws.send(
        JSON.stringify({
          type: 'welcome',
          message: 'Welcome to the chat!',
        }),
      )
    })
    .catch((err) => {
      console.error(`Error storing connection ${connectionId}:`, err)
      ws.close() // Close the connection if storage fails
    })

  ws.on('message', (message) => {
    try {
      // Assume messages are JSON strings
      const parsedMessage = JSON.parse(message)
      if (!parsedMessage || !parsedMessage.content) {
        return // Ignore malformed messages
      }
      // Fan the message out to every stored connection
      db.collection('connections')
        .get()
        .then((querySnapshot) => {
          for (const doc of querySnapshot.docs) {
            const receiver = clients.get(doc.id)
            if (!receiver || receiver === ws) {
              continue // Skip the sender and stale connections
            }
            receiver.send(
              JSON.stringify({
                sender: connectionId,
                content: parsedMessage.content,
              }),
            )
          }
        })
    } catch (error) {
      console.error('Error handling message:', error)
      // Notify the client that delivery failed.
      ws.send(
        JSON.stringify({
          type: 'error',
          message: 'There was an error sending the message.',
        }),
      )
    }
  })

  ws.on('close', () => {
    clients.delete(connectionId)
    // Remove connection information from Firestore
    db.collection('connections')
      .doc(connectionId)
      .delete()
      .then(() => {
        console.log(`Connection ${connectionId} removed from Firestore`)
      })
      .catch((err) => {
        console.error(`Error removing connection ${connectionId}:`, err)
      })
  })

  ws.on('error', (error) => {
    console.error('WebSocket error:', error)
    // Handle WebSocket errors appropriately (e.g., logging, cleanup).
  })
})

function generateConnectionId() {
  // Placeholder - replace with your ID generation logic
  return Math.random().toString(36).substring(2, 15)
}
Then the Terraform configuration (main.tf):

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0" # Or your preferred version
    }
  }
}

# Configure the Google Cloud provider (replace with your project ID)
provider "google" {
  project = "your-gcp-project-id"
  region  = "us-central1" # Or your preferred region
}

# Service account the chat service runs as
resource "google_service_account" "chat_service" {
  account_id   = "chat-service"
  display_name = "Chat service account"
}

# Create a Cloud Run service for the chat application
resource "google_cloud_run_v2_service" "default" {
  name     = "chat-service"
  location = "us-central1" # Or your preferred region

  template {
    service_account = google_service_account.chat_service.email

    containers {
      image = "gcr.io/my-project/chat-service:latest" # Your container image
      # Set environment variables (secrets, configuration) here.
      # For larger numbers of variables use a separate file.
    }
  }
}

# Create the Firestore database (collections such as "connections" are
# created implicitly by the application when documents are written)
resource "google_firestore_database" "default" {
  name        = "(default)"
  location_id = "us-central1"
  type        = "FIRESTORE_NATIVE"
}

# Grant the service account the "Cloud Datastore User" role to access Firestore
# (Illustrative - adjust to the minimum permissions you actually need).
resource "google_project_iam_member" "cloudrun_firestore_access" {
  project = "your-gcp-project-id"
  role    = "roles/datastore.user"
  member  = "serviceAccount:${google_service_account.chat_service.email}"
}

And the Dockerfile:

# Use a suitable base image (e.g., Node.js, Python)
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# Document the port the WebSocket server listens on
EXPOSE 8080
# Replace with your application's entrypoint
CMD ["node", "chat-server.js"]

Hopefully this example makes the case for Nitric even clearer. The same functionality (connection management, message broadcasting, IAM permissions) is handled implicitly by the framework, reducing boilerplate and simplifying the developer experience, along with the number of files to maintain.

If there are any other questions, please reach out to the team and we can get you some answers. And of course we always love to chat!

Get the most out of Nitric

Ship your first app faster with Next-gen infrastructure automation

Explore the docs