Using actual code for IaC

5 April 2022

7 minute read

Tim Holm
Co-founder and CTO

Recently we relaunched our website and Node.js SDK for nitric. Here are the lessons we learnt along the way and the reasons we chose to relaunch our SDK.

The original approach

Initially, the nitric framework used a nitric.yaml file, allowing users to define their application's infrastructure separately from their code.

Developers would reference the resources in their nitric.yaml file by name in their application code.

Dogfooding our own tech


The initial design was a huge improvement on what we'd used to build cloud-native apps in the past. The tech was ready to go, so it was time to build with it and enjoy how much easier our hard work was going to make the future of application development.

Our new project started with a CRUD API, simple enough, right? Write some basic HTTP handlers, wire in some resources like buckets and collections, then define the API with a standard approach like OpenAPI. Easy... or so we thought.

Wiring up resources

As stated above, every project included a nitric.yaml file describing how a nitric project was to be deployed. The file included names for resources and the handlers for serverless functions:

name: my-project-name
functions:
  example:
    handler: functions/example.ts
collections:
  example-collection: {}
queues:
  example-queue: {}
buckets:
  example-bucket: {}
topics:
  example-topic: {}
apis:
  example-api: {}

The resources could then be referenced in code, using their name and our SDKs:

import { faas, storage, eventing, collections, queues } from '@nitric/sdk';

const topic = eventing().topic('example-topic');
const collection = collections().collection('example-collection');
const queue = queues().queue('example-queue');
const bucket = storage().bucket('example-bucket');

faas.http(async (ctx) => {
  // your logic here
});

Without going into a full example, this approach led to a few pain points:

  • It was error prone, especially susceptible to typos
  • Refactoring resources was challenging
  • Definitions were always duplicated: once in the YAML and again in the code
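
To make the first pain point concrete, here is a contrived TypeScript sketch (not actual nitric code) of why string-based lookups against a separately defined config file can only fail at runtime:

```typescript
// Contrived sketch: resources declared in a separate config file can only be
// referenced by string name, so a typo compiles fine and blows up at runtime.
const declaredResources = new Set(['example-bucket', 'example-queue']);

function lookupResource(name: string): string {
  if (!declaredResources.has(name)) {
    throw new Error(`resource "${name}" not found in nitric.yaml`);
  }
  return name;
}

const ok = lookupResource('example-bucket'); // resolves as expected

let failed = false;
try {
  lookupResource('example-buckett'); // typo: type-checks fine, fails at runtime
} catch {
  failed = true;
}
```

No compiler, linter, or IDE rename tool can catch the misspelling, because the link between code and config is just a string.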

On top of that, we had to define our APIs using OpenAPI specs.

Defining the API

The project started out defining a simple CRUD API with a few models to handle, not massive but not small either.

Functions in our application would be wired via an API gateway, defined in the spec using an operation level OpenAPI extension like so:

x-nitric-target:
  name: my-function
  type: function

The problem with these specs was that OpenAPI is quite verbose, even for simple scenarios. So, 700 lines of API spec later, we'd had enough. The spec was becoming unwieldy, error prone for even simple changes, and just plain hard to look at.

This isn't a criticism of OpenAPI, but of our approach. There are generally two schools of thought when it comes to building contract-driven APIs. You either:

  • write specs first and generate code from the spec; or
  • write code first and generate your spec from the code.

Our approach was the worst of both: hand-written specs, wired to hand-written functions, with no assistive generation for the server-side code in between.
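
For contrast, here is a minimal sketch of the second school, code-first, where the spec is generated from the route definitions instead of being hand-written. This is illustrative only and not tied to any particular framework:

```typescript
// Code-first sketch: routes are declared once in code, and the OpenAPI
// "paths" object is derived from them mechanically.
interface Route {
  method: 'get' | 'post';
  path: string;
  operationId: string;
}

const routes: Route[] = [
  { method: 'get', path: '/hello', operationId: 'hello' },
];

function toOpenApiPaths(rs: Route[]): Record<string, object> {
  const paths: Record<string, Record<string, object>> = {};
  for (const r of rs) {
    paths[r.path] = {
      ...paths[r.path],
      [r.method]: {
        operationId: r.operationId,
        responses: { '200': { description: 'ok' } },
      },
    };
  }
  return paths;
}

const spec = { openapi: '3.0.0', paths: toOpenApiPaths(routes) };
```

Because the routes are the single source of truth, the spec can never drift out of sync with the handlers, which is exactly the property our hand-written approach lacked.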

The irony of this realisation is that our core tech uses gRPC, which exemplifies the first of the above approaches...

This realisation led us down the path of revisiting the approach, ultimately revamping the nitric framework in the process.

A new approach, enter configuration as code

We decided to bring our configuration into our code, removing the issues caused by their separation. This differs from the way many CDK solutions work today, where application and infrastructure are both written in code but remain separate.

Instead, we wanted a framework that allows developers to expressively communicate the cloud infrastructure requirements of their application using common cloud concepts like topics, queues, storage etc.

It started from the idea that developers could simply declare and use cloud resources, directly within their applications:

import { api, bucket, topic } from '@nitric/sdk';

const exampleApi = api('example');
const exampleBucket = bucket('example').for('writing');
const exampleTopic = topic('example').for('publishing');

exampleApi.get('/example', async ({ res }) => {
  // your logic here...
});

It was immediately obvious this approach addressed most of the developer experience concerns we had with the original:

  • resources are defined in a single location
  • refactoring can be done with existing tools and IDEs
  • resources can be shared across functions using existing module and dependency systems

As an added bonus, we were now able to declare resources with intent, allowing us to implement least-privilege security practices, something that was proving to be a challenge with our first approach.
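
To illustrate the least-privilege point, here is a simplified sketch of how declaring resources together with their intended use lets a deploy step derive a minimal permission set. This is not the actual nitric implementation; the registry and `policies` helper below are hypothetical:

```typescript
// Hypothetical sketch: each resource declaration records its intended use,
// so a deploy step can emit least-privilege permissions rather than
// granting broad default access.
type Intent = 'reading' | 'writing' | 'deleting';

interface Declaration {
  kind: 'bucket' | 'topic';
  name: string;
  intents: Intent[];
}

const registry: Declaration[] = [];

// Declaring a bucket records it; `for` narrows the requested permissions.
function bucket(name: string) {
  return {
    for(...intents: Intent[]) {
      registry.push({ kind: 'bucket', name, intents });
      return { name };
    },
  };
}

// A deploy step turns the registry into concrete policy statements.
function policies(): string[] {
  return registry.flatMap((d) =>
    d.intents.map((i) => `${d.kind}:${d.name} -> allow ${i}`)
  );
}

const images = bucket('images').for('writing');
// policies() now contains exactly one statement: write access to "images".
```

Because the application only ever asks for what it uses, the generated cloud policies can be scoped down automatically.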

Comparing it with our previous approach

Prior to implementing configuration as code, we needed to define:

  • A nitric.yaml file, containing:
    • function definitions for the application, and
    • API definitions that pointed to OAI3 specs
  • An OAI3 api.yaml file that defined the API and pointed at the functions defined in our nitric.yaml
  • The code that defined the functions

This doesn't sound like much, but as this "Hello World" example shows, it can be a lot even for a basic use case:


nitric.yaml:

functions:
  hello: functions/hello.ts
apis:
  hello: api.yaml


api.yaml:

openapi: '3.0.0'
info:
  title: Hello World
  version: '1.0'
paths:
  /hello:
    get:
      summary: Say hello
      description: An endpoint that generates greetings.
      operationId: hello
      responses:
        '200':
          description: greeting response
      x-nitric-target:
        type: function
        name: hello


functions/hello.ts:

import { faas } from '@nitric/sdk';

faas.http(() => {
  return `hello world`;
});

24 lines just for Hello World. In a non-trivial app, the spec alone would more likely be 1000+ lines.

The same example with config as code


nitric.yaml:

handlers:
  - functions/*.ts


functions/hello.ts:

import { api } from '@nitric/sdk';

const mainApi = api('main');

mainApi.get('/hello', (ctx) => {
  ctx.res.body = 'Hello World';
});

That's it! The same app, in just 7 lines.

The team found that, for non-trivial applications, this approach also scales better than the previous style.

How it works

At first glance this might look like a language parsing problem, where software would need to read the user's code to figure out which resources were actually in use. However, nitric takes a different approach to abstracting the cloud, and we used this to our advantage.

The nitric server

When running in the cloud, each nitric function has a lightweight proxy in front of it that abstracts away the implementation concerns of the cloud it's running on. This server can be anything that conforms to the gRPC API spec, so we built a new resources API and embedded a "deploy-time" server into our CLI.

Here is what we ended up with (using our AWS provider as an example):

example deployment diagram

This applies to all the clouds and resources we support today.
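
The proxy idea can be sketched as follows. This is illustrative only: the real nitric server speaks gRPC, and the provider objects and `write` signature below are hypothetical stand-ins for the cloud-specific implementations:

```typescript
// Illustrative sketch of the proxy pattern: application code calls one
// common interface, and a provider chosen at deploy time maps each call
// onto the matching cloud service.
interface StorageProvider {
  write(bucket: string, key: string, data: string): string;
}

const awsProvider: StorageProvider = {
  // Stands in for a call to S3 in the AWS provider.
  write: (bucket, key) => `s3://${bucket}/${key}`,
};

const gcpProvider: StorageProvider = {
  // Stands in for a call to Cloud Storage in the GCP provider.
  write: (bucket, key) => `gs://${bucket}/${key}`,
};

function providerFor(cloud: 'aws' | 'gcp'): StorageProvider {
  return cloud === 'aws' ? awsProvider : gcpProvider;
}

// The same application call works against either cloud.
const objectUrl = providerFor('aws').write('images', 'cat.png', '...');
```

Because the application only ever talks to the common interface, swapping clouds means swapping the provider, not rewriting the application.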

Our latest Node.js SDK and nitric server are built on this approach and we're looking forward to broadening our language support as our community grows and requests come in.

Give it a try

See for yourself: try building something with nitric.