Custom container runtimes

Nitric builds applications by identifying their entrypoints, which are typically defined as services in the nitric.yaml file. Each entrypoint in a Nitric app is built into its own container using Docker, then deployed to a cloud container runtime such as AWS Lambda, Google Cloud Run or Azure Container Apps.

The Nitric CLI decides how to build those containers based on the programming language used by the entrypoint. For example, if the entrypoint is a Python file it will be built using Nitric's Python dockerfile template. These dockerfile templates are designed with compatibility and ease of use in mind, which makes building applications convenient, but they may not include additional dependencies your code relies on or the optimizations ideal for your application.

If you need to customize the docker container build process to add dependencies, optimize container size, support a new language or for any other reason, you can create a custom dockerfile template to be used by some or all of the entrypoints (services) in your application.

Add a new custom runtime

Add a new custom runtime in the runtimes configuration.

To use the runtime, specify the runtime key on each service that should use it, as shown below.

nitric.yaml
name: custom-example
services:
  - match: services/*.ts
    runtime: 'custom-node' # specify custom runtime
    start: npm run dev:services $SERVICE_PATH
runtimes:
  custom-node:
    # All services that specify the 'custom-node' runtime will be built using this dockerfile
    dockerfile: ./docker/node.dockerfile
    args: {}

In this example we're specifying that any services matching the path services/*.ts will be built using the custom node.dockerfile template.

Create a dockerfile template

It's important to note that the custom dockerfile you create needs to act as a template. This can look a bit different to how you might have written dockerfiles in the past, since the same template file will be used for every service that matches the configuration. Instead of hard-coding the entrypoint, the template receives a HANDLER build argument containing the service's filename.

Here are some example dockerfiles:

FROM python:3.11-slim

ARG HANDLER

ENV HANDLER=${HANDLER}
ENV PYTHONUNBUFFERED=TRUE

RUN apt-get update -y && \
    apt-get install -y ca-certificates && \
    update-ca-certificates

RUN pip install --upgrade pip pipenv

# Copy either requirements.txt or Pipfile (the bracket globs prevent an error when one of the files is absent)
COPY requirements.tx[t] Pipfil[e] Pipfile.loc[k] ./

# Guarantee lock file if we have a Pipfile and no Pipfile.lock
RUN (stat Pipfile && pipenv lock) || echo "No Pipfile found"

# Output a requirements.txt file for final module install if there is a Pipfile.lock found
RUN (stat Pipfile.lock && pipenv requirements > requirements.txt) || echo "No Pipfile.lock found"

RUN pip install --no-cache-dir -r requirements.txt

COPY . .

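# Run the matched service file provided via the HANDLER build arg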
ENTRYPOINT python $HANDLER
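
The nitric.yaml example above references ./docker/node.dockerfile for the custom-node runtime. The following is a minimal sketch of what that template could look like, not Nitric's official Node.js template; it assumes a package.json and package-lock.json at the project root and that tsx is available as a project dependency to run TypeScript services directly.

docker/node.dockerfile
FROM node:22-alpine

ARG HANDLER
ENV HANDLER=${HANDLER}

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci

# Copy the rest of the application source
COPY . .

# HANDLER contains the matched service's filename, e.g. services/api.ts.
# tsx (assumed to be a project dependency) runs the TypeScript file directly.
ENTRYPOINT npx tsx $HANDLER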

Create an ignore file

Custom dockerfile templates also support co-located .dockerignore files. If your custom docker template is at the path ./docker/node.dockerfile, you can create an ignore file at ./docker/node.dockerfile.dockerignore.
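
For example, an ignore file for the node.dockerfile template above could look like the following sketch; the entries are illustrative and should be adjusted to suit your project.

docker/node.dockerfile.dockerignore
# Keep local dependencies, build output and VCS data out of the build context
node_modules
.nitric
.git
*.md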

Create a monorepo with custom runtimes

Nitric supports monorepos via the custom runtime feature, which allows you to change the build context of your docker build. To use a custom runtime in a monorepo, specify the runtime key per service definition as shown below.

Example for Turborepo

Turborepo is a monorepo tool for JavaScript and TypeScript that allows you to manage multiple packages in a single repository. In this example, we will use a custom runtime with a custom dockerfile to build a service within the monorepo.

root/backends/guestbook-app/nitric.yaml
name: guestbook-app
services:
  - match: services/*.ts
    runtime: turbo
    type: ''
    start: npm run dev:services $SERVICE_PATH
runtimes:
  turbo:
    dockerfile: ./turbo.dockerfile # the custom dockerfile
    context: ../../ # the context of the docker build
    args:
      TURBO_SCOPE: 'guestbook-api'
root/backends/guestbook-app/turbo.dockerfile
FROM node:alpine AS builder
ARG TURBO_SCOPE

# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
RUN apk update
# Set working directory
WORKDIR /app
RUN yarn global add turbo

# copy from root of the mono-repo
COPY . .
RUN turbo prune --scope=${TURBO_SCOPE} --docker

# Add lockfile and package.json's of isolated subworkspace
FROM node:alpine AS installer
ARG TURBO_SCOPE
ARG HANDLER
RUN apk add --no-cache libc6-compat
RUN apk update
WORKDIR /app
RUN yarn global add typescript @vercel/ncc turbo

# First install dependencies (as they change less often)
COPY .gitignore .gitignore
COPY --from=builder /app/out/json/ .
COPY --from=builder /app/out/yarn.lock ./yarn.lock
RUN yarn install --frozen-lockfile --production

# Build the project and its dependencies
COPY --from=builder /app/out/full/ .
COPY turbo.json turbo.json

RUN turbo run build --filter=${TURBO_SCOPE} -- ./${HANDLER}  -m --v8-cache -o lib/

FROM node:alpine AS runner
ARG TURBO_SCOPE
WORKDIR /app

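# Copy the bundled service output produced by the installer stage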
COPY --from=installer /app/backends/${TURBO_SCOPE}/lib .

ENTRYPOINT ["node", "index.js"]