In Nitric, a service is a deployable unit of code, typically a single container image, that can be deployed to a cloud provider. Services can run as serverless functions, as long-running containers, or potentially on VMs or other compute resources. All of Nitric's standard deployment providers deploy services as containers on serverless platforms by default.

In many ways, services are the core building block of Nitric applications: they are the unit of code that is deployed and run in the cloud. Services can be written in any language that can be compiled into a container image, and can be deployed to any cloud provider that Nitric supports. They're responsible for handling API requests, processing messages, and executing tasks, among other things. Most other resources in Nitric are designed to be declared by services, or to interact with them in some way.

An application can have a single service handling the entire application, or many services working together to provide a more complex application. Services can be written in different languages, even within the same application.

Service Deployment

1. System Context

Developers use Nitric to create services or functions within their application.

  • Application code is written in files that match the pattern(s) in the nitric.yaml config file.
  • The Nitric CLI builds a container image for each service and pushes it to a container registry.

Operations use default or overridden IaC (e.g. Terraform modules) to provision the necessary resources for their target cloud.

Example AWS Provider
  • AWS ECR (Elastic Container Registry) stores a container image for each Nitric service.
  • AWS Lambda runs containers based on the images from ECR.
  • AWS IAM manages roles and policies for secure access to AWS resources.
  • Docker or Podman is used to build and tag container images before pushing them to ECR.
[Diagram: the Developer's code is built into an image by Docker during `nitric up` and pushed to AWS ECR; Operations applies Terraform, which provides the image from ECR to AWS Lambda and manages permissions via AWS IAM.]
Example GCP Provider
  • Docker builds and tags the image, which is then pushed to Google Artifact Registry.
  • Google IAM ensures secure access, with appropriate permissions for the Cloud Run service and its service accounts.
  • Google Cloud Run runs containers based on the image pulled from Artifact Registry.
[Diagram: the Developer's code is built into an image by Docker during `nitric up` and pushed to Google Artifact Registry; Operations applies Terraform, which provides the image from the registry to Google Cloud Run and manages permissions via Google IAM.]

2. Container

General-purpose workers that handle tasks like processing queues, topics, or schedules. Services abstract the runtime’s ability to route tasks and events to application-defined logic.

API & Event-Driven Communication

  • APIs (Cloud API Gateway) — the service manages API calls and routes HTTP requests.
  • Queues (Cloud Queue Service) — the service queues and processes messages.
  • Topics (Cloud Pub/Sub) — the service broadcasts and distributes messages to subscribers.
  • WebSockets (WebSocket Gateway) — the service handles connections and facilitates real-time communication.
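As a sketch of this event-driven side, a service can subscribe to a topic with the Nitric SDK. The topic name `updates` below is illustrative, not from the source:

```typescript
import { topic } from '@nitric/sdk'

// Subscribe this service to a hypothetical 'updates' topic; the
// deployment provider maps the topic to Cloud Pub/Sub, SNS, etc.
topic('updates').subscribe(async (ctx) => {
  // The published message payload is available on the request
  console.log('received update:', ctx.req.json())
})
```

At deployment, Nitric wires the cloud's pub/sub service to invoke this handler, so the same code runs unchanged across providers.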

Data Storage & Management

  • KeyValue (Cloud KeyValue Store) — the service stores and retrieves key-value pairs.
  • Storage (Cloud Object Storage) — the service stores and retrieves files.
  • Secrets (Cloud Secret Manager) — the service stores and retrieves secrets securely.
  • SQL Databases (Relational Database) — the service executes SQL queries.
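For the storage side, a minimal sketch of key-value access from a service; the store name `profiles` and the keys are illustrative, and access must be requested with `allow`:

```typescript
import { kv } from '@nitric/sdk'

// Request read/write access to a hypothetical 'profiles' key-value store
const profiles = kv('profiles').allow('get', 'set')

export const saveAndLoadProfile = async () => {
  // Write a value, then read it back
  await profiles.set('user-1', { name: 'Ada' })
  const profile = await profiles.get('user-1')
  console.log(profile)
}
```

The `allow` call is what drives the least-privilege IAM policies described in the component section: only the requested permissions are granted to the deployed service.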

Task Execution & Scheduling

  • Schedules (Cloud Scheduler) — the service schedules and triggers periodic tasks.
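Scheduled work is declared in service code as well. A hedged sketch, where the schedule name and interval are illustrative:

```typescript
import { schedule } from '@nitric/sdk'

// Run this handler every 24 hours; the provider maps the schedule to a
// cloud scheduling service (e.g. EventBridge Scheduler, Cloud Scheduler)
schedule('nightly-report').every('24 hours', async (ctx) => {
  console.log('generating nightly report')
})
```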

3. Component

Service Module

  • Configures Terraform to handle the deployment and management of containerized services, abstracting away provider-specific details.
  • Dynamically creates and manages a container registry for storing service container images, ensuring secure and efficient access.
  • Automates authentication and tagging for container image pushes, supporting seamless integration with deployment pipelines.
  • Creates a role with least privilege permissions for executing the service, including necessary trust relationships and policies for interacting with other resources.
  • Configures containerized services with runtime parameters like environment variables, memory limits, and execution timeouts to optimize performance and scalability.
  • Optionally supports advanced networking configurations like VPC settings for secure and isolated deployments.
  • Abstracts the underlying infrastructure for running serverless or containerized services, enabling developers to focus on application logic while providing a consistent interface for operations teams.

4. Code

Developers write application code that implements handlers for the API, storage, websocket, topic, and schedule resources. This code is written in files that match the pattern(s) in the nitric.yaml file.

Nitric service configuration - nitric.yaml
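A minimal sketch of a nitric.yaml, in which the project name, match pattern, and start command are illustrative; the `match` glob is what selects the service entry-point files described above:

```yaml
name: example-project
services:
  - match: services/*.ts
    start: npm run dev:services $SERVICE_PATH
```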

HTTP Route Handler

import { api } from '@nitric/sdk'

const customerRoute = api('public').route('/customers')

customerRoute.get((ctx) => {
  // construct the response for the GET: /customers request...
  const responseBody = {}
  ctx.res.json(responseBody)
})

Bucket On Read/Write/Delete Handler

import { bucket } from '@nitric/sdk'

const assets = bucket('assets')

// Separately request delete access, for handlers that also need to delete files
const accessibleAssets = bucket('assets').allow('delete')

// The request will contain the name of the file `key` and the type of event `type`
assets.on('delete', '*', (ctx) => {
  console.log(`A file named ${ctx.req.key} was deleted`)
})

Operations will use or extend the Nitric infrastructure modules, which are available for both Terraform and Pulumi.

Last updated on Feb 15, 2025