Can AI Generate Functional Terraform?


Sure, LLMs can create Terraform HCL code, but generating deployable, functional Terraform stacks takes some extra steps.

Current large language models (LLMs) can generate syntactically correct Terraform HCL code, but can they generate deployable, functional Terraform stacks? Anyone who has tried will quickly notice a pattern. Let’s take a simple example: a small application with an Amazon S3 bucket and a Lambda function.

Iteration 1: AI Generates the Basics

To the prompt: "Generate Terraform to create an AWS S3 bucket and a Lambda function that processes uploaded objects", our AI’s response is:

resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"
}

resource "aws_lambda_function" "example" {
  function_name = "example_lambda"
  s3_bucket     = aws_s3_bucket.example.bucket
  s3_key        = "lambda_function.zip"
  handler       = "index.handler"
  runtime       = "python3.8"
  role          = aws_iam_role.lambda_exec.arn
}

resource "aws_iam_role" "lambda_exec" {
  name = "lambda_exec_role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Effect    = "Allow",
      Principal = { Service = "lambda.amazonaws.com" },
      Action    = "sts:AssumeRole"
    }]
  })
}

At first glance, the Terraform looks correct and has valid syntax, but on closer inspection, it is missing critical pieces.

  • Missing permissions: The Lambda function has no permission to read or write to the S3 bucket.
  • No event trigger: The bucket doesn’t notify the Lambda function when an object is uploaded.
  • No logging, no tags: Best practices like logging, monitoring and resource metadata are absent.

Iteration 2: Adding IAM Permissions

I refined the prompt to include identity and access management (IAM) permissions: "Ensure the Lambda function has the least privilege necessary to read and write to the S3 bucket." AI’s response:

resource "aws_iam_policy" "lambda_s3" {
  name = "lambda_s3_policy"
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Effect   = "Allow",
      Action   = ["s3:GetObject", "s3:PutObject"],
      Resource = "${aws_s3_bucket.example.arn}/*"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "lambda_attach" {
  role       = aws_iam_role.lambda_exec.name
  policy_arn = aws_iam_policy.lambda_s3.arn
}

Now the Lambda function has the minimum required permissions, but it’s still missing an event trigger. Using this approach, we can iterate indefinitely until we get something we’re happy with, and some models deliver better responses with clearer prompting. The key point, however, is that while the LLM produces the HCL, it is ultimately the developer’s skill and knowledge that determine the quality and accuracy of the final output. To guide the LLM to a satisfactory outcome, the developer needs a clear picture of the end design, including an understanding of AWS services, IAM and the finer points of configuring and integrating those components.
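For instance, a third iteration would still need to ask for the event wiring explicitly. In standard Terraform, that gap maps to two more resources: a permission allowing S3 to invoke the function, and a bucket notification. A sketch of roughly what that iteration should produce (resource names here are illustrative):

```hcl
# Allow S3 to invoke the function; without this, the notification fails silently
resource "aws_lambda_permission" "allow_bucket" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.example.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.example.arn
}

# Invoke the function whenever an object is created in the bucket
resource "aws_s3_bucket_notification" "uploads" {
  bucket = aws_s3_bucket.example.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.example.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.allow_bucket]
}
```

Note the explicit `depends_on`: the permission must exist before the notification is created, a subtlety that prompt-driven generation frequently misses.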

Trained on Snippets, Not Systems

I suspect most AI models aren’t trained on production-grade Terraform, since they primarily learn from publicly accessible content. Given that the majority of production Terraform projects are private, the available training data most likely consists of snippets and example code from docs, tutorials and open source projects.

Think about where most publicly available Terraform lives:

  • Docs and tutorials: Minimal or fragmented configurations designed to demonstrate concepts and isolated best practices, without broad context or complete examples.
  • Stack Overflow answers: Quick fixes that solve one-off issues, perhaps missing dependencies, security settings or broader context.
  • Open source projects: Useful, but inconsistent — popular commercial open source projects don’t often publish the Infrastructure as Code (IaC) for their hosted or paid solutions, limiting the availability of high-quality, complete IaC projects.

The models aren’t being trained on millions of complete, secure and scalable infrastructure repositories. Instead, they’re assembling fragmented snippets from available examples. That’s why I believe AI-generated Terraform often lacks:

  • Consistent structure: No standardization across modules.
  • Dependency awareness: Resources aren’t properly linked.
  • Security best practices: AI doesn’t enforce least privilege IAM roles or networking rules.
  • Scalability and operations focus: Logging, monitoring and autoscaling are often ignored.
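As a small illustration of the kind of baseline a complete repository encodes but an isolated snippet rarely shows, the AWS provider can enforce tags across every resource it creates, so no individual module can forget them (a sketch; region and tag values are placeholders):

```hcl
provider "aws" {
  region = "us-east-1"

  # Applied to every taggable resource this provider creates,
  # guaranteeing consistent resource metadata across the stack.
  default_tags {
    tags = {
      Environment = "production"
      ManagedBy   = "terraform"
    }
  }
}
```

A model assembling fragments from tutorials will almost never emit project-wide conventions like this, because the snippets it learned from never needed them.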

We Can’t Expect AI to Solve Undefined Problems

If you ask AI to generate Terraform without explaining what your infrastructure needs to do, it will give you highly generic versions of what you’ve asked for. That’s because it won’t be aware of your security requirements, scaling strategy or how services need to interact.

It’s not that the responses are wrong; it’s that there are too many unknowns, so the model is effectively guessing to fill in the blanks. And guessing doesn’t work for real applications. Instead, AI should be used by skilled operators who treat it as an assistant working within a structured framework.

A Better Approach

  • Use AI to assist with writing your application code.
  • Define your application’s infrastructure needs instead of expecting AI to “figure it out.”
  • Use AI to assist with Terraform, but provide context to enforce security, dependencies and best practices.
  • Ensure everything is production-ready with static analysis tools.

This approach forces AI to work within a structure rather than generating infrastructure blindly. However, there is still one major problem: Defining your infrastructure requirements takes time and a comprehensive understanding of how your application should work in the cloud, and that knowledge is often held by your architects and site reliability engineers (SREs).

Introducing Context and Expertise

Most developers aren’t Terraform experts, and most AI models don’t understand full architectures. That’s where Nitric comes in. It doesn’t just generate infrastructure, it introduces application context and is opinionated about how your application should be provisioned.

Example: A Function with a Bucket

Let’s revisit our earlier example of a function that writes to an S3 bucket.

Without Nitric:

AI may generate something that looks like working Terraform:

resource "aws_s3_bucket" "notes" {
  bucket = "notes-bucket"
}

resource "aws_lambda_function" "handler" {
  function_name    = "notes_handler"
  runtime          = "python3.9"
  handler          = "handler.lambda_handler"
  role             = aws_iam_role.lambda_exec.arn
  source_code_hash = filebase64sha256("lambda.zip")
}

Issues

  • The Lambda function lacks permissions to write to the bucket.
  • There is no event trigger to invoke the function.
  • Logging and monitoring are missing.
  • No IAM policy is defined at all, let alone one that follows least privilege, which widens the attack surface.

With Nitric:

Nitric allows you to define your application logic, and it generates the infrastructure that matches your application’s requirements. It does this by creating a resource specification (a graph of your application’s requirements) and mapping it to predefined, production-ready Terraform modules. These modules can be overridden or extended with your own, which you can use AI to help write if you choose.

Application Code (Python)

from nitric.application import Nitric
from nitric.resources import api, bucket
from nitric.context import HttpContext

main = api("main")
notes = bucket("notes").allow("read", "write")

@main.get("/notes/:title")
async def get_note(ctx: HttpContext):
    title = ctx.req.params["title"]
    ctx.res.body = await notes.file(title).read()

@main.post("/notes/:title")
async def post_note(ctx: HttpContext):
    title = ctx.req.params["title"]
    await notes.file(title).write(ctx.req.body.encode("utf-8"))

Nitric.run()

Generated Terraform Output for Lambda and S3 Bucket

resource "aws_s3_bucket" "notes" {
  bucket = "${var.bucket_name}-${random_id.bucket_id.hex}"
}

resource "aws_lambda_function" "handler" {
  function_name = "${var.service_name}-${var.stack_id}"
  role          = aws_iam_role.role.arn
  image_uri     = "${aws_ecr_repository.repo.repository_url}@${docker_registry_image.push.sha256_digest}"
  package_type  = "Image"
  timeout       = var.timeout
  memory_size   = var.memory

  ephemeral_storage {
    size = var.ephemeral_storage
  }
}

resource "aws_lambda_permission" "allow_bucket" {
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.handler.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.notes.arn
}

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.notes.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.handler.arn
    events              = ["s3:ObjectCreated:*"]
  }
}

  • IAM permissions are correctly set.
  • A proper event trigger is wired up.
  • Logging and monitoring are automatically configured.
  • Security and tagging policies are enforced.
  • Least privilege is guaranteed because the Terraform is always in sync with app requirements.

Add Context to AI-Written IaC

AI can be a powerful tool for generating infrastructure code, but it needs the right context to create something truly useful. Your security model, scaling needs and operational constraints aren’t things AI inherently understands. Basically, it needs guidance.

Nitric bridges this gap by automatically generating an infrastructure specification based on your application code. This ensures that AI-generated Terraform isn’t just functional but follows correct dependencies, security policies and operational best practices.

Built as an open source project for the community, Nitric makes cloud development more predictable and secure. Join us in shaping the future — contribute to Nitric today!
