
Automate AWS Security Auditing with Cloudsplaining

Ryan Cartwright

11 min read

Cloud security audits are comprehensive assessments that should be conducted frequently to evaluate your deployed infrastructure's security measures, access controls, and the overall integrity of your cloud-based environment. An audit should aim to identify weaknesses, gaps, and non-compliance issues within your cloud infrastructure, and provide recommendations and remediation actions to resolve the vulnerabilities it finds. Weaknesses in cloud infrastructure most commonly derive from misconfigured identity and access management (IAM) policies. This kind of misconfiguration is known as an access-control vulnerability and is in direct conflict with the principle of least-privilege.

The principle of least-privilege refers to the idea that a user or resource should be given only the minimum permissions required to function. In cloud environments, least-privilege extends beyond human interaction: application privileges are also restricted, so that applications may only interact with specific cloud resources using permitted operations. By implementing least-privilege in your cloud environment you lower your attack surface, reducing threats such as data exfiltration, infrastructure modification, resource exposure, and privilege escalation.

For AWS, Cloudsplaining is a useful tool that identifies violations of least-privilege in AWS IAM policies. It's able to scan all the policies in your AWS account or individual policy files. The output from Cloudsplaining is an HTML report flagging risks and providing potential remediation actions. It is actively maintained by Salesforce and is provided as open source software.

As discussed, a secure cloud environment starts with correctly configured access control and adherence to the principle of least-privilege. Nitric simplifies this task, ensuring deployed resources are only accessed by the functions that require access. It achieves this by requiring functions to define exactly the level of access they need to specific resources, using plain terminology. This reduces the likelihood that access control will be misconfigured through confusion or mistake.

Additionally, keeping requests for access close to the code that performs the access ensures unused permissions are easy to identify and remove as an application changes over time.

import { bucket } from '@nitric/sdk'

// Create a bucket, where this function can store sensitive documents but not read them.
const writeOnlyBucket = bucket('sensitive-documents').for('writing')

// ❌ Fails as this function was not given read permissions
await writeOnlyBucket.file('super-sensitive.pdf').read()

// ✅ Succeeds as it is able to write
await writeOnlyBucket.file('super-sensitive.pdf').write()

Although Nitric gets us on the right track, it never hurts to be vigilant when it comes to cloud security. We can use Cloudsplaining to ensure we're adhering to the principle of least-privilege by performing an automated security audit.

Using Cloudsplaining

Using Cloudsplaining involves downloading your account's authorization details and then scanning them. Both steps are handled directly by Cloudsplaining, which uses your local AWS credentials in the same way Nitric does.

Start by installing Cloudsplaining. Installation is OS-dependent, so refer to the installation guide.

Cloudsplaining audits what is currently deployed in your AWS account. Therefore, you should perform a deployment before running Cloudsplaining.

Next, download the account authorization details, which include the role and user policies, using the command below. This provides Cloudsplaining with all the context it needs for an accurate security analysis. By default, Cloudsplaining writes the authorization details to a file named default.json.

cloudsplaining download

We'll scan the downloaded file using the scan command. When it completes, an HTML report is generated and opened in your browser.

cloudsplaining scan --input-file default.json

Automating using GitHub Actions

Now that we have an initial manually generated report it's a great time to perform any suggested remediation activities that are appropriate for your environment.

That said, we also want to ensure misconfiguration doesn't occur in the future. This can be achieved by automating Cloudsplaining scans of non-production deployments using GitHub Actions, allowing misconfigurations to be found before they're live.

We can automate this whole process within a GitHub Action workflow that deploys updates to a temporary environment, scans them, then tears the environment down.

Start by initialising your Git repository.

git init

Then create a new GitHub workflows directory in the root of your project.

mkdir -p .github/workflows

We'll then create our audit workflow file, calling it audit.yml.

touch .github/workflows/audit.yml

We can then create the outline for our GitHub Actions workflow file. If you are unsure of the basics of GitHub Actions syntax, you can find more about it here. This action will deploy to AWS with Nitric, run the cloudsplaining download and scan commands, and then destroy the deployed stack.

name: Nitric Cloudsplaining Audit
on:
  push:
    branches:
      - main
jobs:
  up:
    name: Deploy to AWS
    runs-on: ubuntu-latest
  audit:
    name: Run Cloudsplaining
    runs-on: ubuntu-latest
  down:
    name: Destroy AWS stack
    runs-on: ubuntu-latest
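Because audits are most valuable when run regularly, you could also add a scheduled trigger alongside the push trigger to catch drift between pushes. The cron expression below is just an example:

```yaml
on:
  push:
    branches:
      - main
  schedule:
    # Example: also run the audit every Monday at 02:00 UTC
    - cron: '0 2 * * 1'
```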

Deploying the AWS stack

We can use the Nitric GitHub action to deploy our application by filling in the up job. You'll notice in the env section that we need to configure a GitHub environment with the following secrets:

  • PULUMI_CONFIG_PASSPHRASE
  • PULUMI_ACCESS_TOKEN
  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY

up:
  name: Deploy to AWS
  runs-on: ubuntu-latest
  environment: cloudsplaining
  steps:
    - uses: actions/checkout@v4
    - uses: nitrictech/actions@v1
      with:
        command: up
        stack-name: aws
      env:
        PULUMI_CONFIG_PASSPHRASE: ${{ secrets.PULUMI_CONFIG_PASSPHRASE }}
        PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

Running Cloudsplaining as an action

We can then build out the Cloudsplaining job, starting by installing it. We'll use the setup-python action to make Python and pip available, then install Cloudsplaining with pip.

audit:
  steps:
    - uses: actions/setup-python@v4
    - run: pip3 install --user cloudsplaining

We then need to configure AWS credentials using the configure-aws-credentials action.

The configured credentials or Nitric stack should be for a non-production environment.

audit:
  steps:
    ...
    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: us-east-1

To finish off this job we use the download and scan commands, much as we ran them locally, but with the added --skip-open-report flag. Without it, the scan would block the CI pipeline trying to open a browser. We also add the upload-artifact action to make the HTML report accessible after the workflow has run.

audit:
  steps:
    ...
    - run: cloudsplaining download
    - run: cloudsplaining scan --input-file default.json --skip-open-report
    - uses: actions/upload-artifact@v3
      with:
        name: cloudsplaining-report
        path: iam-report-default.html
        retention-days: 7
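Optionally, a guard step before the upload can fail the job clearly when the scan produced no report. This is just a sketch; `test -f` exits non-zero when the file is absent, which fails the step:

```yaml
audit:
  steps:
    ...
    - name: Check the report was generated
      run: test -f iam-report-default.html
```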

Destroying the AWS stack

Finally, we will destroy the stack by using the Nitric GitHub action again.

down:
  name: Destroy AWS stack
  needs: audit
  runs-on: ubuntu-latest
  environment: cloudsplaining
  steps:
    - uses: actions/checkout@v4
    - uses: nitrictech/actions@v1
      with:
        command: down
        stack-name: aws
      env:
        PULUMI_CONFIG_PASSPHRASE: ${{ secrets.PULUMI_CONFIG_PASSPHRASE }}
        PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

Once it has all been written, the complete workflow file will look like the following.

name: Nitric Cloudsplaining Audit
on:
  push:
    branches:
      - main
jobs:
  up:
    name: Deploy to AWS
    runs-on: ubuntu-latest
    environment: cloudsplaining
    steps:
      - uses: actions/checkout@v4
      - uses: nitrictech/actions@v1
        with:
          command: up
          stack-name: aws
        env:
          PULUMI_CONFIG_PASSPHRASE: ${{ secrets.PULUMI_CONFIG_PASSPHRASE }}
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  audit:
    name: Run Cloudsplaining
    needs: up
    runs-on: ubuntu-latest
    environment: cloudsplaining
    steps:
      - uses: actions/setup-python@v4
      - run: pip3 install --user cloudsplaining
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - run: cloudsplaining download
      - run: cloudsplaining scan --input-file default.json --skip-open-report
      - uses: actions/upload-artifact@v3
        with:
          name: cloudsplaining-report
          path: iam-report-default.html
          retention-days: 7
  down:
    name: Destroy AWS stack
    needs: audit
    runs-on: ubuntu-latest
    environment: cloudsplaining
    steps:
      - uses: actions/checkout@v4
      - uses: nitrictech/actions@v1
        with:
          command: down
          stack-name: aws
        env:
          PULUMI_CONFIG_PASSPHRASE: ${{ secrets.PULUMI_CONFIG_PASSPHRASE }}
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

You can test that this works by pushing to the main branch of your repository on GitHub. Once the action has run, you'll be able to download the generated report and review any flagged security issues.

If you have any cloud security questions or any problems setting up the CI pipeline, you can come chat with us on Discord. If something was flagged in your report that you're unsure about, feel free to create an issue in the Nitric repository.
