SAM templates your way

Marcin Sodkiewicz
6 min read · Jun 14, 2023


How & why you should start using your own SAM CookieCutter templates.

Motivation

I build a lot of small projects to play with AWS services. Apart from my day job, I create my own side projects, usually based on AWS Lambda. Some of them are written in Go and some in TypeScript. I usually started these projects with sam init and then adapted the generated code. The changes were often very similar, so the obvious next step was to automate the process. This article focuses on exactly that.

Why should you create your own template?

Every time you create a new project using SAM, there are many files that are not related to the project you have just started, and you have to manually replace “Hello World” with the name of your project (including directory 😿) in several places and make many other adjustments.

SAM templates provided by AWS are meant to be generic and meet the needs of a very broad audience. However, you may want to add your own quirks or project standards right from the start.

That’s the beauty of having your own SAM project template. You don’t have such a broad target. You can have your own conventions — like using ssm parameters, lambda layers or… anything else you want.

What did I want to have from the start?

I couldn’t find anything made by the community that met all my requirements. Maybe I missed some community repo, I don’t know. Still, I prefer to have my own template that I can adapt to my needs over time. For my private projects, I had requirements like:

  • GitHub Actions setup
  • A single deployment for a project with multiple Lambdas, using nested stacks
  • OpenTelemetry setup for AWS Lambda with centralized configuration
  • A Makefile for building, deploying, and locally invoking Lambdas
  • Default Lambda alarms
  • An explicit LogGroup resource
  • A single place for log-level configuration
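The Makefile from that list might start out as simple as this (a sketch; targets, flags, and the FUNCTION variable are illustrative):

```makefile
build:
	sam build

deploy: build
	sam deploy --no-confirm-changeset

# Usage: make invoke FUNCTION=MyFunction
invoke: build
	sam local invoke $(FUNCTION)
```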

Cookie Cutter Repo

There is a convention that your CookieCutter template should be in the root of your git repository. If you want multiple CookieCutter templates, you probably want to keep them in a single repo… I guess. At least I do. I chose this option and created a single repo for my CookieCutter templates using git submodules.

After that you can share your template with others, and they can build a project using sam init -l <LINK_TO_GITHUB_REPO>, or, for local development, sam init -l ../sam-templates/nodejs16.x/<template_name>, which is pretty handy, especially while you are still developing the template.
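Under the hood, sam init -l drives CookieCutter, so the values it prompts for come from the template’s cookiecutter.json. A minimal sketch (the keys and values below are illustrative, not from my actual template):

```json
{
    "project_name": "my-service",
    "runtime": "nodejs16.x"
}
```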

Creating resources in the root of the project

Many of my requirements imply having files in the root of the repository, such as the GitHub workflow definition, which by convention needs to be in the .github/workflows directory, the root CloudFormation template for nested stacks, or even the Makefile for the whole project.

By default this can’t be done in a CookieCutter template, because everything is generated inside the project directory. Fortunately, there are template generation hooks that can be used to move or modify files after generation. Such a hook might look like this:

#!/bin/bash
echo "Running post_gen_project.sh"

# Check if the generated project contains a .github folder
if [ -d ".github" ]; then
    if [ ! -d "../.github/workflows" ]; then
        echo "Creating folder ../.github/workflows"
        mkdir -p ../.github/workflows
        cp .github/workflows/* ../.github/workflows
        echo "GitHub Workflows moved successfully"
    else
        echo "Folder ../.github/workflows already exists"
    fi

    rm -rf .github
else
    echo "Folder not found!"
fi

Full hooks documentation can be found here.
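For context, hooks live in a hooks directory next to the templated project folder. A typical template layout (file names beyond the CookieCutter conventions are illustrative) looks like:

```
sam-templates/
└── nodejs16.x/
    └── <template_name>/
        ├── cookiecutter.json
        ├── hooks/
        │   └── post_gen_project.sh
        └── {{cookiecutter.project_name}}/
            ├── Makefile
            └── template.yaml
```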

Hooks can also be handy when you would like to add your nested stack to the root stack by injecting a piece of YAML into the CloudFormation template. It’s tricky, but it can be done using yq. I created the (admittedly ugly) script presented below, but the root template could just as well be loaded from S3 or anywhere else. Hooks also support Python, so you can implement any logic you need.

if [ ! -f "../template.yaml" ]; then
    echo "Creating rootTemplate.yaml"
    mv rootTemplate.yaml ../template.yaml
else
    echo "rootTemplate.yaml already exists"
    rm rootTemplate.yaml

    yaml_file="../template.yaml"

    projectResourceKey="{{cookiecutter.project_name | replace('-', '_')}}"
    newYamlResourceKey="Resources.$projectResourceKey"

    item_exists=$(yq ".Resources | has(\"$projectResourceKey\")" "$yaml_file")
    if [ "$item_exists" == "true" ]; then
        echo "Item $newYamlResourceKey already exists in the root template file. Skipping..."
    else
        temp_file=$(mktemp)
        yaml_object_to_add=$(cat <<EOF
Type: AWS::Serverless::Application
Properties:
  Location: ./{{cookiecutter.project_name}}/template.yaml
  Parameters:
    LogLevel: !Ref LogLevel
EOF
)
        # Add the new resource to the root template, editing it in place
        echo "$yaml_object_to_add" > "$temp_file"
        yq eval-all -i "select(fileIndex==0).$newYamlResourceKey = select(fileIndex==1) | select(fileIndex==0)" "$yaml_file" "$temp_file"

        echo "Item added successfully!"
    fi
fi
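After this hook runs for a project named, say, my-service (the name is illustrative), the root template ends up with an entry like this:

```yaml
Resources:
  my_service:                # project name with '-' replaced by '_'
    Type: AWS::Serverless::Application
    Properties:
      Location: ./my-service/template.yaml
      Parameters:
        LogLevel: !Ref LogLevel
```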

Other ideas? Maybe you would like to set up standard git hooks for your project? That’s a great place to add them as well! The sky is the limit.
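For instance, a post_gen_project.sh hook could install a git pre-commit hook into the enclosing repository. A minimal sketch, assuming the generated project sits inside an already-initialized git repo (the function name and the hook’s contents are illustrative):

```shell
#!/bin/bash
# Sketch: helper a post_gen_project.sh hook could call to install a git
# pre-commit hook into the repository at the given root directory.
install_precommit_hook() {
    local repo_root="$1"
    if [ ! -d "$repo_root/.git" ]; then
        echo "No git repository found at $repo_root"
        return 1
    fi
    mkdir -p "$repo_root/.git/hooks"
    # Illustrative check: refuse to commit if the SAM template fails validation
    cat > "$repo_root/.git/hooks/pre-commit" <<'EOF'
#!/bin/bash
sam validate --lint || exit 1
EOF
    chmod +x "$repo_root/.git/hooks/pre-commit"
    echo "pre-commit hook installed in $repo_root"
}

# A post_gen hook would typically target the parent directory:
# install_precommit_hook ..
```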

Rendering files with markers

Whole files
CookieCutter uses Jinja to render files and directories. This is really great, but in some cases you don’t want to render a file at all. In that case you can simply list the file in cookiecutter.json like this:

{
    ...
    "_copy_without_render": [
        "<path_to_file>"
    ]
}

and that’s the simple way. The trickier case is files where only part of the rendering should be blocked.

Partial rendering
Sometimes the Jinja markers conflict with markers used by other technologies, and you want to block rendering for only a small part of the file. An example of this is dynamic references in CloudFormation, such as:

Environment:
  Variables:
    LOG_LEVEL: !Ref LogLevel
    AWS_LAMBDA_EXEC_WRAPPER: /opt/otel-handler
    {% raw %}
    OPENTELEMETRY_COLLECTOR_CONFIG_FILE:
      !Sub
        - '{{resolve:ssm:${ConfigLocation}}}'
        - ConfigLocation:
            Fn::ImportValue: OTEL::CollectorConfig::S3Location
    {% endraw %}
    OPENTELEMETRY_EXTENSION_LOG_LEVEL: warn
    OTEL_SERVICE_NAME: {{ cookiecutter.project_name }}
    POWERTOOLS_SERVICE_NAME: {{ cookiecutter.project_name }}

Here I also use my private convention for importing the OTel Collector config file, with its path available under OTEL::CollectorConfig::S3Location defined in SSM.

Another example is variable injection inside GitHub Actions workflow definitions:

on:
  push:
    branches:
      - main

permissions:
  id-token: write
  contents: read

env:
  AWS_REGION: "eu-west-1"
  AWS_DEPLOY_ROLE: "<provide your role here>"

jobs:
  build-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Assume AWS role
        uses: aws-actions/configure-aws-credentials@v2
        with:{% raw %}
          aws-region: ${{ env.AWS_REGION }}
          role-to-assume: ${{ env.AWS_DEPLOY_ROLE }}{% endraw %}
          role-session-name: GitHub_to_AWS_via_FederatedOIDC
      - name: Setup SAM
        uses: aws-actions/setup-sam@v1
      - name: Build and deploy application
        run: make deploy

Beyond automation

At the moment, after creating a new project from scratch, I still have to do a few manual actions:

  1. Create a GitHub OIDC stack to deploy my new project. This includes my favourite task: crafting the least-privilege deploy role.
  2. Set the ARN of the newly created deployer role under AWS_DEPLOY_ROLE in the GitHub workflow definition.

A GitHub Actions setup guide can be found here. Long story short, you have to deploy a stack like the one presented below, which grants GitHub access to assume a role in your account. You will also have to add a Policies section with the permissions necessary to deploy your application.

Parameters:
  GitHubOrg:
    Description: Name of GitHub organization/user (case sensitive)
    Type: String
  RepositoryName:
    Description: Name of GitHub repository (case sensitive)
    Type: String
  OIDCProviderArn:
    Description: Arn for the GitHub OIDC Provider.
    Default: ""
    Type: String
  OIDCAudience:
    Description: Audience supplied to configure-aws-credentials.
    Default: "sts.amazonaws.com"
    Type: String

Conditions:
  CreateOIDCProvider: !Equals
    - !Ref OIDCProviderArn
    - ""

Resources:
  Role:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Action: sts:AssumeRoleWithWebIdentity
            Principal:
              Federated: !If
                - CreateOIDCProvider
                - !Ref GithubOidc
                - !Ref OIDCProviderArn
            Condition:
              StringEquals:
                token.actions.githubusercontent.com:aud: !Ref OIDCAudience
              StringLike:
                token.actions.githubusercontent.com:sub: !Sub repo:${GitHubOrg}/${RepositoryName}:*

  GithubOidc:
    Type: AWS::IAM::OIDCProvider
    Condition: CreateOIDCProvider
    Properties:
      Url: https://token.actions.githubusercontent.com
      ClientIdList:
        - sts.amazonaws.com
      ThumbprintList:
        - 6938fd4d98bab03faadb97b34396831e3780aea1

Outputs:
  Role:
    Value: !GetAtt Role.Arn
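The Policies section mentioned above sits on the Role resource. A sketch might look like this (the actions and resources are illustrative; scope them down to exactly what your application deploys):

```yaml
  Role:
    Type: AWS::IAM::Role
    Properties:
      # ...AssumeRolePolicyDocument as above...
      Policies:
        - PolicyName: deploy-app
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - cloudformation:*   # illustrative; narrow these down
                  - s3:GetObject
                  - s3:PutObject
                  - lambda:*
                  - iam:PassRole
                Resource: "*"          # illustrative; restrict per resource
```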

It’s crucial to add this section in your pipeline definition:

permissions:
  id-token: write
  contents: read

Summary

Having your own SAM template gives you the freedom and flexibility to get your projects up and running quickly. Unfortunately, it’s one more thing to manage and update. On the other hand, the no-brainer CI/CD setup with GitHub Actions, easy extension of my projects with additional Lambda apps, and OTel observability using the New Relic free tier seem to outweigh this drawback.

I had wanted to create my own CookieCutter templates for a long time, but I always put it off. I wasted so much time! Don’t make the same mistake, and have fun :)

Example blank repo: https://github.com/SodaDev/sam-templates-ts
