Welcome to the final installment of our “Serverless Rust API on AWS” series.
In Part 1, we explored the ecosystem and considerations for building
serverless APIs with Rust on AWS.
Part 2 focused on local development, where we built a Poem API with a
basic CRUD interface for managing currencies using an in-memory repository.
Now, we’re ready to take our application to the cloud. In this post, we’ll:
Implement a DynamoDB repository to replace our in-memory storage.
Adapt our Poem API to work with AWS Lambda.
Use AWS CDK to provision and deploy our infrastructure.
Integrate AWS Cognito for API authentication.
By the end of this tutorial, you’ll have a fully functional, secure,
and scalable serverless API running on AWS.
We’ll see how the abstractions we built in Part 2 allow us to
easily swap out our storage layer, and how Poem’s Lambda integration
simplifies the process of moving our API to the cloud.
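Implementing the DynamoDB repository
The DynamoDB implementation relies on the official AWS SDK for Rust and serde_dynamo. A plausible set of additions to Cargo.toml looks like this (crate versions and the serde_dynamo feature flag are indicative, not prescriptive):

```toml
[dependencies]
aws-config = "1"
aws-sdk-dynamodb = "1"
serde_dynamo = { version = "4", features = ["aws-sdk-dynamodb+1"] }
```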
Armed with these dependencies, let’s add a new module in src/repository/dynamodb.rs
to implement our DynamoDB repository.
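Here is a minimal sketch of that module. The Repository trait, the Currency type, and the use of async_trait are assumed to match Part 2; the error type (anyhow here), the module paths, and the "code" key attribute are placeholders to adapt to your own definitions.

```rust
use std::collections::HashMap;

use anyhow::Result; // placeholder error type; Part 2's trait may use its own
use aws_sdk_dynamodb::{types::AttributeValue, Client};
use serde_dynamo::{from_item, to_item};

use crate::domain::Currency;       // assumed location of the Currency type
use crate::repository::Repository; // the trait defined in Part 2

pub struct DynamoDbRepository {
    client: Client,
    table_name: String,
}

impl DynamoDbRepository {
    /// Loads the default AWS configuration and creates a DynamoDB client.
    pub async fn new(table_name: String) -> Self {
        let config = aws_config::load_from_env().await;
        Self {
            client: Client::new(&config),
            table_name,
        }
    }
}

#[async_trait::async_trait]
impl Repository for DynamoDbRepository {
    async fn add_currency(&self, currency: Currency) -> Result<()> {
        // serde_dynamo converts the Currency into a map of DynamoDB attributes.
        let item: HashMap<String, AttributeValue> = to_item(currency)?;
        self.client
            .put_item()
            .table_name(&self.table_name)
            .set_item(Some(item))
            .send()
            .await?;
        Ok(())
    }

    async fn get_currency(&self, code: &str) -> Result<Option<Currency>> {
        let output = self
            .client
            .get_item()
            .table_name(&self.table_name)
            .key("code", AttributeValue::S(code.to_owned()))
            .send()
            .await?;
        // Deserialise the returned item (if any) back into a Currency.
        Ok(match output.item {
            Some(item) => Some(from_item(item)?),
            None => None,
        })
    }

    async fn delete_currency(&self, code: &str) -> Result<()> {
        self.client
            .delete_item()
            .table_name(&self.table_name)
            .key("code", AttributeValue::S(code.to_owned()))
            .send()
            .await?;
        Ok(())
    }
}
```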
The DynamoDbRepository struct encapsulates a DynamoDB client and the name of the table we’ll be using.
Its new method asynchronously initialises the repository by loading AWS configuration and creating a DynamoDB client.
We then implement the Repository trait for DynamoDbRepository, providing implementations for
add_currency, get_currency, and delete_currency.
These methods interact with DynamoDB using the AWS SDK, handling serialisation and deserialisation of our Currency type using serde_dynamo.
Finally, we expose DynamoDbRepository as a public type in the repository module,
making it available for use in other parts of our application.
In the next section, we’ll focus on adapting our application to run as an AWS Lambda function.
Running Poem on Lambda
To integrate Poem with Lambda, we’ll utilise the poem-lambda crate,
which should already be listed in your Cargo.toml dependencies.
Given the significant differences between the local development setup and the AWS Lambda environment,
we’ve opted to create two separate binary targets.
The Lambda-specific target configures structured JSON logging (as opposed to the pretty logger
used for local development) and always uses the DynamoDB repository as its backend.
Let’s take a look.
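Below is a minimal sketch of the Lambda entry point (for example src/bin/serverless.rs). The crate name currency_api, Settings::load, and api::build_route are placeholders standing in for the configuration and route-building code from Part 2.

```rust
// Hypothetical Lambda entry point; adjust the crate and helper names to your project.
use currency_api::{api, repository::DynamoDbRepository, settings::Settings};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Structured JSON logs are easier to query in CloudWatch than the pretty
    // formatter used locally (requires tracing-subscriber's "json" feature).
    tracing_subscriber::fmt().json().init();

    let settings = Settings::load()?;
    let repository = DynamoDbRepository::new(settings.table_name).await;
    let app = api::build_route(repository); // assumed helper returning a poem::Route

    // poem_lambda::run converts incoming Lambda events into Poem requests.
    Ok(poem_lambda::run(app).await?)
}
```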
Note the use of poem_lambda::run to execute the application. This function handles
Lambda events and converts them into Poem requests for our API to process.
To register this new binary target, update your Cargo.toml:
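Assuming the new entry point lives at src/bin/serverless.rs (the path is an assumption; the name must match the --bin flag used in the build step below), the addition looks roughly like this:

```toml
# Lambda-specific entry point; the local binary target from Part 2 stays untouched.
[[bin]]
name = "serverless"
path = "src/bin/serverless.rs"
```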
Lastly, implement the settings module to robustly load configuration
from environment variables:
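A minimal sketch, assuming TABLE_NAME is the only value the Lambda function needs:

```rust
// src/settings.rs - typed configuration loaded from the environment.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
pub struct Settings {
    /// Populated from the TABLE_NAME environment variable.
    pub table_name: String,
}

impl Settings {
    /// Parses the process environment into a strongly typed Settings struct.
    pub fn load() -> Result<Self, envy::Error> {
        envy::from_env()
    }
}
```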
Environment variables are a common way to pass configuration
to Lambda functions.
The envy crate parses environment variables directly into types that implement serde’s Deserialize,
which is more convenient and robust than reading each variable manually with std::env::var().
To deploy our Rust application on AWS Lambda, we need to package our binary
in a Lambda-compatible format.
The Cargo Lambda tool provides an efficient way to achieve this.
First, install Cargo Lambda by following the instructions on their official website.
Once it’s installed, you can build your Lambda-compatible package with the following command:
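With the binary target named serverless as above, the command is:

```sh
cargo lambda build --bin serverless --arm64
```

Adding the standard --release flag for an optimised build is a sensible default for deployments.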
This command does the following:
--bin serverless: Specifies the binary target to build (our Lambda-specific entry point).
--arm64: Builds for the ARM64 architecture.
While we’ve chosen ARM64 here, x86-64 is also supported.
The crucial point is to ensure that the build architecture matches the Lambda function’s platform,
which we’ll specify in our AWS CDK code later.
After a successful build, you should find the compiled binary at target/lambda/serverless/bootstrap.
This bootstrap file is the executable we’ll deploy to AWS Lambda.
It contains your Rust application compiled and packaged in a way that’s compatible with the Lambda runtime.
Deployment
Now that we’ve implemented our API, let’s deploy the infrastructure required to run it in AWS.
This includes a Lambda function to execute our code, an API Gateway to expose our API endpoints,
and a Cognito user pool for authentication.
AWS CDK
For infrastructure definition and deployment, we’ll leverage the
AWS Cloud Development Kit (CDK).
If you’re new to CDK, it’s a powerful framework that allows you to describe cloud
infrastructure using familiar programming languages like Python and TypeScript (but not Rust),
rather than domain-specific languages or JSON/YAML templates.
The benefits of using CDK include:
Using languages you already know, eliminating the need to learn a new domain-specific language.
Leveraging the full power and ecosystem of these programming languages.
Ability to test infrastructure stacks using standard software testing methodologies.
Particularly useful for organisations where developers manage their service infrastructure.
I typically use either TypeScript or Python for CDK stacks (depending on team preferences and existing codebase).
All things being equal, TypeScript often feels slightly nicer, as it’s the language CDK itself is written in.
I’ve chosen Python for this example.
This decision is based on my observation that many in the Rust community are familiar with Python.
However, the concepts presented here are easily transferable to other CDK-supported languages
like TypeScript, if that’s your preference.
Pyproject setup
Let’s set up our project structure and configuration for the infrastructure code:
Create an infrastructure folder with a stack subfolder in it.
Add an empty __init__.py in infrastructure/stack to make it a Python package.
Create a pyproject.toml file in the project root with the following content:
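A minimal sketch is shown below; the build backend (setuptools here), dependency versions, and tool settings are assumptions you can adapt to your preferred Python tooling:

```toml
[build-system]
requires = ["setuptools>=68"]
build-backend = "setuptools.build_meta"

[project]
name = "infrastructure"
version = "0.1.0"
description = "CDK stack for the serverless Rust API"
requires-python = ">=3.11"
dependencies = [
    "aws-cdk-lib>=2.100",
    "constructs>=10.0",
]

[project.optional-dependencies]
dev = [
    "mypy",
    "ruff",
]

# Make the stack package under infrastructure/ importable by the CDK app.
[tool.setuptools.packages.find]
where = ["infrastructure"]

[tool.mypy]
strict = true

[tool.ruff]
line-length = 100
```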
This configuration:
Specifies project metadata.
Defines dependencies for both main and development environments.
Sets up code quality tools (mypy for type checking and ruff for linting/formatting).
Ensures the stack package from the infrastructure folder is correctly included.
CDK Configuration
Create a cdk.json file in the project root to configure the CDK project:
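A minimal example is shown below; the app entry point is the important part, while the context block carries CDK feature flags (the two shown are representative, your generated set may differ):

```json
{
  "app": "python infrastructure/app.py",
  "context": {
    "@aws-cdk/core:checkSecretUsage": true,
    "@aws-cdk/aws-iam:minimizePolicies": true
  }
}
```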
This configuration:
Specifies infrastructure/app.py as the entry point for our CDK application.
Sets various CDK-specific options to optimise deployment and security.
With this setup complete, we’re ready to start defining our infrastructure stack.
In the next section, we’ll create the app.py file and begin implementing our CDK stack.
The CDK stack
Our CDK stack will consist of four main resources:
An HTTP API in API Gateway.
A Lambda function running our Rust code.
A Cognito user pool for API authentication.
The DynamoDB table used for persistence.
Let’s focus on the most interesting aspects of this setup, starting with the API Gateway.
API Gateway Configuration
Create a new file infrastructure/stack/api.py with the following content:
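A sketch of the construct is shown below. Depending on your aws-cdk-lib version, the apigatewayv2 modules may instead live in the separate *_alpha packages; construct ids and property names here are assumptions:

```python
from aws_cdk import aws_apigatewayv2 as apigwv2
from aws_cdk import aws_apigatewayv2_authorizers as authorizers
from aws_cdk import aws_apigatewayv2_integrations as integrations
from aws_cdk import aws_cognito as cognito
from aws_cdk import aws_lambda as lambda_
from constructs import Construct


class ServerlessApi(Construct):
    def __init__(
        self,
        scope: Construct,
        construct_id: str,
        *,
        handler: lambda_.IFunction,
        user_pool: cognito.IUserPool,
        user_pool_client: cognito.IUserPoolClient,
    ) -> None:
        super().__init__(scope, construct_id)

        http_api = apigwv2.HttpApi(self, "HttpApi")

        # Require a Cognito-issued JWT on every route.
        authorizer = authorizers.HttpUserPoolAuthorizer(
            "Authorizer", user_pool, user_pool_clients=[user_pool_client]
        )

        # Forward every request to the Lambda function; routing happens in Rust.
        http_api.add_routes(
            path="/{proxy+}",
            methods=[apigwv2.HttpMethod.ANY],
            integration=integrations.HttpLambdaIntegration("LambdaIntegration", handler),
            authorizer=authorizer,
        )

        self.http_api = http_api
```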
The most noteworthy part is how we integrate the Lambda function:
By using the /{proxy+} path, we’re configuring API Gateway to send all requests to our Lambda function,
regardless of the specific path. This allows our Rust code to handle routing internally.
In the next sections, we’ll see how to create the Lambda function and Cognito user pool
that this API construct depends on.
The stack construct with additional resources
The API construct we defined earlier is instantiated, together with the remaining resources,
in infrastructure/stack/stack.py.
Let’s examine this file and its key components:
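A condensed sketch of the stack is shown below; construct ids, the partition key, the IAM grant, and the stack output are assumptions:

```python
from pathlib import Path

from aws_cdk import CfnOutput, Stack
from aws_cdk import aws_cognito as cognito
from aws_cdk import aws_dynamodb as dynamodb
from aws_cdk import aws_lambda as lambda_
from constructs import Construct

from stack.api import ServerlessApi


class ServerlessRustStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # DynamoDB table used for persistence; the "code" partition key is an assumption.
        table = dynamodb.Table(
            self,
            "CurrencyTable",
            partition_key=dynamodb.Attribute(
                name="code", type=dynamodb.AttributeType.STRING
            ),
            billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
        )

        # Path to the precompiled Rust binary produced by cargo lambda build.
        code_path = Path(__file__).parents[2] / "target" / "lambda" / "serverless"

        # Lambda function hosting the Rust API.
        function = lambda_.Function(
            self,
            "ApiFunction",
            code=lambda_.Code.from_asset(str(code_path)),
            handler="does-not-matter",  # ignored by custom runtimes
            runtime=lambda_.Runtime.PROVIDED_AL2023,
            architecture=lambda_.Architecture.ARM_64,
            memory_size=256,
            environment={"TABLE_NAME": table.table_name},
        )
        table.grant_read_write_data(function)

        # Cognito user pool and client used to authenticate API requests.
        user_pool = cognito.UserPool(self, "UserPool")
        user_pool_client = user_pool.add_client(
            "UserPoolClient",
            auth_flows=cognito.AuthFlow(user_password=True),
        )

        # HTTP API in front of the Lambda function (construct defined in api.py).
        api = ServerlessApi(
            self,
            "Api",
            handler=function,
            user_pool=user_pool,
            user_pool_client=user_pool_client,
        )

        # Surface the generated endpoint URL in the deployment output.
        CfnOutput(self, "ApiEndpoint", value=api.http_api.api_endpoint)
```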
This stack creates the following resources:
A DynamoDB table.
A Lambda function to host our Rust code.
Cognito user pool and client.
The API Gateway (using our previously defined ServerlessApi construct).
Let’s focus on the most interesting part: the Lambda function creation.
Key points about this Lambda function configuration:
Code location: code_path points to the serverless directory under target/lambda.
This last path segment must match the binary target name defined in Cargo.toml.
Architecture: We’re using Architecture.ARM_64. While x86-64 is also supported, the architecture must match the platform for which the binary was built.
Runtime: We’re using the generic Amazon Linux 2023 runtime (Runtime.PROVIDED_AL2023). Older Amazon Linux versions are also compatible.
Memory: We’ve set the memory to 256 MB. Rust’s efficiency allows it to run well even with as little as 128 MB, but the ideal size depends on your specific use case.
Environment variables: We pass the DynamoDB table name as the TABLE_NAME environment variable, which is used in the settings module of our Rust code.
Handler: The handler parameter is set to “does-not-matter” because custom runtimes (like our Rust binary) don’t use this parameter.
The CDK code assumes that the Rust binary has been compiled beforehand.
While there are CDK construct libraries that handle the compilation step
during deployment, separating the build step from CDK deployments can be advantageous:
It simplifies the CDK code and deployment process.
It allows for more flexible build processes, especially in CI/CD pipelines.
It ensures that the exact same binary is used across different environments.
To deploy successfully, ensure that your build process compiles the Rust code
and places the binary in the correct location (target/lambda/serverless) before running the CDK deployment.
Deploying the stack
Now that we’ve fully defined our stack, let’s create the entrypoint for our CDK application.
We’ll add the app.py file that we previously referenced in cdk.json.
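A minimal app.py, assuming the stack class from the sketch above:

```python
import aws_cdk as cdk

from stack.stack import ServerlessRustStack

app = cdk.App()
ServerlessRustStack(app, "ServerlessRustApi")
app.synth()
```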
With our infrastructure code complete, we can now deploy our stack.
Here’s a step-by-step guide to ensure a successful deployment:
Compile Rust code: Ensure your Rust code is compiled for the Lambda environment. The binary should be in the target/lambda/serverless directory.
Set up AWS credentials: Make sure your AWS credentials are properly configured. You can do this by setting environment variables or using the AWS CLI’s configure command.
Deploy the stack: From the project root (where cdk.json lives), run the cdk deploy command shown after this list.
Review outputs: After deployment, CDK will display important information like the API endpoint URL. Make note of these outputs for future use.
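The deploy command itself is simply (if this is the first CDK deployment in the target account and region, run cdk bootstrap once beforehand):

```sh
cdk deploy
```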
Interacting with your serverless API
Now that your API is deployed, let’s explore how to interact with it.
The CDK stack output includes the API endpoint URL, which you’ll use to make requests to your API.
The API endpoint is automatically generated by AWS API Gateway.
If you need a custom domain, you can easily adjust the CDK stack.
I’ve added Cognito authentication to the stack so the API is not wide open.
This means that each request to the API must include an Authorization token in the headers.
In case you’re not familiar with Cognito, I’ve included some helper scripts in
utilities/ for reference.
These can assist you with creating a test user and retrieving a token.
Once you have the authentication token, you can start interacting with your API.
Here’s an example of how to create a new currency using a curl command:
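The exact route and payload depend on the API you built in Part 2, so treat the path and fields below as placeholders; $TOKEN stands for the Cognito token obtained with the helper scripts in utilities/:

```sh
curl -X POST "$API_ENDPOINT/currencies" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"code": "EUR", "name": "Euro"}'
```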
Make sure to replace $API_ENDPOINT with the actual endpoint URL provided in the CDK stack output.
Final words
Building a serverless Rust API with AWS CDK opens up a world of possibilities for creating
efficient, scalable, and cost-effective applications.
Throughout this guide, we’ve explored the entire process from setting up the development environment
to deploying and interacting with a fully functional API.
This series used DynamoDB as a convenient database to spin up on AWS.
Many services are better off using a relational database as their primary
storage. If you are curious about using Postgres in Rust,
check out the series about
End-to-end type safety with Remix and Rust.