This is the second part in the “Serverless Rust API on AWS” series. Part 1 gave an overview of the Rust ecosystem for serverless development and general considerations for picking the tools and libraries used in this post. We discussed why Rust is an excellent choice for building efficient, reliable APIs. If you want to dive straight into coding, feel free to skip the first part.
In this post, we’ll build a Poem API with a basic CRUD interface for managing currencies. For now, the focus will be on local development, and we’ll avoid using any AWS components. Part 3 will demonstrate the ease with which the service we’re implementing here can be migrated to AWS Lambda.
The final code for Part 2 is hosted on GitHub.
Initial setup
Starting with an empty repository, let’s add the following Cargo.toml.
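Something along these lines works; the package name and crate versions below are assumptions, so check for the latest releases:

```toml
[package]
name = "currency-api"
version = "0.1.0"
edition = "2021"

[dependencies]
anyhow = "1"
async-trait = "0.1"
envy = "0.4"
poem = "3"
poem-openapi = { version = "5", features = ["swagger-ui"] }
serde = { version = "1", features = ["derive"] }
thiserror = "1"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
tracing = "0.1"
tracing-subscriber = "0.3"

[dev-dependencies]
poem = { version = "3", features = ["test"] }
serde_json = "1"

[[bin]]
name = "local"
path = "src/bin/local.rs"
```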
The project will consist of a library, which contains the bulk of the code, and eventually two entrypoints, src/bin/local.rs and src/bin/serverless.rs, for local development and Lambda respectively. Since we only need the local implementation at this stage, ignore serverless.rs for now.
Create a blank src/lib.rs file and a dummy main function in src/bin/local.rs.
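The stub can be as small as this:

```rust
// src/bin/local.rs -- placeholder until we wire up the real server
fn main() {}
```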
With the three files in place, the project should compile fine when you run cargo build --bin local.
A note on dependencies
In case you aren’t familiar with some of the crates listed in Cargo.toml, here is a quick description of each.
Async runtime and tracing
Tokio probably requires no introduction; we’ll use it as our async runtime. tracing is part of the Tokio project. Along with tracing-subscriber, we’ll use it for structured logging and instrumentation.
Async traits
The async-trait crate provides async support for traits. Even though support for async functions in traits has been available in stable Rust since late 2023, it has limitations: async-trait is still required for dyn traits with async functions.
Web framework
Poem is our web framework of choice. It provides Lambda integration through the poem-lambda crate and OpenAPI support through the poem-openapi crate. We’ll enable the test feature in development, which provides a test client we can use to implement API tests.
Error handling
The anyhow and thiserror crates are used to make error handling more convenient. If you aren’t already familiar with these crates, I recommend checking them out, as they are pretty pervasive in the Rust ecosystem.
Serialisation
A perhaps even more pervasive crate is serde. We’ll use it for type-safe serialisation and deserialisation.
Configuration
envy is a crate for deserialising environment variables into serde types. We’ll use it for configuration passed into Lambda as environment variables, as well as configuration for local development.
The repository
Before we start working on the API layer, let’s implement a repository suitable for storing and retrieving currencies.
We’ll use DynamoDB for persistence in the final implementation. However, for local development and automated testing, decoupling from a real AWS environment can be beneficial. We have two options to do so:
- A local DynamoDB database using the official Docker image.
- An abstract repository with an implementation that replaces real database calls.
For local development, I prefer keeping things as close to the real solution as possible. DynamoDB tables are practically free at low traffic levels, so it’s often a good idea to create real tables on DynamoDB for development purposes. Using a local DynamoDB instance is an equally valid approach, offering a balance between realism and convenience.
For automated testing, I recommend going with an “offline” option. A local DynamoDB instance works best for integration tests and other deeper styles of testing, which I find the most valuable. However, there’s also value in shallower API tests where we don’t care about the persistence layer. In such cases, an in-memory repository implementation may be more appropriate, offering faster test execution and easier setup.
For this demo project, we only need a very basic key-value store for currencies. Creating an abstraction layer for the repository along with an in-memory implementation is trivial and provides flexibility. For the sake of complete decoupling from AWS and to demonstrate the power of abstraction, let’s build a version with in-memory storage. This approach allows us to easily switch between different storage implementations (like DynamoDB or in-memory) without changing our application logic.
The abstract repository
Let’s create a repository module (src/repository.rs) and register it as a public module in src/lib.rs. Create two sub-modules inside src/repository:
| Module | Path | Purpose |
| --- | --- | --- |
| base | src/repository/base.rs | The base trait and the Currency struct. |
| memory | src/repository/memory.rs | The in-memory implementation of the repository. |
Here is the base repository module:
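A sketch of what this module can look like (the method names and the exact Currency fields are my assumptions):

```rust
// src/repository/base.rs
use std::sync::Arc;

use async_trait::async_trait;
use serde::{Deserialize, Serialize};

use crate::error::Result;

/// A currency as stored in the repository.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct Currency {
    /// ISO 4217 code, e.g. "GBP".
    pub code: String,
    /// Human-readable name, e.g. "British Pound".
    pub name: String,
    /// Number of decimal places used for minor units.
    pub decimal_places: u32,
}

/// The abstract interface every storage backend must implement.
#[async_trait]
pub trait Repository: Send + Sync {
    async fn put_currency(&self, currency: Currency) -> Result<()>;
    async fn get_currency(&self, code: &str) -> Result<Currency>;
    async fn list_currencies(&self) -> Result<Vec<Currency>>;
    async fn delete_currency(&self, code: &str) -> Result<()>;
}

/// A cloneable, `Send + Sync` handle we can hand to Poem.
pub type SharedRepository = Arc<dyn Repository>;
```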
The Repository trait defines the API needed to manage currencies. The Currency struct encapsulates the information we store about a particular currency. For example, British Pounds may be stored as follows.
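As an illustrative fragment (the field names are assumptions of this sketch):

```rust
let gbp = Currency {
    code: "GBP".to_string(),
    name: "British Pound".to_string(),
    decimal_places: 2,
};
```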
Note that we have also added a SharedRepository = Arc<dyn Repository> type alias. This is a cloneable, Sync and Send type which we can eventually pass to Poem, with either the DynamoDB or the in-memory repository contained inside it.
The in-memory repository
The in-memory repository uses a HashMap wrapped in a RwLock for thread-safe concurrent access:
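A sketch of the implementation, assuming the Repository trait and an Error::NotFound variant along the lines described in this series:

```rust
// src/repository/memory.rs
use std::collections::HashMap;
use std::sync::RwLock;

use async_trait::async_trait;

use super::base::{Currency, Repository};
use crate::error::{Error, Result};

/// An in-memory repository keyed by currency code.
#[derive(Default)]
pub struct MemoryRepository {
    currencies: RwLock<HashMap<String, Currency>>,
}

#[async_trait]
impl Repository for MemoryRepository {
    async fn put_currency(&self, currency: Currency) -> Result<()> {
        self.currencies
            .write()
            .expect("lock poisoned")
            .insert(currency.code.clone(), currency);
        Ok(())
    }

    async fn get_currency(&self, code: &str) -> Result<Currency> {
        self.currencies
            .read()
            .expect("lock poisoned")
            .get(code)
            .cloned()
            .ok_or_else(|| Error::NotFound(code.to_string()))
    }

    async fn list_currencies(&self) -> Result<Vec<Currency>> {
        let currencies = self.currencies.read().expect("lock poisoned");
        Ok(currencies.values().cloned().collect())
    }

    async fn delete_currency(&self, code: &str) -> Result<()> {
        self.currencies
            .write()
            .expect("lock poisoned")
            .remove(code);
        Ok(())
    }
}
```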
For the use paths to be correct, make sure your src/repository.rs re-exposes the required types:
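For example:

```rust
// src/repository.rs
pub mod base;
pub mod memory;

pub use base::{Currency, Repository, SharedRepository};
pub use memory::MemoryRepository;
```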
Error handling
You may be wondering what the Result and Error types we’re returning from these functions are. We use thiserror to implement a custom error type in src/error.rs:
Register this new error module in lib.rs as a private module.
Even though our use-case is trivial, this error type provides a foundation for more complex error scenarios as the application grows.
At this point, everything should compile again.
The OpenAPI service
Now that we have a repository with functionality to store currencies, let’s define the OpenAPI service. We’ll implement an api module with two sub-modules: payload for the OpenAPI types and endpoints for the API endpoints.
The OpenAPI types
We’ll create the OpenAPI type definitions using poem-openapi in src/api/payload.rs.
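A sketch of the payload module (the response variants are my assumptions, and poem-openapi details can vary between versions):

```rust
// src/api/payload.rs
use poem_openapi::{payload::PlainText, ApiResponse, Object};

use crate::error::Error;
use crate::repository;

/// The API-facing representation of a currency.
#[derive(Clone, Debug, Object)]
pub struct Currency {
    pub code: String,
    pub name: String,
    pub decimal_places: u32,
}

impl From<repository::Currency> for Currency {
    fn from(currency: repository::Currency) -> Self {
        Self {
            code: currency.code,
            name: currency.name,
            decimal_places: currency.decimal_places,
        }
    }
}

impl From<Currency> for repository::Currency {
    fn from(currency: Currency) -> Self {
        Self {
            code: currency.code,
            name: currency.name,
            decimal_places: currency.decimal_places,
        }
    }
}

/// HTTP error responses derived from our internal error type.
#[derive(Debug, ApiResponse)]
pub enum ErrorResponse {
    #[oai(status = 404)]
    NotFound(PlainText<String>),
    #[oai(status = 500)]
    InternalServerError(PlainText<String>),
}

impl From<Error> for ErrorResponse {
    fn from(error: Error) -> Self {
        match error {
            Error::NotFound(message) => Self::NotFound(PlainText(message)),
            Error::Internal(error) => Self::InternalServerError(PlainText(error.to_string())),
        }
    }
}
```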
While there’s some duplication in defining Currency again for OpenAPI, maintaining a clear separation between persistence and API layer types is beneficial. As applications grow, differences often emerge between these representations, so it’s best to separate them early. Rust’s type system ensures these types remain compatible.
We’ve also used a From implementation to map our custom error variants to HTTP responses with appropriate status codes and messages.
The endpoints
Now let’s add the API endpoints that interact with the repository in src/api/endpoints.rs:
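A sketch of the endpoints; treat the routes and signatures as illustrative, since poem-openapi details shift slightly between versions:

```rust
// src/api/endpoints.rs
use poem::web::Data;
use poem_openapi::{param::Path, payload::Json, OpenApi};

use super::payload::{Currency, ErrorResponse};
use crate::repository::SharedRepository;

pub struct CurrencyApi;

/// Handler-level result: errors render via the ErrorResponse mapping.
type Result<T> = std::result::Result<T, ErrorResponse>;

#[OpenApi]
impl CurrencyApi {
    /// Create or update a currency.
    #[oai(path = "/currencies", method = "put")]
    async fn put_currency(
        &self,
        repository: Data<&SharedRepository>,
        currency: Json<Currency>,
    ) -> Result<Json<Currency>> {
        let currency = currency.0;
        repository.put_currency(currency.clone().into()).await?;
        Ok(Json(currency))
    }

    /// Fetch a single currency by its code.
    #[oai(path = "/currencies/:code", method = "get")]
    async fn get_currency(
        &self,
        repository: Data<&SharedRepository>,
        code: Path<String>,
    ) -> Result<Json<Currency>> {
        let currency = repository.get_currency(&code.0).await?;
        Ok(Json(currency.into()))
    }

    /// Delete a currency by its code.
    #[oai(path = "/currencies/:code", method = "delete")]
    async fn delete_currency(
        &self,
        repository: Data<&SharedRepository>,
        code: Path<String>,
    ) -> Result<()> {
        repository.delete_currency(&code.0).await?;
        Ok(())
    }
}
```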
This layer is a thin wrapper around the repository.
The function arguments use Poem’s extractors (Data, Json, and Path) to access request information and context in a type-safe manner:
- Data passes arbitrary objects (like our repository) to handlers.
- Json automatically deserialises JSON payloads into Serde types.
- Path extracts and parses path parameters.
The service
To build a service from CurrencyApi, let’s add a new function in src/api.rs to register an OpenAPI service at the /api route:
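A sketch of that function (the service title and version string are placeholders):

```rust
// src/api.rs
pub mod endpoints;
pub mod payload;

use poem::{Endpoint, EndpointExt, Route};
use poem_openapi::OpenApiService;

use crate::repository::SharedRepository;

/// Builds the application: the OpenAPI service under /api,
/// Swagger UI at /, and the repository injected as shared data.
pub fn build_app(repository: SharedRepository) -> impl Endpoint {
    let api_service = OpenApiService::new(endpoints::CurrencyApi, "Currencies", "1.0")
        .server("http://localhost:3000/api");
    let ui = api_service.swagger_ui();

    Route::new()
        .nest("/api", api_service)
        .nest("/", ui)
        .data(repository)
}
```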
Poem provides solutions for displaying the OpenAPI spec using Swagger, Redoc, or RapiDoc UI. In this example, we set up the application to serve Swagger UI at the / path.
To demonstrate the test utilities provided by Poem, let’s throw in a few unit tests as well:
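Something along these lines, using Poem’s TestClient (the exact assertion helpers may differ between poem versions):

```rust
// Appended to src/api.rs
#[cfg(test)]
mod tests {
    use std::sync::Arc;

    use poem::{http::StatusCode, test::TestClient};

    use super::build_app;
    use crate::repository::MemoryRepository;

    #[tokio::test]
    async fn put_then_get_currency() {
        let app = build_app(Arc::new(MemoryRepository::default()));
        let client = TestClient::new(app);

        let body = serde_json::json!({
            "code": "GBP",
            "name": "British Pound",
            "decimal_places": 2
        });

        let response = client.put("/api/currencies").body_json(&body).send().await;
        response.assert_status_is_ok();

        let response = client.get("/api/currencies/GBP").send().await;
        response.assert_json(body).await;
    }

    #[tokio::test]
    async fn get_unknown_currency_returns_404() {
        let app = build_app(Arc::new(MemoryRepository::default()));
        let client = TestClient::new(app);

        let response = client.get("/api/currencies/XXX").send().await;
        response.assert_status(StatusCode::NOT_FOUND);
    }
}
```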
These tests use the in-memory repository for persistence, demonstrating how to test the API endpoints effectively.
Running the service locally
The final step before we can run the service is a main function that builds and starts the application. If you followed the initial setup steps, you should already have an entry in Cargo.toml for the local entrypoint:
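Along these lines:

```toml
[[bin]]
name = "local"
path = "src/bin/local.rs"
```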
This will contain the main function which sets up the service for local development.
Let’s add the following content to src/bin/local.rs:
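A sketch, assuming the library crate is named currency-api and exposes the api and repository modules described above:

```rust
// src/bin/local.rs
use std::sync::Arc;

use poem::{listener::TcpListener, Server};

use currency_api::api::build_app;
use currency_api::repository::MemoryRepository;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Pretty-printed logs are easier to read during development.
    tracing_subscriber::fmt().pretty().init();

    // In-memory persistence: data is lost when the process stops.
    let repository = Arc::new(MemoryRepository::default());

    let app = build_app(repository);

    // Listen on all interfaces on port 3000.
    Server::new(TcpListener::bind("0.0.0.0:3000")).run(app).await
}
```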
This setup does the following:
- Initialises a pretty-printed logging system for better readability during development.
- Creates an in-memory repository for storing currency data.
- Builds the application using the build_app function we defined earlier.
- Starts a server that listens on all interfaces (0.0.0.0) on port 3000.
You can now run the application using Cargo:
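```shell
cargo run --bin local
```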
This command will compile and start your local development server. You should see output indicating that the server has started, and it will be accessible at http://localhost:3000.
To test the API
Open a web browser and navigate to http://localhost:3000 to access the Swagger UI. Use the Swagger UI to interact with your API endpoints, or use tools like curl or Postman to send HTTP requests to http://localhost:3000/api/currencies. Remember that this local setup uses an in-memory repository, so any data you add will be lost when you stop the server.
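For example, with curl (assuming the PUT route shape used in this series):

```shell
# Create or update a currency
curl -X PUT http://localhost:3000/api/currencies \
  -H 'Content-Type: application/json' \
  -d '{"code": "GBP", "name": "British Pound", "decimal_places": 2}'

# Fetch it back
curl http://localhost:3000/api/currencies/GBP
```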
Conclusion
In this tutorial, we’ve successfully built a robust foundation for a serverless Rust API using the Poem framework. Our implementation includes:
- A well-structured API for managing currency data.
- OpenAPI documentation with Swagger UI integration.
- Error handling and custom error types.
- An in-memory repository for local development and testing.
- A modular design that separates concerns and allows for easy extension.
While our current implementation runs entirely locally, it serves as an excellent starting point for a cloud-native application. The modularity of our design, particularly the use of trait-based repositories, sets us up for a smooth transition to cloud-based services.
We have now implemented a simple currency API using Poem. So far, no AWS components have been used; the implementation is entirely local.
The final code for Part 2 is hosted on GitHub.
In Part 3, we’ll dive into AWS integration and serverless deployment strategies. These additions will elevate our Rust API to a production-ready, cloud-native application capable of handling real-world scenarios and scaling to meet demand.
David Steiner
I'm a software engineer and architect focusing on performant cloud-native distributed systems.