Previously, we created a gRPC reference data service for stocks using Tonic.
We intentionally left the implementation for the persistence layer as a TODO
for later.
In this post, we will zoom in on the repository, implementing the Postgres
repository with the help of SQLx.
Everything we do in this section will take place inside the ./api/ directory.
Unless otherwise specified, any command needs to be run within that directory.
Introduction to SQLx
We’ll use SQLx to interface with the database.
SQLx is a crate that enables compile-time checked queries, allowing you to write plain
SQL queries while ensuring that your Rust types adhere to the schema and queries you’ve defined.
It is a unique and refreshing approach to interfacing with a database in a world where
ORMs are so
pervasive.
SQLx works by connecting to a live database at compile time to verify your types.
Unlike in some ORMs, where the schema is generated from the types and you run
migrations to keep the database in sync, SQLx requires you to update the database
first. It then checks that the Rust code is compatible with the schema.
To facilitate this, SQLx comes with a CLI tool that assists with evolving the database
through migrations. The CLI tool is also required to prepare the database for offline
mode - this is needed in CI/CD pipelines where there is no access to a live database.
Creating the schema
To start the local database, let’s use docker compose.
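Assuming the docker-compose.yml from the previous part defines the Postgres service, starting it is a single command:

```sh
docker compose up -d
```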
SQLx requires a database URL to connect to. Let’s start by updating
the .env file we previously created and setting the database URL to match what
we specified in the docker-compose.yml file.
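A sketch of the updated .env, assuming the default postgres user and the password and port from our docker-compose.yml (adjust to your own values):

```sh
# .env — credentials must match those in docker-compose.yml
DATABASE_URL=postgres://postgres:password@localhost:5432/trading
```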
Note that we appended /trading to the URL. This database doesn’t exist yet,
but we can use the CLI to create it:
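The CLI picks up DATABASE_URL from the environment (including the .env file):

```sh
sqlx database create
```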
Our reference data service only needs a very simple stocks table, which
stores the symbols and names of various stocks. Let’s create a new migration
where we can specify the query to create the table.
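The migration name below is just a suggestion; pick whatever describes the change:

```sh
sqlx migrate add -r create_stocks_table
```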
This will create a new migrations directory with two files ending in
down.sql and up.sql. Because we passed in the -r flag, SQLx will
create two-way migrations, where the down.sql script should revert
the migration carried out by the up.sql script.
Add this query to the up file:
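A minimal schema sketch; the exact columns and constraints are assumptions (the CHECK constraint will come in handy later, when we map check violations to gRPC statuses):

```sql
-- up: create the stocks table
CREATE TABLE stocks (
    id     BIGSERIAL PRIMARY KEY,
    symbol TEXT NOT NULL UNIQUE CHECK (symbol <> ''),
    name   TEXT NOT NULL
);
```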
You can run the migration using the CLI:
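The command picks up the migrations directory automatically:

```sh
sqlx migrate run
```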
Apart from the stocks table, this also creates a new _sqlx_migrations
table which keeps track of the already applied migrations. If you were to
run the migrate run command again, it would do nothing.
Let’s also add the down script quickly:
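Reverting simply drops the table:

```sql
-- down: drop the stocks table
DROP TABLE stocks;
```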
You can revert the last migration by calling the sqlx migrate revert
command. This will execute the corresponding down script.
Adding SQLx as a project dependency
All this has worked without adding sqlx as a dependency to the Cargo
project, as we had previously installed the SQLx CLI. Now that we are
ready to work on the Rust code, let’s add the SQLx crate to the project as well.
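One way to add it, assuming a Tokio runtime with rustls; adjust the feature flags to your stack:

```sh
cargo add sqlx --features runtime-tokio,tls-rustls,postgres,macros,migrate
```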
Connecting to the database
Before we can implement any useful logic in the repository, we need to create a
connection pool and pass it to the Postgres repository.
Let’s modify the repository so that it takes a PgPool.
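A sketch of the change; the struct name follows the earlier parts of the series and may differ in your code:

```rust
use sqlx::PgPool;

pub struct PostgresStockRepository {
    pool: PgPool,
}

impl PostgresStockRepository {
    // The pool is created once in main and handed to the repository.
    pub fn new(pool: PgPool) -> Self {
        Self { pool }
    }
}
```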
In main.rs, add a utility function to create a new pool, and use it
to create the repository with the pool passed in.
The complete main module should look like this:
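A sketch under these assumptions: the dotenvy crate loads the .env file, and RefDataService and StockServiceServer are hypothetical names standing in for the service and generated server types from the previous part:

```rust
use sqlx::postgres::{PgPool, PgPoolOptions};

// Utility to create a connection pool; the pool size is an arbitrary choice.
async fn create_pool(url: &str) -> Result<PgPool, sqlx::Error> {
    PgPoolOptions::new().max_connections(5).connect(url).await
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok(); // load DATABASE_URL from .env (assumes the dotenvy crate)
    let url = std::env::var("DATABASE_URL")?;
    let pool = create_pool(&url).await?;

    let repository = PostgresStockRepository::new(pool);
    let service = RefDataService::new(repository); // hypothetical service type

    tonic::transport::Server::builder()
        .add_service(StockServiceServer::new(service)) // generated server type
        .serve("[::1]:50051".parse()?)
        .await?;

    Ok(())
}
```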
Implementing the repository
The repository now has access to the database through the connection pool.
It’s time to get our hands dirty with SQLx!
Adding a new stock
Let’s revisit the stub function we created in the Postgres repository for
adding a new stock. We can now replace the TODO with a few lines of SQLx
code to insert a new stock into the database.
We will use the query_as! macro, which allows us to specify the expected
return type of the query as its first argument. As its second argument, it takes
the query string, and any further arguments are bound to the query’s placeholders
($1, $2, and so on).
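A sketch of the insert, assuming the schema above and a Stock row type with id, symbol and name fields; your names may differ:

```rust
pub async fn add_stock(&self, symbol: &str, name: &str) -> Result<Stock, Error> {
    // query_as! checks this query against the live database at compile time
    // and maps the returned row onto the Stock struct.
    let stock = sqlx::query_as!(
        Stock,
        "INSERT INTO stocks (symbol, name) VALUES ($1, $2) RETURNING id, symbol, name",
        symbol,
        name
    )
    .fetch_one(&self.pool)
    .await?;

    Ok(stock)
}
```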
If you were to try to compile this, it would fail with an error: the compiler
cannot convert sqlx::Error into the error type of our Result.
Let’s tackle this issue next.
Handling SQLx errors
The thiserror error type we previously created in src/error.rs needs to be extended
with a new variant so that SQLx errors are correctly handled. We will also improve the
mapping logic so that RowNotFound errors are mapped to the NotFound gRPC status,
and SQL check violations are mapped to InvalidArgument. This is not a perfect mapping,
but it works for demonstration purposes.
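A sketch of the extended error type and the mapping; the existing variants and the exact shape of the type are assumptions based on the earlier parts:

```rust
use tonic::Status;

#[derive(Debug, thiserror::Error)]
pub enum Error {
    // ... existing variants ...
    #[error("database error: {0}")]
    Sqlx(#[from] sqlx::Error),
}

impl From<Error> for Status {
    fn from(err: Error) -> Self {
        match err {
            // A missing row becomes a gRPC NotFound status.
            Error::Sqlx(sqlx::Error::RowNotFound) => Status::not_found("row not found"),
            // Check violations (e.g. an empty symbol) become InvalidArgument.
            Error::Sqlx(sqlx::Error::Database(db_err)) if db_err.is_check_violation() => {
                Status::invalid_argument(db_err.to_string())
            }
            _ => Status::internal("internal error"),
        }
    }
}
```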
After making these changes, everything should compile again.
Use grpcui to test the AddStock method. The id field of the stock
should be left unspecified as this is only used for responses.
Retrieving all stocks
Finally, let’s implement the function to retrieve all stocks.
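A sketch, again assuming the Stock row type; the method name is a guess, and fetch_all collects every matching row into a Vec:

```rust
pub async fn get_stocks(&self) -> Result<Vec<Stock>, Error> {
    let stocks = sqlx::query_as!(Stock, "SELECT id, symbol, name FROM stocks ORDER BY symbol")
        .fetch_all(&self.pool)
        .await?;

    Ok(stocks)
}
```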
You should be able to retrieve any stock you’ve added using grpcui.
Testing
An important area we haven’t covered is testing. SQLx has built-in testing
utilities to create isolated test databases, apply migrations and fixtures,
and tear down the database at the end.
My preferred way of testing gRPC services is integration tests that call
the endpoint functions directly (but without actually running the service).
Let’s start by creating the tests directory and a subdirectory tests/fixtures.
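```sh
mkdir -p tests/fixtures
```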
Fixtures are written as SQL files and can contain arbitrary SQL code.
Create a file called stocks.sql and add a few stocks to be used in tests.
We’ll use these stocks as baseline reference data in several tests.
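The rows here are illustrative sample data:

```sql
-- tests/fixtures/stocks.sql
INSERT INTO stocks (symbol, name) VALUES
    ('AAPL', 'Apple Inc.'),
    ('MSFT', 'Microsoft Corporation'),
    ('NVDA', 'NVIDIA Corporation');
```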
To mark tests as SQLx tests, we can use the sqlx::test macro.
You can pass in the fixtures you’d like to apply for the particular
test, for example:
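```rust
use sqlx::PgPool;

// The fixture name matches tests/fixtures/stocks.sql, minus the extension;
// the test name is arbitrary.
#[sqlx::test(fixtures("stocks"))]
async fn gets_all_stocks(pool: PgPool) {
    // ...
}
```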
The test needs to take PgPool as its argument for the SQLx functionality
to kick in.
Create a new test module refdata.rs inside the tests directory and
add a simple test for retrieving all stocks and adding a new stock.
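A sketch of the test module; RefDataService, PostgresStockRepository and the request/response types are hypothetical names standing in for the types built in the earlier parts of the series:

```rust
// tests/refdata.rs
// Hypothetical imports — replace with the actual paths from your project.
use api::refdata::{AddStockRequest, GetStocksRequest, RefDataService};
use api::repository::PostgresStockRepository;
use sqlx::PgPool;
use tonic::Request;

#[sqlx::test(fixtures("stocks"))]
async fn gets_all_stocks(pool: PgPool) {
    let service = RefDataService::new(PostgresStockRepository::new(pool));

    let response = service
        .get_stocks(Request::new(GetStocksRequest {}))
        .await
        .expect("GetStocks should succeed");

    // The fixture inserts three stocks.
    assert_eq!(response.into_inner().stocks.len(), 3);
}

#[sqlx::test(fixtures("stocks"))]
async fn adds_a_stock(pool: PgPool) {
    let service = RefDataService::new(PostgresStockRepository::new(pool));

    let request = Request::new(AddStockRequest {
        symbol: "TSLA".into(),
        name: "Tesla, Inc.".into(),
    });

    let stock = service.add_stock(request).await.expect("AddStock should succeed");
    assert!(stock.into_inner().id > 0);
}
```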
This isn’t a comprehensive test suite of course, but the same concepts can
be applied to cover edge cases and more complex scenarios.
Revised GitHub Actions
The GitHub Actions included in the repository no longer pass due to new dependencies on Protobuf, SQLx, and the database.
SQLx can work in CI/CD pipelines in two modes - either running against a database
(like we’ve been using it locally) or in offline mode. We’ll demonstrate the use of both modes
in our pipeline.
Offline mode
For the build step and Clippy checks, let’s use offline mode, as these steps
don’t actually require a running database. Setting the SQLX_OFFLINE=true environment
variable ensures SQLx uses offline mode. We can use the SQLx CLI to generate metadata
files for our queries, which SQLx then uses at compile time to verify the queries
without a database.
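The prepare command generates the metadata (it needs the live database and the applied migrations to do so):

```sh
cargo sqlx prepare
```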
You should see a new .sqlx directory appear with JSON files in it. Make sure you commit
these files into git.
Finally, set the SQLX_OFFLINE environment variable on the individual steps.
Note that we also added a step to install protoc so the pipeline can compile
the proto files.
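A sketch of the relevant workflow fragment; the step names and the way protoc is installed are choices, not requirements:

```yaml
steps:
  - uses: actions/checkout@v4
  - name: Install protoc
    run: sudo apt-get update && sudo apt-get install -y protobuf-compiler
  - name: Build
    env:
      SQLX_OFFLINE: "true"
    run: cargo build --verbose
  - name: Clippy
    env:
      SQLX_OFFLINE: "true"
    run: cargo clippy -- -D warnings
```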
Tests against a running database
For running integration tests in the pipeline, we will need an actual database.
Let’s add a Postgres service, set the DATABASE_URL for SQLx and run the
SQLx CLI to prepare the database.
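A sketch of the test job; the Postgres image tag and credentials are assumptions and must match your DATABASE_URL:

```yaml
test:
  runs-on: ubuntu-latest
  services:
    postgres:
      image: postgres:16
      env:
        POSTGRES_PASSWORD: password
      ports:
        - 5432:5432
      options: >-
        --health-cmd pg_isready
        --health-interval 10s
        --health-timeout 5s
        --health-retries 5
  env:
    DATABASE_URL: postgres://postgres:password@localhost:5432/trading
  steps:
    - uses: actions/checkout@v4
    - name: Install protoc
      run: sudo apt-get update && sudo apt-get install -y protobuf-compiler
    - name: Install sqlx-cli
      run: cargo install sqlx-cli --no-default-features --features rustls,postgres
    - name: Prepare database
      run: sqlx database create && sqlx migrate run
    - name: Run tests
      run: cargo test --verbose
```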
With these changes, our pipeline should be passing again.
Conclusion
By the end of this part of the series, you should have a working refdata service
that can add and retrieve stocks from the local Postgres database. All Clippy
warnings should be gone, as we now utilise all the parts we previously developed.