Previously, we implemented a simple reference data service to add and list stocks.
In this post, we’ll implement a trade service analogous in structure to the
reference data service.
We’ll pick up the pace here, focusing less on the repetitive boilerplate code
we’ve seen in earlier posts and more on new concepts.
If you ever get stuck, please refer to the finished branch for this post.
Defining the gRPC service
Let’s start by creating the interface we’d like to implement in a new proto package.
The interface consists of two functions.
The SendOrder function takes a market order.
In this dummy application, the order is executed immediately
for both BUY and SELL actions, provided there are sufficient shares to cover a SELL.
Positions provide a summarized view of all orders, updating automatically with each new order sent.
AllPositions returns the position for every stock.
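Here’s a sketch of what the proto definition could look like. The package, service, and message names below are assumptions, so adjust them to match your own conventions:

```protobuf
syntax = "proto3";

package trade;

service TradeService {
  // Execute a market order immediately.
  rpc SendOrder(OrderRequest) returns (OrderReply);
  // Return the current position for every stock.
  rpc AllPositions(AllPositionsRequest) returns (AllPositionsReply);
}

enum Side {
  BUY = 0;
  SELL = 1;
}

message OrderRequest {
  string symbol = 1;
  Side side = 2;
  int64 quantity = 3;
}

message OrderReply {}

message AllPositionsRequest {}

message Position {
  string symbol = 1;
  int64 quantity = 2;
}

message AllPositionsReply {
  repeated Position positions = 1;
}
```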
There is no need to change build.rs, as it picks up new files in the
protos directory. We need to register the Rust module, though.
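Assuming the proto package is called trade and the generated modules live alongside the reference data one, the registration might be as small as this:

```rust
// Wherever the generated proto modules are collected (e.g. src/proto.rs).
pub mod trade {
    tonic::include_proto!("trade");
}
```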
New modules for trade service
Add a new trade.rs module with 3 sub-modules:
models.rs
repository.rs
service.rs
Let’s create our types to model positions and the side of an order:
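A minimal sketch, assuming we only track a symbol and a signed share count per position; the exact fields are up to you:

```rust
// src/trade/models.rs

/// Whether an order buys or sells shares.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Side {
    Buy,
    Sell,
}

/// The net holding in a single stock, aggregated over all executed orders.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Position {
    pub symbol: String,
    pub quantity: i64,
}
```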
Just like for the reference data service, we’ll create an abstract
repository trait and a concrete Postgres implementation.
We’ll leave the implementations of add_trade and get_all_positions as TODOs for now and return to these more interesting parts later.
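Under a couple of assumptions (the async_trait crate for the trait and anyhow for the error type; swap in whatever the reference data service already uses), the skeleton might look like this:

```rust
// src/trade/repository.rs
use async_trait::async_trait;
use sqlx::PgPool;

use super::models::{Position, Side};

#[async_trait]
pub trait TradeRepository: Send + Sync {
    async fn add_trade(&self, symbol: &str, side: Side, quantity: i64) -> anyhow::Result<()>;
    async fn get_all_positions(&self) -> anyhow::Result<Vec<Position>>;
}

pub struct PostgresTradeRepository {
    pool: PgPool,
}

impl PostgresTradeRepository {
    pub fn new(pool: PgPool) -> Self {
        Self { pool }
    }
}

#[async_trait]
impl TradeRepository for PostgresTradeRepository {
    async fn add_trade(&self, _symbol: &str, _side: Side, _quantity: i64) -> anyhow::Result<()> {
        todo!("insert the trade and upsert the position")
    }

    async fn get_all_positions(&self) -> anyhow::Result<Vec<Position>> {
        todo!("query the positions table")
    }
}
```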
Finally, let’s implement the auto-generated service trait using the repository.
Nothing interesting to see here; we’re just massaging the generated gRPC types
into our domain types and back.
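A sketch of the implementation, assuming the proto and module names used above (the generated paths will differ if your package is named differently):

```rust
// src/trade/service.rs
use std::sync::Arc;

use tonic::{Request, Response, Status};

use crate::proto::trade::trade_service_server::TradeService;
use crate::proto::trade::{AllPositionsReply, AllPositionsRequest, OrderReply, OrderRequest};

use super::models::Side;
use super::repository::TradeRepository;

pub struct TradeServiceImpl {
    repository: Arc<dyn TradeRepository>,
}

impl TradeServiceImpl {
    pub fn new(repository: Arc<dyn TradeRepository>) -> Self {
        Self { repository }
    }
}

#[tonic::async_trait]
impl TradeService for TradeServiceImpl {
    async fn send_order(
        &self,
        request: Request<OrderRequest>,
    ) -> Result<Response<OrderReply>, Status> {
        let order = request.into_inner();
        // The generated enum field is an i32; map it onto our domain Side.
        let side = match order.side {
            0 => Side::Buy,
            1 => Side::Sell,
            _ => return Err(Status::invalid_argument("unknown order side")),
        };
        self.repository
            .add_trade(&order.symbol, side, order.quantity)
            .await
            .map_err(|e| Status::internal(e.to_string()))?;
        Ok(Response::new(OrderReply {}))
    }

    async fn all_positions(
        &self,
        _request: Request<AllPositionsRequest>,
    ) -> Result<Response<AllPositionsReply>, Status> {
        let positions = self
            .repository
            .get_all_positions()
            .await
            .map_err(|e| Status::internal(e.to_string()))?
            .into_iter()
            .map(|p| crate::proto::trade::Position {
                symbol: p.symbol,
                quantity: p.quantity,
            })
            .collect();
        Ok(Response::new(AllPositionsReply { positions }))
    }
}
```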
Registering the new service
Let’s register the trade module in lib.rs as a public module and
expose the types we’ll need to wire it into the Tonic server.
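The additions to lib.rs might look roughly like this; the re-exports are a matter of taste, and fully qualified paths work just as well:

```rust
// src/lib.rs
pub mod trade;

// Convenience re-exports for wiring up the server and the tests.
pub use trade::repository::PostgresTradeRepository;
pub use trade::service::TradeServiceImpl;
```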
With the required types exposed, we can add the trade service to the
Tonic server.
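Inside the existing server setup, the new lines might look something like this; pool, addr, and the reference data service are whatever the previous posts established:

```rust
use std::sync::Arc;

use tonic::transport::Server;

use crate::proto::trade::trade_service_server::TradeServiceServer;
use crate::trade::{repository::PostgresTradeRepository, service::TradeServiceImpl};

// Build the trade service on top of the shared connection pool.
let trade_repository = Arc::new(PostgresTradeRepository::new(pool.clone()));
let trade_service = TradeServiceImpl::new(trade_repository);

Server::builder()
    .add_service(reference_data_service) // the service from the previous post
    .add_service(TradeServiceServer::new(trade_service))
    .serve(addr)
    .await?;
```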
You can explore the new endpoints using grpcui, but of course they will fail for now
due to the TODOs we have yet to complete.
Adding tests
At this point, it’s probably worth pausing to consider what behaviour we expect from
our endpoints and capturing those expectations in a few simple integration tests.
A comprehensive test suite is beyond the scope of this article, but here are a few
test cases to serve as examples.
By the time we finish the TODO implementations, these tests should pass.
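For illustration, here are a couple of tests under the assumptions made so far; the crate name example_project and the trade_client helper are placeholders for whatever your project and earlier test setup actually provide:

```rust
// tests/trade_service.rs
use example_project::proto::trade::{AllPositionsRequest, OrderRequest, Side};

mod test_helpers; // shared test setup from the earlier posts (hypothetical)

#[tokio::test]
async fn buying_then_selling_nets_the_position() {
    // Hypothetical helper that spins up the server and returns a connected client.
    let mut client = test_helpers::trade_client().await;

    client
        .send_order(OrderRequest { symbol: "AAPL".into(), side: Side::Buy as i32, quantity: 10 })
        .await
        .expect("buy should succeed");
    client
        .send_order(OrderRequest { symbol: "AAPL".into(), side: Side::Sell as i32, quantity: 4 })
        .await
        .expect("sell should succeed");

    let positions = client
        .all_positions(AllPositionsRequest {})
        .await
        .expect("listing positions should succeed")
        .into_inner()
        .positions;

    let aapl = positions.iter().find(|p| p.symbol == "AAPL").unwrap();
    assert_eq!(aapl.quantity, 6);
}

#[tokio::test]
async fn selling_more_than_held_is_rejected() {
    let mut client = test_helpers::trade_client().await;

    let result = client
        .send_order(OrderRequest { symbol: "MSFT".into(), side: Side::Sell as i32, quantity: 1 })
        .await;

    assert!(result.is_err(), "selling shares we do not hold should fail");
}
```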
Adding a new migration
We need two new tables for trades (executed orders) and overall positions.
Let’s create these in a new migration.
The up migration just creates the tables. We can also add any new indexes
as required.
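As a sketch, with column names matching the models above (the timestamp column and the index are illustrative):

```sql
-- migrations/<timestamp>_add_trades_and_positions.up.sql
CREATE TABLE trades (
    id BIGSERIAL PRIMARY KEY,
    symbol TEXT NOT NULL,
    side TEXT NOT NULL,
    quantity BIGINT NOT NULL,
    executed_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE INDEX idx_trades_symbol ON trades (symbol);

CREATE TABLE positions (
    symbol TEXT PRIMARY KEY,
    quantity BIGINT NOT NULL
);
```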
Next, we add the down script:
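A matching down script simply drops the two tables (again, a sketch):

```sql
-- migrations/<timestamp>_add_trades_and_positions.down.sql
DROP TABLE positions;
DROP TABLE trades;
```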
You can verify the migration is working by repeatedly applying and reverting it.
Make sure the migration is applied before we proceed.
Implementing the repository
The SQLx implementation of the repository is where we can apply some new concepts.
The SQL queries required for the trade service are a little more involved than the
ones we had for the reference data service. For more complex cases, SQLx supports
defining your queries in dedicated SQL files.
This allows us to separate SQL logic from Rust code, enhancing readability and maintainability.
Handling orders
Create a new src/queries directory for our SQL files. We’ll have two queries for
new orders - one to insert the trade and one to upsert positions.
The insert statement is straightforward:
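Something along these lines (the file name and column list are assumptions):

```sql
-- src/queries/insert_trade.sql
INSERT INTO trades (symbol, side, quantity)
VALUES ($1, $2, $3);
```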
The statement to upsert positions is a little more involved:
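Here’s one way it might look, taking a signed quantity delta so the same statement handles both BUY and SELL:

```sql
-- src/queries/upsert_position.sql
-- $1 = symbol, $2 = signed quantity delta (positive for BUY, negative for SELL)
INSERT INTO positions (symbol, quantity)
VALUES ($1, $2)
ON CONFLICT (symbol)
DO UPDATE SET quantity = positions.quantity + EXCLUDED.quantity
RETURNING symbol, quantity;
```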
Having this inline in the Rust code would have been quite distracting!
Separate SQL files also play nicely with IDEs with database support -
you get proper syntax highlighting, auto-complete, and you can run the SQL
code to test it out.
We are now equipped to tackle the implementation of add_trade in the
repository.
Note that we use the query_file_as! macro to source the SQL queries from
the files we have defined. The other difference from the reference data
service is that we use a transaction to ensure consistency,
rather than passing the connection pool directly to execute.
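Under the assumptions above, add_trade could look roughly like this; the negative-quantity check standing in for the “sufficient shares” rule is illustrative:

```rust
// In the TradeRepository impl for PostgresTradeRepository.
async fn add_trade(&self, symbol: &str, side: Side, quantity: i64) -> anyhow::Result<()> {
    // BUY adds shares to the position, SELL removes them.
    let (side_str, delta) = match side {
        Side::Buy => ("BUY", quantity),
        Side::Sell => ("SELL", -quantity),
    };

    let mut tx = self.pool.begin().await?;

    // Record the executed trade.
    sqlx::query_file!("src/queries/insert_trade.sql", symbol, side_str, quantity)
        .execute(&mut *tx) // recent SQLx style; older versions take &mut tx
        .await?;

    // Upsert the position and read back the resulting quantity.
    let position =
        sqlx::query_file_as!(Position, "src/queries/upsert_position.sql", symbol, delta)
            .fetch_one(&mut *tx)
            .await?;

    // Reject sells that would leave a negative position; rolling back the
    // transaction undoes both statements.
    if position.quantity < 0 {
        tx.rollback().await?;
        anyhow::bail!("insufficient shares of {symbol} to sell");
    }

    tx.commit().await?;
    Ok(())
}
```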
Getting all positions
Thankfully, the query to get the positions is much simpler:
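For example (again, the file name is an assumption):

```sql
-- src/queries/get_all_positions.sql
SELECT symbol, quantity
FROM positions
ORDER BY symbol;
```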
Let’s quickly fill in the implementation of get_all_positions:
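Something like this, using query_file_as! to map rows straight onto our Position model:

```rust
// In the TradeRepository impl for PostgresTradeRepository.
async fn get_all_positions(&self) -> anyhow::Result<Vec<Position>> {
    // A single SELECT, so the pool can be used directly without a transaction.
    let positions = sqlx::query_file_as!(Position, "src/queries/get_all_positions.sql")
        .fetch_all(&self.pool)
        .await?;
    Ok(positions)
}
```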
Preparing for merge
As we have added new SQL queries, let’s prepare SQLx for offline mode again
so that our pipeline can build the code.
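With sqlx-cli installed and DATABASE_URL pointing at the migrated database, this is a single command; depending on your SQLx version it writes a .sqlx directory or a sqlx-data.json file:

```sh
cargo sqlx prepare
```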
Commit the new files into git.
Conclusion
If you have followed the steps up to this point, you should have
a working trade service that can receive orders and list all positions.
The integration tests should pass, and Clippy should be happy.
If that’s not the case, check out the completed branch
for this section.
Through implementing the trade service, we’ve seen how Rust’s robust type system and SQLx’s compile-time
checks can work together to ensure data integrity and enhance our application’s reliability.
In the next installment, we’ll work on the Remix application and demonstrate
how this type safety can be extended to the frontend.