When I started working on this post, I had another idea in mind: I wanted to compare the developer experience and performance of Spring Boot plus GraalVM with Rust, on a demo HTTP API application. Unfortunately, the M1 processor of my MacBook Pro had other ideas. Hence, I changed my initial plan: I'll write about the developer experience of developing the above application in Rust, compared to what I'm used to with Spring Boot.

## The Sample Application

Like every pet project, the application is limited in scope. I designed a simple Create Read Update Delete (CRUD) HTTP API. Data are stored in PostgreSQL.

When one designs an app on the JVM, the first and only design decision is to choose the framework: a couple of years ago, it was Spring Boot. Nowadays, the choice is mostly between Spring Boot, Quarkus, and Micronaut. In many cases, they all rely on the same underlying libraries, e.g., for logging or connection pools.

Rust is much younger; hence the ecosystem has yet to mature. For every feature, one needs to choose precisely which library to use - or to implement it oneself. Worse, one first needs to know that such a feature exists. Here are the ones that I searched for:

- Reactive database access
- Database connection pooling
- Mapping rows to structures
- Web endpoints
- JSON serialization
- Configuration from different sources, e.g., YAML, environment variables, etc.

## Web framework

The choice of the web framework is the most critical one. I have to admit I had no prior clue about such libraries. I looked around and stumbled upon *Which Rust web framework to choose in 2022*. After reading the post, I decided to follow its conclusion and chose `axum`:

> - Route requests to handlers with a macro-free API.
> - Declaratively parse requests using extractors.
> - Simple and predictable error handling model.
> - Generate responses with minimal boilerplate.
> - Take full advantage of the tower and tower-http ecosystem of middleware, services, and utilities.
> In particular, the last point is what sets `axum` apart from other frameworks. `axum` doesn't have its own middleware system but instead uses `tower::Service`. This means axum gets timeouts, tracing, compression, authorization, and more, for free. It also enables you to share middleware with applications written using hyper or tonic.
>
> -- axum crate documentation

`axum` uses the Tokio asynchronous library underneath. For basic usage, it requires two crates:

```toml
[dependencies]
axum = "0.6"
tokio = { version = "1.23", features = ["full"] }
```

`axum`'s router looks very similar to Spring's Kotlin Routes DSL:

```rust
let app = Router::new()
    .route("/persons", get(get_all))           //1
    .route("/persons/:id", get(get_by_id));    //1//2

async fn get_all() -> Response { ... }

async fn get_by_id(Path(id): Path<Uuid>) -> Response { ... }
```

1. A route is defined by its path and a function reference
2. A route can have path parameters; `axum` can infer them and bind them

## Shared Objects

An issue commonly found in software projects is how to share an "object" with others. We established long ago that sharing global variables was a bad idea. Spring Boot (and similar JVM frameworks) solves it with runtime dependency injection: objects are created by the framework, stored in a context, and injected into other objects when the application starts. Other frameworks do dependency injection at compile time, e.g., Dagger 2.

Rust has neither a runtime nor objects; configurable dependency injection is not "a thing". We can create a variable and inject it manually where needed, but in Rust, this is a problem because of ownership:

> Ownership is a set of rules that govern how a Rust program manages memory. All programs have to manage the way they use a computer's memory while running. Some languages have garbage collection that regularly looks for no-longer-used memory as the program runs; in other languages, the programmer must explicitly allocate and free the memory.
> Rust uses a third approach: memory is managed through a system of ownership with a set of rules that the compiler checks. If any of the rules are violated, the program won't compile. None of the features of ownership will slow down your program while it's running.
>
> -- What Is Ownership?

`axum` provides a dedicated wrapper, the `State` extractor, to reuse variables across different scopes.

```rust
struct AppState {                                  //1
    ...
}

impl AppState {
    fn create() -> Arc<AppState> {                 //2
        Arc::new(AppState { ... })
    }
}

let app_state = AppState::create();

let app = Router::new()
    .route("/persons", get(get_all))
    .with_state(Arc::clone(&app_state));           //3

async fn get_all(State(state): State<Arc<AppState>>) -> Response { //4
    ...                                            //5
}
```

1. Create the `struct` to be shared
2. Create a new `struct` wrapped in an Atomically Reference Counted pointer
3. Share the reference with all routing functions, e.g., `get_all`
4. Pass the state
5. Use it!

## Automated JSON serialization

Modern JVM web frameworks automatically serialize objects to JSON before sending them over the wire. The good thing is that `axum` does the same. It relies on Serde. First, we add the `serde` and `serde_json` crate dependencies:

```toml
[dependencies]
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
```

Then, we annotate our `struct` with the `derive(Serialize)` macro:

```rust
#[derive(Serialize)]
struct Person {
    first_name: String,
    last_name: String,
}
```

Finally, we return the `struct` wrapped in a `Json` and the HTTP status code in an `axum` `Response`.

```rust
async fn get_test() -> impl IntoResponse {         //1
    let person = Person {                          //2
        first_name: "John".to_string(),
        last_name: "Doe".to_string()
    };
    (StatusCode::OK, Json(person))                 //3
}
```

1. The `(StatusCode, Json)` tuple is automatically converted into a `Response`
2. Create the `Person`
3. Return the tuple

At runtime, `axum` automatically serializes the `struct` to JSON:

```json
{"first_name":"John","last_name":"Doe"}
```

## Database access

For a long time, I used the MySQL database for my demos. But I started to read a lot of good things about PostgreSQL and decided to switch.
I needed an asynchronous library compatible with Tokio: it's exactly what the `tokio_postgres` crate does.

The problem with the crate is that it creates direct connections to the database. I searched for a connection pool crate and stumbled upon `deadpool` (sic):

> Deadpool is a dead simple async pool for connections and objects of any type.
>
> -- Deadpool

Deadpool provides two distinct implementations:

- An unmanaged pool: the developer has complete control - and responsibility - over the pooled objects' lifecycle
- A managed pool: the crate creates and recycles objects as needed

More specialized implementations of the latter cater to different databases or "drivers", e.g., Redis and... `tokio-postgres`. One can configure Deadpool directly or defer to the `config` crate it supports. The latter crate allows several alternatives for configuration:

> Config organizes hierarchical or layered configurations for Rust applications.
>
> Config lets you set a set of default parameters and then extend them via merging in configuration from a variety of sources:
>
> - Environment variables
> - String literals in well-known formats
> - Another Config instance
> - Files: TOML, JSON, YAML, INI, RON, JSON5, and custom ones defined with Format trait
> - Manual, programmatic override (via a `.set` method on the Config instance)
>
> Additionally, Config supports:
>
> - Live watching and re-reading of configuration files
> - Deep access into the merged configuration via a path syntax
> - Deserialization via serde of the configuration or any subset defined via a path
>
> -- Crate config

To create the base configuration, one needs to create a dedicated structure and use the crate:

```rust
#[derive(Deserialize)]                                 //1
struct ConfigBuilder {
    postgres: deadpool_postgres::Config,               //2
}

impl ConfigBuilder {
    async fn from_env() -> Result<Self, ConfigError> { //3
        Config::builder()
            .add_source(
                Environment::with_prefix("POSTGRES")   //4
                    .separator("_")                    //4
                    .keep_prefix(true)                 //5
                    .try_parsing(true),
            )
            .build()?
            .try_deserialize()
    }
}

let cfg_builder = ConfigBuilder::from_env().await.unwrap(); //6
```

1. The `Deserialize` macro is mandatory
2. The field *must* match the environment prefix, see below
3. The function is `async` and returns a `Result`
4. Read from environment variables whose name starts with `POSTGRES_`
5. Keep the prefix in the configuration map
6. Enjoy!

Note that environment variables should conform to what Deadpool's `Config` expects. Here's my configuration in Docker Compose:

| Env variable | Value |
|---|---|
| `POSTGRES_HOST` | `"postgres"` |
| `POSTGRES_PORT` | `5432` |
| `POSTGRES_USER` | `"postgres"` |
| `POSTGRES_PASSWORD` | `"root"` |
| `POSTGRES_DBNAME` | `"app"` |

Once we have initialized the configuration, we can create the pool:

```rust
struct AppState {
    pool: Pool,                                    //1
}

impl AppState {
    async fn create() -> Arc<AppState> {           //2
        let cfg_builder = ConfigBuilder::from_env().await.unwrap(); //3
        let pool = cfg_builder                     //4
            .postgres
            .create_pool(
                Some(deadpool_postgres::Runtime::Tokio1),
                tokio_postgres::NoTls,
            )
            .unwrap();
        Arc::new(AppState { pool })                //2
    }
}
```

1. Wrap the pool in a custom `struct`
2. Wrap the `struct` in an `Arc` to pass it within an `axum` `State` (see above)
3. Get the configuration
4. Create the pool

Then, we can pass the pool to the routing functions:

```rust
let app_state = AppState::create().await;          //1

let app = Router::new()
    .route("/persons", get(get_all))
    .with_state(Arc::clone(&app_state));           //2

async fn get_all(State(state): State<Arc<AppState>>) -> Response {
    let client = state.pool.get().await.unwrap();  //3
    let rows = client
        .query("SELECT id, first_name, last_name FROM person", &[]) //4
        .await                                     //5
        .unwrap();                                 //6
    ...
}
```

1. Create the state
2. Pass the state to the routing functions
3. Get the pool out of the state, and get the client out of the pool
4. Create the query
5. Execute it
6. Read the rows to populate the `Response`

The last step is to implement the transformation from a `Row` to a `Person`. We can do it with the `From` trait.
```rust
impl From<&Row> for Person {
    fn from(row: &Row) -> Self {
        let first_name: String = row.get("first_name");
        let last_name: String = row.get("last_name");
        Person {
            first_name,
            last_name,
        }
    }
}

let person = row.into();
```

## Docker build

The last step is the building of the application. I want everybody to be able to build it, so I used Docker. Here's the `Dockerfile`:

```dockerfile
FROM --platform=x86_64 rust:1-slim AS build                                //1

RUN rustup target add x86_64-unknown-linux-musl                            //2
RUN apt update && apt install -y musl-tools musl-dev                       //3

WORKDIR /home

COPY Cargo.toml .
COPY Cargo.lock .
COPY src src

RUN --mount=type=cache,target=/home/.cargo \                               //4
    cargo build --target x86_64-unknown-linux-musl --release               //5

FROM scratch                                                               //6

COPY --from=build /home/target/x86_64-unknown-linux-musl/release/rust /app //7

CMD ["/app"]
```

1. Start from a standard Rust image
2. Add the `musl` target so we can compile for Alpine Linux
3. Install the required Alpine dependencies
4. Cache the dependencies
5. Build for Alpine Linux
6. Start from scratch
7. Add the previously built binary

The final image is 7.56MB. My experience has shown that an equivalent GraalVM native compiled image would be more than 100MB.

## Conclusion

Though it was not my initial plan, I learned about quite a few libraries with this demo app and how they work. More importantly, I've experienced what it is like to develop an app without a framework like Spring Boot. You need to know the following:

- Available crates for each capability
- Crate compatibility
- Version compatibility

Last but not least, the documentation of most of the above crates ranges from average to good. I found `axum`'s to be good; on the other hand, I didn't manage to use Deadpool correctly from the start and had to go through several iterations. Documentation quality differs from crate to crate and, all in all, still has room to grow to reach the level of modern JVM frameworks.

Also, the demo app was quite simple. I assume that more advanced features could be more painful.
The complete source code for this post can be found on GitHub.

For further reading:

- Which Rust web framework to choose in 2022
- Create an Optimized Rust Alpine Docker Image
- How to create small Docker images for Rust
- Using Axum Framework To Create Rest API

*Originally published here.*