# Rust Web Development Tutorial: REST API

Nov 14th, 2019  rust, tutorial

In this tutorial, we are going to create a REST API in Rust with Actix web 2.0 and Diesel. We will be using Postgres as our database, so if you don’t have Postgres installed on your computer, you should install it first.

# Hello world

We are going to start by creating our project with Cargo and moving into the project directory.

$ cargo new rest_api
$ cd rest_api

We need to add Actix web to our dependencies for our first example. So let’s add that to the Cargo.toml.

[dependencies]
actix-web = "2.0"
actix-rt = "1.0"

And then we set up the request handler and server in src/main.rs.

// src/main.rs
use actix_web::{App, HttpResponse, HttpServer, Responder, get};

#[get("/")]
async fn index() -> impl Responder {
    HttpResponse::Ok().body("Hello world!")
}

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .service(index)
    })
        .bind("127.0.0.1:5000")?
        .run()
        .await
}

Now that we have created our first server, let’s run it with cargo run. To test our REST API, let’s visit localhost:5000, and we should hopefully see our hello world.
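
If you prefer the terminal, we could also hit the endpoint with curl (assuming the server is running on port 5000):

$ curl http://localhost:5000/
Hello world!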

# Auto reloading

It could be quite tedious to recompile the code manually every time we make a change, so let’s have cargo-watch recompile it for us on every change. It also makes sense to combine it with the listenfd crate and the systemfd utility to keep the connection open while our code recompiles. That way our REST client’s requests won’t fail just because the server is briefly unreachable during a recompile: we can make a call to the server, and it will respond as soon as it has recompiled and is ready to handle the request.

For this, we need to install cargo-watch and systemfd. Both are written in Rust and available on crates.io, so we can install them with cargo.

$ cargo install systemfd cargo-watch

We also need to add listenfd to our dependencies.

[dependencies]
listenfd = "0.3"

Then we need to make some changes to src/main.rs so that we can use the listener provided for us by systemfd, but also fall back to binding a port ourselves for cases when we don’t need it, such as when we deploy our code.

// src/main.rs
use actix_web::{get, App, HttpResponse, HttpServer, Responder};
use listenfd::ListenFd;

#[get("/")]
async fn index() -> impl Responder {
    HttpResponse::Ok().body("Hello world!")
}

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    let mut listenfd = ListenFd::from_env();
    let mut server = HttpServer::new(||
        App::new()
            .service(index)
    );

    server = match listenfd.take_tcp_listener(0)? {
        Some(listener) => server.listen(listener)?,
        None => server.bind("127.0.0.1:5000")?,
    };

    server.run().await
}

Now we can run the server and file watcher that will automatically recompile our code on changes with this command.

$ systemfd --no-pid -s http::5000 -- cargo watch -x run

# Environment variables and logging

You will probably deploy your code at some point, and then you might want to run the server with different settings than on your local machine, such as a different port or a different log level. You might also need some secrets that should not be part of your code, like database credentials. For all of this we can use environment variables.

And when you deploy your code, you can be sure that it will run into problems at some point. Good logging is important for solving these problems, so that we can figure out what went wrong and fix it.

For setting up environment variables and logging, we are going to add a few more dependencies.

[dependencies]
dotenv = "0.11"
log = "0.4"
env_logger = "0.6"

For convenience let’s set up some default parameters that we can use during development. We can do that by creating a .env file in the root of our project.

RUST_LOG=rest_api=info,actix=info
HOST=127.0.0.1
PORT=5000

The log crate provides five different log levels, which are error, warn, info, debug and trace, where error represents the highest-priority log messages and trace the lowest. For this tutorial we will set the log level to info for our REST API and Actix, meaning we will get all messages from error, warn and info.

To activate logging and environment variables we only need to make a few small changes to our main file.

// src/main.rs
#[macro_use]
extern crate log;

use actix_web::{get, App, HttpResponse, HttpServer, Responder};
use dotenv::dotenv;
use listenfd::ListenFd;
use std::env;

#[get("/")]
async fn index() -> impl Responder {
    HttpResponse::Ok().body("Hello world!")
}

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    dotenv().ok();
    env_logger::init();

    let mut listenfd = ListenFd::from_env();
    let mut server = HttpServer::new(||
        App::new()
            .service(index)
    );

    server = match listenfd.take_tcp_listener(0)? {
        Some(listener) => server.listen(listener)?,
        None => {
            let host = env::var("HOST").expect("Host not set");
            let port = env::var("PORT").expect("Port not set");
            server.bind(format!("{}:{}", host, port))?
        }
    };

    info!("Starting server");
    server.run().await
}

The dotenv().ok() call reads the environment variables from the .env file and adds them to our server’s environment. We can then read these variables with the std::env::var() function, as we have done for setting the host and port.
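
As a side note: if you would rather fall back to a default than crash when a variable is missing, you could use unwrap_or_else instead of expect. A small, hypothetical illustration:

// Hypothetical: fall back to a default port when PORT is not set.
let port = std::env::var("PORT").unwrap_or_else(|_| "5000".to_string());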

The log crate also provides five macros that we can use for writing log messages, one for each log level: error!, warn!, info!, debug! and trace!. To see our log messages in stdout or stderr we need to initialize env_logger, which we do with a single function call: env_logger::init().
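
Just to illustrate, a hypothetical function using all five macros could look like this:

// Hypothetical example of the five log macros, from highest to lowest priority.
fn log_demo(err: &str) {
    error!("something failed: {}", err);
    warn!("this might become a problem");
    info!("handling request");
    debug!("useful details during development");
    trace!("entering log_demo");
}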

# API endpoints

Our API will be sending and receiving JSON, so we need a way to serialize and deserialize JSON into data structures that Rust recognizes. For this we are going to use Serde, so we need to add it to our list of dependencies. Note that we enable Serde’s derive feature, which we need for the derive attributes we are about to use.

[dependencies]
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"

Now we will define a user model and derive Serialize and Deserialize for it, so that our model can be converted to and from JSON.

// src/user/model.rs
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
pub struct User {
    pub id: i32,
    pub email: String,
}

Let’s go ahead and create our REST API endpoints. Our next step will be to persist the data, but for now we will just use hard-coded dummy data.

// src/user/routes.rs
use crate::user::User;
use actix_web::{get, post, put, delete, web, HttpResponse, Responder};
use serde_json::json;

#[get("/users")]
async fn find_all() -> impl Responder {
    HttpResponse::Ok().json(
        vec![
            User { id: 1, email: "user1@example.com".to_string() },
            User { id: 2, email: "user2@example.com".to_string() },
        ]
    )
}

#[get("/users/{id}")]
async fn find() -> impl Responder {
    HttpResponse::Ok().json(
        User { id: 1, email: "user1@example.com".to_string() }
    )
}

#[post("/users")]
async fn create(user: web::Json<User>) -> impl Responder {
    HttpResponse::Created().json(user.into_inner())
}

#[put("/users/{id}")]
async fn update(user: web::Json<User>) -> impl Responder {
    HttpResponse::Ok().json(user.into_inner())
}

#[delete("/users/{id}")]
async fn delete() -> impl Responder {
    HttpResponse::Ok().json(json!({"message": "Deleted"}))
}

pub fn init_routes(cfg: &mut web::ServiceConfig) {
    cfg.service(find_all);
    cfg.service(find);
    cfg.service(create);
    cfg.service(update);
    cfg.service(delete);
}

We also need to connect the user routes with the user model and make them available outside of the user module.

// src/user/mod.rs
mod model;
mod routes;

pub use model::User;
pub use routes::init_routes;

We can now replace our “Hello world” endpoint with the actual user endpoints.

// src/main.rs
#[macro_use]
extern crate log;

use actix_web::{App, HttpServer};
use dotenv::dotenv;
use listenfd::ListenFd;
use std::env;

mod user;

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    dotenv().ok();
    env_logger::init();

    let mut listenfd = ListenFd::from_env();
    let mut server = HttpServer::new(|| 
        App::new()
            .configure(user::init_routes)
    );

    server = match listenfd.take_tcp_listener(0)? {
        Some(listener) => server.listen(listener)?,
        None => {
            let host = env::var("HOST").expect("Host not set");
            let port = env::var("PORT").expect("Port not set");
            server.bind(format!("{}:{}", host, port))?
        }
    };

    info!("Starting server");
    server.run().await
}

We should now be able to test the user endpoints we just created by calling them, for example with Insomnia or curl.
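
With curl it could look like this (assuming the server runs on port 5000; the email is just dummy data):

$ curl http://localhost:5000/users
$ curl -X POST http://localhost:5000/users -H "Content-Type: application/json" -d '{"id": 3, "email": "user3@example.com"}'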

# Persisting data

Having a few endpoints is not really helpful if we are not able to persist any data. For this we are going to use Diesel, which is quite a mature ORM. Diesel lets us connect to Postgres, MySQL and SQLite, but in this tutorial we will only be covering Postgres.

Diesel depends on openssl and libpq, so we need to install those before we can install the Diesel CLI. If you are using a Debian-like OS, you can install the development packages with apt.

$ sudo apt install libssl-dev libpq-dev -y

Once the needed dependencies are installed, we can install the Diesel CLI.

$ cargo install diesel_cli --no-default-features --features postgres

To let Diesel know where our database is, we need to add DATABASE_URL to our .env file.

DATABASE_URL=postgres://postgres:password@localhost/rest_api

We can use Diesel CLI to set up Diesel in our project, and create files for our user migration.

$ diesel setup
$ diesel migration generate create_user

In the migrations folder we should now find a folder for our first migration. It contains two files: one named up.sql, where we will create our user table, and one named down.sql, which should revert everything we do in up.sql.

-- up.sql
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

CREATE TABLE "user" (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    email TEXT UNIQUE NOT NULL,
    password TEXT NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT current_timestamp,
    updated_at TIMESTAMP
);

-- down.sql
DROP TABLE "user";

Now that we have created our first migration, we can run it with Diesel CLI.

$ diesel migration run
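
If you want to check that down.sql reverts the migration cleanly, the Diesel CLI can also roll the migration back and run it again:

$ diesel migration redo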

This command also creates a schema file that we will use later for building SQL queries. The default location for this file is src/schema.rs.
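
Given our migration, the generated src/schema.rs should look roughly like this (the exact output may vary slightly between Diesel versions):

// src/schema.rs (generated by Diesel)
table! {
    user (id) {
        id -> Uuid,
        email -> Text,
        password -> Text,
        created_at -> Timestamp,
        updated_at -> Nullable<Timestamp>,
    }
}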

When we are dealing with databases we should be prepared for the problems that can occur, like connection issues or database conflicts. So we are going to create our own error type to handle these problems.

// src/api_error.rs
use actix_web::http::StatusCode;
use actix_web::{HttpResponse, ResponseError};
use diesel::result::Error as DieselError;
use serde::Deserialize;
use serde_json::json;
use std::fmt;

#[derive(Debug, Deserialize)]
pub struct ApiError {
    pub status_code: u16,
    pub message: String,
}

impl ApiError {
    pub fn new(status_code: u16, message: String) -> ApiError {
        ApiError { status_code, message }
    }
}

impl fmt::Display for ApiError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        f.write_str(self.message.as_str())
    }
}

impl From<DieselError> for ApiError {
    fn from(error: DieselError) -> ApiError {
        match error {
            DieselError::DatabaseError(_, err) => ApiError::new(409, err.message().to_string()),
            DieselError::NotFound => ApiError::new(404, "Record not found".to_string()),
            err => ApiError::new(500, format!("Diesel error: {}", err)),
        }
    }
}

impl ResponseError for ApiError {
    fn error_response(&self) -> HttpResponse {
        let status_code = match StatusCode::from_u16(self.status_code) {
            Ok(status_code) => status_code,
            Err(_) => StatusCode::INTERNAL_SERVER_ERROR,
        };

        let message = match status_code.as_u16() < 500 {
            true => self.message.clone(),
            false => {
                error!("{}", self.message);
                "Internal server error".to_string()
            },
        };

        HttpResponse::build(status_code)
            .json(json!({ "message": message }))
    }
}

Our error type consists of a status code and a message, which we use to build the error response. We do this by implementing ResponseError, which lets Actix turn the error into a JSON response.

In case of an internal server error, it is probably not the best idea to show the user exactly what went wrong. In that case we just let the user know that something went wrong and write the actual error message to the logs.

Our error type also implements From<diesel::result::Error>, so that we don’t have to convert Diesel errors by hand every time we handle one.
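
In practice this means that any function returning Result<_, ApiError> can use the ? operator directly on Diesel results. Here is a minimal, hypothetical helper (count_users is not part of the tutorial code) to illustrate the conversion:

// Hypothetical helper: `?` converts diesel::result::Error into ApiError
// automatically through the From impl above.
use crate::api_error::ApiError;
use crate::schema::user;
use diesel::prelude::*;

fn count_users(conn: &PgConnection) -> Result<i64, ApiError> {
    let n = user::table.count().get_result(conn)?;
    Ok(n)
}

For the database handling itself we need a few more dependencies: chrono for timestamps, diesel and diesel_migrations, r2d2 for connection pooling, lazy_static for the static pool, and uuid for the ids.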

[dependencies]
chrono = { version = "0.4", features = ["serde"] }
diesel = { version = "1.4", features = ["postgres", "r2d2", "uuid", "chrono"] }
diesel_migrations = "1.4"
lazy_static = "1.4"
r2d2 = "0.8"
uuid = { version = "0.6", features = ["serde", "v4"] }

For handling state we will be using statics, even though Actix has built-in state management. You can read my post on loose coupling to understand why I decided on this approach, although some people might disagree with it.
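
For comparison, a minimal sketch of the built-in approach could look like this; we will not use it in this tutorial, and the connection string is just a placeholder:

// Hypothetical sketch: sharing an r2d2 pool through Actix's built-in state
// management instead of a static.
use actix_web::{App, HttpServer};
use diesel::pg::PgConnection;
use diesel::r2d2::ConnectionManager;

type Pool = r2d2::Pool<ConnectionManager<PgConnection>>;

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    let manager = ConnectionManager::<PgConnection>::new("postgres://localhost/rest_api");
    let pool: Pool = r2d2::Pool::new(manager).expect("Failed to create db pool");

    HttpServer::new(move ||
        App::new()
            .data(pool.clone()) // handlers can now extract web::Data<Pool>
    )
    .bind("127.0.0.1:5000")?
    .run()
    .await
}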

Now let’s establish a database connection and use r2d2 to efficiently handle the connection pool.

// src/db.rs
use crate::api_error::ApiError;
use diesel::pg::PgConnection;
use diesel::r2d2::ConnectionManager;
use lazy_static::lazy_static;
use r2d2;
use std::env;

type Pool = r2d2::Pool<ConnectionManager<PgConnection>>;
pub type DbConnection = r2d2::PooledConnection<ConnectionManager<PgConnection>>;

embed_migrations!();

lazy_static! {
    static ref POOL: Pool = {
        let db_url = env::var("DATABASE_URL").expect("Database url not set");
        let manager = ConnectionManager::<PgConnection>::new(db_url);
        Pool::new(manager).expect("Failed to create db pool")
    };
}

pub fn init() {
    info!("Initializing DB");
    lazy_static::initialize(&POOL);
    let conn = connection().expect("Failed to get db connection");
    embedded_migrations::run(&conn).unwrap();
}

pub fn connection() -> Result<DbConnection, ApiError> {
    POOL.get()
        .map_err(|e| ApiError::new(500, format!("Failed getting db connection: {}", e)))
}

With the database connection established we can finally create the API for creating, reading, updating and deleting the user data.

// src/user/model.rs
use crate::api_error::ApiError;
use crate::db;
use crate::schema::user;
use chrono::{NaiveDateTime, Utc};
use diesel::prelude::*;
use serde::{Deserialize, Serialize};
use uuid::Uuid;

#[derive(Serialize, Deserialize, AsChangeset)]
#[table_name = "user"]
pub struct UserMessage {
    pub email: String,
    pub password: String,
}

#[derive(Serialize, Deserialize, Queryable, Insertable)]
#[table_name = "user"]
pub struct User {
    pub id: Uuid,
    pub email: String,
    pub password: String,
    pub created_at: NaiveDateTime,
    pub updated_at: Option<NaiveDateTime>,
}

impl User {
    pub fn find_all() -> Result<Vec<Self>, ApiError> {
        let conn = db::connection()?;

        let users = user::table
            .load::<User>(&conn)?;

        Ok(users)
    }

    pub fn find(id: Uuid) -> Result<Self, ApiError> {
        let conn = db::connection()?;

        let user = user::table
            .filter(user::id.eq(id))
            .first(&conn)?;

        Ok(user)
    }

    pub fn create(user: UserMessage) -> Result<Self, ApiError> {
        let conn = db::connection()?;

        let user = User::from(user);
        let user = diesel::insert_into(user::table)
            .values(user)
            .get_result(&conn)?;

        Ok(user)
    }

    pub fn update(id: Uuid, user: UserMessage) -> Result<Self, ApiError> {
        let conn = db::connection()?;

        let user = diesel::update(user::table)
            .filter(user::id.eq(id))
            .set(user)
            .get_result(&conn)?;

        Ok(user)
    }

    pub fn delete(id: Uuid) -> Result<usize, ApiError> {
        let conn = db::connection()?;

        let res = diesel::delete(
                user::table
                    .filter(user::id.eq(id))
            )
            .execute(&conn)?;

        Ok(res)
    }
}

impl From<UserMessage> for User {
    fn from(user: UserMessage) -> Self {
        User {
            id: Uuid::new_v4(),
            email: user.email,
            password: user.password,
            created_at: Utc::now().naive_utc(),
            updated_at: None,
        }
    }
}

And with the user API in place, we can use it instead of the fake data we used earlier.

// src/user/routes.rs
use crate::api_error::ApiError;
use crate::user::{User, UserMessage};
use actix_web::{delete, get, post, put, web, HttpResponse};
use serde_json::json;
use uuid::Uuid;

#[get("/users")]
async fn find_all() -> Result<HttpResponse, ApiError> {
    let users = User::find_all()?;
    Ok(HttpResponse::Ok().json(users))
}

#[get("/users/{id}")]
async fn find(id: web::Path<Uuid>) -> Result<HttpResponse, ApiError> {
    let user = User::find(id.into_inner())?;
    Ok(HttpResponse::Ok().json(user))
}

#[post("/users")]
async fn create(user: web::Json<UserMessage>) -> Result<HttpResponse, ApiError> {
    let user = User::create(user.into_inner())?;
    Ok(HttpResponse::Ok().json(user))
}

#[put("/users/{id}")]
async fn update(id: web::Path<Uuid>, user: web::Json<UserMessage>) -> Result<HttpResponse, ApiError> {
    let user = User::update(id.into_inner(), user.into_inner())?;
    Ok(HttpResponse::Ok().json(user))
}

#[delete("/users/{id}")]
async fn delete(id: web::Path<Uuid>) -> Result<HttpResponse, ApiError> {
    let num_deleted = User::delete(id.into_inner())?;
    Ok(HttpResponse::Ok().json(json!({ "deleted": num_deleted })))
}

pub fn init_routes(cfg: &mut web::ServiceConfig) {
    cfg.service(find_all);
    cfg.service(find);
    cfg.service(create);
    cfg.service(update);
    cfg.service(delete);
}

Now we just have to add the db, schema and api_error modules to our main file. I also strongly recommend initializing the database pool at startup, although that is not strictly necessary. We are using lazy_static to handle the database pool, so if we don’t initialize it right away, it won’t be initialized until it is first used, which will be when the first user tries to call our API.

// src/main.rs
#[macro_use]
extern crate log;
#[macro_use]
extern crate diesel;
#[macro_use]
extern crate diesel_migrations;

use actix_web::{App, HttpServer};
use dotenv::dotenv;
use listenfd::ListenFd;
use std::env;

mod api_error;
mod db;
mod schema;
mod user;

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    dotenv().ok();
    env_logger::init();

    db::init();

    let mut listenfd = ListenFd::from_env();
    let mut server = HttpServer::new(|| 
        App::new()
            .configure(user::init_routes)
    );

    server = match listenfd.take_tcp_listener(0)? {
        Some(listener) => server.listen(listener)?,
        None => {
            let host = env::var("HOST").expect("Host not set");
            let port = env::var("PORT").expect("Port not set");
            server.bind(format!("{}:{}", host, port))?
        }
    };

    info!("Starting server");
    server.run().await
}

We should now be able to create, read, update and delete users via the endpoints.
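
Again, curl works well for a quick check. The <id> below is a placeholder; use an id returned by one of the earlier calls:

$ curl -X POST http://localhost:5000/users -H "Content-Type: application/json" -d '{"email": "user@example.com", "password": "secret"}'
$ curl http://localhost:5000/users
$ curl -X PUT http://localhost:5000/users/<id> -H "Content-Type: application/json" -d '{"email": "new@example.com", "password": "secret"}'
$ curl -X DELETE http://localhost:5000/users/<id>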

In case you need it, you can also find the complete code on GitHub.

In case you have any questions or suggestions for improvement, feel free to contact me. The same goes for suggestions for tutorials or topics I could cover in an upcoming blog post.

Next up I am planning to show how to authenticate our users. I also have a list of other topics that I plan to cover, and you can be the first to know by signing up for the newsletter.

