Rapina 0.11.0

Background jobs, multipart uploads, environment variable config for the dev server, and OpenAPI improvements

Rapina 0.11.0 shipped on April 1, 2026. This release introduced a full background jobs system — from handler definition to enqueueing to CLI visibility — along with multipart file upload support, environment-based server configuration, and automatic OpenAPI requestBody generation.


Background jobs

The biggest addition in 0.11.0 was a complete background jobs system. Jobs were defined with a macro, enqueued through a typed extractor, and persisted to the database — all without reaching for a separate queue service.

#[job] macro

The #[job] macro marks an async function as a background job handler. It accepts attributes for queue routing and retry configuration:

use rapina::prelude::*;

#[derive(Serialize, Deserialize)]
struct WelcomeEmailPayload {
    user_id: i64,
    email: String,
}

#[job(queue = "emails", max_retries = 5, retry_policy = "exponential", retry_delay_secs = 2)]
async fn send_welcome_email(payload: WelcomeEmailPayload) -> JobResult {
    // send the email
    println!("Sending welcome email to {}", payload.email);
    Ok(())
}

The macro generates a helper function with the same name that returns a JobRequest. That value is passed to Jobs::enqueue(). Handlers can also receive dependency-injected State<T> and Db parameters alongside the payload.
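
To make the "helper function with the same name" idea concrete, here is a rough sketch of what the expansion might look like. The JobRequest shape and field names here are simplified and hypothetical — the real generated code is an implementation detail of the macro:

```rust
use std::time::Duration;

// Hypothetical, simplified stand-ins for rapina's internal types.
#[derive(Debug)]
struct WelcomeEmailPayload {
    user_id: i64,
    email: String,
}

#[derive(Debug)]
struct JobRequest {
    queue: &'static str,
    job_type: &'static str,
    payload_json: String,
    max_retries: u32,
    base_delay: Duration,
}

// Roughly what #[job(queue = "emails", max_retries = 5, ...)] generates:
// a helper with the handler's name that packages the serialized payload
// together with the attribute configuration.
fn send_welcome_email(payload: WelcomeEmailPayload) -> JobRequest {
    JobRequest {
        queue: "emails",
        job_type: "send_welcome_email",
        payload_json: format!(
            "{{\"user_id\":{},\"email\":{:?}}}",
            payload.user_id, payload.email
        ),
        max_retries: 5,
        base_delay: Duration::from_secs(2),
    }
}
```

Calling send_welcome_email(...) therefore does not run the handler — it only builds the request value that Jobs::enqueue() persists.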

Jobs extractor

The Jobs extractor gives handlers a way to enqueue jobs from within a request:

use rapina::jobs::Jobs;

#[post("/users")]
async fn create_user(body: Json<CreateUserRequest>, db: Db, jobs: Jobs) -> Result<StatusCode> {
    // Simple enqueue
    jobs.enqueue(send_welcome_email(WelcomeEmailPayload {
        user_id: 42,
        email: body.email.clone(),
    }))
    .await?;

    Ok(StatusCode::CREATED)
}

For transactional enqueues — where the job and the database write need to commit together — enqueue_with() accepts a connection or transaction:

let txn = db.conn().begin().await?;
let user = User::insert(&txn, &body).await?;
jobs.enqueue_with(&txn, send_welcome_email(WelcomeEmailPayload {
    user_id: user.id,
    email: user.email.clone(),
}))
.await?;
txn.commit().await?;

RetryPolicy

RetryPolicy is an enum with three variants, constructed through the exponential(), fixed(), and none() helpers. The first two take a maximum retry count and a base delay:

use rapina::jobs::RetryPolicy;
use std::time::Duration;

// Up to 5 retries: first is immediate, then 2s → 8s → 32s… (base × 4^(n-2) each step)
RetryPolicy::exponential(5, Duration::from_secs(2));

// Up to 3 retries: first is also immediate, then uses the fixed 10-second gap
RetryPolicy::fixed(3, Duration::from_secs(10));

// No retries
RetryPolicy::none();

The first retry is always immediate for both policies; subsequent attempts follow the configured delay. Exponential backoff adds deterministic jitter per job ID to prevent thundering herd after outages. Delay is capped at one week. Jobs that exhaust all retries move to failed and remain in the table.
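
The exponential schedule described above can be sketched as a small function (jitter omitted; retry_delay is an illustrative name, not rapina API): attempt 1 retries immediately, and attempt n ≥ 2 waits base × 4^(n−2), capped at one week.

```rust
use std::time::Duration;

const ONE_WEEK: Duration = Duration::from_secs(7 * 24 * 60 * 60);

// Sketch of the exponential retry schedule: the first retry is
// immediate, then each subsequent attempt waits base * 4^(n-2),
// never exceeding the one-week cap.
fn retry_delay(base: Duration, attempt: u32) -> Duration {
    if attempt <= 1 {
        return Duration::ZERO;
    }
    // Saturate on overflow; the cap below bounds the result anyway.
    let factor = 4u32.checked_pow(attempt - 2).unwrap_or(u32::MAX);
    base.saturating_mul(factor).min(ONE_WEEK)
}
```

With a 2-second base this reproduces the 0 → 2s → 8s → 32s progression from the comment above.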

Background jobs table and migration

Job state is stored in a rapina_jobs table. To add it to your project, run rapina jobs init to register the migration, then apply it with rapina migrate:

$ rapina jobs init
$ rapina migrate

rapina jobs init adds a create_rapina_jobs migration to your migrations list. rapina migrate (or .run_migrations() called at startup) then creates the table if it does not exist. The schema:

rapina_jobs
  id            uuid primary key  (gen_random_uuid())
  queue         varchar(255)      default 'default'
  job_type      varchar(255)
  payload       jsonb             default {}
  status        varchar(32)       default 'pending'  -- pending | running | completed | failed
  attempts      integer           default 0
  max_retries   integer           default 3
  run_at        timestamptz       default now()
  started_at    timestamptz
  locked_until  timestamptz
  finished_at   timestamptz
  last_error    text
  trace_id      varchar(64)
  created_at    timestamptz       default now()

A partial index on (queue, run_at) WHERE status = 'pending' keeps polling efficient. The jobs system requires PostgreSQL — SQLite and MySQL are not supported.
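
In SQL terms, that partial index would look roughly like this (the index name here is illustrative, not the one the migration actually uses):

```sql
-- Sketch of the partial index on pending jobs (hypothetical name)
CREATE INDEX rapina_jobs_pending_idx
    ON rapina_jobs (queue, run_at)
    WHERE status = 'pending';
```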

rapina jobs list

The CLI gained a rapina jobs list command for inspecting job queue status:

$ rapina jobs list

  STATUS     COUNT
  ─────────  ─────
  pending    3
  running    1
  completed  142
  failed     2

Passing --failed shows a detailed table of failed jobs with the last error and retry counts:

$ rapina jobs list --failed

  ID            QUEUE    JOB TYPE              ATT.    LAST ERROR
  ────────────  ───────  ────────────────────  ──────  ──────────────────────────
  3fa85f64...   emails   send_welcome_email    3/3     connection refused
  7c9e6679...   default  sync_user_data        3/3     timeout after 30s

Multipart file upload

The new Multipart extractor added file upload handling. Fields arrive as a stream and are consumed one at a time:

use rapina::extract::Multipart;

#[post("/upload")]
async fn upload_file(mut multipart: Multipart) -> Result<String> {
    while let Some(field) = multipart.next_field().await? {
        let name = field.name().unwrap_or("unknown").to_string();
        let file_name = field.file_name().map(|s| s.to_string());

        if let Some(file_name) = file_name {
            let data = field.bytes().await?;
            println!("File '{}': {} bytes", file_name, data.len());
        } else {
            let text = field.text().await?;
            println!("Field '{}': {}", name, text);
        }
    }

    Ok("ok".to_string())
}

Field exposes name(), file_name(), content_type(), bytes(), text(), and a streaming chunk() method for large files. Forms that mix file uploads with text fields are handled through the same API.


RAPINA_HOST and RAPINA_PORT

When both RAPINA_HOST and RAPINA_PORT are set, they override the address passed to .listen():

RAPINA_HOST=0.0.0.0 RAPINA_PORT=8080 rapina dev

The override applies at the listen() call level, so it works in any environment — not only during rapina dev. The rapina dev command also passes these variables when spawning the compiled binary, making it straightforward to bind to a non-default address during development without changing code. If neither variable is set, the address passed to .listen() is used as-is.
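
The resolution rule reads as: override only when both variables are present, otherwise fall back to the configured address. A minimal sketch (resolve_addr and addr_from_env are hypothetical names, not rapina API):

```rust
use std::env;

// Sketch of the override rule: both host and port must be supplied
// for the override to apply; otherwise the configured address wins.
fn resolve_addr(configured: &str, host: Option<String>, port: Option<String>) -> String {
    match (host, port) {
        (Some(h), Some(p)) => format!("{h}:{p}"),
        _ => configured.to_string(),
    }
}

// Thin wrapper that feeds the environment into the rule above.
fn addr_from_env(configured: &str) -> String {
    resolve_addr(
        configured,
        env::var("RAPINA_HOST").ok(),
        env::var("RAPINA_PORT").ok(),
    )
}
```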


OpenAPI requestBody auto-generation

Handlers that accept a body extractor have their requestBody populated in the generated OpenAPI spec automatically. The following extractors are supported:

  Extractor           Content-Type                       Required
  ──────────────────  ─────────────────────────────────  ────────
  Json<T>             application/json                   yes
  Form<T>             application/x-www-form-urlencoded  yes
  Validated<Json<T>>  application/json                   yes
  Validated<Form<T>>  application/x-www-form-urlencoded  yes
  Option<Json<T>>     application/json                   no
  Option<Form<T>>     application/x-www-form-urlencoded  no

Schemas are derived from the type's JsonSchema impl. No attributes are required for the common case:

#[derive(Deserialize, JsonSchema)]
struct CreateTodo {
    title: String,
    done: bool,
}

#[post("/todos")]
async fn create_todo(body: Json<CreateTodo>) -> Result<Json<Todo>> {
    // requestBody generated automatically from CreateTodo
}
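
For the CreateTodo handler above, the generated spec fragment would look roughly like this (a sketch of the shape, not verbatim generator output):

```yaml
requestBody:
  required: true
  content:
    application/json:
      schema:
        type: object
        required: [title, done]
        properties:
          title: { type: string }
          done: { type: boolean }
```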

Explicit overrides remain available via #[openapi(request_body = ...)] for cases that need more control.


matchit 0.9.1

The underlying router was upgraded from matchit 0.8 to 0.9.1. The new version brought performance improvements to route matching and expanded support for wildcard and catch-all patterns. No changes to route definitions are required.


Also: Juca made his quiet debut in this release.


Upgrade by bumping the version in your Cargo.toml:

rapina = "0.11.0"