Building an HTTP Server on a Thread-per-Core Framework, without Async/Await


📚 Part 2 • The Architecture of Tina

In previous articles, we explored why async/await is the wrong primitive for massive concurrency and why unbounded queues don’t fix overload. Those are part of the foundational beliefs guiding the design of Tina.

But theory is cheap. To prove this architecture works in reality, I built a specification-compliant, production-grade HTTP/1.1 server natively on top of Tina’s concurrency primitives.

This article will show you how the HTTP library (Tina HTTP) is composed on top of the core framework. You can use this model to build your own networked applications — such as a custom RPC system, a WebSocket server, or a stateful game backend.

The Mental Model: Isolates and Shards

Tina is a strictly bounded, thread-per-core framework designed for high-performance concurrency, safety, and fault-tolerance.

Instead of using async/await, Tina uses an Isolate as the fundamental unit of concurrent work. Isolates, similar to what some might call Actors, are state machines that communicate exclusively via message passing. Each Isolate has its own memory region and runs on a single Shard (a dedicated OS thread). They are scheduled cooperatively, so you yield control to the scheduler by returning an Effect. This gives you fine-grained control over when and how long each Isolate runs, which results in highly predictable performance.
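As a rough sketch (the callback signature, `Ctx`, `Message`, and the Effect constructor below are assumed names for illustration, not Tina’s published API), an Isolate is just plain state plus a step function that returns the next Effect:

```odin
import tina "tina" // hypothetical import path

// A minimal counter Isolate: plain state, no async keyword anywhere.
Counter :: struct {
    count: int,
}

// The Shard calls this whenever a message arrives. Returning an Effect
// yields control back to the cooperative scheduler, so an Isolate can
// never monopolize its Shard between messages.
counter_step :: proc(ctx: ^tina.Ctx, state: ^Counter, msg: tina.Message) -> tina.Effect {
    state.count += 1
    return tina.effect_receive() // suspend until the next message
}
```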

Isolates are also the unit of fault containment. If an Isolate crashes (panics or even segfaults), the Shard’s trap boundary catches the fault, cleans up the Isolate, and the supervisor restarts it. Other Isolates on the same Shard never notice. Because faults are contained at the Isolate boundary, this architecture removes the need for complex global error handling. It is Erlang’s “Let it crash” philosophy reinvented for systems programming.

This model allows you to write simple, synchronous-looking state machines without fighting a black-box async runtime and its implicit state machine.

Deconstructing HTTP: Control Plane vs. Data Plane

Tina HTTP is composed of three distinct Isolate types.

  1. The Listener Isolate (The Control Plane): The Listener is an Isolate whose only job is to bind to a port, call accept, and spawn other Isolates. It does not do HTTP parsing. It is simply an accept-loop.

  2. The Connection Isolate (The Data Plane): Every time the Listener accepts a new socket, it spawns exactly one Connection Isolate. This Isolate owns the TCP socket. It reads the raw bytes, parses the HTTP request, invokes the user’s route handler, and writes the response back to the network.

  3. The Dispatcher Isolate (Cross-Shard Scaling): The other two Isolate types are sufficient for a single-threaded HTTP server. A third Isolate type is used for multi-shard scaling: the Dispatcher.

Here is how the topology works in practice:

  • Single-Shard Mode: The Listener Isolate accepts a connection and directly spawns a Connection Isolate on the same shard. This mode is perfect for workloads that a single CPU core can handle.
  • Multi-Shard Mode: The Listener Isolate accepts a connection, makes a lightweight L4 routing decision (such as round-robin), and hands the file descriptor across the shard boundary via the messaging subsystem. Sitting on the target shard is a Dispatcher Isolate. It receives the socket, spawns the Connection Isolate locally on its Shard, and goes back to waiting.
[ SHARD 0 ] (Ingress / Control)               [ SHARD 1 ] (Data Plane)
+---------------------------------+           +---------------------------------+
|                                 |           |                                 |
|  +---------------------------+  |           |  +---------------------------+  |
|  |     Listener Isolate      |  |           |  |   Dispatcher Isolate      |  |
|  | (Binds port, calls accept)|  |  L4       |  | (Waits for handoffs)      |  |
|  +-------------+-------------+  | Handoff   |  +-------------+-------------+  |
|                |                |=========> |                |                |
|                v                | (Mailbox) |                v                |
|  +---------------------------+  |           |  +---------------------------+  |
|  |    Connection Isolate     |  |           |  |   Connection Isolate      |  |
|  | (Local HTTP parse/respond)|  |           |  | (Local HTTP parse/respond)|  |
|  +---------------------------+  |           |  +---------------------------+  |
|                                 |           |                                 |
+---------------------------------+           +---------------------------------+

In Multi-Shard Mode, the Listener makes a lightweight L4 routing decision and sends the file descriptor across to a Dispatcher on another Shard.
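In pseudo-Odin, the Listener’s multi-shard path is simply an accept loop with a round-robin counter. All identifiers here are illustrative assumptions, not the real Tina HTTP internals:

```odin
// Illustrative sketch of the Listener's step function in Multi-Shard Mode.
listener_step :: proc(ctx: ^tina.Ctx, state: ^Listener, msg: tina.Message) -> tina.Effect {
    fd := tina.io_accept_result(msg) // the accepted socket's file descriptor

    // Lightweight L4 routing decision: plain round-robin over data-plane shards.
    target := state.dispatchers[state.next]
    state.next = (state.next + 1) % len(state.dispatchers)

    // Hand the raw fd across the shard boundary via the mailbox; the
    // Dispatcher on the target Shard spawns the Connection Isolate locally.
    tina.ctx_send(ctx, target, Handoff{fd = fd})

    return tina.effect_io_accept(state.listen_fd) // re-arm the accept
}
```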

This separation of concerns provides structural fault isolation. If a malformed HTTP request triggers a bug in the parser, or if a user’s route handler panics during execution, only that specific Connection Isolate crashes. The framework immediately closes the socket and reclaims the memory. The other active connections continue processing without interruption.

The Handler Contract: Synchronous Code, Predictable State

Despite the rigorous systems architecture beneath it, the application code you write remains entirely synchronous, linear, and clean.

There is no async keyword to fragment your call stack, and no await to obscure your control flow. No promise chaining. No callback hell. A route handler is a regular function that accepts a Request, mutates a Response, and returns a Route_Step instruction.

import http "tina_http"
import "core:fmt"

get_user :: proc(request: ^http.Request, response: ^http.Response) -> http.Route_Step {
    id := http.param(request, "id")

    // Mutate the response buffer directly
    http.header_set(response, "cache-control", "no-cache")

    // tprintf uses Tina's handler-transient scratch arena,
    // ensuring zero mallocs or hidden heap allocations.
    body := fmt.tprintf(`{"id":"%s"}`, id)

    // Return a Route_Step instruction to the framework
    return http.respond_json(response, http.HTTP_STATUS_OK, body)
}

The HTTP library acts as a bridge to the underlying framework. It takes the returned Route_Step (such as .Flush_Final) and translates it into the underlying Effect in Tina’s core (such as submitting an io_send operation to the internal I/O subsystem). The developer gets an ergonomic API; the framework gets rigorous, predictable state transitions.

We achieve this predictability by pre-allocating all memory at startup. When a Connection Isolate processes a request, it uses memory that already exists. If your workload exceeds your pre-configured capacity, Tina drops messages and sheds load predictably rather than failing with a catastrophic Out-Of-Memory crash.
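The bridge can be pictured as a single switch over Route_Step. This is a sketch: apart from .Flush_Final, the enum variants and the Effect constructors are assumptions, not the library’s actual identifiers:

```odin
// How a Route_Step returned by a handler might map onto a core Effect.
route_step_to_effect :: proc(conn: ^Connection, step: http.Route_Step) -> tina.Effect {
    switch step {
    case .Flush_Final:
        // Submit an io_send for the serialized response, then close.
        return tina.effect_io_send(conn.fd, response_bytes(&conn.response))
    case .Keep_Alive:
        // Flush, then re-arm a read for the next request on this socket.
        return tina.effect_io_send_then_recv(conn.fd, response_bytes(&conn.response))
    case .Close:
        return tina.effect_io_close(conn.fd)
    }
    return tina.effect_io_close(conn.fd)
}
```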

Escaping the HTTP Sandbox

Because the HTTP Connection is just another Isolate, you are not trapped in the traditional request-response abstraction most web frameworks force upon you.

When an HTTP request requires fanning out to multiple subscribers, managing long-running background tasks, or orchestrating complex state, traditional frameworks force you to introduce external infrastructure like Redis or to spin up OS threads with mutexes.

With Tina, if an incoming request requires heavy background processing, your route handler simply calls ctx_spawn() to spin up a Worker Isolate, or ctx_send() to message a Pub/Sub Router Isolate. These components run within the exact same OS process, communicating via lock-free message passing. Thus, you can compose complex, distributed-style architectures entirely within a single binary, safely and predictably.
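A handler that offloads heavy work might look like the following. Note that worker_isolate, Resize_Job, HTTP_STATUS_ACCEPTED, and the exact ctx_spawn signature are hypothetical names used for illustration:

```odin
resize_image :: proc(request: ^http.Request, response: ^http.Response) -> http.Route_Step {
    job := Resize_Job{path = http.param(request, "path")}

    // Spawn a Worker Isolate in the same process; it runs the heavy work
    // cooperatively while this handler responds immediately.
    http.ctx_spawn(request, worker_isolate, job)

    return http.respond_json(response, http.HTTP_STATUS_ACCEPTED, `{"status":"queued"}`)
}
```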

Deterministic Simulation Testing

This strict, predictable architecture unlocks one of Tina’s most powerful features: Deterministic Simulation Testing (DST).

In traditional web frameworks, testing network timeouts, partial HTTP frame reads, or database disconnects requires a fragile web of mocks, spies, and dependency injection. Tina eliminates this.

Because the HTTP server is built on pure state machines, and Tina’s scheduler completely abstracts the clock, the thread, and the I/O subsystem, you rely entirely on the framework’s simulation engine.

You can run your HTTP server in simulation mode, on a single thread. The engine can inject dropped TCP packets, simulate slowloris attacks, or force a specific Isolate to crash. If a bug occurs, the simulation engine prints a seed. Plug that seed back in, and the exact same sequence of network delays, interleaved requests, and crashes will play back perfectly. You are therefore testing the actual physical boundaries of your system, not a mocked fantasy.

This works flawlessly with Odin’s native core:testing package. You simply pass the Odin test seed into Tina’s simulation config, and you can orchestrate thousands of simulated network faults just by running odin test.
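A simulation test might then look like this. The Sim_Config fields and the helper for reading the runner’s seed are assumptions; consult the Tina docs for the exact names:

```odin
import "core:testing"
import tina "tina"

@(test)
http_survives_packet_loss :: proc(t: ^testing.T) {
    // Reuse the Odin test runner's seed so a failing run prints a value
    // you can plug back in for a bit-for-bit identical replay.
    cfg := tina.Sim_Config{
        seed             = testing_seed(t), // assumed helper for the runner's seed
        drop_packet_rate = 0.05,            // inject 5% simulated TCP packet loss
    }
    tina.simulate(cfg, run_http_server_scenario)
}
```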

See it in Action

You can read the deep-dive architectural documents on how this works in the Tina docs directory, and explore the HTTP API in the Tina HTTP README.

If you want to see the performance and simplicity for yourself, you can run the HTTP example with the Datastar SDK locally, right now:

git clone https://github.com/pmbanugo/tina.git
cd tina
odin run examples/example_http_datastar.odin -file

Then open a browser and navigate to http://localhost:8080.

Wrapping Up

Tina is a zero-dependency systems framework built in the open. My goal is to prove that we do not need to settle for complex async runtimes, coloured functions, or unpredictable garbage collectors to build massively concurrent and reliable software.

If you believe in this architectural approach and want to see systems programming return to mechanical sympathy and structural simplicity, consider supporting the project.

  • Star the repository on GitHub: In the open-source world, stars are the social proof that helps pragmatic engineers discover new projects. It takes two seconds and makes a massive difference to the project’s visibility.
  • 💖 Sponsor the work: If your company relies on high-throughput backend systems, sponsoring helps fund the design, development, and testing required to harden these bare-metal primitives.

The code for the HTTP library and the core framework is Apache licensed and available to use today. I welcome your architectural critiques in the GitHub Discussions.
