Tracing the Evolution of Async: Node.js Promises to Rust Futures

Written by Devops Traveler Tales, 4 August 2025

The Era of Thread-Dominated Web Servers

Before 2009, web servers were dominated by threads.
For every incoming request, the server would spawn a dedicated thread to handle it. This is like a restaurant hiring an additional waiter every time a customer walks in: once the restaurant becomes crowded, having so many waiters makes it difficult to even move around.

Similarly, when a web server has too many threads, it leads to excessive consumption of CPU and memory resources, ultimately degrading system performance (see: C10k Problem).

At this point, Ryan Dahl had an idea:

“JavaScript engines already handle asynchronous operations very efficiently in the browser using the event loop. What if we applied the same model to servers?”

That moment marked the beginning of a new era in web servers, the birth of Node.js.


Node.js: The Emergence of the Event Loop and Asynchronous Programming

Ryan Dahl chose the event loop as a powerful alternative to threads.
The event loop is a mechanism where a single main thread handles tasks, while long-running operations — particularly I/O tasks like network requests or file reading — are processed asynchronously so that the main thread remains unblocked.

This architecture proposed by Node.js was a shock to developers.
It enabled a single thread to efficiently handle thousands, even tens of thousands, of requests with ease.


What Is Asynchronous Programming?

Let’s take a moment to clarify what we mean by "asynchronous."

  • In synchronous processing, Task B can only begin once Task A has completed. Everything proceeds in order, which means you have to wait.
  • In asynchronous processing, Task B can start immediately without waiting for Task A to finish. When Task A eventually completes, its result is retrieved and used as needed.

For example, standing still and doing nothing while waiting for a pizza to finish baking is synchronous behavior. Placing the order and then watching a movie or doing other tasks while waiting — that’s asynchronous.
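In JavaScript terms, the pizza scenario looks like this. This is a minimal sketch: a setTimeout timer stands in for the oven, and an events array records the order in which things happen.

```javascript
const events = [];

events.push('order pizza');

// The timer callback runs later, without blocking the lines below it
setTimeout(() => {
  events.push('pizza is ready');
  console.log(events.join(' -> '));
  // order pizza -> watch a movie meanwhile -> pizza is ready
}, 100);

events.push('watch a movie meanwhile'); // runs immediately, no waiting
```

Note that "watch a movie meanwhile" is recorded before "pizza is ready": the main thread kept going instead of standing still.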

(Image source: https://poiemaweb.com/js-async)

What Are the Benefits of Asynchronous Programming?

  • High throughput: Multiple tasks can be performed simultaneously without waiting for long-running operations to finish.
  • Efficient resource usage: Threads don’t need to be spawned indiscriminately, allowing the system to manage resources more effectively.
  • Faster response times: User requests are handled quickly, improving the overall user experience.

Thanks to these advantages, Node.js rapidly gained traction in a variety of domains, including web servers, API backends, and real-time chat services.


The Core of Async in Node.js: Promise

The key technology that makes asynchronous processing in Node.js more refined is the Promise.


A Promise is a JavaScript object that represents a task that will complete in the future, and it allows you to manage the state of an asynchronous operation.

Many people simply memorize that Promises were created to eliminate "callback hell." Let’s look at the following example to understand what they actually do:

const p = new Promise((resolve) => {
  let v = 0;
  console.log('in p'); // executed immediately
  setTimeout(() => { v += 1; resolve(v); }, 1000);
});

p.then((v1) => {
  (async () => {
    console.log('a', v1);
    const v2 = await p;
    console.log('p', v2);
  })();
  console.log('b');
});

What’s the output of this code?

First, "in p" is printed immediately.
Then, after one second, "a 1" is printed, followed by "p 1" almost instantly.

Unlike continuation-passing style (CPS), which chains callbacks to handle asynchronous tasks, a resolved Promise does not re-execute its internal logic when .then() or await is called again.
Instead, it simply returns the already-resolved result.

So in the code above, attaching an await to the resolved Promise does not cause another one-second wait — the result is returned almost instantly.

By leveraging this behavior, you can optimize how time is used in asynchronous operations:

const prm1 = fetch(some); // initiate async call before CPU work

// Start time-consuming CPU-bound task (or possibly another long async task)
const cpuJob = heavyCpuJobs();

// By now, prm1 has likely completed; just await the result!
const prmResult = await prm1; // returns almost immediately

Rust: Embracing the Asynchronous Paradigm of the Next-Generation Language

After Node.js revolutionized the web server paradigm with its event loop and Promises, the programming world began to explore asynchronous programming more broadly.
Amid this shift, the systems programming language Rust also came to actively embrace asynchronous execution.

Rust was designed from the ground up to offer both memory safety and performance. In the realm of systems programming, achieving high-performance concurrency and parallelism with precision is essential — but also notoriously complex. To address this challenge, Rust made the decision to support asynchronous programming (async) as a first-class feature at the language level.

Here are the key reasons why Rust actively adopted async:

  • Efficient resource utilization
    Traditional multithreading models suffer from frequent context switching between threads, leading to high resource consumption.
    Rust’s async runtime minimizes the number of threads while still managing resources effectively.
  • Maximum performance
    Async in Rust is compiled into a state machine at compile time, offering zero-cost abstractions that minimize runtime overhead.
  • Maintaining memory safety
    Even in asynchronous contexts, Rust ensures data race freedom through strict compile-time checks.

By providing language-level support for async in this way, Rust has expanded the concept of asynchronous programming in a fundamentally different direction from Node.js.


Node.js Async vs. Rust Async

Now let’s take a closer look at how Rust differs from Node.js in its approach to async execution.
Unlike Node.js, which uses an eager approach where tasks begin immediately upon Promise creation, Rust manages asynchronous tasks using a lazy model.

  • Node.js (Eager):
    A Promise starts executing (e.g., I/O operations) as soon as it is created.
  • Rust (Lazy):
    A Future does nothing when it is created. It remains idle until it is polled, meaning execution won’t start until the runtime explicitly requests it.
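The eager side is easy to verify in Node.js: the executor function passed to new Promise runs synchronously, at the very moment the Promise is constructed. A small sketch:

```javascript
const order = [];

const p = new Promise((resolve) => {
  order.push('executor runs'); // eager: executes right now, during construction
  resolve(42);
});

order.push('after new Promise');

p.then((v) => {
  order.push(`then sees ${v}`);
  console.log(order.join(' -> '));
  // executor runs -> after new Promise -> then sees 42
});
```

"executor runs" is recorded before "after new Promise", proving the work started as soon as the Promise was created; only the .then callback is deferred.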

A First Look at Rust’s Lazy Futures

use std::time::Duration;

async fn foo() -> i32 {
    println!("start"); // Only printed when the task actually runs
    tokio::time::sleep(Duration::from_secs(1)).await;
    1
}

let fut = foo();         // Just creates the Future; no execution yet
tokio::spawn(fut);       // Execution starts at this point

Rust gives developers explicit control over when a task begins execution.


So how does a lazy Future actually get executed?

A lazy Future in Rust does absolutely nothing until it is polled. This is one of the key differences between how await works in Node.js and how it works in Rust.

In Rust, if a future hasn't been polled yet, calling await will first trigger polling, and only then will it wait for the result. To handle polling, Rust requires a special runtime component called an executor.

The executor is responsible for actually driving the future to completion.
It manages a task queue and ensures that multiple asynchronous tasks are scheduled and executed efficiently.

The executor polls the future and allows you to retrieve the result asynchronously through a JoinHandle. This architecture gives developers precise control over execution timing, enabling highly optimized concurrency.
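To make the poll-driven model concrete, here is a toy block_on executor built only from the standard library. This is a sketch for illustration (production executors like Tokio manage whole queues of tasks across thread pools); it drives a single future by polling it in a loop and parking the thread between polls.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// A waker that unparks the executor thread when the future signals progress.
struct ThreadWaker {
    parked: Mutex<bool>,
    cvar: Condvar,
}

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        *self.parked.lock().unwrap() = false;
        self.cvar.notify_one();
    }
}

// A toy single-task executor: poll the future; if it is Pending, sleep
// until `wake` is called, then poll again.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let shared = Arc::new(ThreadWaker { parked: Mutex::new(true), cvar: Condvar::new() });
    let waker = Waker::from(shared.clone());
    let mut cx = Context::from_waker(&waker);
    // SAFETY: `fut` is never moved again after being pinned here.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out, // the future has completed
            Poll::Pending => {
                // Park until the waker fires, then try polling again.
                let mut parked = shared.parked.lock().unwrap();
                while *parked {
                    parked = shared.cvar.wait(parked).unwrap();
                }
                *parked = true;
            }
        }
    }
}

fn main() {
    // The async block is a lazy Future; block_on is what drives it to completion.
    let answer = block_on(async { 40 + 2 });
    println!("answer = {answer}"); // answer = 42
}
```

Nothing inside the async block runs until block_on performs the first poll, which is exactly the laziness described above.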

So then, how would you express the common Node.js pattern of "starting multiple async tasks in advance" in Rust? Let’s take a look at a typical example of launching concurrent async operations in Node.js.
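In Node.js, that eager pattern typically looks like the following. fetchA and fetchB here are hypothetical stand-ins for real network calls, simulated with setTimeout.

```javascript
// Hypothetical stand-ins for real I/O; each resolves after ~100 ms
const fetchA = () => new Promise((res) => setTimeout(() => res('A'), 100));
const fetchB = () => new Promise((res) => setTimeout(() => res('B'), 100));

async function concurrent() {
  const pA = fetchA(); // already running: Promises are eager
  const pB = fetchB(); // also running, overlapping with pA
  // Total wait is ~100 ms, not ~200 ms, because both were started up front
  const [a, b] = await Promise.all([pA, pB]);
  console.log(a, b); // A B
  return [a, b];
}

concurrent();
```

Both requests are in flight the moment they are created; Promise.all merely collects results that are already being produced.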

How Would You Implement the Same Pattern in Rust?

Since Rust's futures are lazy, the behavior differs slightly — even if the structure looks similar.

async fn concurrent() {
    let fut_a = fetch_a();  // Creates a Future (not yet running)
    let fut_b = fetch_b();  // Another Future (also not yet running)

    // futures::join! polls both Futures concurrently
    let (a, b) = futures::join!(fut_a, fut_b);
}

In Rust, execution begins only at the futures::join! line — the macro awaits both Futures, and that first poll is what actually starts them running concurrently.

Unlike Node.js’s eager model, Rust gives developers explicit control over when tasks start, allowing them to launch asynchronous operations only when needed.


What Are the Advantages of Lazy Run?

Rust’s lazy asynchronous model offers several key advantages that distinguish it from Node.js’s eager approach:

  • Avoiding unnecessary work
    Even if a Future is created, it doesn’t have to be executed unless it’s actually needed. This helps skip tasks that might never be used.
  • Precise scheduling control
    Developers can explicitly decide when to start async operations — either launching them in parallel at specific moments or sequencing them carefully to maximize performance.
  • Zero-cost abstraction
    Rust’s futures are compiled into state machines, which means there’s virtually no runtime overhead. You get ergonomic asynchronous code without sacrificing performance.
  • Flexible cancellation and testability
    Because a Future doesn’t start automatically, it can be canceled or composed for testing before execution. This greatly improves maintainability.
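The first and last points can be demonstrated directly: because an async fn body never runs until it is polled, dropping an un-polled Future cancels it at zero cost. A std-only sketch, where the RAN flag is just an observable marker added for illustration:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Marker flag so we can observe whether the async body ever executed.
static RAN: AtomicBool = AtomicBool::new(false);

async fn expensive() -> u32 {
    RAN.store(true, Ordering::SeqCst); // would run only once the Future is polled
    42
}

fn main() {
    let fut = expensive();               // lazy: the body has not run
    assert!(!RAN.load(Ordering::SeqCst));
    drop(fut);                           // cancelled for free; the body never runs at all
    assert!(!RAN.load(Ordering::SeqCst));
    println!("Future created and dropped without ever executing");
}
```

A Node.js Promise wrapping the same work would have executed it eagerly at creation, whether or not anyone ever consumed the result.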

Tokio: A Powerful Companion for Async in Rust

“But what if I want to execute tasks immediately, like in Node.js?”

In Rust, you can start futures eagerly — just like in Node.js — by using Tokio.

Tokio is Rust’s most popular async executor, and it makes asynchronous programming much easier.
In particular, you can register a future with the executor right away using tokio::spawn, which starts execution immediately:

let handle_a = tokio::spawn(fetch_a()); // Starts immediately
let handle_b = tokio::spawn(fetch_b()); // Starts immediately

let (a, b) = (handle_a.await?, handle_b.await?); // `?` unwraps each JoinHandle's Result

With this approach, you can achieve nearly the same behavior as using Promises in Node.js. Also, Rust’s futures::join! macro polls multiple futures concurrently when invoked, meaning it still preserves the lazy nature while enabling high concurrency.

In short, Rust’s laziness does not reduce concurrency.
In fact, it allows you to control precisely when concurrency begins, enabling efficient resource management.


Single-Threaded Node.js vs. Multi-Threaded Rust

The strength of asynchronous programming lies in its ability to efficiently handle both I/O-bound and CPU-bound operations, maximizing overall performance. However, leveraging the benefits of async does not mean you are limited to a single thread. In fact, asynchronous techniques can be highly efficient in both single-threaded and multi-threaded environments.

This leads us to a critical architectural choice: the threading model.

  • Single-threaded model: A single main thread handles all tasks via an event loop.
  • Multi-threaded model: Multiple threads run concurrently, separating CPU-intensive and I/O tasks for better performance.

Here’s a comparison of the two models:

| Category | Node.js | Rust (Tokio, etc.) |
| --- | --- | --- |
| Main logic | Single-threaded event loop | Choice of single-threaded or multi-threaded executor |
| I/O handling | libuv-based thread pool (default: 4 threads) | Executor-managed internal thread pool (dynamically adjusted) |
| CPU-intensive work | Blocks the main thread → needs separate Worker Threads | Handled via spawn_blocking or spread across the multi-threaded executor |
| Cancellation / timeout | Manual via AbortController | Clean handling via tokio::time::timeout |
| Safety | Single-threaded JS → no data races | Compile-time data-race prevention via Send / Sync traits |

The Strengths and Limitations of Node.js’s Single-Threaded Model

Node.js was designed from the very beginning to operate efficiently in a single-threaded environment. Its event loop and Promise-based architecture deliver excellent performance for most I/O-centric web services.

However, there's one important caveat: Using Promises in Node.js does not guarantee that the main thread will never be blocked.

Consider the case where a Promise wraps a CPU-intensive task — such as encryption, hashing, or image processing:

new Promise((resolve) => {
  // CPU-intensive task, e.g., encryption
  const encrypted = heavyEncrypt(data); // time-consuming operation
  resolve(encrypted);
});

In this scenario, Node.js’s only main thread becomes occupied with the computation, preventing it from handling other tasks.
As a result, the event loop is blocked, and the overall responsiveness of the server drops.

To mitigate this issue, Node.js introduced Worker Threads, which allow CPU-intensive tasks to be offloaded to separate threads outside the main event loop.


Multithreaded Asynchronous Execution with Rust and Tokio

Rust, in contrast, was designed from the ground up to actively leverage multithreaded executors. Its most widely used async executor, Tokio, schedules lightweight tasks (similar to goroutines) across a thread pool, elegantly combining asynchronous programming with multithreading.

Tokio’s executor utilizes a thread pool to execute multiple futures concurrently. By breaking down async tasks into very small units, the system can dynamically maintain the optimal number of worker threads and ensure efficient execution.

For example, parallelizing async tasks in Rust is straightforward:

#[tokio::main]  // Uses the multi-threaded executor by default
async fn main() {
    let handle_io = tokio::spawn(async {
        fetch_data().await
    });

    // CPU-bound work goes to a dedicated blocking thread pool
    let handle_cpu = tokio::task::spawn_blocking(|| {
        heavy_computation()
    });

    // I/O and CPU-bound operations run in parallel; join! awaits both JoinHandles
    let (io_result, cpu_result) = tokio::join!(handle_io, handle_cpu);
}

In this code, Tokio handles both I/O-bound and CPU-bound operations concurrently. It automatically creates and manages the optimal number of worker threads internally, freeing developers from the burden of managing complex thread logic themselves.


When Should You Choose Which Approach?

In asynchronous programming, the key is choosing the right model for the right context.

| Use Case | Recommended Approach |
| --- | --- |
| Lightweight web applications (I/O-bound) | Node.js with the single-threaded model + eager Promises |
| CPU-intensive tasks or high performance | Rust with lazy Futures + a multithreaded executor (e.g., Tokio) |
| A balanced middle ground | Node.js with Worker Threads + eager Promises, or Rust with eager helpers |
  • For lightweight, I/O-driven applications, Node.js offers fast development and efficient resource usage — making it an excellent choice.
  • For systems that demand high performance and stability, Rust with Tokio's powerful multithreaded support and fine-grained control is often the better fit.
  • A hybrid approach is also viable — combining the strengths of both models and selectively applying them depending on the scenario can yield optimal results.

In Closing

To truly understand asynchronous programming, you must consider both when tasks are executed (Eager vs. Lazy) and where they are executed (Single-threaded vs. Multi-threaded).

Node.js delivers excellent productivity and fast response times with its eager async model and single-threaded architecture. In contrast, Rust combines lazy async with multi-threaded execution to achieve optimal resource efficiency and maximum performance.

I hope this article serves as a clear guide for anyone exploring async models and threading strategies. Use the unique strengths of each language to design an asynchronous architecture that best fits your system.

Keep in mind that even in Node.js, you can use Worker Threads to offload Promise-based workloads to separate threads. By selecting the right approach for the situation, you can build systems that are both high-performing and resource-efficient.

Finally, this article focused on Node.js and Rust not because they are inherently superior, but because they provide excellent examples for discussing modern async paradigms. If you understand how these languages handle async, you’ll gain the insight needed to implement asynchronous models — even in environments where async is not supported at the language level.


© 2025 AFI, INC. All Rights Reserved. All Pictures cannot be copied without permission.
