Tags: RUBY 3 / Categories: RUBY

Ruby's Fiber Scheduler — Concurrency That Actually Makes Sense

Most Ruby developers have a complicated relationship with concurrency. Threads feel dangerous. Processes feel heavy. And the GIL — the Global Interpreter Lock — has been the villain in so many horror stories that a lot of us quietly gave up and just scaled horizontally. But Ruby 3 shipped something that changes the picture: the Fiber Scheduler. It’s not a silver bullet, but for I/O-bound workloads, it’s genuinely powerful and far less scary than you’d expect.

What the Fiber Scheduler Actually Does


Before Fiber Scheduler, fibers in Ruby were a cooperative concurrency primitive — lightweight, manually switched, useful but verbose to orchestrate. You yielded control explicitly, you resumed explicitly. Great for understanding coroutines; awkward in production code.
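A quick illustration of that manual style, using only core Ruby — every switch is explicit, which is exactly the verbosity the scheduler removes:

```ruby
# Plain fibers: nothing runs until the caller resumes,
# and control only comes back on an explicit yield.
log = []

fiber = Fiber.new do
  log << "step 1"
  Fiber.yield          # hand control back to the caller
  log << "step 2"
end

fiber.resume           # runs until the first Fiber.yield
log << "between resumes"
fiber.resume           # runs to the end

p log  # => ["step 1", "between resumes", "step 2"]
```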

The Fiber Scheduler changes the model. Instead of you managing when fibers yield and resume, the scheduler hooks into Ruby’s I/O operations — things like read, write, sleep, DNS lookups — and automatically suspends the current fiber when it’s waiting on I/O, then resumes it when the data is ready. You write mostly normal code; the scheduler makes it concurrent underneath.

Think of it like this: your code makes a network call. Normally, the thread blocks. With a Fiber Scheduler, Ruby pauses that fiber, runs other fibers doing their own work, and comes back to yours when the socket has data. No threads. No locks. No shared mutable state nightmares.

The Interface


Ruby 3 defines Fiber::Scheduler as an interface — a set of methods your scheduler class needs to implement. Ruby itself doesn’t ship a default implementation; that’s intentional. Libraries like the async gem and evt provide production-ready schedulers.

Example:

require 'async'
require 'net/http'
require 'uri'

Async do
  10.times.map do |i|
    Async do
      response = Net::HTTP.get(URI("https://httpbin.org/delay/1"))
      puts "Request #{i} done"
    end
  end
end

Run sequentially, those 10 requests would take about 10 seconds total. Inside Async, they run concurrently and finish in roughly 1 second. Same code structure. No threads, no callbacks, no promises.

Setting a Scheduler


You tell Ruby which scheduler to use with Fiber.set_scheduler. From that point forward in the current thread, any fiber that blocks on I/O will yield through it.

Setup:

require 'fiber'

# Toy scheduler for illustration only. Fiber.set_scheduler verifies that
# io_wait, kernel_sleep, block, and unblock exist, so all four are defined.
class SimpleScheduler
  def io_wait(io, events, timeout)
    # Minimal: block the whole thread with IO.select. A real scheduler
    # (select/epoll based) would park only this fiber and run others.
    readers = (events & IO::READABLE).zero? ? nil : [io]
    writers = (events & IO::WRITABLE).zero? ? nil : [io]
    IO.select(readers, writers, nil, timeout)
    events
  end

  def kernel_sleep(duration = nil)
    # IO.select doubles as a sleep that won't re-enter this hook.
    IO.select(nil, nil, nil, duration)
  end

  # Called for Mutex, Queue, etc. No-ops are enough for this demo.
  def block(blocker, timeout = nil); end
  def unblock(blocker, fiber); end

  # Fiber.schedule delegates here: create a non-blocking fiber and start it.
  def fiber(&block)
    Fiber.new(blocking: false, &block).tap(&:resume)
  end

  def close
    # Called at thread exit; a real scheduler drains pending fibers here.
  end
end

Fiber.set_scheduler(SimpleScheduler.new)

Fiber.schedule do
  # This fiber is non-blocking: its blocking calls route through the hooks.
  File.read("/etc/hosts")
end

In real projects, reach for Async or Falcon (the async-capable web server) instead of rolling your own scheduler. The interface exists so the ecosystem can innovate without waiting for MRI to catch up.

Where It Shines (and Where It Doesn’t)


Fiber Scheduler is purpose-built for I/O-bound concurrency — web requests, database queries, file reads, API calls. If your app spends time waiting for the network or disk, this is exactly the right tool.

It does not help with CPU-bound work. If you’re crunching numbers, transcoding video, or running heavy computations, you still want Ractor (for true parallelism across CPU cores) or native threads. The GIL still exists; fibers don’t bypass it.
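For contrast, here’s the shape of CPU-bound parallelism with Ractor. Ractor is still experimental (Ruby prints a warning on first use, and the API may shift), but the division of labor is clear:

```ruby
# Each Ractor runs on its own OS thread and can execute Ruby code in
# parallel, because Ractors don't contend on the GIL the way threads do.
ractors = 4.times.map do |n|
  Ractor.new(n) do |offset|
    # Pure CPU work: no I/O, so the Fiber Scheduler would not help here.
    (1..100_000).sum + offset
  end
end

results = ractors.map(&:take)  # block until each Ractor finishes
p results  # => [5000050000, 5000050001, 5000050002, 5000050003]
```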

Workload type      Best tool
Network I/O        Fiber Scheduler
Database calls     Fiber Scheduler
CPU computation    Ractor / threads
Mixed workload     Fiber Scheduler + Ractor

Scheduler Hooks Ruby Exposes


The Fiber::Scheduler interface includes hooks for:

  • io_wait — called when a fiber blocks on an IO object
  • kernel_sleep — called when a fiber calls sleep
  • process_wait — called when waiting for a child process
  • address_resolve — called for DNS resolution
  • timeout_after — wraps blocking calls with a timeout

You don’t have to implement all of them — unimplemented hooks fall back to default blocking behavior.

Using It with the Async Gem


The async gem is the most mature scheduler implementation available today. It integrates cleanly with net-http, pg, and a growing list of async-aware libraries.

Setup:

gem install async

Example:

require 'async'
require 'async/http/internet'
require 'json'

Async do
  internet = Async::HTTP::Internet.new

  results = 5.times.map do |i|
    Async do
      response = internet.get("https://jsonplaceholder.typicode.com/posts/#{i + 1}")
      JSON.parse(response.read)
    end
  end.map(&:wait)

  puts results.map { |r| r["title"] }
ensure
  internet&.close
end

Five concurrent HTTP requests. The event loop handles scheduling. You write top-to-bottom code that reads synchronously but executes concurrently.

What This Means for Rails


Rails itself isn’t fully async-aware yet, but Falcon — an async-capable web server built on the async gem — can serve Rails apps with fiber-based concurrency. For certain workload profiles (many slow external API calls, webhook-heavy apps), the throughput gains are significant without adding infrastructure complexity.
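If you experiment with Falcon on Rails 7 or later, one setting worth knowing: ActiveSupport can keep per-request state isolated per fiber instead of per thread, which fiber-based servers expect. A config fragment (verify the option against your Rails version):

```ruby
# config/application.rb
module MyApp
  class Application < Rails::Application
    # Isolate request-local state per fiber rather than per thread.
    config.active_support.isolation_level = :fiber
  end
end
```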

Pro-Tip: Don’t reach for the Fiber Scheduler just because it’s new. Profile first. If your app’s response time is dominated by database queries that already use connection pooling, you’ll see diminishing returns. The Fiber Scheduler earns its keep when you have genuine I/O concurrency — multiple external calls per request, or high fan-out across services.

Conclusion


The Fiber Scheduler is Ruby’s answer to the async/await patterns that have spread across other languages — except Ruby does it without new syntax. You set a scheduler, use Fiber.schedule, and existing I/O code becomes concurrent automatically. It’s elegant in a way that feels very Ruby.

The ecosystem is still catching up — not every gem is scheduler-aware yet, and you’ll hit rough edges with libraries that use blocking I/O internally. But for greenfield services and I/O-heavy workloads, this is worth understanding now. Ruby’s concurrency story is getting genuinely good.

FAQs


Q1: Does the Fiber Scheduler replace threads in Ruby?
Not entirely. It’s better than threads for I/O-bound concurrency, but threads remain useful for code that isn’t scheduler-aware, and for CPU-bound parallelism you want Ractor rather than either. Think of them as complementary tools.

Q2: Is the Fiber Scheduler production-ready?
Yes, with the right libraries. The async gem and Falcon web server are production-tested. Rolling your own scheduler from scratch is not recommended for production use.

Q3: Does this work with ActiveRecord?
Not transparently yet. ActiveRecord’s connection pooling was designed around threads and doesn’t integrate with fiber scheduling out of the box. Rails 7 added a fiber-based isolation level for per-request state, and async-aware database adapters are improving the situation, but check compatibility before committing.

Q4: What’s the difference between Fiber Scheduler and Ractor?
Fiber Scheduler handles concurrency within a single thread via cooperative yielding — no parallelism, no shared state issues. Ractor provides true parallelism across CPU cores but with strict object isolation rules. They solve different problems.

Q5: Can I use Fiber Scheduler with Sidekiq?
Sidekiq uses threads, not fibers. There’s ongoing work in the ecosystem to make job processing fiber-friendly, but as of now, they operate independently. Watch the async-job ecosystem if this matters to your stack.


Rajan Bhattarai

Full Stack Software Developer! 💻 🏡 Grad. Student, MCS. 🎓 Class of '23. GitKraken Ambassador 🇳🇵 2021/22. Works with Ruby / Rails. Photography when no coding. Also tweets a lot at TW / @cdrrazan!
