Rails 8 is a huge milestone release. It cuts down on external dependencies, makes deployment a first-class citizen, and gives you better defaults for security and performance.
The biggest theme for this update is the new “Solid” trifecta (Solid Cache, Solid Queue, and Solid Cable). They’re called “Solid” because each one is a self-contained, database-backed alternative to tools like Redis or Memcached. That means fewer moving parts, simpler infrastructure, and stronger defaults baked into Rails itself.
In this tutorial, we’ll build a small project and explore each new feature through real examples.
Project Setup
Let’s start from scratch:
gem install rails -v 8.0.3
rails new rails8demo --database=postgresql
cd rails8demo
bin/setup
Since we need a PostgreSQL database, we can quickly spin one up with Docker:
docker run -d \
  --name rails8demo-postgres \
  -e POSTGRES_USER=rails8demo \
  -e POSTGRES_PASSWORD=secret \
  -e POSTGRES_DB=rails8demo_development \
  -p 5432:5432 \
  -v rails8demo_pgdata:/var/lib/postgresql/data \
  postgres:16
Then update config/database.yml:
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS", 5) %>
  host: <%= ENV.fetch("PGHOST", "127.0.0.1") %>
  port: <%= ENV.fetch("PGPORT", 5432) %>
  username: <%= ENV.fetch("PGUSER", "rails8demo") %>
  password: <%= ENV.fetch("PGPASSWORD", "secret") %>

development:
  primary:
    <<: *default
    database: rails8demo_development
  queue:
    <<: *default
    database: rails8demo_development_queue
    migrations_paths: db/queue_migrate
Create the databases and load the schemas (db:prepare also loads db/queue_schema.rb into the queue database we just configured):
bin/rails db:drop db:create db:prepare
Run the app just to confirm everything works:
rails s
Before we explore the new changes, we need a few tools to measure how effective these features are.
A) Tiny middleware to time requests
# app/middleware/server_timing.rb
# Simple Server-Timing + log instrumentation
class ServerTiming
  def initialize(app)
    @app = app
  end

  def call(env)
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    status, headers, body = @app.call(env)
    total_ms = ((Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000).round(1)

    # Expose in response header and logs (Rack 3 expects lowercase header names)
    headers["server-timing"] = "app;dur=#{total_ms}"
    Rails.logger.info("[TIMING] path=#{env["PATH_INFO"]} ms=#{total_ms}")

    [status, headers, body]
  end
end
Enable it:
# config/application.rb
module Rails8demo
  class Application < Rails::Application
    # ....
    require Rails.root.join("app/middleware/server_timing")
    config.middleware.insert_before 0, ServerTiming
  end
end
B) Curl alias to record wall-clock time
alias ctime='curl -s -o /dev/null -w "time_total=%{time_total}\n"'
Use it like: ctime http://localhost:3000/some_path → time_total=0.213456
1. Solid Adapters: Built-in Cache, Queue, and Cable
Rails 8 introduces Solid Cache, Solid Queue, and Solid Cable, all powered by your database.
No Redis. No Memcached. No extra infra.
Under the hood, these adapters store data in regular PostgreSQL (or any supported database) tables and use lightweight background threads or polling loops to manage cleanup, dispatch, or broadcasts. The goal: make your app “just work” in development and production without needing extra services.
Solid Cache: Cold vs. warm (real time savings)
Create an intentionally expensive endpoint:
# config/routes.rb
Rails.application.routes.draw do
  get :slow, to: "bench#slow"
  get :slow_cached, to: "bench#slow_cached"
end
# app/controllers/bench_controller.rb
class BenchController < ApplicationController
  # Simulate heavy work (DB + CPU)
  def slow
    users = User.limit(10_000).pluck(:email) # or any big query in your app
    Digest::SHA256.hexdigest(users.join)[0..16] # some CPU
    render plain: "ok"
  end

  # Same work, but cached for 60s
  def slow_cached
    Rails.cache.fetch("slow:v1", expires_in: 60) do
      users = User.limit(10_000).pluck(:email)
      Digest::SHA256.hexdigest(users.join)[0..16]
    end
    render plain: "ok"
  end
end
Enable Solid Cache:
# config/environments/development.rb
config.cache_store = :solid_cache_store
config.action_controller.perform_caching = true
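One thing to watch: Solid Cache keeps entries in a solid_cache_entries table (schema in db/cache_schema.rb), and a freshly generated Rails 8 app only provisions that table for production. Here's a minimal sketch of one way to wire it up in development, assuming you add a dedicated cache database alongside the queue one and point config/cache.yml at it (adjust names to your app):

# config/database.yml -- add under the development: section (sketch)
  cache:
    <<: *default
    database: rails8demo_development_cache
    migrations_paths: db/cache_migrate

# config/cache.yml -- point the development store at that database (sketch)
development:
  database: cache

With that in place, bin/rails db:prepare should create the cache database and load db/cache_schema.rb into it.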
Run the benchmark:
# cold cache (first hit)
ctime http://localhost:3000/slow_cached

# warm cache (repeat)
ctime http://localhost:3000/slow_cached

# no cache baseline
ctime http://localhost:3000/slow
Typical result (example):
❯ ctime http://localhost:3000/slow_cached
time_total=0.124908
❯ ctime http://localhost:3000/slow_cached
time_total=0.041723
❯ ctime http://localhost:3000/slow
time_total=0.153223
Note: exact numbers vary by machine, but the warm-cache request is consistently the fastest of the three.
Why this matters:
- You get caching that “just works” out of the box, no Redis setup, no extra service.
- Perfect for Heroku, Fly.io, or any single-DB deployment.
- Still fast enough for production workloads in small and medium apps.
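If you want to confirm those entries really do live in your database, here is a quick sanity check you can run from the Rails console, assuming the SolidCache::Entry model name from the solid_cache gem:

# rails console
Rails.cache.write("greeting", "hello")
SolidCache::Entry.count      # should be >= 1 once the store has been written to
Rails.cache.read("greeting") # => "hello"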
Next, let’s test Solid Queue.
Solid Queue: Request time inline vs. background
Solid Queue is a full Active Job backend implemented entirely on top of your database. It creates a solid_queue_jobs table and a worker process (bin/rails solid_queue:start) that polls for new jobs. Unlike Sidekiq, there's no Redis or separate message broker to run, and because jobs live in your database they can be enqueued in the same transaction as your application data (when they share a database), so nothing is lost between enqueue and commit. It's simpler to deploy, though Sidekiq may still be faster for massive workloads.
Make a fake “expensive email” endpoint:
# config/routes.rb
post :signup_inline, to: "bench#signup_inline"
post :signup_async, to: "bench#signup_async"

# app/jobs/welcome_email_job.rb
class WelcomeEmailJob < ApplicationJob
  queue_as :default

  def perform(user_id)
    # simulate ~300ms email send
    sleep 0.3
  end
end
Add actions for the new endpoints:
# app/controllers/bench_controller.rb
def signup_inline
  # simulate inline email
  sleep 0.3
  render plain: "ok"
end

def signup_async
  WelcomeEmailJob.perform_later(1)
  render plain: "queued"
end
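A freshly generated Rails 8 app only switches Active Job to Solid Queue in production (config/environments/production.rb contains the equivalent lines). To exercise it in development, you can mirror them there; a sketch assuming the queue database defined in config/database.yml above:

# config/environments/development.rb
config.active_job.queue_adapter = :solid_queue
config.solid_queue.connects_to = { database: { writing: :queue } }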
Run the Solid Queue worker in a separate terminal:
bin/rails solid_queue:start
Now test the endpoints:
ctime -X POST http://localhost:3000/signup_inline
ctime -X POST http://localhost:3000/signup_async
Example result:
❯ ctime -X POST http://localhost:3000/signup_inline
time_total=0.334755
❯ ctime -X POST http://localhost:3000/signup_async
time_total=0.128887
Offloading work to Solid Queue dramatically reduces request latency: users get a near-instant response while the job runs in the background.
Why this matters:
- You can deploy background jobs with zero extra infrastructure.
- Jobs live in your database and can be enqueued in the same transaction as your data (when they share a database), so nothing is lost between enqueue and commit.
- Ideal for apps that outgrow inline processing but don’t need Sidekiq scale.
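You can also inspect the queued work directly, since it's just rows in solid_queue_jobs. A quick console sketch, assuming the SolidQueue::Job model name from the solid_queue gem:

# rails console, after hitting /signup_async
job = SolidQueue::Job.order(:created_at).last
puts [job.class_name, job.queue_name, job.finished_at].inspect
# e.g. ["WelcomeEmailJob", "default", nil] until the worker picks it up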
Finally, let’s check out Solid Cable.
Solid Cable: Message fan-out without Redis
With solid_cable, Action Cable stays simple and self-contained. Normally, Action Cable uses Redis pub/sub to broadcast WebSocket messages. solid_cable replaces that with a lightweight database table and a short polling loop: each broadcast writes a row that the Action Cable server picks up on its next poll (every 100ms by default) and pushes to connected clients. You get real-time updates without maintaining a Redis cluster (great for small and medium apps). Heavy chat or gaming apps might still prefer Redis for ultra-low latency.
We can’t “time a WebSocket” with curl, but we can measure broadcast time via notifications.
# config/cable.yml
development:
  adapter: solid_cable
  database: primary
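Solid Cable persists broadcasts in a solid_cable_messages table (schema in db/cable_schema.rb), so whichever database it points at needs that table. If you'd rather keep those rows out of your primary database, one sketch that mirrors the generated production setup is a dedicated cable database:

# config/database.yml -- add under the development: section (sketch)
  cable:
    <<: *default
    database: rails8demo_development_cable
    migrations_paths: db/cable_migrate

# config/cable.yml -- point development at that database (sketch)
development:
  adapter: solid_cable
  connects_to:
    database:
      writing: cable
  polling_interval: 0.1.seconds

bin/rails db:prepare should then create the database and load db/cable_schema.rb into it.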
Subscribe to broadcasts and log durations:
# config/initializers/cable_bench.rb
ActiveSupport::Notifications.subscribe("broadcast.action_cable") do |_, start, finish, _, payload|
  ms = ((finish - start) * 1000).round(2)
  Rails.logger.info("[CABLE] broadcast=#{payload[:broadcasting]} ms=#{ms} bytes=#{payload[:message].to_s.bytesize}")
end
Then, from the Rails console (rails c):
ActionCable.server.broadcast("bench", { at: Time.now.to_f })
Check the logs for the [CABLE] entries:
[ActionCable] Broadcasting to bench: {at: 1759909481.8707998}
[CABLE] broadcast=bench ms=62.19 bytes=24
You'll see broadcasts land within tens of milliseconds, all without maintaining a Redis cluster.
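If you want an actual subscriber on that stream, a minimal channel might look like this (the BenchChannel name is just for illustration):

# app/channels/bench_channel.rb
class BenchChannel < ApplicationCable::Channel
  def subscribed
    # Receive everything broadcast to the "bench" stream
    stream_from "bench"
  end
end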
Why this matters:
- Real-time updates with no Redis dependency.
- Great for dashboards, notifications, or lightweight chat without extra ops overhead.
- You can scale horizontally later by just switching adapters, no code changes.
2. Propshaft: First vs. repeat asset fetch (cache headers)
Propshaft replaces the old Sprockets-based asset pipeline with something far simpler. It doesn't bundle or transpile assets; tools like jsbundling-rails or cssbundling-rails handle that. Propshaft's job is simply to serve digested, cacheable files from public/assets with strong cache headers.
This is part of Rails’ effort to clean up years of asset pipeline confusion (Sprockets → Webpacker → Importmap → Propshaft). It favors modern browsers and HTTP/2, where long-lived digests and caching matter more than bundling.
Create a tiny asset and observe network timings:
// app/assets/javascripts/application.js
console.log("propshaft demo");
Ensure tags are in your layout:
<%# app/views/layouts/application.html.erb %>
<%= javascript_include_tag "application", type: "module" %>
<%= stylesheet_link_tag "application", "data-turbo-track": "reload" %>
Precompile once:
bin/rails assets:precompile
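If you'd rather not hunt through the page source for the digested filename, one way to print it is via Rails' asset helpers (a quick sketch):

bin/rails runner 'puts ActionController::Base.helpers.asset_path("application.js")'
# => /assets/application-4bd30b1b.js (your digest will differ)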
Measure with curl (first vs second hit)
Find the digested asset path from your page source (e.g., public/assets/application-4bd30b1b.js), then:
❯ ctime -I http://localhost:3000/assets/application-4bd30b1b.js
time_total=0.001209
❯ ctime -I http://localhost:3000/assets/application-4bd30b1b.js
time_total=0.000868
Because each asset filename includes a hash digest, browsers can cache it indefinitely. When the file changes, the digest changes, forcing a new fetch. The result: faster repeat loads and no stale assets.
Why this matters:
- No bundling complexity, just digest and serve.
- Eliminates stale assets by tying cache invalidation to file hashes.
- Modern HTTP/2-friendly approach that aligns with today’s browser caching behavior.
3. Built-in Authentication Generator
No more Devise setup if you just need simple email/password auth.
Generate authentication:
bin/rails generate authentication
bin/rails db:migrate
This scaffolds everything you need for session-based auth:
app/models/user.rb, app/models/session.rb, and app/models/current.rb
app/controllers/sessions_controller.rb and app/controllers/passwords_controller.rb
app/controllers/concerns/authentication.rb
app/views/sessions/ and app/views/passwords/
session and password routes in config/routes.rb
The generator already adds the session and password routes; we just need a root route so we can preview it:
# config/routes.rb
Rails.application.routes.draw do
  # ....
  resource :session
  resources :passwords, param: :token
  root "sessions#new"
end
Start the server and visit http://localhost:3000.
You'll have working login, logout, and password-reset screens scaffolded by Rails (the generator intentionally leaves sign-up/registration for you to add).
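The generated Authentication concern is included in ApplicationController and requires a session for every action. For controllers that should stay public (like our benchmark endpoints), you can opt out; a sketch assuming the method name from the generated concern:

# app/controllers/bench_controller.rb
class BenchController < ApplicationController
  allow_unauthenticated_access # defined by the generated Authentication concern
  # ...
end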
Why this matters:
- You no longer need Devise for simple use cases.
- Fewer dependencies = faster boot times and smaller codebase.
- Perfect for quick MVPs or internal tools where simplicity wins.
4. Kamal 2: Built-in Deployment Solution
Kamal now ships with Rails 8 by default. Check for a config/deploy.yml file; if it's there, you're ready to go.
If not, just run:
kamal init
That will generate the config for you.
We won't go into full detail, since Kamal could be its own tutorial, but to use it you'll need a container registry to store your app image. Kamal uses Docker Hub by default, so make sure your Docker account is linked and your VPS is configured.
You can find more details in the Rails deployment guide.
# ...
servers:
  web:
    - 192.168.0.1

# Credentials for your image host.
registry:
  # Specify the registry server, if you're not using Docker Hub
  # server: registry.digitalocean.com / ghcr.io / ...
  username: your-user

  # Always use an access token rather than real password when possible.
  password:
    - KAMAL_REGISTRY_PASSWORD
# ...
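Note that the actual registry token never lives in deploy.yml: Kamal 2 reads KAMAL_REGISTRY_PASSWORD from .kamal/secrets, which in a freshly generated app typically just forwards an environment variable (check your generated file; this is only a sketch):

# .kamal/secrets
KAMAL_REGISTRY_PASSWORD=$KAMAL_REGISTRY_PASSWORD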
Once everything’s set up:
kamal deploy
And boom, your app is live.
Why this matters:
- Deployment is finally a first-class Rails experience.
- Kamal uses Docker under the hood, same workflow from dev to prod.
- No Capistrano, no manual setup, no surprises.
5. Stricter Defaults and Cleanup
Rails 8 cleaned house, removing legacy APIs and enforcing stronger security best practices.
Most of these changes come from real-world pain points, and you can read more in the official Rails 8 Release Notes .
Key changes
- params.expect replaces nested require/permit
- Default Regexp.timeout = 1.second
- Dropped support for Ruby < 3.2
- config.read_encrypted_secrets removed (use credentials instead)
These defaults push developers toward safer, more explicit patterns with less boilerplate.
Example: Replacing require / permit with params.expect
In previous Rails versions, you’d typically whitelist params like this:
# OLD (Rails 7 and below)
def user_params
  params.require(:user).permit(:name, :email, :password)
end

def create
  @user = User.new(user_params)

  if @user.save
    redirect_to @user, notice: "User created successfully"
  else
    render :new, status: :unprocessable_entity
  end
end
With Rails 8, that collapses into a single expressive call:
# NEW (Rails 8)
def create
  @user = User.new(params.expect(user: [:name, :email, :password]))

  if @user.save
    redirect_to @user, notice: "User created successfully"
  else
    render :new, status: :unprocessable_entity
  end
end
expect still names the wrapping :user key, but it replaces the require/permit chain and enforces the expected structure: if the params are missing or have the wrong shape (say, a tampered request sends a string where a hash is expected), the request is rejected with a 400 Bad Request instead of blowing up deeper in your code.
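expect can also describe collections: double brackets declare an array of hashes. A small sketch with hypothetical comment params:

# Permit an array of comment hashes, each allowing :title and :body
comment_attrs = params.expect(comments: [[:title, :body]])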
Why this matters:
- One method call instead of a chained require/permit.
- More predictable parameter handling: malformed or tampered params get a 400 Bad Request, not a 500.
- Safer defaults that reduce mass-assignment risks.
Wrap-up
Personally, I love how Rails 8 lets you focus on building apps again, not infrastructure.
- Solid Adapters cut out external dependencies
- Propshaft makes assets fast and clean
- Built-in Auth saves you from gem hell
- Kamal 2 makes deploys truly “just work”
- And stricter defaults keep you safe by design
So, which feature are you most excited to try first? Stay tuned: over the next few tutorials, we’ll build a full Rails application using all of these features!

