How to Use JRuby and MRI in One Project
Back in 2013, I found myself working on a challenging project that needed both rapid web response times and intensive computational power. Like many Ruby developers at the time, I was torn between sticking with MRI Ruby for its familiar ecosystem or switching to JRuby for better performance on long-running processes.
Instead of choosing one or the other, I decided to experiment with using both interpreters in the same project. What I discovered was a surprisingly effective approach that leveraged the best of both worlds, though it wasn't without its complexities.
The Problem: Different Needs, Different Strengths
Our application had two distinct components with very different performance requirements:
The User-Interactive Section: This handled typical web requests - user authentication, displaying data, form submissions. Users expected fast response times, and we needed something that could start up quickly and handle requests efficiently.
Background Jobs: These were computationally intensive tasks - mathematical optimization problems, essentially NP-hard problems we solved using heuristics. These jobs were CPU and memory heavy, often running for minutes or hours.
By 2013, the Ruby landscape had some clear performance characteristics. MRI Ruby (we were using 2.0 and later 2.1) was great for web applications - fast startup, good for short-lived processes, and excellent compatibility with the ecosystem we knew and loved. But MRI's Global Interpreter Lock meant threads couldn't execute Ruby code in parallel, so CPU-bound work couldn't use multiple cores within a single process, and memory management on long-running processes wasn't ideal.
JRuby 1.7.x, on the other hand, was showing impressive maturity. It offered true threading without GIL constraints and, most importantly for our use case, much better memory management for long-running processes thanks to the rock-solid JVM garbage collector.
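The difference is easy to demonstrate with plain Ruby threads. This sketch (illustrative, not from the original codebase) splits CPU-bound work across threads; the code is identical on both interpreters, but only JRuby runs the threads on multiple cores, while MRI's GIL serializes them:

```ruby
# CPU-bound work split across threads. On JRuby each thread runs on
# its own core; on MRI the GIL lets only one execute Ruby code at a time.
def parallel_sum(ranges)
  threads = ranges.map do |r|
    Thread.new { r.reduce(0, :+) }
  end
  threads.map(&:value).reduce(0, :+)
end

# Simple runtime check for which interpreter we're on
ON_JRUBY = (RUBY_PLATFORM == 'java')

CHUNKS = [(1..25_000), (25_001..50_000), (50_001..75_000), (75_001..100_000)]
TOTAL  = parallel_sum(CHUNKS)
```

The result is identical either way; only the wall-clock time differs, which is exactly why the heavy jobs went to JRuby.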
The Architecture: Best Tool for Each Job
Here's how we structured the application:
Web Frontend: MRI Ruby + Unicorn + Nginx
For the user-facing parts, we stuck with MRI Ruby 2.0 (and later 2.1). The setup was straightforward and battle-tested:
# Gemfile (MRI-specific gems)
source 'https://rubygems.org'
gem 'rails', '~> 4.0'
gem 'pg'
gem 'unicorn'
# C-extension gems we kept on the MRI side
gem 'nokogiri'
gem 'sqlite3', groups: [:development, :test]
We used Unicorn as our application server behind Nginx. This was the standard Rails deployment stack in 2013-2014, and it worked beautifully for handling web requests. Unicorn's process-based architecture was perfect for MRI - no threading headaches, and if one process crashed, the others kept serving requests.
Our Unicorn configuration was typical for the time:
# config/unicorn.rb
worker_processes 4
timeout 30
preload_app true
listen "/tmp/unicorn_myapp.sock", :backlog => 64

if GC.respond_to?(:copy_on_write_friendly=)
  GC.copy_on_write_friendly = true
end

before_fork do |server, worker|
  ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  ActiveRecord::Base.establish_connection
end
This gave us fast response times and easy deployment. When we needed to push fixes or small changes, we could restart the web frontend quickly without affecting the background jobs.
Background Processing: JRuby + Sidekiq
For the heavy computational work, we used JRuby 1.7 with Sidekiq. This is where things got interesting.
Mike Perham had released Sidekiq in early 2012, and by 2013 it was becoming the go-to choice for background job processing. What made it perfect for our JRuby setup was its threading model - Sidekiq was built from the ground up to use threads instead of processes, which aligned perfectly with JRuby's strengths.
# Gemfile for JRuby workers
source 'https://rubygems.org'
gem 'sidekiq'
gem 'activerecord'
gem 'activerecord-jdbc-adapter'
gem 'jdbc-postgres'
# Computational gems that worked well on JRuby
# (matrix was part of the stdlib at the time, so it needed no Gemfile entry)
gem 'concurrent-ruby'
The beauty of this setup was that JRuby's superior threading and memory management really shone on long-running processes. Where MRI might accumulate memory fragmentation over hours of computation, JRuby's JVM garbage collector kept things clean and efficient.
Here's what a typical background job looked like:
class OptimizationWorker
  include Sidekiq::Worker

  def perform(problem_data)
    # This might run for 30+ minutes
    optimizer = HeuristicOptimizer.new(problem_data)

    # JRuby's true threading allows us to parallelize
    # parts of the computation
    result = optimizer.solve_with_parallel_processing

    # Save results back to the shared database
    OptimizationResult.create!(
      problem_id: problem_data['id'],
      solution: result.to_json,
      completed_at: Time.current
    )
  end
end
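The post never showed HeuristicOptimizer itself, but the parallel portion might look something like this sketch - the class internals, the 'candidates' key, and the toy cost function are all hypothetical: run independent heuristic evaluations in threads and keep the best result found.

```ruby
# Hypothetical sketch of the parallel portion of a heuristic optimizer.
# Each thread scans one slice of the candidate space; on JRuby the
# slices are evaluated on separate cores.
class HeuristicOptimizer
  def initialize(problem_data)
    @candidates = problem_data['candidates']
  end

  def solve_with_parallel_processing(thread_count: 4)
    slice_size = (@candidates.size / thread_count.to_f).ceil
    threads = @candidates.each_slice(slice_size).map do |slice|
      Thread.new { slice.min_by { |c| cost(c) } }
    end
    # Combine per-thread winners into the overall best candidate
    threads.map(&:value).min_by { |c| cost(c) }
  end

  private

  # Stand-in objective function for the sketch
  def cost(candidate)
    (candidate - 42).abs
  end
end

BEST = HeuristicOptimizer.new('candidates' => (1..1_000).to_a)
                         .solve_with_parallel_processing
```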
The Shared Foundation: ActiveRecord Models
The key to making this work was keeping our business logic in shared ActiveRecord models that both interpreters could use. This meant our MRI web frontend and JRuby workers could operate on the same data structures without duplication.
# app/models/optimization_problem.rb
# This model was used by both MRI and JRuby
class OptimizationProblem < ActiveRecord::Base
  validates :parameters, presence: true

  def enqueue_for_processing
    # Called from the MRI web frontend
    OptimizationWorker.perform_async(self.attributes)
  end

  def processing_complete?
    OptimizationResult.exists?(problem_id: self.id)
  end
end
Both interpreters connected to the same PostgreSQL database, so data consistency was never an issue. The MRI frontend would create optimization problems and enqueue jobs, while the JRuby workers would pick up those jobs and save results.
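One subtlety worth calling out: Sidekiq stores job arguments as JSON in Redis, so whatever the MRI side passes to perform_async reaches the JRuby worker only after a JSON round trip. This self-contained sketch (the hash shape is illustrative) shows what that does to rich types like Time:

```ruby
require 'json'

# What the MRI frontend might enqueue as job arguments
attributes = {
  'id'         => 7,
  'parameters' => { 'max_iterations' => 500 },
  'created_at' => Time.utc(2014, 3, 1, 12, 0, 0)
}

# Sidekiq effectively does this between enqueue and perform
ROUND_TRIPPED = JSON.parse(JSON.generate(attributes))
# 'created_at' arrives as a plain string on the worker side; the worker
# must re-parse it if it needs a Time object again.
```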
Database Configuration: One Database, Two Interpreters
We used the same database for both sides but with different connection adapters:
# MRI side: config/database.yml
production:
  adapter: postgresql
  database: myapp_production
  username: myapp
  password: <%= ENV['DATABASE_PASSWORD'] %>
  host: localhost

# JRuby side: config/database.yml
production:
  adapter: jdbcpostgresql
  database: myapp_production
  username: myapp
  password: <%= ENV['DATABASE_PASSWORD'] %>
  host: localhost
The JDBC adapter for JRuby gave us better performance for long-running database connections, while the standard PostgreSQL adapter worked perfectly for MRI's shorter request cycles.
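One way to avoid maintaining two copies of this file - assuming both deployments can share configuration - is a single ERB-templated database.yml that picks the adapter at load time. This is a sketch, not the configuration from the original project:

```yaml
# config/database.yml (shared): select the adapter per interpreter.
# RUBY_PLATFORM reports "java" under JRuby.
production:
  adapter: <%= RUBY_PLATFORM == 'java' ? 'jdbcpostgresql' : 'postgresql' %>
  database: myapp_production
  username: myapp
  password: <%= ENV['DATABASE_PASSWORD'] %>
  host: localhost
```

Rails already runs database.yml through ERB (the password line above relies on that), so no extra machinery is needed.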
Deployment and Process Management
Managing two different Ruby interpreters in production required some careful orchestration. We used separate service scripts for each component:
#!/bin/bash
# /etc/init.d/myapp_web (MRI)
USER="deploy"
DAEMON="unicorn"
ROOT_DIR="/var/www/myapp"
DAEMON_OPTS="-c $ROOT_DIR/config/unicorn.rb -D"

case "$1" in
  start)
    su - $USER -c "cd $ROOT_DIR && $DAEMON $DAEMON_OPTS"
    ;;
  stop)
    su - $USER -c "kill `cat /tmp/unicorn_myapp.pid`"
    ;;
  restart)
    su - $USER -c "kill -USR2 `cat /tmp/unicorn_myapp.pid`"
    ;;
esac
#!/bin/bash
# /etc/init.d/myapp_workers (JRuby)
USER="deploy"
ROOT_DIR="/var/www/myapp"
JRUBY_OPTS="-J-Xmx2g -J-Xms1g"

case "$1" in
  start)
    su - $USER -c "cd $ROOT_DIR && jruby $JRUBY_OPTS -S bundle exec sidekiq -d"
    ;;
  stop)
    su - $USER -c "kill `cat /var/www/myapp/tmp/pids/sidekiq.pid`"
    ;;
esac
Performance Results
The results were impressive. Our web frontend maintained sub-100ms response times for typical requests - exactly what we'd expect from a well-tuned MRI/Unicorn setup. But the background jobs showed dramatic improvements with JRuby.
Memory usage was the biggest win. Where MRI workers processing hour-long optimization jobs might balloon to 800MB+ and show memory fragmentation, JRuby workers stayed consistently around 400-500MB with stable performance throughout the job lifecycle.
CPU utilization was better too. JRuby's ability to truly parallelize computation within a single process meant we could solve optimization problems 2-3x faster than equivalent MRI implementations.
Challenges We Encountered
This setup wasn't without its pain points:
Gem Compatibility: Not every gem worked on both interpreters. We had to maintain separate Gemfiles and occasionally find JRuby-specific alternatives for certain dependencies.
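Bundler's platforms blocks offer a middle ground here: a single Gemfile can declare interpreter-specific gems, and each interpreter resolves only its own section. A sketch of how the two Gemfiles shown earlier could be merged (version constraints omitted):

```ruby
# Shared Gemfile with per-interpreter sections
source 'https://rubygems.org'

gem 'rails', '~> 4.0'
gem 'sidekiq'

platforms :ruby do            # MRI only
  gem 'pg'
  gem 'unicorn'
end

platforms :jruby do           # JRuby only
  gem 'activerecord-jdbc-adapter'
  gem 'jdbc-postgres'
end
```

The trade-off is a shared Gemfile.lock, which can complicate things when the two sides need different versions of a common gem - one reason we kept the files separate.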
Development Environment: New developers needed to set up both MRI and JRuby locally. We used RVM to manage both versions:
rvm install 2.1.0
rvm install jruby-1.7.14
# Create separate gemsets
rvm use 2.1.0@myapp_web --create
rvm use jruby-1.7.14@myapp_workers --create
Debugging: Troubleshooting issues that crossed the interpreter boundary was tricky. We had to be very careful about our Redis job serialization and database connection handling.
Deployment Complexity: Coordinating deployments across two different Ruby stacks required careful planning. We couldn't just deploy everything at once - we had to consider which changes affected which interpreter.
Why This Worked in 2013-2014
Looking back, this approach worked well because of where Ruby was in its evolution. MRI 2.0 and 2.1 were solid for web applications but still struggled with intensive background processing. JRuby 1.7 had reached impressive compatibility with MRI while offering genuine performance advantages for the right workloads.
Sidekiq was relatively new but incredibly well-designed for threading. The combination of JRuby's true threads and Sidekiq's efficient job processing was perfect for our computational workloads.
The Rails ecosystem had also stabilized around certain deployment patterns. Unicorn + Nginx was the standard, and most gems we needed had mature versions that worked reliably across interpreters.
Lessons Learned
The biggest lesson was that choosing Ruby interpreters doesn't have to be an either/or decision. When you have clearly different performance requirements within the same application, using the best tool for each job can yield significant benefits.
However, this approach only makes sense when:
- You have genuinely different performance requirements (fast web responses vs. long-running processes)
- Your team is comfortable managing the additional complexity
- Your deployment infrastructure can handle multiple Ruby environments
- The performance gains justify the operational overhead
For most applications, sticking with one interpreter is absolutely the right choice. But for specialized cases like ours, a hybrid approach can unlock performance that wouldn't be possible with either interpreter alone.
This experience taught me to think more broadly about architecture decisions. Rather than getting locked into "best practices," sometimes the best solution is combining the strengths of different tools in thoughtful ways. The Ruby ecosystem's diversity becomes a strength when you can leverage it strategically.