Puma and Nginx production stack

Basic Setup

As an alternative to the stack described in Wagn in production, this card describes the setup Brett Neumeier uses for his wagns; it uses dramatically less memory than the Apache + Passenger approach.

 

Components

  • nginx (front-end HTTP server and reverse proxy)
  • puma (Rails application server)
  • MRI ruby (rubinius or jruby should also work)
  • postgresql (the database; wagn also supports mysql)
  • memcached and the dalli gem (cache store)
  • node.js (optional, helps with asset compilation)

Puma works best with Ruby engines that support true multi-threading (i.e., rubinius and jruby, neither of which has a global interpreter lock), but I have not yet experimented with those runtimes. Even with MRI, puma gives me better performance than passenger, with dramatically lower memory usage.

 

Installing software

Everything but puma from the list above must be installed. I built everything from source using the tarballs from each project's website, but on most Linux systems you should be able to use the standard distribution package manager (yum or apt-get) to install a reasonably recent version of each package. (In addition to the software mentioned above, I also built node.js 0.11.2 because I've had issues with asset compilation using therubyracer and therubyrhino from time to time; your mileage may vary.)
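
For example, on a Debian-family system the distribution packages can be installed with something like the following (a sketch only; package names vary by distribution, and you can skip anything you prefer to build from source):

# Debian/Ubuntu; on RPM-based systems use yum with the equivalent package names
sudo apt-get install nginx postgresql memcached
sudo apt-get install libpq-dev    # headers needed to build the pg gem for postgresql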

 

Initial wagn setup

(This closely parallels the README provided with wagn.)

 

Before you do anything else, add puma to the Gemfile:

echo "gem 'puma'" >> Gemfile

You may also wish to make other adjustments -- for example, I have node.js on my machines, so I don't need therubyrhino and comment it out of my Gemfile. This is also a good time to adjust config/wagn.yml; I set:

cache_store: dalli_store
mem_cache_servers: 127.0.0.1
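
If you do comment out therubyrhino, the relevant Gemfile lines end up looking something like this (illustrative only; the grouping in wagn's Gemfile may differ):

gem 'puma'              # appended by the echo command above
# gem 'therubyrhino'    # commented out because node.js handles asset compilation here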

Once you've made adjustments to the Gemfile and other configuration files, go ahead with the basic wagn setup steps (note, again, I use postgresql; I believe it to be technically superior to mysql in almost all ways):

bundle install --without mysql:test:debug:development:profile
ENGINE=postgres rake wagn:install
rake wagn:create

 

Set up puma

Puma can be configured just by providing command-line arguments, but it's more convenient to bake the configuration settings into a configuration file and then point to that file with a single -C command-line option. A simple puma.rb that works with MRI Ruby is:

#!/usr/bin/env puma

# start puma with:
# RAILS_ENV=production bundle exec puma -C ./config/puma.rb

application_path = '/opt/wagn/current'
railsenv = 'production'
directory application_path
environment railsenv
daemonize true
pidfile "#{application_path}/tmp/pids/puma-#{railsenv}.pid"
state_path "#{application_path}/tmp/pids/puma-#{railsenv}.state"
stdout_redirect "#{application_path}/log/puma-#{railsenv}.stdout.log",
                "#{application_path}/log/puma-#{railsenv}.stderr.log"
threads 0, 16
bind "unix://#{application_path}/tmp/sockets/#{railsenv}.socket"

The important things here are:

  • daemonize true: this tells puma to run in the background by spawning a subprocess and detaching it from the executing shell. If you don't use daemonize, you need to run the puma process via nohup and put it in the background explicitly.
  • the state_path directive specifies a file where puma stores its runtime information; that file can subsequently be used to control puma with the pumactl script (see the example after this list). This makes stopping and restarting the application server easier than it might otherwise be.
  • the bind directive tells puma to bind to a domain socket rather than a TCP/IP network socket. This improves performance slightly when puma is running on the same server as the nginx web server that proxies to it, and reduces system load correspondingly.
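
As a sketch of that pumactl workflow, assuming the state file path from the puma.rb above:

# check on, restart, or stop the daemonized puma using its state file
bundle exec pumactl -S /opt/wagn/current/tmp/pids/puma-production.state status
bundle exec pumactl -S /opt/wagn/current/tmp/pids/puma-production.state restart
bundle exec pumactl -S /opt/wagn/current/tmp/pids/puma-production.state stop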

Even though the rails environment is specified with "environment railsenv," the environment variable RAILS_ENV still needs to be set. I don't know why. Maybe it only needs to be set as an environment variable, and doesn't need to be in the puma configuration file at all -- I didn't experiment with that.

 

At this point you can run puma with the command line mentioned in the config file above.
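
Concretely, assuming the application lives in /opt/wagn/current as in the example configuration:

cd /opt/wagn/current
RAILS_ENV=production bundle exec puma -C ./config/puma.rb
# puma daemonizes itself; check that the pid file and socket appeared
ls tmp/pids/puma-production.pid tmp/sockets/production.socket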

 

Set up nginx

Nginx is an HTTP server like Apache httpd; however, it uses an entirely different design under the hood. Rather than spawning a large number of worker threads or processes and having the primary server hand off incoming TCP connections to those workers as they arrive, nginx uses exclusively non-blocking I/O with a very small number of worker processes. That turns out to be a good way to get very good performance with low system load, and it is perfect for proxying to a relatively slow back-end application server or pool of application servers -- a category Ruby on Rails tends to fall into.

 

Nginx also has a configuration file syntax that I find simpler and easier to understand than Apache httpd's. A complete nginx.conf that works for the puma-hosted rails application defined above is:

worker_processes  2;
worker_rlimit_nofile 8192;

events {
    worker_connections 256;
}

http {
    include mime.types;
    index index.html index.htm;

    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_iso8601] $status '
                    '"$request" $body_bytes_sent $request_time "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    sendfile on;
    tcp_nopush on;
    server_names_hash_bucket_size 128;

    upstream wagn {
        server unix:/opt/wagn/current/tmp/sockets/production.socket;
    }

    server {
        server_name wagn.yourdomain.com;
        listen 80;
        access_log /var/log/nginx/wagn_access.log main;
        error_log /var/log/nginx/wagn_error.log;
        root /opt/wagn/current/public;

        location / {
            try_files $uri $uri/index.html $uri.html @wagn;
        }

        location @wagn {
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_redirect off;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://wagn;
        }
    }
}

Interesting parts of this configuration file:

  • The directives are broken into just a few top-level elements: a couple of directives that tell nginx to run two worker processes, an events block that caps the number of simultaneous connections each worker process will handle, and an http block that controls all of the HTTP server functionality.
  • The "upstream wagn" stanza tells nginx where to find the application server socket. This could be a TCP/IP URL, but since the nginx is on the same machine a domain socket works better as described above. It's not necessary to define an upstream if you only have one Rails server process, but if you have multiple backend servers that nginx should load-balance among then they can all be listed in the single upstream stanza.

  • The server stanza corresponds to an Apache virtual host.
  • try_files in the "/" location block tells nginx that for each request, it should first look for a static file matching the request path (under the server's root directory), then for an index.html inside that path, then for a .html file with that name; if none of those exist, it switches to the @wagn location.
  • The @wagn location, which receives everything that should be routed to rails, sets up some proxying directives and then tells nginx to use the upstream wagn defined earlier. Again, for this simple example with a single puma listening on a single domain socket, the upstream stanza could be removed and the proxy_pass directive could be changed to http://unix:/opt/wagn/current/tmp/sockets/production.socket;
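
For example, a load-balancing upstream for two puma instances might look like this (the socket paths are hypothetical; each puma would need its own puma.rb with a distinct socket, pid file, and state file):

upstream wagn {
    server unix:/opt/wagn/current/tmp/sockets/production-1.socket;
    server unix:/opt/wagn/current/tmp/sockets/production-2.socket;
}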

Now verify the configuration file syntax with "nginx -t", start nginx as root (just run "nginx"), visit your shiny new wagn at http://wagn.yourdomain.com/ ...and get wagning!
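
In other words, something like this (assuming nginx is on root's PATH and the file above is installed as the active nginx.conf):

nginx -t                               # check the configuration syntax
nginx                                  # start the server (run as root)
curl -I http://wagn.yourdomain.com/    # should get a response from wagn via puma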

 

Puma and Nginx production stack+discussion

Great work, thanks.  I'm thinking a larger discussion of deployment and system resources would be a really good thing, whether on this card or moved to a better card for generality at some point.

 

We have some more constraints with the cldstr deployments now; mysql vs. postgres is one of them, and I'm not sure we actually deploy on passenger now.  I think maybe we do, but enterprise ruby isn't really relevant past ruby 1.8.  Apache can't be taken out of the loop either.  Given those constraints, how much of this could still apply and be useful? --Gerry Gleason


I hadn't actually looked at cloudstore before today. That's interesting: a very low-impedance and low-cost way to spin up new application instances on demand. I'm more familiar with heroku, which can also be low-cost but perhaps not as much so as cloudstore. Still, heroku is very big on postgresql, so there's less of a mismatch there.

 

However! My own situation is different: I have a server in my basement and I can do whatever I like with it. So I'm focused on getting things set up the way that suits me best. If other people can benefit from it, that's awesome! If not, well, no harm no foul.

--Brett Neumeier.....2013-06-11 22:47:27 +0000