[Image: colored boxes representing container services in a row]

Production Meteor and Node Using Docker, Part I

Within the field of NetOps, there are many technology channels moving quickly and independently. It’s tricky for a small team to assemble a project using all of the latest technologies in a time frame that won’t crack your spirit.

We’re all constantly assimilating new “buzz” words into our vocabularies. To list a few, there’s Docker, Mesos, Orchestration, DCHQ, ContainerX, Containership, Rancher, Tutum, Continuous Integration, MongoDB replication, mesh networks, Kubernetes (still love the way that sounds), cross cloud deployment, complex DNS, smart load balancing, and...service discovery.

That's a s#!t ton of terms, isn’t it?

I won’t promise Containers will magically make things easier for you. There’s definitely a lot to learn. But just as jQuery preserved the sanity of front-end developers, cloud-based Docker services do much the same for NetOps by simplifying Docker deployment.

My mission is to guide you, step by step, through building a production Meteor/Node environment that can deliver a range of cutting-edge application deployment techniques. You’ll be happy to know it’s a very real, robust solution that can scale, works across multiple cloud providers for redundancy, and will help make your business run more smoothly. It will reduce overhead, stress, and perhaps best of all, you can set it all up in a weekend.

Don’t be intimidated. You don’t have to understand everything under the hood to make real progress. Set out to tackle something straightforward on your first attempt. Rest assured — soon you should be able to carefully integrate all the current buzz-wordy technologies that the big guys brag about.




Project Ricochet is a full-service digital agency specializing in Open Source & Docker.

Is there something we can help you or your team out with?




Here’s what I’ll be covering in this post and following posts in the coming weeks:

  • Why Node and Meteor make great container apps (with example Meteor app and Dockerfile)
  • Docker Cloud to the rescue (Formerly Tutum.co)
  • Continuous Integration
  • Intelligent cross-cloud load balancing (plus Round-robin DNS tricks)
  • SSL
  • MongoDB Replication using volumes
  • Behavior-driven development (BDD)
  • Backups (not boring anymore!)
  • Logging and Performance monitoring

If you’re not familiar with Meteor, that should be your first step. It’s well worth getting to know better. Meteor exhibits much the same forward-thinking Docker does. Perhaps that’s why they work so well together.

Node (at the heart of Meteor’s server code) is quite a joy to deploy with Docker. Other languages such as PHP require not only PHP, but a front end web server like Apache or Nginx, plus various modules. Not a “deal breaker” by any means, but Node is dead simple. Don’t you just love simple?

To get comfortable with Docker, it’s important to break down your server needs into small self-contained units (containers!), with each container driving a single process (like Node, or Mongo). In a typical Meteor (or Node) app, your setup will include load balancing, a Mongo replica set (or some other database), and one or more node processes to handle client requests. Here, we’ll outline requirements for a container that will run Meteor/Node. We’ll tackle load balancing and MongoDB in later posts.
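To make that topology concrete, here’s a hypothetical sketch of it in Docker Compose syntax: a load balancer, two app containers, and a single Mongo replica set member. The service names and the `dockercloud/haproxy` image are illustrative placeholders, not part of this tutorial’s setup:

```yaml
# Hypothetical sketch of the topology described above.
version: "2"
services:
  lb:
    image: dockercloud/haproxy   # example load balancer image
    ports:
      - "80:80"
    links:
      - app1
      - app2
  app1:
    image: yourname/appname      # the Meteor/Node image we build below
  app2:
    image: yourname/appname
  mongo:
    image: mongo:3.2
    command: mongod --replSet rs0   # one member of a replica set
```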

It’s not complicated. Here’s what the Node/Meteor container needs:

  • The Meteor/node source code bundle
  • The version of node that will run the bundle
  • Supporting NPM modules such as node fibers
  • A root URL (in the case of Meteor)
  • An exposed TCP port
  • A MongoDB URL

For a basic app, the above is just about all you need. So, we need to turn our code repo into a Docker image that can be deployed into production. Using this example app, we can walk through how that happens. This example app repository contains a Dockerfile meant to build and set up a Docker image for deployment, using the open source Meteor app Microscope.

Note: There is a more polished base Docker image for building and deploying Meteor apps called Meteord; however, it’s not as easy to follow what’s happening on the Dockerfile side of things unless you’re a bit more experienced. Our Dockerfile is easier to read and follow in a single file.

Let's download the test repo:

git clone git@github.com:popestepheng/meteor-production-docker-example.git

If you scan through the repo, you’ll notice we include a Dockerfile with the Meteor project code itself. This is handy for a couple of reasons. Docker helps improve communication between developers and those actually deploying the app: now you have the application code and the server infrastructure required to run that code in a single repository! These truly symbiotic partners are now considered one unit. The days of your app and its supporting configuration being scattered among the heads of many different people might be over. You’ve got the code and the server definition in one place for all team members to access and collaborate on.

To build the Meteor image, all you need to do is run Docker build:

cd meteor-production-docker-example
docker build -t yourname/appname .

Let's quickly break down exactly what the Dockerfile is doing:

This section just sets up some basic info for the Dockerfile: ubuntu 14.04 is the base image, and I’m the maintainer:

FROM ubuntu:14.04
MAINTAINER Stephen Pope, spope@projectricochet.com

These commands simply set up the folder structure for our container and define the working directory from which all commands from this point on will be executed:


RUN mkdir /home/meteorapp
WORKDIR /home/meteorapp
ADD . ./meteorapp

This section updates the Ubuntu package lists:


# Do basic updates
RUN apt-get update -q && apt-get clean

The commands below do the bulk of work:


# Get curl in order to download Meteor and Node
RUN apt-get install curl -y \

  # Install Meteor
  && (curl https://install.meteor.com/ | sh) \

  # Build the Meteor app
  && cd /home/meteorapp/meteorapp/app \
  && meteor build ../build --directory \

  # Install the version of Node.js we need.
  && cd /home/meteorapp/meteorapp/build/bundle \
  && bash -c 'curl "https://nodejs.org/dist/$(<.node_version.txt)/node-$(<.node_version.txt)-linux-x64.tar.gz" > /home/meteorapp/meteorapp/build/required-node-linux-x64.tar.gz' \
  && cd /usr/local && tar --strip-components 1 -xzf /home/meteorapp/meteorapp/build/required-node-linux-x64.tar.gz \
  && rm /home/meteorapp/meteorapp/build/required-node-linux-x64.tar.gz \

  # Install the NPM packages the bundle needs
  && cd /home/meteorapp/meteorapp/build/bundle/programs/server \
  && npm install \


  # Get rid of Meteor. We're done with it.
  && rm /usr/local/bin/meteor \
  && rm -rf ~/.meteor \

  #no longer need curl
  && apt-get --purge autoremove curl -y

Several things are happening here:

  • It installs curl so we’re able to download the needed files
  • Downloads and installs Meteor
  • Builds the Meteor project using the Meteor binaries
  • Detects the needed version of Node by looking at the final Meteor build
  • Downloads and installs the proper version of node
  • Installs supporting Node modules, including Node Fibers
  • And finally, removes all of the supporting install files so they aren’t included in the final image

You might ask, why did we string all those commands together using && instead of using several RUN commands? Wouldn’t that have been much cleaner and easier to read? The last point above will help highlight the reasoning. Because Docker images are constructed in layers, we don’t want to create a bunch of layers that are only used to build the Meteor code in the final image. Doing so would waste space.

We only need the final build product, so we remove unneeded files from the filesystem before the RUN command finishes. That way they’re not included in the final build image. Only the output of the Meteor build, our code bundle, is left within the image.
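To see why this matters, compare a hypothetical multi-RUN version of the same steps. Each RUN below would bake Meteor’s install files into its own immutable layer, so the later `rm` would hide the files from the final filesystem without reclaiming any space in the image:

```dockerfile
# Anti-pattern (hypothetical): every RUN creates a layer, so files deleted
# in a later RUN still occupy space in the earlier layers of the image.
RUN curl https://install.meteor.com/ | sh
RUN cd /home/meteorapp/meteorapp/app && meteor build ../build --directory
RUN rm /usr/local/bin/meteor && rm -rf ~/.meteor   # too late: earlier layers keep the bytes
```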

This simply installs forever, which keeps the node process running even if it crashes:

RUN npm install -g forever

These two lines expose port 80 of the container and set the Meteor port to 80:

EXPOSE 80
ENV PORT 80

This is the final command that tells the container what process to run:

CMD ["forever", "--minUptime", "1000", "--spinSleepTime", "1000", "meteorapp/build/bundle/main.js"]

What’s happening here is actually pretty simple. We’re building a Linux image that contains the Meteor app and the supporting Node binaries. Once you’ve built your app, you’re ready to start testing the container locally.

Because it’s Meteor, you need a MongoDB database to get your app running. If you’re just starting out with Docker, I’d suggest using an easy-to-set-up Mongo database on compose.io, so that you can focus on getting your app to run in a container (and not have to mess around with setting up a db, then wonder whether it’s the issue when you run into problems).

docker run -d \
    -e ROOT_URL=http://yourapp.com \
    -e MONGO_URL=mongodb://url \
    -p 8080:80 \
    yourname/appname

Once the container starts, if you point your browser at port 8080 on your local development machine, your Meteor app should pop right up. Don’t underestimate what you have here. You’ve now isolated a section of your application stack into a discrete, defined image that can be expected to perform in a very specific, consistent way.
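A quick smoke test from the command line works too. This sketch assumes the `docker run` above mapped host port 8080 to the container’s port 80; a healthy Meteor app should answer with an HTTP 200:

```shell
# Hypothetical smoke test against the running container:
# print just the HTTP status code for the app's root URL.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/
```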

If all went well, you’ve got your Docker image. So now, how will this image fit into a full production system?

In the next installment of “Production Meteor/Node Using Docker,” we’ll cover Docker Cloud (previously known as Tutum) and how to use an image to power a whole ecosystem. This will enable us to put all the images in motion in a simple and clean way. I can’t wait to show you the next step. It’s hard to believe how easy it is.

On the horizon, you can look forward to the following topics to finish our journey:

  • Continuous Integration
  • Intelligent cross-cloud load balancing (plus Round-robin DNS tricks)
  • SSL
  • MongoDB Replication using volumes
  • Behavior-driven development (BDD)
  • Backups (not boring anymore!)
  • Logging and Performance monitoring

I look forward to sharing more with you soon!

Click here for the next installment in the series.




Curious about how much it might cost to get help from an agency that specializes in Docker?

We're happy to provide a free estimate!



A bit about Stephen:

Stephen Pope (Partner, Senior Software Engineer) has been working with technology for over 20 years. Steve has a B.S. in Computer Science and specializes in high-end open source software development using LAMP, Drupal & Node.js.

A bit about Kevin:

Kevin Kaland (Senior Software Engineer) has extensive experience in open source web development and over 16 years of experience in software development. Kevin is an expert in jQuery, CSS, PHP, JavaScript, HTML/XHTML, and Linux administration.