Getting started with Yesod and Docker

Last updated by Garry Cairns on 6 August 2016 10:43

Note: This post is still worth skimming over but not coding along with IMO. See part two for the more up-to-date version.

Building web applications is fun. Setting up the infrastructure on which to build them often isn’t. Docker can change that. You’ve probably heard a lot about Docker already. But in case you haven’t I’d describe it as a lightweight box for putting services in.

The people at Docker recommend one service per container (note service, not process). We’re going to use Docker and Docker Compose to build a web application running on three connected Docker containers. We’ll use the Haskell programming language for our web application, PostgreSQL for the database and Nginx as a webserver.

This post covers getting set up. At the end of this first post you’ll have a scaffolded Yesod site running on a PostgreSQL database behind an Nginx reverse proxy server. We’ll follow more steps than a typical ‘quick start’ does, but the time invested now will be worth it when it comes to deployment: what we set up today will be very close to what we deploy.

You will need

Running through this tutorial should take around 1-2 hours. Before we start, please install Docker and install Docker Compose.


Make a directory for your project and move into it. Create a blank file called docker-compose.yml and three subdirectories: site, database and webserver. You should be left with a project root containing docker-compose.yml alongside the three empty subdirectories.
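These steps can be scripted like so (the project name myproject is just a placeholder):

```shell
# create the project root and its three subdirectories in one go
mkdir -p myproject/site myproject/database myproject/webserver
cd myproject
# the compose file starts out empty; we fill it in later
touch docker-compose.yml
```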



Yesod is a web framework for Haskell. It does some very cool things, particularly overlap checking on routes and type checking on URLs. We'll use it to do the heavy lifting when we develop our site.

We’ll need one additional component before we start: a cabal global config file for the Stackage server. Save it in your site directory as stackage. I’ll cover what it’s for later.

Let’s write a Dockerfile to build our first image.

Create a file in your site subdirectory called Dockerfile. This file will create the image of our Docker container - the box in which our Yesod application will run. Add the following to it.

FROM haskell:7.8

# install database dependencies
RUN ["apt-get", "update"]
RUN ["apt-get", "-y", "install", "libpq-dev"]

# update cabal and install yesod
RUN cabal update
ADD ./stackage /root/.cabal/stackage
RUN cat /root/.cabal/config /root/.cabal/stackage > /root/.cabal/config2
RUN ["mv", "/root/.cabal/config2", "/root/.cabal/config"]
RUN cabal update
RUN ["cabal", "install", "yesod-bin", "-j4"]
# Add installed cabal executables to PATH
ENV PATH /root/.cabal/bin:$PATH

# Default directory for container
WORKDIR /opt/server

The first line, FROM haskell:7.8, tells the Docker daemon that you’d like to use the official Haskell base image. I recommend you only ever use official base images or images you created yourself. As you can see above, a Dockerfile builds an image from arbitrary instructions, so using a base image from an untrusted source carries the risk that some of those instructions were malicious.

The subsequent lines will make sense to any user of a Debian-based operating system who’s familiar with Haskell. Those people can skip the next bit if they choose, because I'm going to explain line-by-line what’s happening here.

Line-by-line for the uninitiated

# install database dependencies
RUN ["apt-get", "update"]
RUN ["apt-get", "-y", "install", "libpq-dev"]

apt-get is the package manager for Debian-based systems. When Docker interprets a command in the array (exec) syntax above, it runs the first element as the executable and passes the remaining elements to it as arguments, without invoking a shell. The first line tells apt-get to refresh its list of installable packages. The second asks it to install libpq-dev, which is essential for our web application’s integration with the database we’ll build in a moment. The -y flag in the second command answers ‘yes’ to any prompts apt-get would otherwise show.
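The difference between the two RUN forms is worth seeing side by side. Both lines below install the same package; this is a generic illustration rather than part of our Dockerfile:

```dockerfile
# exec form: Docker runs the executable directly, no shell involved
RUN ["apt-get", "-y", "install", "libpq-dev"]

# shell form: Docker wraps the command in /bin/sh -c,
# so shell features like variable expansion work here
RUN apt-get -y install libpq-dev
```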

Aside: Installing system-level dependencies like this is one reason I prefer Docker to things like Python virtualenvs for creating isolated development environments. With Docker, you genuinely get every dependency isolated.

# update cabal and install yesod
RUN cabal update
ADD ./stackage /root/.cabal/stackage
RUN cat /root/.cabal/config /root/.cabal/stackage > /root/.cabal/config2
RUN ["mv", "/root/.cabal/config2", "/root/.cabal/config"]
RUN cabal update
RUN ["cabal", "install", "yesod-bin", "-j4"]
# Add installed cabal executables to PATH
ENV PATH /root/.cabal/bin:$PATH

You can think of cabal as apt-get for Haskell for now. That analogy won’t hold later so don’t get it too stuck in your brain.

The lines above introduce two new Docker commands, ADD and ENV. The ADD command copies a file from the first path specified, which must exist on your host filesystem, to the second path specified inside the image (Docker creates the destination directory if it doesn’t already exist). The ADD command here places the stackage file we saved earlier into our container. We then use a cat command to append it to our container’s cabal config file and run cabal update again. The second update ensures the subsequent cabal install commands use the new package list our stackage file provides.
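The config-merging trick is plain shell, so you can convince yourself it works outside Docker. The file contents here are stand-ins for the real cabal config and stackage snippet:

```shell
printf 'remote-repo: hackage\n' > config     # stand-in for the default cabal config
printf 'remote-repo: stackage\n' > stackage  # stand-in for our stackage file
cat config stackage > config2                # concatenate the two files
mv config2 config                            # replace the original config
```

After this, config contains both lines, which is exactly what the Dockerfile achieves with /root/.cabal/config.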

Finally we install yesod-bin, which will give our container access to some yesod commands we’ll use during development, and add those executables to our path using ENV. The -j4 flag on the yesod-bin install command tells cabal the number of concurrent jobs it’s allowed to perform. It will default to the number of CPUs on your machine (thanks Max Tagher for the correction here) if you remove that flag. In my experience, removing that flag can sometimes solve some installation problems.

# Default directory for container
WORKDIR /opt/server

The final command sets the /opt/server directory on the container as the working directory for future commands, such as those passed in when we run the container.

Building our first image

We’re ready to build our image. From the directory that contains the Dockerfile, run:

sudo docker build -t yourname/yesod ./

Aside: When putting this tutorial together I sometimes encountered failed Docker builds because cabal failed to fetch packages. Running the build command again usually solved these.

That command will build a Docker image using our Dockerfile and call it yourname/yesod. You can now run and connect to a container built from that image with another command. This command is a bit more complicated but we’re going to leave any explanation to the existing Docker documentation:

sudo docker run -p 3000:3000 -itv /path/to/your/project/site/subdirectory/:/opt/server yourname/yesod /bin/bash
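Briefly, the flags in that command do the following (the host path is a placeholder for your own project path):

```shell
# -p 3000:3000            publish container port 3000 on host port 3000
# -i -t (combined as -it) interactive session with a pseudo-terminal
# -v HOSTPATH:/opt/server bind-mount your site directory into the container
# yourname/yesod          the image we just built
# /bin/bash               the command to run inside the container
sudo docker run -p 3000:3000 \
    -itv /path/to/your/project/site/subdirectory/:/opt/server \
    yourname/yesod /bin/bash
```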

Scaffolding our site

From your command prompt inside the container run yesod init --bare. Yesod will ask you what you want to call your site and what database you want to use. Choose any name you like and PostgreSQL as your database.

Yesod will create all the files you need for now, including a cabal file for our project. This is the point at which the cabal as apt-get analogy ceases. This cabal file specifies all our project’s dependencies, which lets us use cabal as a build tool to prepare our project for deployment.
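For orientation, the dependencies section of a scaffolded cabal file looks roughly like this. Package names and layout vary with the scaffold version, so treat this fragment as illustrative rather than generated output:

```
library
    build-depends: base
                 , yesod
                 , yesod-core
                 , persistent-postgresql
                 , warp
```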

Updating our Dockerfile

We’re now going to edit our Dockerfile to take account of the application code we’ve just generated. Edit it to look like this:

FROM haskell:7.8

# install database dependencies
RUN ["apt-get", "update"]
RUN ["apt-get", "-y", "install", "libpq-dev"]

# update cabal and install yesod
RUN cabal update
ADD ./stackage /root/.cabal/stackage
RUN cat /root/.cabal/config /root/.cabal/stackage > /root/.cabal/config2
RUN ["mv", "/root/.cabal/config2", "/root/.cabal/config"]
RUN cabal update
RUN ["cabal", "install", "yesod-bin", "-j4"]

# Add your .cabal file before the rest of the code so next step caches
ADD ./YourSiteName.cabal /opt/server/YourSiteName.cabal

# Docker will cache this command as a layer, freeing us up to
# modify source code without re-installing dependencies
RUN cd /opt/server && cabal install --only-dependencies -j4

# Add and install application code
ADD ./ /opt/server
RUN cd /opt/server && cabal install

# Add installed cabal executables to PATH
ENV PATH /root/.cabal/bin:$PATH

# Default directory for container
WORKDIR /opt/server

The lines we’ve added ADD our project’s cabal file first and run cabal install --only-dependencies. We then ADD the rest of our application code and run cabal install again.

We do this because Docker caches the result of each instruction in our Dockerfile as a layer for faster rebuilding. Any line that changes invalidates the cache, and all subsequent lines run afresh. ADDing an unchanged file doesn’t invalidate the cache, so with the configuration above the dependency install only reruns if your cabal file has changed. Rebuild the container with:

sudo docker build -t yourname/yesod ./

Now we’re ready to build a database for our application.


The database

Move out of the site subdirectory and into the database one. Create a new Dockerfile and fill it like this:

FROM postgres:9.4

We’ll return to this in a future tutorial to set up a different database user and a password but for now this is all you need.


The webserver

Now move out of the database subdirectory and into the webserver one. We’re going to create two files here: another Dockerfile and a site configuration. We’ll do the site configuration first. Create a file called site.conf, which should look like this:

upstream localhost {
    server site_1:3000;
}

server {
    root /home/webserver;

    location / {
        proxy_pass http://localhost;
    }
}
All that does is tell Nginx to pass all traffic to the Warp server our application will be running on in the site container. When we run the containers together later, Docker Compose will make the site_1 hostname resolve to the IP address Docker has given our site container.
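If you are curious how site_1 becomes resolvable, linking writes an entry into the webserver container’s /etc/hosts rather than exporting a shell variable. It looks something like this (the address is illustrative; Docker assigns it at run time):

```
# inside the webserver container: cat /etc/hosts
172.17.0.5    site_1
```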

Now we’ll look at the webserver Dockerfile to see how that configuration gets deployed.

FROM ubuntu:14.04

# get the nginx package and set it up
RUN ["apt-get", "update"]
RUN ["apt-get", "-y", "install", "nginx"]

# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
VOLUME ["/var/cache/nginx"]
EXPOSE 80 443

# load nginx conf as root
ADD ./site.conf /etc/nginx/sites-available/YourSiteName
RUN ["ln", "-s", "/etc/nginx/sites-available/YourSiteName", "/etc/nginx/sites-enabled/YourSiteName"]
RUN ["rm", "-rf", "/etc/nginx/sites-available/default"]

# add application code as unprivileged user
RUN ["groupadd", "webserver"]
RUN ["useradd", "webserver", "-s", "/bin/bash", "-m", "-g", "webserver", "-G", "webserver"]
ENV HOME /home/webserver
WORKDIR /home/webserver
RUN ["chown", "-R", "webserver:webserver", "/home/webserver"]

#start the server
CMD ["nginx", "-g", "daemon off;"]

By now this should be starting to look clear. We've ADDed our conf file and made it the only site available to Nginx. We’re running the webserver as an unprivileged user to add some safety and we’re exposing ports 80 and 443 so visitors can see our site.

We’re not actually going to serve our files with Nginx just yet. We will return to it in a future tutorial when we’ll deploy all this to an Amazon Web Services instance. For now, we’ll use Yesod’s excellent yesod devel command to serve our application during development.

Bringing it all together with Docker Compose

We have an application, a database and a webserver. But we want them to talk to each other. Docker Compose will do this for us, and make it very easy at that.

Aside: Docker Compose is still officially in beta.

Open the docker-compose.yml file we created at the beginning. Edit it to look like this:

database:
    build: database

site:
    build: site
    command: yesod devel # dev setting
    # command: /opt/server/dist/build/YourSiteName/YourSiteName # production
    environment:
        - HOST=
        - PGHOST=database
        - PGPORT=5432
        - PGUSER=postgres
        - PGPASS
        - PGDATABASE=postgres
    links:
        - database
    tty:
        true # dev setting
    # tty:
    #     false # production
    ports:
        - "3000:3000" # dev setting
    volumes:
        - site:/opt/server/

I almost don’t have to explain that file, because it does exactly what we said it would. It builds your database using the Dockerfile in the database subdirectory. Then it builds your site, sets some environment variables to help it link to the database, links the two containers together and runs the yesod devel command, which serves your site at http://localhost:3000. Again, more detailed information is available in the Docker Compose documentation.

Aside: Users of boot2docker may need to run boot2docker ip to find their boot2docker IP address and replace localhost with that. Thanks to Andrew Boardman for raising this.

From the project root directory, run:

sudo docker-compose up

The command might fail on the first run if your database doesn’t finish building before your site comes up. If that happens, just run it again and you should be good. Now visit http://localhost:3000 to see the Yesod welcome page!
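If you’d rather check from the terminal than a browser, curl on the host works too once compose reports the site is up:

```shell
# print just the HTTP status code returned by the dev server
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000
```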

Starting your development

Much like Nginx, we’ll mainly use Docker Compose when everything’s up and running. For now we’ll only use it to run our database, which we’ll connect to using a standard Docker command. This lets us take full advantage of the yesod-bin commands by using them interactively. We’re not going any further for today but, in case you want to start exploring on your own, the relevant steps are:

Run sudo docker-compose run -d database from the root directory (the -d means it will run in the background so you can apply more commands);

Run sudo docker ps to get the NAME of the running database container, the next command assumes you got a NAME of project_database; then

Run the following, large, command.

sudo docker run -p 3000:3000 -itv /path/to/your/project/site/subdirectory/:/opt/server \
--link project_database:database -e "HOST=" -e "PGHOST=database" -e "PGPORT=5432" -e "PGUSER=postgres" -e "PGDATABASE=postgres" \
yourname/yesod /bin/bash

That should leave you at a command prompt running in your site container. Be sure to check that all the paths you use in that command are accurate if you run into any problems.

Next steps

In the next tutorial we’ll build a personal website that looks surprisingly like this one. We’ll cover the basic Create, Retrieve, Update and Delete (CRUD) operations in Yesod. In the final tutorial we’ll package everything up and deploy it to an Amazon Web Services EC2 instance.

For the curious I recommend looking at Max Tagher’s making a blog with Yesod 1.4 screencast, on which much of this site is based, and of course reading the excellent Yesod book in the meantime.

Please tweet me if you have any questions or comments.