Asynchronicity in Elixir - Best effort vs. Guaranteed execution

We often run into a scenario in web applications where we need to do some work at “some point soon”, but we don’t want to make the user wait right now.

As a simple example, let’s say we have a user complete a website signup and we want to send them a welcome email. We don’t want to make the user wait for this, or have their request fail if the email failed to send for whatever reason.

Luckily, this is Erlang! Processes are super cheap, why don’t we use a Task?

You can run a fire-and-forget task using Task.start/3 (note that the arguments are passed as a list):

def signup(user) do
  Task.start(EmailService, :send_welcome_email, [user])

  :ok
end

Problem solved right?

Well… not quite

Background

Erlang was originally developed to run on machines that had long uptimes.

You’d have a telephony box in the middle of the woods somewhere with two nodes in it.

Every two years you’d send out an engineer to do an upgrade. The upgrade would have been thoroughly tested on the exact hardware beforehand, and it would use OTP hot code reloading to upgrade all your GenServer code and perform any necessary state transformations without stopping any processes.

This code might run for years without being rebooted.

In this scenario, you can be reasonably sure that any spawned task will get executed.

Contrast this to how we run Elixir in production today:

A lot of us are running on ephemeral infrastructure, e.g. Docker containers or short-lived VMs.

We don’t use hot code reloading to do deploys since it’s time-intensive to get that right and we’re willing to trade a little downtime for faster development speed.

We do deploys by booting an entirely new version of the VM and throwing the old one away.

This means that currently running processes and in-memory state are thrown away on every deploy, which can be multiple times per day.

This means any running Task can be lost. The chance is small, but it is especially problematic if:

  • The task takes a long time
  • The task has a likelihood of failing and we might want to automatically or manually retry it

Strategy

We need to classify our tasks into two different categories:

  • Best effort
  • Guaranteed execution

An example of a best-effort task:

Let’s say a customer gives us their address during signup and some time later we need to show them a pretty map with their property on it. Our software must already handle the case where we can’t geocode the property (invalid address, etc).

A regular Task is probably OK for this (see the sketch after this list) because the job is:

  • Of relatively short duration
  • Not mission-critical
  • Potentially high volume
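A minimal sketch of such a fire-and-forget job, assuming a hypothetical Geocoder module and a Task.Supervisor started under your supervision tree as MyApp.TaskSupervisor:

def save_address(user, address) do
  # Kick off geocoding in the background. If this task is lost on a deploy
  # or crashes, the app already copes with a missing geocode.
  Task.Supervisor.start_child(MyApp.TaskSupervisor, fn ->
    Geocoder.geocode_and_store(user, address)
  end)

  :ok
end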

An example of a task that requires guaranteed execution:

Adding a customer to the CRM and sending them a welcome email after signup. This task must complete or the business could risk losing a multi-thousand pound deal. Additionally it might fail and require automatic or even manual retry, so it needs to hang around for a while.

In order to guarantee execution of tasks we need to persist data about them somewhere outside of the BEAM, so that if the VM restarts we can read that data out of the database and guarantee that the job runs.

We need something with the following characteristics:

  • Holds state outside of the BEAM
  • Really good at keeping data safe
  • Does not lose data on restarts

We need something like a BASE for our DATA.

Can anybody think of anything that fulfils these requirements?

OMG! THE DATABASE.

Postgres is good enough

There are a lot of people who flinch when they hear “database-backed job queue”, and with good reason.

Delayed::Job is a famous database-backed queue from Ruby-land and it is famously appalling at scale. It maxes out at around 100 jobs/s even on huge database boxes.

Traditionally the community has reached for Redis to solve this problem; some well-known examples are Resque and Sidekiq for Ruby, and Exq for Elixir.

However this comes with the overhead of having to manage another service. For many small apps and especially for beginners, this seems like overkill.

In addition, Redis is an in-memory key-value store. It is not designed for durable, transactional storage of data. It’s not ACID compliant - you can force it to persist everything to disk (synchronous writes) but you lose a lot of the performance that it’s known for.

A database is the ideal solution to our problem. But can we make it fast?

Luckily this is no longer 2008 and there are now several Postgres-specific features we can take advantage of to negate these downsides:

  • FOR UPDATE SKIP LOCKED
  • Advisory locking
  • pg_notify()
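FOR UPDATE SKIP LOCKED, for example, is what lets many workers pull jobs from the same table without blocking each other or double-claiming a row. A rough sketch of the kind of query a queue built on it might run (against a hypothetical jobs table; requires Postgres 9.5+):

-- Claim one pending job; rows locked by other workers are skipped
-- instead of blocking, so each worker picks up a different job.
SELECT id, mfa
FROM jobs
ORDER BY enqueued_at
FOR UPDATE SKIP LOCKED
LIMIT 1;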

The Que library for Ruby uses some of these features and has been benchmarked at just under 10,000 jobs per second.

With this kind of performance I no longer see a place for Redis. If you need more than 10,000 jobs/s then Redis is probably not the right solution for you either. You are well into the territory of needing a “real” queue system like Kafka or ActiveMQ at that point.

Introducing Rihanna

Rihanna is a fast, reliable and easy-to-use Postgres-backed distributed job queue for Elixir.

It is designed for the following very common use-case:

  • I have a simple Phoenix/Raxx app with a database (probably > 90% of Elixir deployments in the wild)
  • I want to run some task asynchronously so I don’t make the user wait in a request
  • I want to be sure that this task is going to run even if I deploy my app and I want to be able to retry the task if it fails

Rihanna is a drop-in solution with no dependencies on any other services. It is based on Ruby’s Que library and uses advisory locks for speed. Que has been benchmarked at up to 10,000 jobs per second and Rihanna’s performance should be similar if not better since this is Elixir, not Ruby.
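As a rough sketch of what this looks like in application code (check the Rihanna documentation for the exact setup and API), the signup example from the beginning of this post might become:

def signup(user) do
  # Instead of a fire-and-forget Task, the job is persisted to Postgres,
  # so it survives VM restarts and can be retried if it fails.
  Rihanna.enqueue({EmailService, :send_welcome_email, [user]})

  :ok
end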

We are already using Rihanna in production at work. You can download it from hex.pm, and it comes with a GUI that you can run as a Docker container.

How to quickly set up a GraphQL server in Elixir using Absinthe

GraphQL in Elixir

Elixir is an excellent choice for a GraphQL backend. It has the performance and concurrency to handle a large volume of requests without a caching layer, which is useful because request-level caching isn’t possible with GraphQL the way it is with REST.

The Elixir ecosystem is also blessed with what is in my opinion one of the best GraphQL DSLs around - Absinthe.

Absinthe is a collection of libraries to help with parsing and responding to GraphQL queries. It can run standalone or on top of Phoenix. The easiest way to get started is with Phoenix, so let’s dive right in.

Building a basic GraphQL server with Phoenix/Absinthe

Let’s say I’m building an MMA fan site. I want to be able to get information about the fighters from the backend for both the website (written in React) and a mobile app.

Initial setup

Create a new Phoenix project with $ mix phx.new mma --database postgres --no-brunch --no-html.

You’ll need to add the absinthe dependencies to your mix.exs

defp deps do
  [
    {:phoenix, "~> 1.3.0"},
    {:phoenix_pubsub, "~> 1.0"},
    {:phoenix_ecto, "~> 3.2"},
    {:postgrex, ">= 0.0.0"},
    {:gettext, "~> 0.11"},
    {:cowboy, "~> 1.0"},

    # Absinthe
    {:absinthe, "~> 1.3.0"},
    {:absinthe_ecto, "~> 0.1.2"},
    {:absinthe_plug, "~> 1.3.0"}
  ]
end

Then run $ mix deps.get to install.

Create Fighters and Fights

I’ve opted to create three schemas to show the use of associations in GraphQL. Our API shows Fighters with their vital statistics, and a list of their past fights including whether they won or lost.

$ mix phx.gen.schema Fighter fighters name:string belts:integer weight_in_kilos:float

$ mix phx.gen.schema Fight fights name:string

$ mix phx.gen.schema FightResult fight_results fight_id:references:fights fighter_id:references:fighters result:string

$ mix ecto.create

$ mix ecto.migrate

Let’s populate it with some data for testing.

# priv/repo/seeds.exs

### Fighters

conor = Mma.Repo.insert!(%Mma.Fighter{
  name: "Conor McGregor",
  belts: 2,
  weight_in_kilos: 69.4
})

jon = Mma.Repo.insert!(%Mma.Fighter{
  name: "Jon Jones",
  belts: 0,
  weight_in_kilos: 92.99
})

daniel = Mma.Repo.insert!(%Mma.Fighter{
  name: "Daniel Cormier",
  belts: 1,
  weight_in_kilos: 92.99
})

Mma.Repo.insert!(%Mma.Fighter{
  name: "Demetrious \"Mighty Mouse\" Johnson",
  belts: 1,
  weight_in_kilos: 56.7
})

### Fights

ufc182 = Mma.Repo.insert!(%Mma.Fight{
  name: "UFC 182"
})

ufc214 = Mma.Repo.insert!(%Mma.Fight{
  name: "UFC 214"
})

mcgregor_mayweather = Mma.Repo.insert!(%Mma.Fight{
  name: "McGregor vs. Mayweather"
})

### FightResults

Mma.Repo.insert!(%Mma.FightResult{
  fight_id: ufc182.id,
  fighter_id: jon.id,
  result: "Win"
})

Mma.Repo.insert!(%Mma.FightResult{
  fight_id: ufc182.id,
  fighter_id: daniel.id,
  result: "Loss"
})

Mma.Repo.insert!(%Mma.FightResult{
  fight_id: ufc214.id,
  fighter_id: jon.id,
  result: "Win"
})

Mma.Repo.insert!(%Mma.FightResult{
  fight_id: ufc214.id,
  fighter_id: daniel.id,
  result: "Loss"
})

Mma.Repo.insert!(%Mma.FightResult{
  fight_id: mcgregor_mayweather.id,
  fighter_id: conor.id,
  result: "Loss"
})

Run $ mix run priv/repo/seeds.exs to fill your dev database with seed data.

Add relations

Open the generated schemas and add the associations. In fight_result.ex, remove the generated field :fight_id, :id line, since belongs_to :fight defines that column for you.

# fighter.ex

schema "fighters" do
  has_many :fight_results, Mma.FightResult
  # ...
end

# fight_result.ex

schema "fight_results" do
  belongs_to :fight, Mma.Fight
  # ...
end

Define our GraphQL Types

GraphQL types represent a tree of objects with scalars at the leaves.

Create a new file at lib/mma_web/schema/types.ex and add the following code:

defmodule MmaWeb.Schema.Types do
  use Absinthe.Schema.Notation
  use Absinthe.Ecto, repo: Mma.Repo

  object :fighter do
    field :id, :id
    field :belts, :integer
    field :name, :string
    field :weight_in_kilos, :float
    field :fight_results, list_of(:fight_result), resolve: assoc(:fight_results)
  end

  object :fight_result do
    field :result, :string
    field :fight, :fight, resolve: assoc(:fight)
  end

  object :fight do
    field :name, :string
  end
end

You’ll notice the schema closely mirrors our database schema. This is quite normal when working with Absinthe and Ecto relations.

By default Absinthe will attempt to look up keys in the resolved struct. Since fight_results is not loaded on a freshly queried Mma.Fighter, we need to tell Absinthe how to load it.

assoc is a function that comes from Absinthe.Ecto and specifies how to load data from associations. It automatically batches queries to avoid N+1 queries.
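For comparison, writing the resolver by hand would look something like the sketch below, and Ecto would issue one query per fighter (the classic N+1 problem), which is exactly what assoc’s batching avoids:

# Naive hand-written resolver: one Repo query per parent fighter
field :fight_results, list_of(:fight_result) do
  resolve fn fighter, _args, _resolution ->
    {:ok, Mma.Repo.all(Ecto.assoc(fighter, :fight_results))}
  end
end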

Create our GraphQL Schema

A GraphQL schema describes relationships between objects and exposes queries and mutations for accessing them.

Create a new file at lib/mma_web/schema.ex containing:

defmodule MmaWeb.Schema do
  use Absinthe.Schema
  import_types MmaWeb.Schema.Types

  query do
    field :fighters, list_of(:fighter) do
      resolve fn _params, _info ->
        {:ok, Mma.Repo.all(Mma.Fighter)}
      end
    end
  end
end

Explore using GraphiQL

absinthe_plug comes with an awesome tool called GraphiQL that can be used to test and explore your GraphQL queries.

To enable it, open lib/mma_web/router.ex and add the following lines:

  forward "/graphiql",
    Absinthe.Plug.GraphiQL,
    schema: MmaWeb.Schema

Boot your server with mix phx.server and visit localhost:4000/graphiql.

You can introspect your schema and see automatically generated documentation using the Docs tab on the right hand side of the page.

You can query basic fighter information like this:

query FightersName {
  fighters {
    id
    name
    weightInKilos
  }
}
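With the seed data from earlier, the response should look something like this (the ids will depend on your database):

{
  "data": {
    "fighters": [
      {"id": "1", "name": "Conor McGregor", "weightInKilos": 69.4},
      {"id": "2", "name": "Jon Jones", "weightInKilos": 92.99},
      {"id": "3", "name": "Daniel Cormier", "weightInKilos": 92.99},
      {"id": "4", "name": "Demetrious \"Mighty Mouse\" Johnson", "weightInKilos": 56.7}
    ]
  }
}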

Note that GraphiQL is able to autocomplete fields, and Absinthe has automagically camelCased them for us. You can use snake_case in your queries as well and absinthe will seamlessly convert between the two.

To get more information, we can construct a more detailed query based on the types we defined. Associations are automatically loaded by absinthe_ecto.

query FightersWithFights {
  fighters {
    id
    belts
    fightResults {
      result
      fight {
        name
      }
    }
    name
    weightInKilos
  }
}

And there you have it, a fully functional GraphQL API. Note how little code we had to write to get here!

How to solve ActiveRecord::PreparedStatementCacheExpired errors on deploy

Occasionally when deploying a Rails app on Postgres you may see an ActiveRecord::PreparedStatementCacheExpired error. This will only happen if you have run a migration in the deploy.

This happens because Rails uses Postgres prepared statements for performance. You can disable prepared statements to avoid these errors (not recommended), but there is a better way to handle them safely if you want zero-downtime deploys.

First, some background. In Postgres, the prepared statement cache becomes invalidated if the schema changes in a way that affects the returned result.

Examples:

  • adding or removing a column then doing a SELECT *
  • removing the foo column then doing a SELECT bar.foo

My work here ensures that if this happens inside a Rails transaction, we correctly deallocate the outdated prepared statement and raise ActiveRecord::PreparedStatementCacheExpired. It is up to the application developer to decide what to do with this.

The developer may choose to catch this error and retry the transaction. We can expect the transaction to succeed on the second attempt, since Rails clears the prepared statement cache after the transaction fails.

Here’s how you can transparently rescue and retry transactions.

# Make all transactions for all records automatically retriable in the event of
# cache failure
class ApplicationRecord < ActiveRecord::Base
  class << self
    # Retry automatically on ActiveRecord::PreparedStatementCacheExpired.
    #
    # Do not use this for transactions with side-effects unless it is acceptable
    # for these side-effects to occasionally happen twice
    def transaction(*args, &block)
      retried ||= false
      super
    rescue ActiveRecord::PreparedStatementCacheExpired
      if retried
        raise
      else
        retried = true
        retry
      end
    end
  end
end

You can now call a retriable transaction like this:

# Automatically retries in the event of ActiveRecord::PreparedStatementCacheExpired
ApplicationRecord.transaction do
  # ...
end

or

# Automatically retries in the event of ActiveRecord::PreparedStatementCacheExpired
MyModel.transaction do
  # ...
end

That should clear up any prepared statement cache errors you’re seeing on deploy, and make it completely invisible to your end users.

IMPORTANT NOTE: if you are sending emails, POSTing to an API or doing other such things that interact with the outside world inside your transactions, this could result in some of those things occasionally happening twice.

NB. This is why retrying is not automatically performed by Rails, and instead we leave this up to the application developer.

If you have a transaction with side-effects that cannot be avoided and would prefer the original behaviour of raising rather than retrying in the event of this error, you can call the original like this:

# Raises instead of retries on ActiveRecord::PreparedStatementCacheExpired
ActiveRecord::Base.transaction do
  # ...
  post_to_some_api
  send_some_email
  # ...
end

Avoiding side-effects in transactions

There’s a potential trip-up here, since you might have implemented these side-effect methods as a model after_save callback or similar.

Ideally you should structure your application so that there are no side-effects in any of the model callbacks. You should instead move these methods outside of transactions completely, since it makes your transactions easier to reason about as an atomic unit and in any case it’s bad practice to hold a database transaction open unnecessarily.

One way to do this is to use a Service Object approach. Let’s say you have a User model that looks like this:

class User < ApplicationRecord
  after_create :send_registration_email

  private

  def send_registration_email
    UserMailer.registered(self).deliver_now
  end
end

With our new auto-retry transaction, if we create a user inside the transaction we run the risk of sending the registration email twice.

User.transaction do
  # this might get retried and send the email twice
  # ...
  User.create!(params)
  # ...
end

One way to resolve the problem is to remove the callback from the model and create a service object to encapsulate this logic instead.

class UserCreator
  def create(params)
    User.transaction do
      # this is safe to retry since we send the email outside
      # of the transaction
      # ...
      @user = User.create!(params)
      # ...
    end
    send_registration_email
  end

  private

  def send_registration_email
    UserMailer.registered(@user).deliver_now
  end
end

Use it like this:

UserCreator.new.create(params)

By using a service object, not only have you made your transaction side-effect free, you have also made your model thinner and easier to manage at the same time.

How to deploy an Elixir Plug application to Heroku

This guide will work for any Plug app including Phoenix.

In my previous post I outlined how to create a basic Plug application. We’ll use that application as an example as we walk through how to deploy to Heroku, but the same steps should work for any Elixir web application; the only requirement AFAICT is that your application must boot with the mix run --no-halt command and listen on the port specified by the $PORT env variable.

Step 0 - install the Heroku toolbelt if you don’t have it already

Instructions are here.

Step 1 - create the Heroku application

$ heroku create
Creating app... done, ⬢ glacial-waters-81278
https://glacial-waters-81278.herokuapp.com/ | https://git.heroku.com/glacial-waters-81278.git

NOTE: This automatically added the heroku remote to ./.git/config on my machine. If git push heroku master fails with an unknown remote error, you’ll need to add it manually.
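If you do need to add it yourself, the Heroku CLI can set up the remote for you (substitute your own app name):

$ heroku git:remote -a glacial-waters-81278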

Step 2 - add the Elixir buildpack

We’re going to use this Elixir buildpack for Heroku.

$ heroku buildpacks:set https://github.com/HashNuke/heroku-buildpack-elixir

Step 3 - add a config file for the buildpack (optional)

Create the following elixir_buildpack.config file in your project root. Note that your versions may differ from mine. This step is not strictly necessary, but I like to have control over which versions I am running.

#./elixir_buildpack.config
# Erlang version
erlang_version=19.3

# Elixir version
elixir_version=1.4.2

# Always rebuild from scratch on every deploy?
always_rebuild=false

# A command to run right before compiling the app (after elixir, etc.)
pre_compile="pwd"

# A command to run right after compiling the app
post_compile="pwd"

# Set the path the app is run from
runtime_path=/app

Step 4 - configure your application to listen to $PORT env variable

Heroku expects your app to listen on the port given by the $PORT env variable, which is set dynamically.

If you don’t set this, your app will attempt to listen on a denied port and you’ll see an error in your Heroku logs that looks something like this:

2017-04-29T10:31:34.456439+00:00 app[web.1]: 10:31:34.455 [error] Failed to start Ranch listener HelloWebhook.Endpoint.HTTP in :ranch_tcp:listen([port: 80]) for reason :eacces (permission denied)

The easiest way to get your app to listen on the correct port is to modify your config/prod.exs file like so:

# ./config/prod.exs
use Mix.Config

port =
  case System.get_env("PORT") do
    port when is_binary(port) -> String.to_integer(port)
    nil -> 80 # default port
  end

config :my_app, port: port

Make sure you are starting cowboy with the correct port in your worker, e.g.

#./lib/my_app/endpoint.ex
# ...
def start_link do
  port = Application.fetch_env!(:my_app, :port)
  {:ok, _} = Plug.Adapters.Cowboy.http(__MODULE__, [], port: port)
end
# ...

Step 5 - make your first push

$ git push heroku master

If you did everything right, the push should be successful and the logs should show something like this:

2017-04-29T10:38:10.905309+00:00 heroku[web.1]: Starting process with command `mix run --no-halt`
2017-04-29T10:38:14.630637+00:00 heroku[web.1]: State changed from starting to up

And you’re done - hoorah! To open your app in the browser:

$ heroku open

How to build a lightweight webhook or JSON API endpoint in Elixir

Sometimes you just want a simple base for a webhook or JSON API in Elixir, e.g. for a small microservice.

Phoenix is nice, but much like Rails it comes with a lot of extra baggage that you may not need such as templating, database drivers, handling CSRF and other such web-frameworky things.

There are many cases where it makes more sense to start with a lightweight, barebones endpoint and build from there. Here I’ll walk through the process of creating a simple Hello World app using Plug, which is a bit like a combination of Rack/Sinatra for Elixir.

The endpoint will receive a JSON payload containing your name and return a response saying hello. You can adapt and extend this basic template for your own purposes.

Let’s get started. Firstly you’ll want to create a regular mix app:

$ mix new hello_webhook
$ cd hello_webhook

Now we’ll need the Cowboy HTTP server and the Plug library, so make sure your mix.exs file looks like this. The application/0 function (generated for us by mix new) tells OTP how to start our application.

# ./mix.exs
defmodule HelloWebhook.Mixfile do
  use Mix.Project

  def project do
    [app: :hello_webhook,
     version: "0.1.0",
     elixir: "~> 1.4", # yours may differ
     build_embedded: Mix.env == :prod,
     start_permanent: Mix.env == :prod,
     deps: deps()]
  end

  def application do
    [extra_applications: [:logger],
     mod: {HelloWebhook, []}] # This tells OTP which module contains our main application, and any arguments we want to pass to it
  end

  # The version numbers listed here are latest at the time of writing, you
  # should check each project and use the latest version in your code.
  defp deps do
    [
      {:cowboy, "~> 1.1"},
      {:plug, "~> 1.3"},
      {:poison, "~> 3.0"}, # NOTE: Poison is necessary only if you care about parsing/generating JSON
    ]
  end
end

Make sure to install the new dependencies.

$ mix deps.get

Now we need to implement our application. This is a bit of boilerplate that goes in ./lib/hello_webhook.ex and implements the standard OTP application behaviour. This behaviour defines two callbacks, start/2 and stop/1. For our purposes we only really care about start/2 so let’s implement that and point it to the HelloWebhook.Endpoint module which we shall create shortly.

#./lib/hello_webhook.ex
defmodule HelloWebhook do
  @moduledoc "The main OTP application for HelloWebhook"

  use Application

  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    children = [
      worker(HelloWebhook.Endpoint, [])
    ]

    opts = [strategy: :one_for_one, name: HelloWebhook.Supervisor]
    Supervisor.start_link(children, opts)
  end
end

We’re almost done at this point, if you can believe it. All that remains now is to actually create our endpoints and routes. Create a new directory called ./lib/hello_webhook and a new file ./lib/hello_webhook/endpoint.ex.

Here’s the code for our Hello Webhook application:

# ./lib/hello_webhook/endpoint.ex
defmodule HelloWebhook.Endpoint do
  use Plug.Router
  require Logger

  plug Plug.Logger
  # NOTE: The line below is only necessary if you care about parsing JSON
  plug Plug.Parsers, parsers: [:json], json_decoder: Poison
  plug :match
  plug :dispatch

  def init(options) do
    options
  end

  def start_link do
    # NOTE: This starts Cowboy listening on the default port of 4000
    {:ok, _} = Plug.Adapters.Cowboy.http(__MODULE__, [])
  end

  get "/hello" do
    send_resp(conn, 200, "Hello, world!")
  end

  post "/hello" do
    {status, body} =
      case conn.body_params do
        %{"name" => name} -> {200, say_hello(name)}
        _ -> {422, missing_name()}
      end
    send_resp(conn, status, body)
  end

  defp say_hello(name) do
    Poison.encode!(%{response: "Hello, #{name}!"})
  end

  defp missing_name do
    Poison.encode!(%{error: "Expected a \"name\" key"})
  end
end

Alright! Let’s quickly test this. Start your server with iex -S mix, then visit http://localhost:4000/hello in your browser; you should see your hello message.

Or you can use curl:

$ curl http://localhost:4000/hello
Hello, world!

Great. Now what if we want to supply our own name?

$ curl -H "Content-Type: application/json" -X POST -d '{}' http://localhost:4000/hello
{"error":"Expected a \"name\" key"}

Oops, better send a correctly formatted request.

$ curl -H "Content-Type: application/json" -X POST -d '{"name":"Sam"}' http://localhost:4000/hello
{"response":"Hello, Sam!"}

Hooray! It works. One thing to note is that unlike Phoenix, this app will not auto-reload when you change your code files. You must restart your iex -S mix process to see the new changes take effect.

That’s pretty much it for this simple Hello World app, you could take this skeleton template and build your own perfectly functional webhook endpoint using it.

But there are a couple more things we can do to improve it, namely setting up environment-specific configuration and adding some tests.

We’ll add an ExUnit test for both the success and fail cases of POST /hello.

# ./test/hello_webhook_test.exs
defmodule HelloWebhookTest do
  use ExUnit.Case, async: true
  use Plug.Test
  doctest HelloWebhook

  @opts HelloWebhook.Endpoint.init([])

  test "GET /hello" do
    # Create a test connection
    conn = conn(:get, "/hello")

    # Invoke the plug
    conn = HelloWebhook.Endpoint.call(conn, @opts)

    # Assert the response and status
    assert conn.state == :sent
    assert conn.status == 200
    assert conn.resp_body == "Hello, world!"
  end

  test "POST /hello with valid payload" do
    body = Poison.encode!(%{name: "Sam"})

    conn = conn(:post, "/hello", body)
      |> put_req_header("content-type", "application/json")

    conn = HelloWebhook.Endpoint.call(conn, @opts)

    assert conn.state == :sent
    assert conn.status == 200
    assert Poison.decode!(conn.resp_body) == %{"response" => "Hello, Sam!"}
  end

  test "POST /hello with invalid payload" do
    body = Poison.encode!(%{namu: "Samu"})

    conn = conn(:post, "/hello", body)
      |> put_req_header("content-type", "application/json")

    conn = HelloWebhook.Endpoint.call(conn, @opts)

    assert conn.state == :sent
    assert conn.status == 422
    assert Poison.decode!(conn.resp_body) == %{"error" => "Expected a \"name\" key"}
  end
end

Assuming you left your original server running, when you try to run these tests you might see the following error:

$ mix test

=INFO REPORT==== 29-Apr-2017::10:46:58 ===
    application: logger
    exited: stopped
    type: temporary
** (Mix) Could not start application hello_webhook: HelloWebhook.start(:normal, []) returned an error: shutdown: failed to start child: HelloWebhook.Endpoint
    ** (EXIT) an exception was raised:
        ** (MatchError) no match of right hand side value: {:error, :eaddrinuse}
            (hello_webhook) lib/hello_webhook/endpoint.ex:16: HelloWebhook.Endpoint.start_link/0

This is because our test server is trying to run on the same port as our development server (which is port 4000 by default). We can fix this by adding some environment-specific configuration using Mix.Config, which is the canonical way to configure your mix app.

Let’s dive into our config file and uncomment the bottom line so we can add environment-specific configuration:

# ./config/config.exs
# This file is responsible for configuring your application
# and its dependencies with the aid of the Mix.Config module.
use Mix.Config

# This configuration is loaded before any dependency and is restricted
# to this project. If another project depends on this project, this
# file won't be loaded nor affect the parent project. For this reason,
# if you want to provide default values for your application for
# 3rd-party users, it should be done in your "mix.exs" file.

# You can configure for your application as:
#
#     config :hello_webhook, key: :value
#
# And access this configuration in your application as:
#
#     Application.get_env(:hello_webhook, :key)
#
# Or configure a 3rd-party app:
#
#     config :logger, level: :info
#

# It is also possible to import configuration files, relative to this
# directory. For example, you can emulate configuration per environment
# by uncommenting the line below and defining dev.exs, test.exs and such.
# Configuration from the imported file will override the ones defined
# here (which is why it is important to import them last).
#
import_config "#{Mix.env}.exs" # NOTE: uncomment this line

Note that Mix.Config overwrites previous values with new ones, so any configuration specified in one of your env files will override the main configuration in config.exs.

You will need to add three config files, each corresponding to a Mix env.

# ./config/dev.exs
use Mix.Config

config :hello_webhook, port: 4000

# ./config/prod.exs
use Mix.Config

# NOTE: Use $PORT environment variable if specified, otherwise fallback to port 80
port =
  case System.get_env("PORT") do
    port when is_binary(port) -> String.to_integer(port)
    nil -> 80 # default port
  end

config :hello_webhook, port: port

# ./config/test.exs
use Mix.Config

config :hello_webhook, port: 4001

We have to tell our application server that it should use the port specified in configuration, so modify your HelloWebhook.Endpoint.start_link/0 function to look like this:

# ./lib/hello_webhook/endpoint.ex
# ...
def start_link do
  port = Application.fetch_env!(:hello_webhook, :port)
  {:ok, _} = Plug.Adapters.Cowboy.http(__MODULE__, [], port: port)
end
# ...

Now let’s try running mix test again. You should see green dots:

$ mix test
12:18:56.712 [info]  GET /hello
12:18:56.716 [info]  Sent 200 in 4ms
.
12:18:56.721 [info]  POST /hello
12:18:56.725 [info]  Sent 200 in 4ms
.
12:18:56.725 [info]  POST /hello
.
12:18:56.725 [info]  Sent 422 in 50µs

Finished in 0.06 seconds
3 tests, 0 failures

That’s pretty much it. As you can see, it’s extremely straightforward to set up a basic Plug app using Elixir that could be the basis for a JSON API, or any number of web microservices. Phoenix is nice but you can get a lot done without it.

I describe how to deploy your Plug app to Heroku here.

PS. In case you run into any trouble, the full source for this app is available on github. Feel free to clone it and use it as a template to build your own microservices in Elixir.