
Building Conduit


Table of Contents

Preface
Introduction

Who is Building Conduit for?


What does it cover?
What is CQRS?

Commands

Domain events

Queries

What is event sourcing?


What are the costs of using CQRS?
Recipe for building a CQRS/ES application in Elixir
An aggregate

An event sourced aggregate

Unit testing an aggregate

Conduit

General functionality
API specs

Contexts

Contexts in Phoenix
Contexts in Conduit

Getting started

Installing Phoenix
Generating a Phoenix project
Starting the Phoenix server
Commanded facilitates CQRS/ES in Elixir
Write and read model stores

Installing and configuring Commanded

Configuring the read model store

Accounts

Register a user

Building our first context

Writing our first integration test

Application structure

Alternate structure

Building our first aggregate

Building our first command

Building our first domain event

Writing our first unit test

Command dispatch and routing

Writing our first read model projection

Validating dispatched commands

Testing user registration validation

Enforce unique usernames

Additional username validation

Validating a user’s email address

Hashing the user’s password

Completing user registration

Authentication

Authenticate a user

Generating a JWT token

Getting the current user

Frequently asked questions

How do I structure my CQRS/ES application?


How do I deal with eventually consistent read model projections?

Appendix I

Conduit API specs

Authentication header
JSON objects returned by API

User

Profile

Single article

Multiple articles

Single comment

Multiple comments

List of tags

Errors and status codes

Endpoints

Authentication

Registration

Get current user

Update user

Get profile

Follow user

Unfollow user

List articles

Feed articles

Get article

Create Article

Update Article

Delete article

Add comments to an article

Get comments from an article

Delete comment

Favourite article

Unfavourite article

Get tags

Notes

Preface

Welcome to Building Conduit.

In this book you will discover how to implement the Command Query Responsibility
Segregation and event sourcing (CQRS/ES) pattern in an Elixir application.
This book will take you through the design and build, from scratch, of an exemplary
Medium.com clone. The full source code is available to view and clone from GitHub. As each
feature is developed, a link to the corresponding Git commit will be provided so you can
browse the source code at each stage of development.

The application will be built as if it were a real world project, including integration and
unit tests to verify the functionality under development.

By the end of this book you should have a solid grasp of how to apply the CQRS/ES pattern to
your own Elixir applications.

You will learn how to:

Follow test-driven development to build an HTTP API exposing and consuming JSON data.
Validate input data using command validation.
Create a functional, event sourced domain model.
Define a read model and populate it by projecting domain events.
Authenticate a user using a JSON Web Token (JWT).

Introduction

Who is Building Conduit for?


This book is written for anyone who has an interest in CQRS/ES and Elixir.

It assumes the reader will already be familiar with the broad concepts of CQRS/ES. You will
be introduced to the building blocks that comprise an application built following this pattern,
and shown how to implement them in Elixir.

The reader should be comfortable reading Elixir syntax and understand the basics of its
actor concurrency model, implemented as processes and message passing.

What does it cover?


You will learn an approach to implementing the CQRS/ES pattern in a real world Elixir
application. You will build a Medium.com clone, called Conduit, using the Phoenix web
framework. Conduit is a real world blogging platform allowing users to publish articles,
follow authors, and browse and read articles.

The inspiration for this example web application comes from the RealWorld project:

See how the exact same Medium.com clone (called Conduit) is built using any of our
supported frontends and backends. Yes, you can mix and match them, because they all
adhere to the same API spec.
While most “todo” demos provide an excellent cursory glance at a framework’s
capabilities, they typically don’t convey the knowledge & perspective required to
actually build real applications with it.

RealWorld solves this by allowing you to choose any frontend (React, Angular 2, &
more) and any backend (Node, Django, & more) and see how they power a real world,
beautifully designed fullstack app called “Conduit”.

By building a backend in Elixir and Phoenix that adheres to the RealWorld API specs, you can
choose to pair it with any of the available frontends. Some of the most popular current
implementations are:

React / Redux
Elm
Angular 4+
Angular 1.5+
React / MobX

You can view a live demo of Conduit that’s powered by React and Redux with a Node.js
backend, to get a feel for what we’ll be building.

Many thanks to Eric Simons for pioneering the idea and founding the RealWorld project.

Before we start building Conduit, let’s briefly cover some of the concepts related to command
query responsibility segregation and event sourcing.

What is CQRS?
At its simplest, CQRS is the separation of commands from queries.

Commands are used to mutate state in a write model.


Queries are used to retrieve a value from a read model.

In a typical layered architecture you have a single model to service writes and reads,
whereas in a CQRS application the read and write models are different. They may also be
separated physically by using a different database or storage mechanism. CQRS is often
combined with event sourcing where there’s an event store for persisting domain events
(write model) and at least one other data store for the read model.
CQRS overview

Commands
Commands are used to instruct an application to do something; they are named in the
imperative:

Register account
Transfer funds
Mark fraudulent activity

Commands have one, and only one, receiver: the code that fulfils the command request.

Domain events
Domain events indicate something of importance has occurred within a domain model. They
are named in the past tense:

Account registered
Funds transferred
Fraudulent activity detected

Domain events describe your system activity over time using a rich, domain-specific
language. They are an immutable source of truth for the system. Unlike commands which are
restricted to a single handler, domain events may be consumed by multiple subscribers - or
potentially no interested subscribers.

Often commands and events come in pairs: a successful register account command results in
an account registered event. It’s also possible that a command can be successfully executed
and result in many or no domain events.
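The command/event pairing above can be sketched as plain Elixir structs. The module and field names here are illustrative only, not taken from Conduit's source:

```elixir
# Hypothetical command/event pair; names are illustrative only.
defmodule RegisterAccount do
  defstruct [:account_number, :initial_balance]
end

defmodule AccountRegistered do
  defstruct [:account_number, :initial_balance]
end

# A successful command typically results in its corresponding event:
command = %RegisterAccount{account_number: "ACC123", initial_balance: 100}

event = %AccountRegistered{
  account_number: command.account_number,
  initial_balance: command.initial_balance,
}
```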

Queries
Domain events from the write model are used to build and update a read model. I refer to
this process as projecting events into a read model projection.

The read model is optimised for querying, therefore the data is often stored denormalised to
support faster query performance. You can use whatever technology is most appropriate
to support the querying your application demands, and take advantage of multiple different
types of storage as appropriate:

Relational database.
In-memory store.
Disk-based file store.
NoSQL database.
Full-text search index.

What is event sourcing?


Any state changes within your domain are driven by domain events. Therefore your entire
application’s state changes are modelled as a stream of domain events:

Bank account event stream

An aggregate’s state is built by applying its domain events to some initial empty state. State is
further mutated by applying each newly raised domain event to the current state:
f(state, event) => state

Domain events are persisted in order – as a logical stream – for each aggregate. The event
stream is the canonical source of truth, therefore it is a perfect audit log.

All other state in the system may be rebuilt from these events. Read models are projections of
the event stream. You can rebuild the read model by replaying every event from the
beginning of time.
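The replay described above is a left fold of domain events over an initial empty state: `f(state, event) => state`, applied once per event. A minimal sketch, using an illustrative Counter aggregate rather than anything from Conduit:

```elixir
defmodule Counter do
  defstruct total: 0

  defmodule Incremented do
    defstruct [:amount]
  end

  # State mutator: f(state, event) => state
  def apply(%Counter{total: total} = state, %Incremented{amount: amount}) do
    %Counter{state | total: total + amount}
  end
end

events = [%Counter.Incremented{amount: 5}, %Counter.Incremented{amount: 3}]

# Replaying every event from the initial empty state rebuilds current state.
state = Enum.reduce(events, %Counter{}, fn event, state ->
  Counter.apply(state, event)
end)
```

The same fold rebuilds a read model projection: replay all events from the beginning of the stream into an empty store.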

What are the costs of using CQRS?


Domain events provide a history of your poor design decisions and they are immutable.

It’s an alternative, and less common, approach to building applications than basic CRUD1.
Modelling your application using domain events demands a rich understanding of the
domain. It can be more complex to deal with the eventual consistency between the write
model and the read model.

Recipe for building a CQRS/ES application in Elixir

1. A domain model containing aggregates, commands, and events.


2. Hosting of an aggregate root instance and a way to send it commands.
3. An event store to persist the created domain events.
4. A read model store for querying.
5. Event handlers to build and update the read model.
6. An API to query the read model data and to dispatch commands to the write model.

An aggregate
An aggregate defines a consistency boundary for transactions and concurrency. Aggregates
should also be viewed from the perspective of being a “conceptual whole”. They are used to
enforce invariants in a domain model and to guard against business rule violations.

This concept fits naturally within Elixir’s actor concurrency model. An Elixir GenServer
enforces serialised concurrent access and processes communicate by sending messages
(commands and events).
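This fit can be sketched with a bare GenServer. This is illustrative only, not how Commanded hosts aggregates; it simply shows that a single process handles one message at a time, serialising access to the aggregate's state:

```elixir
# Illustrative sketch: a GenServer serialises access to an aggregate's
# state, because a process handles one message at a time.
defmodule AggregateServer do
  use GenServer

  def start_link(initial_state) do
    GenServer.start_link(__MODULE__, initial_state)
  end

  # Execute a command (here just a function) against the aggregate.
  def execute(pid, command) do
    GenServer.call(pid, {:execute, command})
  end

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call({:execute, command}, _from, state) do
    new_state = command.(state)
    {:reply, new_state, new_state}
  end
end

{:ok, pid} = AggregateServer.start_link(%{count: 0})
state = AggregateServer.execute(pid, fn s -> %{s | count: s.count + 1} end)
```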

An event sourced aggregate


Must adhere to these rules:

1. Public API functions must accept a command and return any resultant domain events, or
an error.
2. Its internal state may only be modified by applying a domain event to its current state.
3. Its internal state can be rebuilt from an initial empty state by replaying all domain events
in the order they were raised.

Here’s an example event sourced aggregate in Elixir:

defmodule ExampleAggregate do
  # aggregate's state
  defstruct [:uuid, :name]

  # public command API
  def create(%ExampleAggregate{}, uuid, name) do
    %CreatedEvent{
      uuid: uuid,
      name: name,
    }
  end

  # state mutator
  def apply(%ExampleAggregate{} = aggregate, %CreatedEvent{uuid: uuid, name: name}) do
    %ExampleAggregate{aggregate | uuid: uuid, name: name}
  end
end

It is preferable to implement aggregates using pure functions2. Why might this be a good rule
to follow? Because a pure function is highly testable: you will focus on behaviour rather than
state.

By using pure functions in your domain model, you also decouple your domain from the
framework’s domain, allowing you to build your application separately first and layer the
external interface on top. The external interface in our application will be the RESTful API
powered by Phoenix.

Unit testing an aggregate


An aggregate function can be tested by executing a command and verifying the expected
events are returned.

The following example demonstrates a BankAccount aggregate being tested for opening an
account:

defmodule BankAccountTest do
  use ExUnit.Case, async: true

  alias BankAccount.Commands.OpenAccount
  alias BankAccount.Events.BankAccountOpened

  describe "opening an account with a valid initial balance" do
    test "should be opened" do
      account = %BankAccount{}
      open_account = %OpenAccount{
        account_number: "ACC123",
        initial_balance: 100,
      }

      account_opened = BankAccount.open_account(account, open_account)

      assert account_opened == %BankAccountOpened{
        account_number: "ACC123",
        initial_balance: 100,
      }
    end
  end
end

Conduit

Conduit is a social blogging site: it is a Medium.com clone.

Welcome to Conduit
You can view a live demo at: demo.realworld.io

General functionality
As a blogging platform, Conduit’s functionality is based around authors publishing articles.

Authenticate users via JWT3.


Register, view, and update users.
Publish, edit, view, and delete articles.
Create, view, and delete comments on articles.
Display paginated lists of articles.
Favourite articles.
Follow other users.

API specs
Conduit uses a custom REST API for all requests, including authentication.

We will be implementing a backend that must adhere to the Conduit API specs.

HTTP verb URL Action

POST /api/users/login Login a user
POST /api/users Register a user
GET /api/user Get current user
PUT /api/user Update current user
GET /api/profiles/:username Get profile
POST /api/profiles/:username/follow Follow user
DELETE /api/profiles/:username/follow Unfollow user
GET /api/articles List articles
GET /api/articles/feed Feed articles
GET /api/articles/:slug Get an article
POST /api/articles Publish an article
PUT /api/articles/:slug Update an article
DELETE /api/articles/:slug Remove an article
POST /api/articles/:slug/comments Comment on an article
GET /api/articles/:slug/comments Get comments on an article
DELETE /api/articles/:slug/comments/:id Remove a comment
POST /api/articles/:slug/favorite Favorite an article
DELETE /api/articles/:slug/favorite Unfavorite an article
GET /api/tags Get tags

The full API specs are detailed in Appendix I.

Contexts

Phoenix 1.3 introduces the concept of contexts. These are somewhat inspired by bounded
contexts in domain-driven design. They provide a way of defining clear boundaries between
parts of your application. A context defines a public API that should be consumed by the rest
of the application.

Bounded Context is a central pattern in Domain-Driven Design. It is the focus of DDD’s


strategic design section which is all about dealing with large models and teams. DDD
deals with large models by dividing them into different Bounded Contexts and being
explicit about their interrelationships.

— Martin Fowler

Bounded context - by Martin Fowler


Contexts in Phoenix
When using the included phx.gen.* generators you must provide an additional context
argument for each resource you create. As an example, when generating a JSON resource:

$ mix phx.gen.json Accounts User users name:string age:integer

The first argument is the context module followed by the schema module and its plural name
(used as the schema table name). The context is an Elixir module that serves as an API
boundary for the given resource. A context often holds many related resources. It is a
module, with some implementation modules behind it, that exposes a public interface the
rest of your application can consume.

This is powerful, because now you can define clear boundaries for your application domains.
You’re now implementing your application, containing your business logic, separate from the
Phoenix web interface. The web interface is merely one of the possible consumers of the API
exposed by your application using a context module.4

The official Phoenix documentation has a guide detailing Contexts in further detail.5

Contexts in Conduit
Given the features that we plan to implement in Conduit specified in the previous chapter,
we can define the following contexts to provide a boundary for each part of our application.

Accounts Register users, find user by username.

Auth Authenticate users.

Blog Publish articles, browse a paginated list of articles, comment on articles,


favourite articles.

Why would we separate the Conduit functionality into three contexts? To keep the
responsibilities cohesive within each context. For example, the responsibility of our Accounts
context is to manage users and their credentials, not handle publishing articles. Therefore
we have a blog context, separate from accounts, to publish and list articles.

Contexts have their own folder within lib/conduit which immediately shows at a high
level what the Conduit app does and allows easy navigation to help locate modules and files
related to a specific area of functionality.

lib/conduit/accounts

lib/conduit/auth

lib/conduit/blog
With this separation of concerns enforced by the directory structure, when I need to add a
feature related to blogging, or fix a bug for articles, I can immediately focus on the
lib/conduit/blog folder and its contained modules. This reduces the cognitive load when
working with the code.

These contexts would provide public API functions such as:

Accounts

Accounts.register_user/1

Auth

Auth.authenticate/2

Blog

Blog.publish_article/1

Blog.list_articles/1
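As a sketch of what such a boundary might look like, here is a hypothetical Accounts context module. The function body is a placeholder for illustration only; the real register_user/1 built in later chapters dispatches a command rather than validating inline:

```elixir
# Hypothetical sketch of a context module acting as an API boundary.
# The body is a placeholder; only the public shape matters here.
defmodule Conduit.Accounts do
  @moduledoc """
  The Accounts context: the rest of the application calls these public
  functions rather than reaching into the context's internal modules.
  """

  def register_user(attrs) do
    case attrs do
      %{username: username} when is_binary(username) and username != "" ->
        {:ok, %{username: username}}

      _ ->
        {:error, :validation_failure}
    end
  end
end

{:ok, user} = Conduit.Accounts.register_user(%{username: "jake"})
```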

Getting started

Let’s start building our Conduit web application. We’ll be using the latest version of the
Phoenix Web framework, currently 1.3.

Before proceeding you will need to have Elixir v1.5 or later installed. Please follow the
official installation guide to get Elixir running on your operating system. There are
instructions for Windows, Mac OS X, Linux, Docker, and Raspberry Pi.

Installing Phoenix
Install the latest version of Phoenix using the mix command:

$ mix archive.install https://github.com/phoenixframework/archives/raw/master/phx_new.ez

Generating a Phoenix project


Once installed, you use the mix phx.new command to create a new Phoenix 1.3 project:

$ mix phx.new conduit --module Conduit --app conduit --no-brunch --no-html

A project in the conduit directory will be created.


The application name and module name have been specified using the --app and --module
flags respectively. The Conduit frontend will be provided by one of the existing
frameworks so we omit Phoenix’s HTML and static asset support, provided by Brunch, by
appending --no-brunch --no-html to the generator command.

Initialize repository

We will temporarily comment out the Ecto repo, Conduit.Repo , in the main Conduit
application, to allow the server to be started without a database. We can also take the
opportunity to remove Phoenix’s pub/sub dependency and channels/sockets as they will not
be used for our RESTful API.

Remove Phoenix pub/sub and default user socket


Starting the Phoenix server


Fetch mix dependencies and compile:

$ mix deps.get
$ mix compile

Run the Phoenix server:

$ mix phx.server

Visit http://localhost:4000/ in a browser to check the server is running. An error is shown as
no routes have been defined yet.

[info] GET /
[debug] ** (Phoenix.Router.NoRouteError) no route found for GET / (Conduit.Web.Router)
    ... (stack trace omitted)

Commanded facilitates CQRS/ES in Elixir


We will use Commanded6 to build our Elixir application following the CQRS/ES pattern.
Commanded is an open source library that contains the building blocks required to
implement CQRS/ES in Elixir.

It provides support for:

Command registration and dispatch.


Hosting and delegation to aggregates.
Event handling.
Long running process managers.

You can use Commanded with one of the following event stores for persistence:

EventStore Elixir library, using PostgreSQL for persistence.


Greg Young’s Event Store.

Your choice of event store has no effect on how you build your application.

For Conduit we will use the PostgreSQL based Elixir EventStore as we will also be using
PostgreSQL for our read model store and the Ecto database query library.

Write and read model stores


Applications applying the CQRS pattern have a logical separation between the write and read
models. You can choose to make these physically separated by using an alternative database,
schema, or storage mechanism for each.

In Conduit, we will be using event sourcing to persist domain events created by our write
model. These events are the canonical source of truth for our application, they are used by
both the aggregates to rebuild their state and are projected into the read model store for
querying. Since the read model is a projection built from all domain events in the event
store, we can rebuild it from scratch at any time. To rebuild a read store, the database is
recreated and then populated by projecting all of the domain events.

We will use two databases for Conduit: one for the event store; another for the read model.

The database naming convention is to suffix the storage type (event store or read store) and
environment name to the application name (conduit):
Environment Event store database Read store database

dev conduit_eventstore_dev conduit_readstore_dev

test conduit_eventstore_test conduit_readstore_test

prod conduit_eventstore_prod conduit_readstore_prod

Installing and configuring Commanded


We’ll be using the following open source libraries, published to Hex, to help build Conduit:

commanded - used to build Elixir applications following the CQRS/ES pattern.


eventstore - an Elixir event store using PostgreSQL as the underlying storage engine.
commanded_eventstore_adapter - adapter to use EventStore with Commanded.

The Commanded README details the steps required to install and configure the library.

1. Add commanded and commanded_eventstore_adapter to the list of dependencies in


mix.exs:

defp deps do
  [
    {:commanded, "~> 0.15"},
    {:commanded_eventstore_adapter, "~> 0.3"},
    # ...
  ]
end

2. Include :eventstore in the list of extra applications in mix.exs :

def application do
  [
    extra_applications: [
      :logger,
      :eventstore,
    ],
    # ...
  ]
end

3. Configure Commanded to use the EventStore adapter in the mix config file
( config/config.exs ):

config :commanded,
  event_store_adapter: Commanded.EventStore.Adapters.EventStore

4. Configure the event store database in each environment’s mix config file (e.g.
config/dev.exs ):

# Configure the event store database


config :eventstore, EventStore.Storage,
  serializer: Commanded.Serialization.JsonSerializer,
  username: "postgres",
  password: "postgres",
  database: "conduit_eventstore_dev",
  hostname: "localhost",
  pool_size: 10

5. Fetch and compile the dependencies:

$ mix do deps.get, deps.compile

6. Create the event store database and tables using the mix task:

$ mix do event_store.create, event_store.init

Configuring the read model store


Ecto will be used for building and querying the read model. The Phoenix generator will have
already included the phoenix_ecto dependency, which includes ecto , and generated
config settings for each environment.

1. Configure the Conduit.Repo Ecto repository database in each environment’s mix config
file (e.g. config/dev.exs ):

# Configure the read store database


config :conduit, Conduit.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "postgres",
  password: "postgres",
  database: "conduit_readstore_dev",
  hostname: "localhost",
  pool_size: 10

2. Create the read store database:

$ mix ecto.create
There are now two databases in our development environment:

1. Read model store, conduit_readstore_dev , containing only the Ecto schema migrations
table.
2. An event store, conduit_eventstore_dev , containing the default event store tables.


Accounts

The Conduit blogging platform requires authors to register an account before they can
publish articles. We shall begin our first feature by implementing user registration in the
accounts context.

Register a user
The API spec for registration is as follows:

HTTP verb URL Required fields

POST /api/users email, username, password

Example request body:

{
  "user": {
    "username": "jake",
    "email": "jake@jake.jake",
    "password": "jakejake"
  }
}

Example response body:

{
  "user": {
    "email": "jake@jake.jake",
    "token": "jwt.token.here",
    "username": "jake",
    "bio": null,
    "image": null
  }
}

The request should fail with a 422 HTTP status code error should any of the required fields
be invalid. In this case the response body would be in the following format:

{
  "errors": {
    "body": [
      "can't be empty"
    ]
  }
}

We must ensure that usernames are unique and cannot be registered more than once.

Building our first context


Phoenix includes a set of generators to help scaffold your application:

Command Action

mix phx.gen.channel Generates a Phoenix channel
mix phx.gen.context Generates a context with functions around an Ecto schema
mix phx.gen.embedded Generates an embedded Ecto schema file
mix phx.gen.html Generates controller, views, and context for an HTML resource
mix phx.gen.json Generates controller, views, and context for a JSON resource
mix phx.gen.presence Generates a Presence tracker
mix phx.gen.schema Generates an Ecto schema and migration file
mix phx.gen.secret Generates a secret

Since we’re building a REST API, we can use the phx.gen.json generator to create our first
context, resource, controller, and JSON view. As we already know the fields relating to our
users we can include them, with their type, in the generator command:

$ mix phx.gen.json Accounts User users username:string email:string \
    hashed_password:string bio:string image:string --table accounts_users

Overall, this generator will add the following files to lib/conduit :

Context module in lib/conduit/accounts/accounts.ex , serving as the API boundary.


Ecto schema in lib/conduit/accounts/user.ex , and a database migration to create the
accounts_users table.

View in lib/conduit_web/views/user_view.ex .
Controller in lib/conduit_web/controllers/user_controller.ex .
Unit and integration tests in test/conduit/accounts .

Remember that the User module we’re creating here is not our domain model. It will be a
read model projection, populated by domain events published from an aggregate.

The generator prompts us to add the resource to the :api scope in our Phoenix router
module lib/conduit/web/router.ex . For now, we will configure only the :create
controller action to support registering a user:

post "/users", UserController, :create

Scaffold accounts context


Writing our first integration test


Let’s follow Behaviour Driven Development (BDD), thinking “from the outside in”, and start
by writing a failing integration test for user registration. We will include tests that cover the
happy path of successfully creating a user, and for the two failure scenarios mentioned
above: missing required fields and duplicate username registration.

Factories to construct test data


We will use factory functions to generate realistic data for our tests. ExMachina is an Elixir
library that makes it easy to create test data and associations.

In mix.exs , add :ex_machina as a test environment dependency:

defp deps do
  [
    {:ex_machina, "~> 2.0", only: :test},
    # ...
  ]
end

Fetch mix dependencies and compile:

$ mix do deps.get, deps.compile

We must ensure the ExMachina application is started in the test helper,
test/test_helper.exs, before ExUnit:

{:ok, _} = Application.ensure_all_started(:ex_machina)

Now we create our factory module in test/support/factory.ex :

defmodule Conduit.Factory do
  use ExMachina

  def user_factory do
    %{
      email: "jake@jake.jake",
      username: "jake",
      hashed_password: "jakejake",
      bio: "I like to skateboard",
      image: "https://i.stack.imgur.com/xHWG8.jpg",
    }
  end
end

In our test module, we must import Conduit.Factory to access our user factory. Then we
have access to build/1 and build/2 functions to construct params for an example user to
register:

build(:user)

build(:user, username: "ben")
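Under the hood, build/2 merges the given attributes over the factory's defaults. Stripped of the library, the idea is a simple map merge (a simplification of what ExMachina actually does):

```elixir
# Simplified illustration of ExMachina's build/2: the overrides are
# merged over the factory's default attributes.
defaults = %{
  email: "jake@jake.jake",
  username: "jake",
}

user = Map.merge(defaults, %{username: "ben"})
```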

User registration integration test


In our integration test we want to verify successful user registration and check any failure
includes a useful error message to help the user identify the problem.

1. To test success, we assert that the returned HTTP status code is 201 and JSON response
matches the API:

test "should create and return user when data is valid", %{conn: conn} do
  conn = post conn, user_path(conn, :create), user: build(:user)
  json = json_response(conn, 201)["user"]

  assert json == %{
    "bio" => nil,
    "email" => "jake@jake.jake",
    "image" => nil,
    "username" => "jake",
  }
end

2. To test a validation failure, we assert the response is 422 and errors are included:

test "should not create user and render errors when data is invalid", %{conn: conn} do
  conn = post conn, user_path(conn, :create), user: build(:user, username: "")

  assert json_response(conn, 422)["errors"] == %{
    "username" => [
      "can't be empty",
    ]
  }
end

The full integration test is given below:

defmodule ConduitWeb.UserControllerTest do
  use ConduitWeb.ConnCase

  import Conduit.Factory

  alias Conduit.Accounts

  def fixture(:user, attrs \\ []) do
    build(:user, attrs) |> Accounts.create_user()
  end

  setup %{conn: conn} do
    {:ok, conn: put_req_header(conn, "accept", "application/json")}
  end

  describe "register user" do
    @tag :web
    test "should create and return user when data is valid", %{conn: conn} do
      conn = post conn, user_path(conn, :create), user: build(:user)
      json = json_response(conn, 201)["user"]

      assert json == %{
        "bio" => nil,
        "email" => "jake@jake.jake",
        "image" => nil,
        "username" => "jake",
      }
    end

    @tag :web
    test "should not create user and render errors when data is invalid", %{conn: conn} do
      conn = post conn, user_path(conn, :create), user: build(:user, username: "")

      assert json_response(conn, 422)["errors"] == %{
        "username" => [
          "can't be empty",
        ]
      }
    end

    @tag :web
    test "should not create user and render errors when username has been taken", %{conn: conn} do
      # register a user
      {:ok, _user} = fixture(:user)

      # attempt to register the same username with a different email address
      conn = post conn, user_path(conn, :create), user: build(:user, email: "jake2@jake.jake")

      assert json_response(conn, 422)["errors"] == %{
        "username" => [
          "has already been taken",
        ]
      }
    end
  end
end

Before running these tests, we must create the event store and read store databases for the
test environment:

$ MIX_ENV=test mix do event_store.create, event_store.init
$ MIX_ENV=test mix ecto.create

Then we can execute our registration integration test using the mix test command:

$ mix test test/conduit_web/controllers/user_controller_test.exs

The execution result of running these tests will be:

Finished in 0.2 seconds


3 tests, 3 failures
Register user integration test

Great, we have three failing tests. We now have the acceptance criteria of user registration
codified in our tests. When these tests pass, the feature will be done. These tests will also help
to prevent regressions caused by any changes we, or anyone else, may make in the future.

Let’s move forward by building the domain model to support user registration.

Application structure
The default directory structure used by the Phoenix generator creates a folder per context,
and places them inside a folder named after the application within the lib directory.

We currently have our accounts context, along with the Phoenix web folder:

lib/conduit/accounts

lib/conduit/web

One benefit of this approach is that when a context becomes too large, it can be extracted
into its own project. Using an umbrella project allows these separate Elixir applications to be
used together, via internal mix references.

When our application grows too large to be comfortably hosted by a single, monolithic
service we can migrate the individual apps into their own microservices. This is why it’s
important to focus on separation of concerns, using contexts, from the outset. It allows us to
split the application apart as production usage and requirements dictate, and more
importantly it supports rewriting and deletion of code. Keeping each context highly
cohesive, but loosely coupled, provides these benefits.

Within each context in our CQRS application we will create the common building blocks:
aggregates, commands, events, read model projections, and queries. I prefer to create a
separate folder for each of these. It provides further segregation within a single context and
allows you to easily locate any module by its type and name.

The folder structure for our first accounts context will be:

lib/conduit/accounts/aggregates

lib/conduit/accounts/commands

lib/conduit/accounts/events

lib/conduit/accounts/projections

lib/conduit/accounts/projectors

lib/conduit/accounts/queries
lib/conduit/accounts/validators

Inside the lib/conduit/accounts folder we will place the context module and a supervisor
module:

lib/conduit/accounts/accounts.ex

lib/conduit/accounts/supervisor.ex

These are the public facing parts of the account context, and provide the API into its
available behaviour. The supervisor is responsible for supervising the workers contained
within, such as the event handlers and projectors.

Alternate structure
Instead of grouping modules by their type within a context, you may choose to group by their
aggregate functionality. Following the convention used by Phoenix, the filename and module
names are suffixed by their type (e.g. user_aggregate.ex ).

In this example I’ve illustrated the file structure for modules related to the User aggregate in
the Accounts context, including the commands, events, projection, validators, and queries:

lib/conduit/accounts/user/user_aggregate.ex

lib/conduit/accounts/user/register_user.ex

lib/conduit/accounts/user/user_registered.ex

lib/conduit/accounts/user/user_projection.ex

lib/conduit/accounts/user/user_projector.ex

lib/conduit/accounts/user/user_by_email_query.ex

lib/conduit/accounts/user/unique_username_validator.ex

You can use either approach, or another application structure entirely, but it’s important that
you choose a convention and adhere to it within your application.

Building our first aggregate


As we’re dealing with registering users, our first aggregate will be the user.

One decision we must take when designing an aggregate is how to identify an instance. The
simplest approach is to use a UUID. This may be generated by the client or the server, and is
used to guarantee a unique identity per aggregate instance. All commands must include this
identity to allow locating the target aggregate instance.

For Conduit users, we have a restriction that their username must be unique. So we could
use the username to identify the aggregate and enforce this business rule. Domain events
persisted for each user would be appended to a stream based upon their individual
username. Populating the user aggregate would retrieve their events from the stream based
on their username. Attempting to register an already taken username would fail since the
aggregate exists and its state will be non-empty. However, one downside to this approach is
that it would prevent us from allowing a user to amend their username at some point in the
future. Remember that domain events are immutable once appended to the event store, so
you cannot amend them, or move them to another stream. Instead you would need to create
a new aggregate instance, using the new username, initialise its state from the existing
aggregate, and mark the old aggregate instance as obsoleted.

We’ll use an assigned unique identifier for each user. The uuid package provides a UUID
generator and utilities for Elixir. With this library we can assign a unique identity to a user
using UUID.uuid4() .

We’ll add the UUID package to our mix dependencies:

defp deps do
[
{:uuid, "~> 1.1"},
# ...
]
end

To enforce the username uniqueness, we will validate the command before execution to
ensure the username has not already been taken.

The User aggregate module, created in lib/conduit/accounts/aggregates/user.ex ,
defines the relevant fields as a struct and exposes two public functions:

1. execute/2 that accepts the empty user struct, %User{} , and the register user command,
%RegisterUser{} , returning the user registered domain event.

2. apply/2 that takes the user struct and the resultant user registered event
%UserRegistered{} and mutates the aggregate state.

defmodule Conduit.Accounts.Aggregates.User do
  defstruct [
    :uuid,
    :username,
    :email,
    :hashed_password,
  ]

  alias Conduit.Accounts.Aggregates.User
  alias Conduit.Accounts.Commands.RegisterUser
  alias Conduit.Accounts.Events.UserRegistered

  @doc """
  Register a new user
  """
  def execute(%User{uuid: nil}, %RegisterUser{} = register) do
    %UserRegistered{
      user_uuid: register.user_uuid,
      username: register.username,
      email: register.email,
      hashed_password: register.hashed_password,
    }
  end

  # state mutators

  def apply(%User{} = user, %UserRegistered{} = registered) do
    %User{user |
      uuid: registered.user_uuid,
      username: registered.username,
      email: registered.email,
      hashed_password: registered.hashed_password,
    }
  end
end

This approach to building aggregates will be followed for all new commands and events. The
execute/2 function takes the command and returns zero, one, or more domain events.
While the apply/2 function mutates the aggregate state by applying a single event.

The execute/2 function to register the user uses pattern matching to ensure the uuid field
is nil. This ensures a user aggregate for a given identity can only be created once.
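To make the rejection of a duplicate registration explicit, a catch-all clause can return an error tuple. The following self-contained sketch illustrates the idea; the `Sketch` module namespace and the `:already_registered` error atom are my own naming, not taken from the book's code:

```elixir
defmodule Sketch.User do
  defstruct [:uuid]

  defmodule RegisterUser do
    defstruct [:user_uuid]
  end

  defmodule UserRegistered do
    defstruct [:user_uuid]
  end

  alias __MODULE__, as: User

  # first registration: the uuid is nil, so the command produces an event
  def execute(%User{uuid: nil}, %RegisterUser{user_uuid: uuid}) do
    %UserRegistered{user_uuid: uuid}
  end

  # any subsequent attempt finds a non-nil uuid and is rejected
  def execute(%User{}, %RegisterUser{}), do: {:error, :already_registered}
end
```

Pattern matching on `uuid: nil` in the first clause, with a catch-all second clause, gives the aggregate a single place where the "register once" rule is enforced.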

Why did I name the command function execute/2 and not register_user/2 ? This is to
allow commands to be dispatched directly to the aggregate, without needing an intermediate
command handler. This means less code to write. You can choose to have descriptively
named command functions, but you must then also write a command handler module to route each
command to the function on the aggregate module. It’s also possible to have the command
handler module implement the domain logic by returning any domain events itself, if you
prefer.

 Building our first aggregate

Building our first command


We need to create a command to register a user. A command is a standard Elixir module
using the defstruct keyword to define its fields. A struct is a map with an extra field
indicating its type and allows developers to provide default values for keys.

defmodule Conduit.Accounts.Commands.RegisterUser do
defstruct [
:user_uuid,
:username,
:email,
:password,
:hashed_password,
]
end

The register user command can be constructed using familiar Elixir syntax:

alias Conduit.Accounts.Commands.RegisterUser
%RegisterUser{
user_uuid: UUID.uuid4(),
username: "jake",
email: "jake@jake.jake",
password: "jakejake",
}

When building a struct, Elixir will guarantee that all keys belong to the struct. This
helps prevent accidental typos:

iex(1)> %RegisterUser{
...(1)> user: "jake",
...(1)> email: "jake@jake.jake",
...(1)> password: "jakejake",
...(1)> }
** (KeyError) key :user not found in: %Conduit.Accounts.Commands.RegisterUser{email: nil,
hashed_password: nil, password: "jakejake", username: nil, user_uuid: nil}

Constructing commands from external data


Commands will usually be populated from external data. In Conduit, this will be JSON data
sent to our Phoenix web server. Phoenix will parse JSON data into an Elixir map with key
based strings. We therefore need a way to construct commands from these key/value maps.

ExConstructor is an Elixir library that makes it easy to instantiate structs from
external data, such as that emitted by a JSON parser.

Add use ExConstructor after a defstruct statement to inject a constructor function
into the module.

– ExConstructor

We’ll add this library to our mix dependencies:


defp deps do
[
{:exconstructor, "~> 1.1"},
# ...
]
end

Then we add use ExConstructor to our command:

defmodule Conduit.Accounts.Commands.RegisterUser do
defstruct [
:user_uuid,
:username,
:email,
:password,
:hashed_password,
]

use ExConstructor
end

This allows us to create the command struct from a plain map, such as that provided by the
params argument in a Phoenix controller function using the new/1 function:

iex(1)> alias Conduit.Accounts.Commands.RegisterUser


iex(2)> RegisterUser.new(%{"username" => "jake", "email" => "jake@jake.jake", "password" => "jakejake"})
%Conduit.Accounts.Commands.RegisterUser{email: "jake@jake.jake",
 hashed_password: nil, password: "jakejake", user_uuid: nil, username: "jake"}

 Register user command

Building our first domain event


Domain events must be named in the past tense, so for user registration an appropriate
event name is UserRegistered . Again we’ll use plain Elixir modules and structs to define
our domain event:

defmodule Conduit.Accounts.Events.UserRegistered do
@derive [Poison.Encoder]
defstruct [
:user_uuid,
:username,
:email,
:hashed_password,
]
end

Note we derive the Poison.Encoder protocol in the domain event module. This is because
Commanded uses the poison pure Elixir JSON library to serialize events in the database. For
maximum performance, you should @derive [Poison.Encoder] for any struct used for
encoding.

 User registered domain event

Writing our first unit test


With our User aggregate, register user command, and user registered event modules
defined we can author the first unit test. We’ll use the test to verify the expected domain
event is returned when executing the command, and that the fields are being correctly
populated.

ExUnit provides an ExUnit.CaseTemplate module that allows a developer to define a test
case template to be used throughout their tests. This is useful when there is a set of
functions that should be shared between tests, or a shared set of setup callbacks.

We can create a case template for aggregate unit tests that provides a reusable way to
execute commands against an aggregate and verify the resultant domain events. In the
following Conduit.AggregateCase case template an Elixir macro is used to allow each unit
test to specify the aggregate module acting as the test subject:

defmodule Conduit.AggregateCase do
  @moduledoc """
  This module defines the test case to be used by aggregate tests.
  """

  use ExUnit.CaseTemplate

  using [aggregate: aggregate] do
    quote bind_quoted: [aggregate: aggregate] do
      @aggregate_module aggregate

      import Conduit.Factory

      # assert that the expected events are returned when the given commands
      # have been executed
      defp assert_events(commands, expected_events) do
        assert execute(List.wrap(commands)) == expected_events
      end

      # execute one or more commands against the aggregate
      defp execute(commands) do
        {_, events} = Enum.reduce(commands, {%@aggregate_module{}, []}, fn command, {aggregate, _} ->
          events = @aggregate_module.execute(aggregate, command)

          {evolve(aggregate, events), events}
        end)

        List.wrap(events)
      end

      # apply the given events to the aggregate state
      defp evolve(aggregate, events) do
        events
        |> List.wrap()
        |> Enum.reduce(aggregate, &@aggregate_module.apply(&2, &1))
      end
    end
  end
end

The unit test, in test/conduit/accounts/aggregates/user_test.exs , asserts that the
register user command returns a user registered event:

defmodule Conduit.Accounts.UserTest do
  use Conduit.AggregateCase, aggregate: Conduit.Accounts.Aggregates.User

  alias Conduit.Accounts.Commands.RegisterUser
  alias Conduit.Accounts.Events.UserRegistered

  describe "register user" do
    @tag :unit
    test "should succeed when valid" do
      user_uuid = UUID.uuid4()

      assert_events build(:register_user, user_uuid: user_uuid), [
        %UserRegistered{
          user_uuid: user_uuid,
          email: "jake@jake.jake",
          username: "jake",
          hashed_password: "jakejake",
        }
      ]
    end
  end
end
Tagging tests makes it easier to run only certain classes of tests. As an example,
@tag :unit is used to indicate that it is a fast running unit test.

You can use @tag :wip to mark a test that you are currently working on,
allowing you to execute it on its own:

$ mix test --only wip
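If you want tagged runs to be the default, ExUnit can exclude slower tests unless they are explicitly included. The following test/test_helper.exs configuration is one possible setup, not taken from the book's code, and it would mean integration tests only run when requested with --include integration:

```elixir
# test/test_helper.exs — exclude integration tests by default so that
# `mix test` runs only the fast unit tests; include the slower tests
# with `mix test --include integration`
ExUnit.configure(exclude: [:integration])
ExUnit.start()
```

This is purely a convenience; the `mix test --only` commands shown above work without it.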

To facilitate test-driven development I use the mix test.watch command provided by the
mix_test_watch package. It will automatically run tests whenever files change.

In mix.exs , add the package as a dev environment dependency:

defp deps do
[
{:mix_test_watch, "~> 0.5", only: :dev, runtime: false},
# ...
]
end

Fetch and compile the mix dependencies:

$ mix do deps.get, deps.compile

We can now execute our tagged unit test as a one off test run:

$ mix test --only unit

… or automatically whenever a file is saved:

$ mix test.watch --only unit


Register user unit test

Command dispatch and routing


We’ve implemented registration for the user aggregate. Now we need to expose this
behaviour through the public API, the Conduit.Accounts context module. We will create a
register_user/1 function that takes an Elixir map containing the user attributes, then
construct and dispatch a register user command.

To dispatch a command to its intended aggregate we must create a router module that
implements the Commanded.Commands.Router behaviour:

defmodule Conduit.Router do
  use Commanded.Commands.Router

  alias Conduit.Accounts.Aggregates.User
  alias Conduit.Accounts.Commands.RegisterUser

  dispatch [RegisterUser], to: User, identity: :user_uuid
end

Once configured, the router allows us to dispatch a command:

alias Conduit.Router
alias Conduit.Accounts.Commands.RegisterUser

register_user = %RegisterUser{
  user_uuid: UUID.uuid4(),
  email: "jake@jake.jake",
  username: "jake",
  hashed_password: "hashedpw",
}

:ok = Router.dispatch(register_user)

We can pattern match on the return value to ensure that the command succeeded, or handle
any failures.

The register_user/1 function in the accounts context assigns a unique identity to the user,
constructs the register user command, and dispatches it to the configured aggregate:
defmodule Conduit.Accounts do
  @moduledoc """
  The boundary for the Accounts system.
  """

  alias Conduit.Accounts.Commands.RegisterUser
  alias Conduit.Router

  @doc """
  Register a new user.
  """
  def register_user(attrs \\ %{}) do
    attrs
    |> assign_uuid(:user_uuid)
    |> RegisterUser.new()
    |> Router.dispatch()
  end

  # generate a unique identity
  defp assign_uuid(attrs, identity), do: Map.put(attrs, identity, UUID.uuid4())
end

To verify the expected behaviour of the register user function we turn to the accounts test in
test/conduit/accounts/accounts_test.exs :

defmodule Conduit.AccountsTest do
  use Conduit.DataCase

  alias Conduit.Accounts
  alias Conduit.Accounts.Projections.User

  describe "register user" do
    @tag :integration
    test "should succeed with valid data" do
      assert {:ok, %User{} = user} = Accounts.register_user(build(:user))

      assert user.bio == "some bio"
      assert user.email == "some email"
      assert user.hashed_password == "some hashed_password"
      assert user.image == "some image"
      assert user.username == "some username"
    end
  end
end

Again the test is tagged, using @tag :integration , to indicate it is an integration test and will
likely be slower due to accessing the database. Running the test results in a failure:
$ mix test --only integration
Including tags: [:integration]
Excluding tags: [:test]

1) test register user should succeed with valid data (Conduit.AccountsTest)
   test/conduit/accounts/accounts_test.exs:9
   match (=) failed
   code:  assert {:ok, %User{} = user} = Accounts.register_user(build(:user))
   right: :ok
   stacktrace:
     test/conduit/accounts/accounts_test.exs:10: (test)

Finished in 0.2 seconds
8 tests, 1 failure, 7 skipped

A failing test is helpful feedback: it guides us as to what we need to build next. In this case,
we need to populate our read model and return the newly registered user.

 Register user in accounts context

Writing our first read model projection


A projection is a read-only view of some application state, built up from the published
domain events.

We’ll be using Ecto to persist our read model to a PostgreSQL database. A projection is built
by a projector module defined as an event handler: it receives each published domain event
and updates the read model. So a projection is read model state that is projected from
domain events by a projector.

In Commanded, an event handler is an Elixir module that implements the
Commanded.Event.Handler behaviour. Each event handler is given a unique name and
should be included in the application supervision tree by being registered as a child of a
supervisor. An event handler must implement the handle/2 callback function which
receives the domain event and its associated metadata. The function must return :ok to
indicate successful processing of the event. You can use pattern matching to define multiple
handle/2 functions, one per domain event you want to process.

Here’s an example event handler using the Commanded.Event.Handler macro:

defmodule ExampleEventHandler do
  use Commanded.Event.Handler, name: "ExampleEventHandler"

  def handle(%AnEvent{}, _metadata) do
    # process domain event
    :ok
  end
end

Commanded Ecto projections


The commanded_ecto_projections Elixir library helps you to build read model projections
using the Ecto database library. Commanded Ecto projections provides a macro for defining a
read model projection inside a module that is defined as a Commanded event handler.
The project macro provides a convenient DSL for defining projections. It uses pattern
matching to specify the domain event being projected, and provides access to an Ecto.Multi
data structure for grouping multiple Repo operations. Ecto.Multi is used to insert, update,
and delete data, and these will be executed within a single database transaction.

The Phoenix generator has already included the Ecto package as a dependency and created
an Ecto repo for us, Conduit.Repo in lib/repo.ex . We configured the database connection
details for the repo in the configuring the read model store section of the getting started
chapter.

We’ll add the Commanded Ecto projections package to our dependencies in mix.exs :

defp deps do
[
{:commanded_ecto_projections, "~> 0.6"},
# ...
]
end

Fetch mix dependencies and compile:

$ mix do deps.get, deps.compile

We need to configure the commanded_ecto_projections library with the Ecto repo used by
our application in config/config.exs :

config :commanded_ecto_projections,
repo: Conduit.Repo

Then we generate an Ecto migration to create a projection_versions table:

$ mix ecto.gen.migration create_projection_versions


This table is used to track which events each projector has seen, so that events already
seen can be ignored should they be resent, as the event store guarantees at-least-once
delivery of events. It's possible an event may be handled more than once if the event store
doesn't receive the acknowledgement of successful processing.

We need to modify the generated migration, in priv/repo/migrations , to create the
projection_versions table as detailed in the Commanded projections project README:

defmodule Conduit.Repo.Migrations.CreateProjectionVersions do
  use Ecto.Migration

  def change do
    create table(:projection_versions, primary_key: false) do
      add :projection_name, :text, primary_key: true
      add :last_seen_event_number, :bigint

      timestamps()
    end
  end
end

Creating a user projection


Now we need to create our User schema module, a database migration to create the
accounts_users table, and a corresponding projector module.

When we ran the Phoenix resource generator to create the accounts context, we also asked it
to create a user schema and specified its fields. It generated a database migration for us in
priv/repo/migrations . By default Phoenix schemas use auto-incrementing integer fields as
the table primary key. As we’re using UUIDs to identify our user aggregate we need to amend
the schema and migration to use the uuid data type.

We’ll add two unique indexes to the accounts_users table, on username and email , to
support efficient querying on those fields.

defmodule Conduit.Repo.Migrations.CreateConduit.Accounts.User do
  use Ecto.Migration

  def change do
    create table(:accounts_users, primary_key: false) do
      add :uuid, :uuid, primary_key: true
      add :username, :string
      add :email, :string
      add :hashed_password, :string
      add :bio, :string
      add :image, :string

      timestamps()
    end

    create unique_index(:accounts_users, [:username])
    create unique_index(:accounts_users, [:email])
  end
end

The user projection schema is modified to use Ecto’s binary_id as the primary key data
type:

defmodule Conduit.Accounts.Projections.User do
  use Ecto.Schema

  @primary_key {:uuid, :binary_id, autogenerate: false}

  schema "accounts_users" do
    field :username, :string, unique: true
    field :email, :string, unique: true
    field :hashed_password, :string
    field :bio, :string
    field :image, :string

    timestamps()
  end
end

Then we migrate our database:

$ mix ecto.migrate

Finally, we create a projector module to insert a user projection each time a user is
registered. This uses the project macro, provided by the Commanded Ecto projections
library, to match each user registered domain event and insert a new User projection into
the database.

defmodule Conduit.Accounts.Projectors.User do
  use Commanded.Projections.Ecto, name: "Accounts.Projectors.User"

  alias Conduit.Accounts.Events.UserRegistered
  alias Conduit.Accounts.Projections.User

  project %UserRegistered{} = registered do
    Ecto.Multi.insert(multi, :user, %User{
      uuid: registered.user_uuid,
      username: registered.username,
      email: registered.email,
      hashed_password: registered.hashed_password,
      bio: nil,
      image: nil,
    })
  end
end

The project macro exposes an Ecto.Multi struct, as multi , that we can use to chain
together many database operations. They are executed within a single database transaction
to ensure all changes succeed, or fail, together.

Include projector in supervision tree


To start and register the projector module as an event handler we need to include it within
our application’s supervision tree. We will create a supervisor module per context
responsible for handling its processes. The following supervisor, created in
lib/conduit/accounts/supervisor.ex , will start the user projector:

defmodule Conduit.Accounts.Supervisor do
  use Supervisor

  alias Conduit.Accounts

  def start_link do
    Supervisor.start_link(__MODULE__, [], name: __MODULE__)
  end

  def init(_arg) do
    Supervisor.init([
      Accounts.Projectors.User
    ], strategy: :one_for_one)
  end
end

In lib/application.ex , we add the Conduit.Accounts.Supervisor module to the
top level application supervisor:

defmodule Conduit.Application do
  use Application

  def start(_type, _args) do
    import Supervisor.Spec

    children = [
      # Start the Ecto repository
      supervisor(Conduit.Repo, []),

      # Start the endpoint when the application starts
      supervisor(ConduitWeb.Endpoint, []),

      # Accounts supervisor
      supervisor(Conduit.Accounts.Supervisor, []),
    ]

    opts = [strategy: :one_for_one, name: Conduit.Supervisor]

    Supervisor.start_link(children, opts)
  end
end

Reset storage between test execution


To ensure test independence we must clear the event store and read store test databases
between each test execution. We already have a Conduit.DataCase module, generated by
Phoenix, to use for integration tests accessing the database. This can be extended to clear
both databases; so we add reset_eventstore/0 and reset_readstore/0 functions to do
just that.

For the event store, we take advantage of the EventStore.Storage.Initializer.reset!/1
function to reset the database structure, removing any events and streams, and clearing all
subscriptions.

The read model database is manually reset by executing a truncate table SQL statement
specifying each projection table to clear. We must remember to add any additional tables to
this statement as we build our application to also reset them during test execution.

defmodule Conduit.DataCase do
  use ExUnit.CaseTemplate

  using do
    quote do
      alias Conduit.Repo

      import Ecto
      import Ecto.Changeset
      import Ecto.Query
      import Conduit.Factory
      import Conduit.DataCase
    end
  end

  setup _tags do
    Application.stop(:conduit)
    Application.stop(:commanded)
    Application.stop(:eventstore)

    reset_eventstore()
    reset_readstore()

    Application.ensure_all_started(:conduit)

    :ok
  end

  defp reset_eventstore do
    {:ok, conn} =
      EventStore.configuration()
      |> EventStore.Config.parse()
      |> Postgrex.start_link()

    EventStore.Storage.Initializer.reset!(conn)
  end

  defp reset_readstore do
    readstore_config = Application.get_env(:conduit, Conduit.Repo)

    {:ok, conn} = Postgrex.start_link(readstore_config)

    Postgrex.query!(conn, truncate_readstore_tables(), [])
  end

  defp truncate_readstore_tables do
    """
    TRUNCATE TABLE
      accounts_users,
      projection_versions
    RESTART IDENTITY;
    """
  end
end

 User projection schema, migration, and projector

Returning to the accounts integration test


We have now done almost enough to make our register user test in the accounts context
pass. The remaining change is to return the User projection from the register_user/1
function.

Dealing with eventual consistency


One caveat we need to be aware of is that all event handlers in Commanded run
asynchronously, and independently of one another. This prevents a slow event
handler from blocking any others. They are also run in parallel as separate
processes, so an error in one handler does not impact another or your entire
application.

We cannot assume that once the command is successfully dispatched, the read
model has been updated. It’s possible that the projection we need to read data
from is still being updated, is lagging behind, or is being rebuilt from scratch.

There are a number of strategies to overcome this limitation of eventual consistency:

1. Wait, and poll the read model until the data we expect to be present appears.
2. Use pub/sub from the read model to notify interested subscribers after each
update.
3. Fake a response by generating it from the data we already have.
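As an illustration of the first strategy, a small polling helper might look like the following. This is a sketch under my own naming; it is not part of Conduit:

```elixir
defmodule Wait do
  @moduledoc false

  # Repeatedly invoke a zero-arity `fetch` function until it returns a
  # non-nil result, sleeping between attempts, up to a retry limit.
  def until(fetch, retries \\ 5, interval_ms \\ 100)

  def until(_fetch, 0, _interval_ms), do: {:error, :timeout}

  def until(fetch, retries, interval_ms) do
    case fetch.() do
      nil ->
        Process.sleep(interval_ms)
        until(fetch, retries - 1, interval_ms)

      result ->
        {:ok, result}
    end
  end
end
```

After dispatching a command, such a helper could be called as `Wait.until(fn -> Repo.get(User, uuid) end)`, trading a bounded delay for a guaranteed-fresh read.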

Commanded provides an additional option for us: dispatch a command with strong
consistency. This uses an opt-in consistency model where you define the
required consistency per event handler and during command dispatch.

The available consistency options are :eventual (default) and :strong .


Dispatching a command using consistency: :strong will block until all events
created by the command have been processed by the handlers that opt-in to use
strong consistency. This guarantees that when you receive the :ok response from
dispatch your read model will be up to date and can be safely queried.

I’ve written an article discussing these strategies in dealing with eventual
consistency in a CQRS/ES application using Conduit as an example.

In this scenario, we could attempt to return a %User{} struct populated from the
parameters passed to the register_user/1 function. The concern with this approach is the
additional duplicate code we must write, and the potential for it getting out of sync during
future changes. Instead we’ll take advantage of Commanded’s support for strongly consistent
command dispatch.

The Commanded consistency model is opt-in, with the default consistency being :eventual .
We need to define our user projector as strongly consistent:

defmodule Conduit.Accounts.Projectors.User do
  use Commanded.Projections.Ecto,
    name: "Accounts.Projectors.User",
    consistency: :strong

  # .. projection code omitted
end
Returning to the accounts context, we will update the register_user/1 function to dispatch
the command using consistency: :strong :

defmodule Conduit.Accounts do
  @moduledoc """
  The boundary for the Accounts system.
  """

  alias Conduit.Accounts.Commands.RegisterUser
  alias Conduit.Accounts.Projections.User
  alias Conduit.Repo
  alias Conduit.Router

  @doc """
  Register a new user.
  """
  def register_user(attrs \\ %{}) do
    uuid = UUID.uuid4()

    register_user =
      attrs
      |> assign(:user_uuid, uuid)
      |> RegisterUser.new()

    with :ok <- Router.dispatch(register_user, consistency: :strong) do
      get(User, uuid)
    else
      reply -> reply
    end
  end

  defp get(schema, uuid) do
    case Repo.get(schema, uuid) do
      nil -> {:error, :not_found}
      projection -> {:ok, projection}
    end
  end

  defp assign(attrs, key, value), do: Map.put(attrs, key, value)
end

Now when we receive an :ok reply from command dispatch we can be assured that the
user projection has been updated with our newly registered user, allowing us to query the
projection and return the populated %User{} . Let’s run the accounts test to check our
changes:

$ mix test test/conduit/accounts/accounts_test.exs


Success, we have a passing test.

We still have one other failing test, but that’s useful as it directs us towards what needs to be
worked on next: adding command validation.

Register user, returning registered user projection


Command validation
We want to build our application to ensure that most commands are successfully handled by
an aggregate. The first way to achieve this is to limit which commands can be dispatched by
only allowing acceptable commands to be shown to the end user in the UI. The second level
of protection before a command reaches an aggregate is command validation; all commands
should be validated before being passed to the aggregate.

There are three different levels of command validation that apply to an application:

1. Command property validation: mandatory fields, data format, min/max ranges.
2. Domain validation rules: prevent duplicate usernames, application state based validation
logic.
3. Business invariants: protection against invalid state changes.

Command property validation


Superficial command field validation is the simplest to apply. You add rules to each
command property specifying its data type, whether it’s mandatory or optional, and apply
basic range checking (e.g. date must be in the future). You can even apply cross field
validation (e.g. start date must be before end date). These rules apply to the command itself,
requiring no external information.

Property validation helps guard against common errors, such as the user forgetting to fill out
a mandatory field, by applying the rules before accepting the command and rejecting upon
validation failure. These rules can be applied at the user interface to help assist the user.

Domain validation rules


In our user registration feature we have a rule that usernames must be unique. To enforce
this rule we must check that a given username does not yet exist when executing the register
user command. This information will need to be read from a read model. We cannot enforce
this rule within our user aggregate because each aggregate instance runs completely
independently of any other. It’s not possible to have a command that affects, or uses,
multiple aggregates since an aggregate is itself the transaction boundary.

You could decide that this invariant was important enough to warrant protection by using an
aggregate whose purpose is to record and assign unique usernames. Its job would be to
enforce uniqueness by acting as a gatekeeper to the user registration. A command, such as
reserve username, could be used to claim the username. The aggregate would publish a
domain event with the newly assigned username on success, allowing an event handler to
then register the user with the guaranteed unique username.

In Elixir a GenServer process can be used to enforce uniqueness, as concurrent
requests to a process are handled serially. The process would allow a username to be claimed
or reserved during command dispatch, preventing any later request from using the same
username. The only caveat to this approach is that the GenServer ’s in-memory state must be
persisted to storage so that it can be restarted with the list of already taken usernames.
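A sketch of such a process might look like the following. The module and function names are my own, and as noted above the in-memory MapSet would still need to be persisted so the process can be restarted with the already taken usernames:

```elixir
defmodule UniqueUsername do
  @moduledoc false
  use GenServer

  def start_link(_opts \\ []) do
    GenServer.start_link(__MODULE__, MapSet.new(), name: __MODULE__)
  end

  # attempt to claim a username; calls are serialized through the
  # process mailbox, so two concurrent claims cannot both succeed
  def claim(username), do: GenServer.call(__MODULE__, {:claim, username})

  @impl GenServer
  def init(usernames), do: {:ok, usernames}

  @impl GenServer
  def handle_call({:claim, username}, _from, usernames) do
    if MapSet.member?(usernames, username) do
      {:reply, {:error, :username_taken}, usernames}
    else
      {:reply, :ok, MapSet.put(usernames, username)}
    end
  end
end
```

Command dispatch could call `UniqueUsername.claim/1` before executing the register user command, rejecting the command when the reply is `{:error, :username_taken}`.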

Business invariants
An aggregate root must protect itself against commands that would cause an invariant to be
broken. This includes attempting to execute a command that is invalid for the aggregate’s
current state. An example would be attempting to rename an article that has been deleted. In
this scenario the aggregate would return an {:error, reason} tagged tuple from the
command execute/2 function.

For certain business operations you might decide to return a domain event indicating an
error, rather than preventing the command execution. An example would be attempting to
withdraw money from a bank account where the amount requested is larger than the
account balance. Retail banks earn interest or fees when an account goes overdrawn, so
rather than reject the withdraw money command, a bank account aggregate might instead
allow the withdrawal and also return an account overdrawn domain event.
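A simplified bank account aggregate following this approach might emit both events from a single command. This self-contained sketch uses illustrative names of my own; it is not part of Conduit:

```elixir
defmodule Sketch.BankAccount do
  defstruct balance: 0

  defmodule WithdrawMoney do
    defstruct amount: 0
  end

  defmodule MoneyWithdrawn do
    defstruct [:amount, :balance]
  end

  defmodule AccountOverdrawn do
    defstruct [:balance]
  end

  # withdrawing more than the balance is allowed, but the overdrawn
  # state is recorded as an additional domain event
  def execute(%__MODULE__{balance: balance}, %WithdrawMoney{amount: amount})
      when amount > balance do
    [
      %MoneyWithdrawn{amount: amount, balance: balance - amount},
      %AccountOverdrawn{balance: balance - amount}
    ]
  end

  def execute(%__MODULE__{balance: balance}, %WithdrawMoney{amount: amount}) do
    %MoneyWithdrawn{amount: amount, balance: balance - amount}
  end
end
```

Returning a list of events from `execute/2`, rather than an error tuple, records the overdraft as a business fact instead of rejecting the command.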

Applying command property validation


For command field validation we will be using Vex.

An extensible data validation library for Elixir.

Can be used to check different data types for compliance with criteria.

Ships with built-in validators to check for attribute presence, absence, inclusion,
exclusion, format, length, acceptance, and by a custom function. You can easily define
new validators and override existing ones.

– Vex

We’ll add the vex package to our dependencies in mix.exs :

defp deps do
[
{:vex, "~> 0.6"},
# ...
]
end
Fetch mix dependencies and compile:

$ mix do deps.get, deps.compile

Then we add validation rules for each of the fields in the command:

defmodule Conduit.Accounts.Commands.RegisterUser do
  defstruct [
    :user_uuid,
    :username,
    :email,
    :hashed_password
  ]

  use ExConstructor
  use Vex.Struct

  validates :user_uuid, uuid: true
  validates :username, presence: true, string: true
  validates :email, presence: true, string: true
  validates :hashed_password, presence: true, string: true
end
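The use ExConstructor line generates a new/1 constructor that builds the command struct from maps keyed by strings or atoms (the library also accepts camelCase keys), which matches the shape of untrusted web params. A simplified stand-in, shown here purely to illustrate the behaviour of the generated function, might look like:

```elixir
defmodule RegisterUserExample do
  defstruct [:user_uuid, :username, :email, :hashed_password]

  # Simplified stand-in for ExConstructor's generated `new/1`:
  # accepts maps keyed by strings or atoms, ignores unknown keys.
  def new(attrs) do
    fields =
      for key <- [:user_uuid, :username, :email, :hashed_password], into: %{} do
        {key, attrs[to_string(key)] || attrs[key]}
      end

    struct(__MODULE__, fields)
  end
end
```

Calling `RegisterUserExample.new(%{"username" => "jake"})` and `RegisterUserExample.new(%{username: "jake"})` both return the same struct, which is why the context can pass controller params straight to the command constructor.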

For the uuid field we will use a custom validator that attempts to parse the given string as a
UUID:

defmodule Conduit.Validation.Validators.Uuid do
  use Vex.Validator

  def validate(value, _options) do
    Vex.Validators.By.validate(value, [function: &valid_uuid?/1, allow_blank: false])
  end

  defp valid_uuid?(uuid) do
    case UUID.info(uuid) do
      {:ok, _} -> true
      {:error, _} -> false
    end
  end
end

To validate string fields, such as username and email, we will use another custom validator:
defmodule Conduit.Validation.Validators.String do
  use Vex.Validator

  def validate(nil, _options), do: :ok
  def validate("", _options), do: :ok
  def validate(value, _options) do
    Vex.Validators.By.validate(value, [function: &String.valid?/1])
  end
end

Then register these validators in config/config.exs :

config :vex,
  sources: [
    [string: Conduit.Validation.Validators.String],
    [uuid: Conduit.Validation.Validators.Uuid],
    Vex.Validators
  ]

Once registered, we can verify a validator is configured using iex -S mix :

iex(1)> Vex.validator(:uuid)
Conduit.Validation.Validators.Uuid

With the validation rules in place, we can validate a register user command as follows:

iex(1)> alias Conduit.Accounts.Commands.RegisterUser
Conduit.Accounts.Commands.RegisterUser
iex(2)> register_user = %RegisterUser{}
%Conduit.Accounts.Commands.RegisterUser{email: nil, hashed_password: nil,
 username: nil, user_uuid: nil}
iex(3)> Vex.valid?(register_user)
false
iex(4)> Vex.results(register_user)
[{:error, :email, :presence, "must be present"}, {:ok, :email, :string},
 {:error, :hashed_password, :presence, "must be present"},
 {:ok, :hashed_password, :string},
 {:error, :username, :presence, "must be present"}, {:ok, :username, :string},
 {:error, :user_uuid, :uuid, "must be valid"}]

Validating dispatched commands


We’ve defined our command validation rules; now we need to apply them during command
dispatch.

Commanded provides middleware as an extension point for including cross-cutting
concerns in command dispatch. Middleware can be used to add command validation,
authorization, logging, and any other behaviour you want run for every command
the router dispatches. You define your own middleware modules and register them in your
command router. They are executed before, and after success or failure of, every dispatched
command.

We will write a middleware module that implements the Commanded.Middleware behaviour.


It uses the Vex.valid?/1 and Vex.errors/1 functions to validate commands before
dispatch:

defmodule Conduit.Support.Middleware.Validate do
  @behaviour Commanded.Middleware

  alias Commanded.Middleware.Pipeline
  import Pipeline

  def before_dispatch(%Pipeline{command: command} = pipeline) do
    case Vex.valid?(command) do
      true -> pipeline
      false -> failed_validation(pipeline)
    end
  end

  def after_dispatch(pipeline), do: pipeline
  def after_failure(pipeline), do: pipeline

  defp failed_validation(%Pipeline{command: command} = pipeline) do
    errors = command |> Vex.errors() |> merge_errors()

    pipeline
    |> respond({:error, :validation_failure, errors})
    |> halt()
  end

  defp merge_errors(errors) do
    errors
    |> Enum.group_by(
      fn {_error, field, _type, _message} -> field end,
      fn {_error, _field, _type, message} -> message end)
    |> Map.new()
  end
end

This middleware will return an {:error, :validation_failure, errors} tagged tuple
should a command fail validation. The errors value contains the collection of validation
failures, grouped per field, which can be returned and shown to the client.
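To make the resulting error shape concrete, here is roughly what the grouping performed by merge_errors/1 does to Vex's output, sketched standalone with sample data:

```elixir
# Vex.errors/1 returns tuples of {:error, field, validator, message};
# grouping by field yields the per-field error map returned to clients.
errors = [
  {:error, :username, :presence, "can't be empty"},
  {:error, :email, :presence, "can't be empty"},
  {:error, :email, :format, "is invalid"}
]

merged =
  errors
  |> Enum.group_by(
    fn {_error, field, _validator, _message} -> field end,
    fn {_error, _field, _validator, message} -> message end
  )
  |> Map.new()

# merged == %{email: ["can't be empty", "is invalid"], username: ["can't be empty"]}
```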

The validation middleware module is registered in the router:


defmodule Conduit.Router do
  use Commanded.Commands.Router

  alias Conduit.Accounts.Aggregates.User
  alias Conduit.Accounts.Commands.RegisterUser
  alias Conduit.Support.Middleware.Validate

  middleware Validate

  dispatch [RegisterUser], to: User, identity: :user_uuid
end

Testing user registration validation


Returning to our failing accounts test:

defmodule Conduit.AccountsTest do
  use Conduit.DataCase

  alias Conduit.Accounts
  alias Conduit.Accounts.User

  describe "register user" do
    @tag :integration
    test "should fail with invalid data and return errors" do
      assert {:error, :validation_failure, errors} =
        Accounts.register_user(build(:user, username: ""))

      assert errors == %{username: ["can't be empty"]}
    end
  end
end

We can run the test again to check whether it passes:

$ mix test test/conduit/accounts/accounts_test.exs


Excluding tags: [:pending]
.
  1) test register user should fail with invalid data and return errors (Conduit.AccountsTest)
     test/conduit/accounts/accounts_test.exs:21
     Assertion with == failed
     code:  errors == %{username: ["can't be empty"]}
     left:  %{username: ["must be present"]}
     right: %{username: ["can't be empty"]}
     stacktrace:
       test/conduit/accounts/accounts_test.exs:24: (test)

Finished in 0.4 seconds
2 tests, 1 failure

It’s still failing, but only because the validation error message we’re expecting, “can’t be
empty”, differs from the default validation error message provided by Vex, “must be present”.

We can provide our own message when registering the validation rules in the command:

defmodule Conduit.Accounts.Commands.RegisterUser do
  defstruct [
    :user_uuid,
    :username,
    :email,
    :hashed_password
  ]

  use ExConstructor
  use Vex.Struct

  validates :user_uuid, uuid: true
  validates :username, presence: [message: "can't be empty"], string: true
  validates :email, presence: [message: "can't be empty"], string: true
  validates :hashed_password, presence: [message: "can't be empty"], string: true
end

Run the test again to see it succeed:

$ mix test test/conduit/accounts/accounts_test.exs


Compiling 3 files (.ex)
Excluding tags: [:pending]
..
Finished in 0.4 seconds
2 tests, 0 failures

We now have complete end-to-end user registration including command dispatch and
validation, aggregate construction, domain event publishing, and read model projection.
That covers the entire flow of a CQRS application from an initial command dispatch resulting
in an updated read model available to query.

Validate dispatched commands



Enforce unique usernames
We’ve implemented basic command field validation using Vex. Now we need to move on to
the second level of validation: domain validation rules. Enforcing unique usernames when
registering a new user is the first rule we’ll implement.

Typically domain validation will use a read model to query for application state. In our case
we already have a user projection that contains the username. We even specified a unique
index on the username column in the database migration:

create unique_index(:users, [:username])

The index will ensure that querying on this column is performant.

Let’s write an integration test to assert that registering the same username will fail with a
useful error message:

@tag :integration
test "should fail when username already taken and return error" do
  assert {:ok, %User{}} = Accounts.register_user(build(:user))
  assert {:error, :validation_failure, errors} = Accounts.register_user(build(:user))

  assert errors == %{username: ["has already been taken"]}
end

Running this test will fail, so we need to implement the unique username validation rule.
First we build a read model query to lookup the user projection by username:

defmodule Conduit.Accounts.Queries.UserByUsername do
  import Ecto.Query

  alias Conduit.Accounts.Projections.User

  def new(username) do
    from u in User,
    where: u.username == ^username
  end
end

This can be executed by passing the query to our Ecto repository:


UserByUsername.new("jake") |> Conduit.Repo.one()

We use this query to expose a new public function in the accounts context,
user_by_username/1:
defmodule Conduit.Accounts do
  alias Conduit.Accounts.Queries.UserByUsername
  alias Conduit.Repo

  @doc """
  Get an existing user by their username, or return `nil` if not registered
  """
  def user_by_username(username) do
    username
    |> String.downcase()
    |> UserByUsername.new()
    |> Repo.one()
  end
end

Then we can check if a username already exists in the new unique username validator,
added to the accounts context in lib/conduit/accounts/validators :

defmodule Conduit.Accounts.Validators.UniqueUsername do
  use Vex.Validator

  alias Conduit.Accounts

  def validate(value, _options) do
    Vex.Validators.By.validate(value, [
      function: fn value -> !username_registered?(value) end,
      message: "has already been taken"
    ])
  end

  defp username_registered?(username) do
    case Accounts.user_by_username(username) do
      nil -> false
      _ -> true
    end
  end
end

Add the accounts validators to the vex config in config/config.exs :

config :vex,
  sources: [
    Conduit.Accounts.Validators,
    Conduit.Validation.Validators,
    Vex.Validators
  ]
Then we can register the new validator against the username property:

defmodule Conduit.Accounts.Commands.RegisterUser do
  # ...
  validates :username,
    presence: [message: "can't be empty"],
    string: true,
    unique_username: true
end

Run the accounts integration test and we now have three passing tests:

$ mix test test/conduit/accounts/accounts_test.exs


Excluding tags: [:pending]
...
Finished in 1.3 seconds
3 tests, 0 failures

Ensure unique username


Concurrent registration
We’ve included command validation to ensure unique usernames, and have tested this when
registering one user after another. However, there’s a problem: attempting to register two
users with the same username concurrently. The unique username validation will pass for
both commands, allowing both users with an identical username to be created. This issue
exists during the small period of time between registering the user and the read model being
updated.

An integration test demonstrates the problem:

defmodule Conduit.AccountsTest do
  use Conduit.DataCase

  alias Conduit.Accounts
  alias Conduit.Accounts.User

  describe "register user" do
    @tag :integration
    test "should fail when registering identical usernames concurrently and return error" do
      1..2
      |> Enum.map(fn _ -> Task.async(fn -> Accounts.register_user(build(:user)) end) end)
      |> Enum.map(&Task.await/1)
    end
  end
end

Since the issue only exists during concurrent command handling, we can use another router
middleware module to enforce uniqueness. In the before_dispatch/1 callback for the
register user command we attempt to claim the username. Should that fail, it indicates
that another registration for the same username is currently being processed, and we
return a validation failure.

The middleware will use a new Unique module that provides a claim/2 function. This
attempts to reserve a unique value for a given context. It returns :ok on success, or
{:error, :already_taken} on failure. To make the middleware reusable for other fields
we define an Elixir protocol ( UniqueFields ) allowing commands to specify which fields are
unique.

defmodule Conduit.Support.Middleware.Uniqueness do
  @behaviour Commanded.Middleware

  defprotocol UniqueFields do
    @fallback_to_any true
    @doc "Returns unique fields for the command"
    def unique(command)
  end

  defimpl UniqueFields, for: Any do
    def unique(_command), do: []
  end

  alias Conduit.Support.Unique
  alias Commanded.Middleware.Pipeline
  import Pipeline

  def before_dispatch(%Pipeline{command: command} = pipeline) do
    case ensure_uniqueness(command) do
      :ok ->
        pipeline

      {:error, errors} ->
        pipeline
        |> respond({:error, :validation_failure, errors})
        |> halt()
    end
  end

  def after_dispatch(pipeline), do: pipeline
  def after_failure(pipeline), do: pipeline

  defp ensure_uniqueness(command) do
    command
    |> UniqueFields.unique()
    |> Enum.reduce_while(:ok, fn {unique_field, error_message}, _ ->
      value = Map.get(command, unique_field)

      case Unique.claim(unique_field, value) do
        :ok ->
          {:cont, :ok}

        {:error, :already_taken} ->
          {:halt, {:error, Keyword.new([{unique_field, [error_message]}])}}
      end
    end)
  end
end

For the RegisterUser command we specify that the :username field must be unique by
implementing the UniqueFields protocol:

defimpl Conduit.Support.Middleware.Uniqueness.UniqueFields,
  for: Conduit.Accounts.Commands.RegisterUser do
  def unique(_command), do: [
    {:username, "has already been taken"}
  ]
end

The new Uniqueness middleware is registered after command validation so that it will only
be applied to valid commands:

defmodule Conduit.Router do
  use Commanded.Commands.Router

  alias Conduit.Accounts.Aggregates.User
  alias Conduit.Accounts.Commands.RegisterUser
  alias Conduit.Support.Middleware.{Uniqueness, Validate}

  middleware Validate
  middleware Uniqueness

  dispatch [RegisterUser], to: User, identity: :user_uuid
end

We’ll use a GenServer to track assigned unique values. Its state is a mapping between a
context, such as :username , and the already claimed values. Attempting to claim an
assigned value for a context returns an {:error, :already_taken} tagged error.

defmodule Conduit.Support.Unique do
  use GenServer

  def start_link do
    GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  end

  def claim(context, value) do
    GenServer.call(__MODULE__, {:claim, context, value})
  end

  def init(state), do: {:ok, state}

  def handle_call({:claim, context, value}, _from, assignments) do
    {reply, state} =
      case Map.get(assignments, context) do
        nil ->
          {:ok, Map.put(assignments, context, MapSet.new([value]))}

        values ->
          case MapSet.member?(values, value) do
            true -> {{:error, :already_taken}, assignments}
            false -> {:ok, Map.put(assignments, context, MapSet.put(values, value))}
          end
      end

    {:reply, reply, state}
  end
end

The Conduit.Support.Unique module is included in the application supervision tree, in


lib/conduit/application.ex :

defmodule Conduit.Application do
  use Application

  def start(_type, _args) do
    import Supervisor.Spec

    children = [
      # ...

      # Enforce unique constraints
      worker(Conduit.Support.Unique, [])
    ]

    opts = [strategy: :one_for_one, name: Conduit.Supervisor]

    Supervisor.start_link(children, opts)
  end
end
You might have noticed that by using a GenServer the list of assigned unique
values is transient and only stored in memory. When the process dies, whether due
to an error or a server restart, the list will be gone.

In this instance that's perfectly acceptable, as this behaviour is an additional
validation check only needed for concurrent command dispatches. The
UniqueUsername validator, UserByUsername query, and User read model
projection provide command validation that is persistent between restarts.

We now have unique usernames enforced as part of the register user command dispatch
pipeline. This should prevent duplicate usernames from being registered at exactly the same
time. We can verify this by running the integration tests again:

$ mix test --only integration


Including tags: [:integration]
Excluding tags: [:test, :pending]
....
Finished in 1.7 seconds
11 tests, 0 failures, 7 skipped

 Concurrent user registration

Additional username validation


There are two further validation rules to implement on usernames during registration:

1. Must be lowercase.
2. Must only contain alphanumeric characters (a-z, 0-9).

We can use a regular expression to enforce both of these rules.

First we add two integration tests to cover these requirements:

@tag :integration
test "should fail when username format is invalid and return error" do
  assert {:error, :validation_failure, errors} =
    Accounts.register_user(build(:user, username: "j@ke"))

  assert errors == %{username: ["is invalid"]}
end

@tag :integration
test "should convert username to lowercase" do
  assert {:ok, %User{} = user} = Accounts.register_user(build(:user, username: "JAKE"))

  assert user.username == "jake"
end

Vex supports regex validation using the format validator. We add this to the username
validation rules in the register user command:

defmodule Conduit.Accounts.Commands.RegisterUser do
  # ...
  validates :username,
    presence: [message: "can't be empty"],
    format: [with: ~r/^[a-z0-9]+$/, allow_nil: true, allow_blank: true, message: "is invalid"],
    string: true,
    unique_username: true
end

The allow_nil and allow_blank options are included as we already have validation to
ensure the username is present. We don’t want duplicate error messages when it is not
provided: “can’t be empty” and “is invalid”.

We need to convert the username to lowercase during registration in the Accounts context
register_user/1 function. Let's take the opportunity to make a small refactoring by
moving the existing assign_uuid/2 function into the RegisterUser module. At the same
time we'll add a new downcase_username/1 function that lowercases the username. These
functions are chained together using the pipe operator after constructing the
RegisterUser command struct from the user-supplied attributes.

defmodule Conduit.Accounts do
  @doc """
  Register a new user.
  """
  def register_user(attrs \\ %{}) do
    uuid = UUID.uuid4()

    register_user =
      attrs
      |> RegisterUser.new()
      |> RegisterUser.assign_uuid(uuid)
      |> RegisterUser.downcase_username()

    with :ok <- Router.dispatch(register_user, consistency: :strong) do
      get(User, uuid)
    else
      reply -> reply
    end
  end
end

The new functions are added to the RegisterUser command:

defmodule Conduit.Accounts.Commands.RegisterUser do
  @doc """
  Assign a unique identity for the user
  """
  def assign_uuid(%RegisterUser{} = register_user, uuid) do
    %RegisterUser{register_user | user_uuid: uuid}
  end

  @doc """
  Convert username to lowercase characters
  """
  def downcase_username(%RegisterUser{username: username} = register_user) do
    %RegisterUser{register_user | username: String.downcase(username)}
  end
end

Running the integration test suite confirms our changes are good.

Additional username validation


Validating a user’s email address


We can now apply the same strategy to email address validation. The rules we need to
enforce are that an email address:

1. Must be unique.
2. Must be lowercase.
3. Must be in the desired format: contain an @ character.
The implementation will follow a similar approach to how we validated usernames.

First, we write failing tests to cover the scenarios above:

@tag :integration
test "should fail when email address already taken and return error" do
  assert {:ok, %User{}} = Accounts.register_user(build(:user))
  assert {:error, :validation_failure, errors} =
    Accounts.register_user(build(:user, username: "jake2"))

  assert errors == %{email: ["has already been taken"]}
end

@tag :integration
test "should fail when registering identical email addresses concurrently and return error" do
  1..2
  |> Enum.map(fn x -> Task.async(fn -> Accounts.register_user(build(:user, username: "user#{x}")) end) end)
  |> Enum.map(&Task.await/1)
end

@tag :integration
test "should fail when email address format is invalid and return error" do
  assert {:error, :validation_failure, errors} =
    Accounts.register_user(build(:user, email: "invalidemail"))

  assert errors == %{email: ["is invalid"]}
end

@tag :integration
test "should convert email address to lowercase" do
  assert {:ok, %User{} = user} = Accounts.register_user(build(:user, email: "JAKE@JAKE.JAKE"))

  assert user.email == "jake@jake.jake"
end

Second, extend email validation in the command:

defmodule Conduit.Accounts.Commands.RegisterUser do
  # ...
  validates :email,
    presence: [message: "can't be empty"],
    format: [with: ~r/\S+@\S+\.\S+/, allow_nil: true, allow_blank: true, message: "is invalid"],
    string: true,
    unique_email: true
end
Third, we create the new unique email validator:

defmodule Conduit.Accounts.Validators.UniqueEmail do
  use Vex.Validator

  alias Conduit.Accounts

  def validate(value, _options) do
    Vex.Validators.By.validate(value, [
      function: fn value -> !email_registered?(value) end,
      message: "has already been taken"
    ])
  end

  defp email_registered?(email) do
    case Accounts.user_by_email(email) do
      nil -> false
      _ -> true
    end
  end
end

This also requires a new public user_by_email/1 function in the accounts context to
retrieve a user by their email address:

defmodule Conduit.Accounts do
  @doc """
  Get an existing user by their email address, or return `nil` if not registered
  """
  def user_by_email(email) when is_binary(email) do
    email
    |> String.downcase()
    |> UserByEmail.new()
    |> Repo.one()
  end
end

The UserByEmail query is a module that constructs a standard Ecto query:

defmodule Conduit.Accounts.Queries.UserByEmail do
  import Ecto.Query

  alias Conduit.Accounts.Projections.User

  def new(email) do
    from u in User,
    where: u.email == ^email
  end
end

Fourth, we extend the UniqueFields protocol implementation for the register user
command to include email address:

defimpl Conduit.Support.Middleware.Uniqueness.UniqueFields,
  for: Conduit.Accounts.Commands.RegisterUser do
  def unique(_command), do: [
    {:email, "has already been taken"},
    {:username, "has already been taken"}
  ]
end

Lastly, we include the RegisterUser.downcase_email/1 function in the register user pipeline:

defmodule Conduit.Accounts do
  @doc """
  Register a new user.
  """
  def register_user(attrs \\ %{}) do
    uuid = UUID.uuid4()

    register_user =
      attrs
      |> RegisterUser.new()
      |> RegisterUser.assign_uuid(uuid)
      |> RegisterUser.downcase_username()
      |> RegisterUser.downcase_email()

    with :ok <- Router.dispatch(register_user, consistency: :strong) do
      get(User, uuid)
    else
      reply -> reply
    end
  end
end

That completes the email address validation: we run the integration test suite again to
confirm this with passing tests.
Email address validation

Hashing the user’s password


We don’t want to store a user’s password anywhere in our application. Instead we’ll use a
one-way hashing function and store the password hash. To authenticate a user during login
we hash the password they provide, using the same algorithm, and compare it with the
stored password hash: not the actual password.

For Conduit we’ll use the bcrypt password hashing function, as described in how to safely
store a password using bcrypt. The Comeonin library provides an implementation of the
bcrypt hashing function in Elixir.

Password hashing (bcrypt, pbkdf2_sha512 and one-time passwords) library for Elixir.

This library is intended to make it very straightforward for developers to check users’
passwords in as secure a manner as possible.

– Comeonin

Add comeonin and bcrypt_elixir to dependencies in mix.exs :

defp deps do
  [
    # ...
    {:bcrypt_elixir, "~> 1.0"},
    {:comeonin, "~> 4.0"}
  ]
end

Fetch mix dependencies and compile:

$ mix do deps.get, deps.compile

For our test environment only we will reduce the number of bcrypt rounds so it doesn’t slow
down our test suite. In config/test.exs we configure comeonin as follows:

config :comeonin, :bcrypt_log_rounds, 4


We’ll create a Conduit.Auth module to wrap the Comeonin library’s bcrypt hashing
functions:

defmodule Conduit.Auth do
  @moduledoc """
  Authentication using the bcrypt password hashing function.
  """

  alias Comeonin.Bcrypt

  def hash_password(password), do: Bcrypt.hashpwsalt(password)
  def validate_password(password, hash), do: Bcrypt.checkpw(password, hash)
end

Then create an integration test to verify the password is being hashed and stored in the user
read model. For the test assertion we use the Auth.validate_password/2 function, shown
above, which hashes the provided password, jakejake , and compares with the already
hashed password saved for the user, such as
$2b$04$W7A/lWysNVUqeYg8vjKCXeBniHoks4jmRziKDmACO.fvqo3wdqsea . Remember that we
never store the user’s password, only a one-way hash.

@tag :integration
test "should hash password" do
  assert {:ok, %User{} = user} = Accounts.register_user(build(:user))
  assert Auth.validate_password("jakejake", user.hashed_password)
end

Next we include a password field in the register user command struct, to contain the user
provided password in plain text. We add a hash_password/1 function that hashes the
password, stores the hash value as hashed_password , and clears the original plain text
password. This prevents the user’s password from being exposed by any command auditing.

defmodule Conduit.Accounts.Commands.RegisterUser do
  defstruct [
    user_uuid: "",
    username: "",
    email: "",
    password: "",
    hashed_password: ""
  ]

  @doc """
  Hash the password, clear the original password
  """
  def hash_password(%RegisterUser{password: password} = register_user) do
    %RegisterUser{register_user |
      password: nil,
      hashed_password: Auth.hash_password(password)
    }
  end
end

The final change is to include this function in the register user command dispatch chain:

defmodule Conduit.Accounts do
  @doc """
  Register a new user.
  """
  def register_user(attrs \\ %{}) do
    uuid = UUID.uuid4()

    register_user =
      attrs
      |> RegisterUser.new()
      |> RegisterUser.assign_uuid(uuid)
      |> RegisterUser.downcase_username()
      |> RegisterUser.downcase_email()
      |> RegisterUser.hash_password()

    with :ok <- Router.dispatch(register_user, consistency: :strong) do
      get(User, uuid)
    else
      reply -> reply
    end
  end
end

We’ve now successfully hashed the user’s password during registration, helping to protect
our users should our deployed environment be compromised and the database accessed. By
default the Comeonin library generates a different 16-character salt for each hashed
password, providing another layer of protection against dictionary and rainbow table
attacks on the hashed passwords.

Hash user’s password


Completing user registration


With user registration done, at least from the accounts context, we return to our acceptance
criteria defined in the UserControllerTest integration test. To specify the initial
requirements and direct our development efforts we started out by writing end-to-end tests
to ensure that the /api/users registration endpoint adheres to the requirements of the
JSON API.

On successful registration the following response should be returned:

{
"user": {
"email": "jake@jake.jake",
"token": "jwt.token.here",
"username": "jake",
"bio": null,
"image": null
}
}

For now we will skip the authentication token; it will be addressed in the next chapter.

The integration test for successful user registration asserts against the JSON returned from a
POST request to /api/users in the UserControllerTest module:

@tag :web
test "should create and return user when data is valid", %{conn: conn} do
  conn = post conn, user_path(conn, :create), user: build(:user)
  json = json_response(conn, 201)["user"]

  assert json == %{
    "bio" => nil,
    "email" => "jake@jake.jake",
    "image" => nil,
    "username" => "jake"
  }
end

Running the test still results in a failure, so there’s more work for us to do. We need to modify
the user view and select a subset of the fields from our user projection to be returned as
JSON data:

defmodule ConduitWeb.UserView do
  use ConduitWeb, :view
  alias ConduitWeb.UserView

  def render("index.json", %{users: users}) do
    %{users: render_many(users, UserView, "user.json")}
  end

  def render("show.json", %{user: user}) do
    %{user: render_one(user, UserView, "user.json")}
  end

  def render("user.json", %{user: user}) do
    %{
      username: user.username,
      email: user.email,
      bio: user.bio,
      image: user.image
    }
  end
end

As per the API spec we only return the username , email , bio , and image fields.

Next we need to handle the case where validation errors are returned during command
dispatch. The request should fail with a 422 HTTP status code and the response body would
be in the following format:

{
"errors": {
"username": [
"can't be empty"
]
}
}

This scenario is covered by the following test:

@tag :web
test "should not create user and render errors when data is invalid", %{conn: conn} do
  conn = post conn, user_path(conn, :create), user: build(:user, username: "")

  assert json_response(conn, 422)["errors"] == %{
    "username" => [
      "can't be empty"
    ]
  }
end

To achieve this we will use a new feature in Phoenix 1.3, the action_fallback plug for
controllers to support generic error handling. Including the plug inside a controller allows
you to ignore errors, and only handle the successful case. Take a look at our existing user
controller, where we only pattern match on the {:ok, user} successful outcome:

defmodule ConduitWeb.UserController do
  use ConduitWeb, :controller

  alias Conduit.Accounts
  alias Conduit.Accounts.User

  action_fallback ConduitWeb.FallbackController

  def create(conn, %{"user" => user_params}) do
    with {:ok, %User{} = user} <- Accounts.register_user(user_params) do
      conn
      |> put_status(:created)
      |> render("show.json", user: user)
    end
  end
end

Any errors that aren’t handled within your controller can be dealt with by the configured
fallback controller. We pattern match on the {:error, :validation_failure, errors}
tagged error tuple returned when command dispatch fails due to a validation failure. The
errors are rendered using a new validation view module and returned with an HTTP 422
“Unprocessable Entity” status code:

defmodule ConduitWeb.FallbackController do
  use ConduitWeb, :controller

  def call(conn, {:error, :validation_failure, errors}) do
    conn
    |> put_status(:unprocessable_entity)
    |> render(ConduitWeb.ValidationView, "error.json", errors: errors)
  end

  def call(conn, {:error, :not_found}) do
    conn
    |> put_status(:not_found)
    |> render(ConduitWeb.ErrorView, :"404")
  end
end

The validation view returns a map containing the errors that is rendered as JSON:

defmodule ConduitWeb.ValidationView do
  use ConduitWeb, :view

  def render("error.json", %{errors: errors}) do
    %{errors: errors}
  end
end

We can run the integration tests tagged with @web after making these changes, and the good
news is they all pass:

$ mix test --only web


Including tags: [:web]
Excluding tags: [:test, :pending]

...

Finished in 2.3 seconds


18 tests, 0 failures, 15 skipped

User controller web tests


Having completed user registration, we now move on to authentication in the next chapter.

Authentication

Authenticate a user
The API spec for authentication is as follows:

HTTP verb URL Required fields

POST /api/users/login email, password

Example request body:

{
"user":{
"email": "jake@jake.jake",
"password": "jakejake"
}
}

Example response body:


{
"user": {
"email": "jake@jake.jake",
"token": "jwt.token.here",
"username": "jake",
"bio": null,
"image": null
}
}

Example failure response body:

{
"errors": {
"email or password": [
"is invalid"
]
}
}

The successful login response includes a JSON Web Token (JWT). This token is included in the
HTTP headers on subsequent requests to authorize the user’s actions. We’ll use Guardian to
authenticate users and take advantage of its support for JWT tokens.

An authentication framework for use with Elixir applications.

– Guardian

Guardian provides a number of Plug modules to include within the Phoenix request
handling pipeline. We’ll make use of the following three plugs for the Conduit API:

Guardian.Plug.VerifyHeader: looks for a token in the Authorization header. Useful for
APIs. If one is not found, this does nothing.

Guardian.Plug.EnsureAuthenticated: looks for a previously verified token. If one is
found, continues. Otherwise it will call the :unauthenticated function of your handler.

Guardian.Plug.LoadResource: looks in the sub field of the token, fetches the resource
from the configured serializer, and makes it available via
Guardian.Plug.current_resource(conn).

In mix.exs , add the guardian package as a dependency:

defp deps do
  [
    # ...
    {:guardian, "~> 0.14"}
  ]
end

Fetch and compile the mix dependencies:

$ mix do deps.get, deps.compile

Guardian requires a secret key to be generated for our application. We can use the “secret
generator” mix task provided by Phoenix to do this:

$ mix phx.gen.secret
IOjbrty1eMEBzc5aczQn0FR4Gd8P9IF1cC7tqwB7ThV/uKjS5mrResG1

Configure Guardian in config/config.exs , including copying the key from above into
secret_key :

config :guardian, Guardian,
  allowed_algos: ["HS512"],
  verify_module: Guardian.JWT,
  issuer: "Conduit",
  ttl: {30, :days},
  allowed_drift: 2000,
  verify_issuer: true,
  secret_key: "IOjbrty1eMEBzc5aczQn0FR4Gd8P9IF1cC7tqwB7ThV/uKjS5mrResG1",
  serializer: Conduit.Auth.GuardianSerializer
Guardian requires you to implement a serializer, as specified in the config above, to encode
and decode your resources into and out of the JWT token. The only resource we’re interested
in is the user. We can encode the user’s UUID into the token, and later use it to fetch the user
projection from the read model.

At this point we will move the existing Conduit.Auth module into its own context. This
allows us to keep authentication concerns, such as password hashing, separate from user
accounts.

The Guardian serializer module is created at lib/conduit/auth/guardian_serializer.ex :

defmodule Conduit.Auth.GuardianSerializer do
  @moduledoc """
  Used by Guardian to serialize a JWT token
  """

  @behaviour Guardian.Serializer

  alias Conduit.Accounts
  alias Conduit.Accounts.Projections.User

  def for_token(%User{} = user), do: {:ok, "User:#{user.uuid}"}
  def for_token(_), do: {:error, "Unknown resource type"}

  def from_token("User:" <> uuid), do: {:ok, Accounts.user_by_uuid(uuid)}
  def from_token(_), do: {:error, "Unknown resource type"}
end

We need to add the user_by_uuid/1 function to the accounts context:

defmodule Conduit.Accounts do
  @doc """
  Get a single user by their UUID
  """
  def user_by_uuid(uuid) when is_binary(uuid) do
    Repo.get(User, uuid)
  end
end

The Conduit API specs show the authentication header is in the following format:

Authorization: Token jwt.token.here

So we need to prefix the JWT token with the word Token . To do this we configure the
Phoenix web router, in lib/conduit/web/router.ex , and instruct Guardian to use Token
as the realm:

defmodule ConduitWeb.Router do
use ConduitWeb, :router
pipeline :api do
plug :accepts, ["json"]
plug Guardian.Plug.VerifyHeader, realm: "Token"
plug Guardian.Plug.LoadResource
end
# ... routes omitted
end

We will create a new session controller to support user login. It will authenticate the user
from the provided email and password and return the user’s details as JSON:

defmodule ConduitWeb.SessionController do
  use ConduitWeb, :controller

  alias Conduit.Auth
  alias Conduit.Accounts.Projections.User

  action_fallback ConduitWeb.FallbackController

  def create(conn, %{"user" => %{"email" => email, "password" => password}}) do
    case Auth.authenticate(email, password) do
      {:ok, %User{} = user} ->
        conn
        |> put_status(:created)
        |> render(ConduitWeb.UserView, "show.json", user: user)

      {:error, :unauthenticated} ->
        conn
        |> put_status(:unprocessable_entity)
        |> render(ConduitWeb.ValidationView, "error.json", errors: %{"email or password" => ["is invalid"]})
    end
  end
end

An error is returned with a 422 HTTP status code and a generic “is invalid” error message for
the email or password on login failure. The existing user and validation views are reused for
rendering the response as JSON.

The session controller uses a new public function in the auth context: authenticate/2 .
This function looks up an existing user by their email address, then compares the stored
hashed password with a hash of the provided password, using the same bcrypt hashing
function. An {:error, :unauthenticated} tagged tuple is returned on failure:

defmodule Conduit.Auth do
  @moduledoc """
  Boundary for authentication.

  Uses the bcrypt password hashing function.
  """

  alias Comeonin.Bcrypt
  alias Conduit.Accounts
  alias Conduit.Accounts.Projections.User

  def authenticate(email, password) do
    with {:ok, user} <- user_by_email(email) do
      check_password(user, password)
    else
      reply -> reply
    end
  end

  def hash_password(password), do: Bcrypt.hashpwsalt(password)
  def validate_password(password, hash), do: Bcrypt.checkpw(password, hash)

  defp user_by_email(email) do
    case Accounts.user_by_email(email) do
      nil -> {:error, :unauthenticated}
      user -> {:ok, user}
    end
  end

  defp check_password(%User{hashed_password: hashed_password} = user, password) do
    case validate_password(password, hashed_password) do
      true -> {:ok, user}
      _ -> {:error, :unauthenticated}
    end
  end
end

The POST /api/users/login action, mapped to the new session controller, is added to the
router:

defmodule ConduitWeb.Router do
  use ConduitWeb, :router

  # ... pipeline omitted

  scope "/api", ConduitWeb do
    pipe_through :api

    post "/users/login", SessionController, :create
    post "/users", UserController, :create
  end
end

With the controller and routing configured we can write a web integration test to verify the
functionality. In test/conduit/web/controllers/session_controller_test.exs we use the
Phoenix connection test case to access helper functions for controllers.

There are three scenarios to test:

1. Successfully authenticating an existing user with a valid password.
2. Failing to authenticate a known user when the password is incorrect.
3. Failing to authenticate an unknown user.

defmodule ConduitWeb.SessionControllerTest do
  use ConduitWeb.ConnCase

  setup %{conn: conn} do
    {:ok, conn: put_req_header(conn, "accept", "application/json")}
  end

  describe "authenticate user" do
    @tag :web
    test "creates session and renders session when data is valid", %{conn: conn} do
      register_user()

      conn = post conn, session_path(conn, :create), user: %{
        email: "jake@jake.jake",
        password: "jakejake"
      }

      assert json_response(conn, 201)["user"] == %{
        "bio" => nil,
        "email" => "jake@jake.jake",
        "image" => nil,
        "username" => "jake",
      }
    end

    @tag :web
    test "does not create session and renders errors when password does not match", %{conn: conn} do
      register_user()

      conn = post conn, session_path(conn, :create), user: %{
        email: "jake@jake.jake",
        password: "invalidpassword"
      }

      assert json_response(conn, 422)["errors"] == %{
        "email or password" => [
          "is invalid"
        ]
      }
    end

    @tag :web
    test "does not create session and renders errors when user not found", %{conn: conn} do
      conn = post conn, session_path(conn, :create), user: %{
        email: "doesnotexist",
        password: "jakejake"
      }

      assert json_response(conn, 422)["errors"] == %{
        "email or password" => [
          "is invalid"
        ]
      }
    end
  end

  defp register_user, do: fixture(:user)
end

Run the new web tests, mix test --only web , to confirm that our changes are good.

Authentication using Guardian

Generating a JWT token


User authentication is now working, but we've omitted a necessary part of the user data
returned as JSON from the login and register user actions. In both cases our response does
not include the JWT token shown in the example response:

{
"user": {
"email": "jake@jake.jake",
"token": "jwt.token.here",
"username": "jake"
}
}

We need to rectify that omission by including the token in the response. First, we’ll include
the token property in the session controller test. We assert that it is not empty when
successfully authenticating a user:

defmodule ConduitWeb.SessionControllerTest do
  use ConduitWeb.ConnCase

  setup %{conn: conn} do
    {:ok, conn: put_req_header(conn, "accept", "application/json")}
  end

  describe "authenticate user" do
    @tag :web
    test "creates session and renders session when data is valid", %{conn: conn} do
      register_user()

      conn = post conn, session_path(conn, :create), user: %{
        email: "jake@jake.jake",
        password: "jakejake"
      }

      json = json_response(conn, 201)["user"]
      token = json["token"]

      assert json == %{
        "bio" => nil,
        "email" => "jake@jake.jake",
        "token" => token,
        "image" => nil,
        "username" => "jake",
      }
      refute token == ""
    end
  end
end

Let’s use Guardian to generate the token for us. It will use the
Conduit.Auth.GuardianSerializer module we’ve already written and configured to
serialize our user resource into a string for inclusion in the token.

To generate the JWT we use Guardian.encode_and_sign/2 , wrapped in a new
ConduitWeb.JWT module in lib/conduit/web/jwt.ex :

defmodule ConduitWeb.JWT do
  @moduledoc """
  JSON Web Token helper functions, using Guardian
  """

  def generate_jwt(resource, type \\ :token) do
    case Guardian.encode_and_sign(resource, type) do
      {:ok, jwt, _full_claims} -> {:ok, jwt}
    end
  end
end

Since the token generation will be used in both the session and user controllers we will
import the ConduitWeb.JWT module in the Phoenix controller macro, in
lib/conduit/web/web.ex . This makes the generate_jwt/2 function available to use in all
of our web controllers.

defmodule ConduitWeb do
def controller do
quote do
use Phoenix.Controller, namespace: ConduitWeb
import Plug.Conn
import ConduitWeb.Router.Helpers
import ConduitWeb.Gettext
import ConduitWeb.JWT
end
end
# ... view, router, and channel definitions omitted
end

The session controller needs to be updated to generate the JWT after authenticating the user.
The JWT token is passed to the render function to make it available to the view:

defmodule ConduitWeb.SessionController do
  use ConduitWeb, :controller

  alias Conduit.Auth
  alias Conduit.Accounts.Projections.User

  action_fallback ConduitWeb.FallbackController

  def create(conn, %{"user" => %{"email" => email, "password" => password}}) do
    with {:ok, %User{} = user} <- Auth.authenticate(email, password),
         {:ok, jwt} <- generate_jwt(user) do
      conn
      |> put_status(:created)
      |> render(ConduitWeb.UserView, "show.json", user: user, jwt: jwt)
    else
      {:error, :unauthenticated} ->
        conn
        |> put_status(:unprocessable_entity)
        |> render(ConduitWeb.ValidationView, "error.json", errors: %{"email or password" => ["is invalid"]})
    end
  end
end

The render function in the user view for a single user merges the JWT token into the user
data that is rendered as JSON:

defmodule ConduitWeb.UserView do
  use ConduitWeb, :view
  alias ConduitWeb.UserView

  def render("index.json", %{users: users}) do
    %{users: render_many(users, UserView, "user.json")}
  end

  def render("show.json", %{user: user, jwt: jwt}) do
    %{user: user |> render_one(UserView, "user.json") |> Map.merge(%{token: jwt})}
  end

  def render("user.json", %{user: user}) do
    %{
      username: user.username,
      email: user.email,
      bio: user.bio,
      image: user.image,
    }
  end
end

Running the web tests again, mix test --only web , confirms that the token is successfully
generated and included in the response.

Generate JWT token


Getting the current user


An authenticated HTTP GET request to /api/user should return a JSON representation of
the current user. Authentication is determined by the presence of a valid HTTP request
header containing the JWT token: Authorization: Token jwt.token.here .
We will start by adding two new tests to the user controller to verify the following scenarios:

1. Successful authentication, with a valid JWT token, returns the current user as JSON data.
2. An invalid request, missing a JWT token, returns a 401 Unauthorized response.

To support a valid request we must register a user and generate a JWT token for them in the
test setup. The token is included in the request headers of the test connection using the
authenticated_conn/1 function:

defmodule ConduitWeb.UserControllerTest do
  use ConduitWeb.ConnCase

  setup %{conn: conn} do
    {:ok, conn: put_req_header(conn, "accept", "application/json")}
  end

  describe "get current user" do
    @tag :web
    test "should return user when authenticated", %{conn: conn} do
      conn = get authenticated_conn(conn), user_path(conn, :current)

      json = json_response(conn, 200)["user"]
      token = json["token"]

      assert json == %{
        "bio" => nil,
        "email" => "jake@jake.jake",
        "token" => token,
        "image" => nil,
        "username" => "jake",
      }
      refute token == ""
    end

    @tag :web
    test "should not return user when unauthenticated", %{conn: conn} do
      conn = get conn, user_path(conn, :current)

      assert response(conn, 401) == ""
    end
  end

  def authenticated_conn(conn) do
    with {:ok, user} <- fixture(:user),
         {:ok, jwt} <- ConduitWeb.JWT.generate_jwt(user) do
      conn
      |> put_req_header("authorization", "Token " <> jwt)
    end
  end
end

The failing tests guide us towards our next code change: we need to register the /api/user
route in the router:

defmodule ConduitWeb.Router do
  use ConduitWeb, :router

  # ... pipeline omitted

  scope "/api", ConduitWeb do
    pipe_through :api

    get "/user", UserController, :current
    post "/users/login", SessionController, :create
    post "/users", UserController, :create
  end
end

Next we add a current function to the user controller module. Before doing so, we'll take
advantage of Guardian's built-in support for Phoenix controllers. Using the
Guardian.Phoenix.Controller module in our controller provides easier access to the
current user and their claims. The public controller functions are extended to accept two
additional parameters, user and claims , as shown below.

Before: def current(conn, params) do

After: def current(conn, params, user, claims) do

We will also use two plugs provided by Guardian:

1. Guardian.Plug.EnsureAuthenticated ensures a verified token exists.
2. Guardian.Plug.EnsureResource guards against the resource not being found.

Both plugs require us to implement an error handler module that deals with failure cases. In
lib/conduit/web/error_handler.ex we provide functions for the three main error cases.
They each return an appropriate HTTP error status code and an empty response body:

defmodule ConduitWeb.ErrorHandler do
  import Plug.Conn

  @doc """
  Return 401 for "Unauthorized" requests

  A request requires authentication but it isn't provided
  """
  def unauthenticated(conn, _params), do: respond_with(conn, 401)

  @doc """
  Return 403 for "Forbidden" requests

  A request may be valid, but the user doesn't have permission to perform the action
  """
  def unauthorized(conn, _params), do: respond_with(conn, 403)

  @doc """
  Return 401 for "Unauthorized" requests

  A request requires authentication but the resource has not been found
  """
  def no_resource(conn, _params), do: respond_with(conn, 401)

  defp respond_with(conn, status) do
    conn
    |> put_resp_content_type("application/json")
    |> send_resp(status, "")
  end
end

The Guardian plugs are configured with our error handler module and applied only to the
current controller action. This action returns the authenticated current user, and their JWT
token, as JSON data:

defmodule ConduitWeb.UserController do
  use ConduitWeb, :controller
  use Guardian.Phoenix.Controller

  alias Conduit.Accounts
  alias Conduit.Accounts.Projections.User

  action_fallback ConduitWeb.FallbackController

  plug Guardian.Plug.EnsureAuthenticated, %{handler: ConduitWeb.ErrorHandler} when action in [:current]
  plug Guardian.Plug.EnsureResource, %{handler: ConduitWeb.ErrorHandler} when action in [:current]

  def create(conn, %{"user" => user_params}, _user, _claims) do
    with {:ok, %User{} = user} <- Accounts.register_user(user_params),
         {:ok, jwt} <- generate_jwt(user) do
      conn
      |> put_status(:created)
      |> render("show.json", user: user, jwt: jwt)
    end
  end

  def current(conn, _params, user, _claims) do
    jwt = Guardian.Plug.current_token(conn)

    conn
    |> put_status(:ok)
    |> render("show.json", user: user, jwt: jwt)
  end
end

When an unauthenticated user requests /api/user the
Guardian.Plug.EnsureAuthenticated plug will step in. It redirects the request to our error
handler module, which responds with a 401 unauthorized status code.

Run the web tests, mix test --only web , to confirm the new route is working as per the
API spec.
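With the server running locally ( mix phx.server , which listens on port 4000 by default), the whole registration and authentication flow can also be exercised by hand with curl. The email, password, and jwt.token.here values below are illustrative:

```shell
# Register a new user (no authentication required)
curl -s -X POST http://localhost:4000/api/users \
  -H "Content-Type: application/json" \
  -d '{"user": {"username": "jake", "email": "jake@jake.jake", "password": "jakejake"}}'

# Log in to obtain a JWT
curl -s -X POST http://localhost:4000/api/users/login \
  -H "Content-Type: application/json" \
  -d '{"user": {"email": "jake@jake.jake", "password": "jakejake"}}'

# Fetch the current user, passing the token using the "Token" realm
# (substitute the "token" value returned by the login response)
curl -s http://localhost:4000/api/user \
  -H "Authorization: Token jwt.token.here"
```

Omitting the Authorization header from the final request should return the empty 401 response produced by our error handler.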

Get current user

We’ve now built out the basic user registration and authentication features required for
Conduit. Let’s move on to authoring articles in the next chapter.

Frequently asked questions

How do I structure my CQRS/ES application?


Application structure is described in the Accounts chapter.

How do I deal with eventually consistent read model projections?


Dealing with eventual consistency is explained in the Accounts chapter.

Appendix I

Conduit API specs

Authentication header
Authorization: Token jwt.token.here

JSON objects returned by API

User
Used for authentication.

{
"user": {
"email": "jake@jake.jake",
"token": "jwt.token.here",
"username": "jake",
"bio": "I work at statefarm",
"image": null
}
}

Profile

{
"profile": {
"username": "jake",
"bio": "I work at statefarm",
"image": "https://static.productionready.io/images/s
"following": false
}
}

Single article

{
"article": {
"slug": "how-to-train-your-dragon",
"title": "How to train your dragon",
"description": "Ever wonder how?",
"body": "It takes a Jacobian",
"tagList": ["dragons", "training"],
"createdAt": "2016-02-18T03:22:56.637Z",
"updatedAt": "2016-02-18T03:48:35.824Z",
"favorited": false,
"favoritesCount": 0,
"author": {
"username": "jake",
"bio": "I work at statefarm",
"image": "https://i.stack.imgur.com/xHWG8.jpg",
"following": false
}
}
}

Multiple articles

{
"articles":[{
"slug": "how-to-train-your-dragon",
"title": "How to train your dragon",
"description": "Ever wonder how?",
"body": "It takes a Jacobian",
"tagList": ["dragons", "training"],
"createdAt": "2016-02-18T03:22:56.637Z",
"updatedAt": "2016-02-18T03:48:35.824Z",
"favorited": false,
"favoritesCount": 0,
"author": {
"username": "jake",
"bio": "I work at statefarm",
"image": "https://i.stack.imgur.com/xHWG8.jpg",
"following": false
}
  }, {
    "slug": "how-to-train-your-dragon-2",
"title": "How to train your dragon 2",
"description": "So toothless",
"body": "It a dragon",
"tagList": ["dragons", "training"],
"createdAt": "2016-02-18T03:22:56.637Z",
"updatedAt": "2016-02-18T03:48:35.824Z",
"favorited": false,
"favoritesCount": 0,
"author": {
"username": "jake",
"bio": "I work at statefarm",
"image": "https://i.stack.imgur.com/xHWG8.jpg",
"following": false
}
}],
"articlesCount": 2
}

Single comment

{
"comment": {
"id": 1,
"createdAt": "2016-02-18T03:22:56.637Z",
"updatedAt": "2016-02-18T03:22:56.637Z",
"body": "It takes a Jacobian",
"author": {
"username": "jake",
"bio": "I work at statefarm",
"image": "https://i.stack.imgur.com/xHWG8.jpg",
"following": false
}
}
}

Multiple comments

{
"comments": [{
"id": 1,
"createdAt": "2016-02-18T03:22:56.637Z",
"updatedAt": "2016-02-18T03:22:56.637Z",
"body": "It takes a Jacobian",
"author": {
"username": "jake",
"bio": "I work at statefarm",
"image": "https://i.stack.imgur.com/xHWG8.jpg",
"following": false
}
}]
}

List of tags

{
"tags": [
"reactjs",
"angularjs"
]
}

Errors and status codes


If a request fails any validations, expect a 422 and errors in the following format:

{
"errors":{
"body": [
"can't be empty"
]
}
}

Other status codes

401 Unauthorized requests, when a request requires authentication but it isn't provided.

403 Forbidden requests, when a request may be valid but the user doesn’t have
permissions to perform the action.

404 Not found requests, when a resource can’t be found to fulfill the request.

Endpoints
Authentication
POST /api/users/login

Example request body:

{
  "user": {
    "email": "jake@jake.jake",
    "password": "jakejake"
  }
}

No authentication required, returns a user.

Required fields: email , password

Registration
POST /api/users

Example request body:

{
  "user": {
    "username": "Jacob",
    "email": "jake@jake.jake",
    "password": "jakejake"
  }
}

No authentication required, returns a user.

Required fields: email , username , password

Get current user
GET /api/user

Authentication required, returns a user that’s the current user

Update user
PUT /api/user

Example request body:

{
  "user": {
    "email": "jake@jake.jake",
    "bio": "I like to skateboard",
    "image": "https://i.stack.imgur.com/xHWG8.jpg"
  }
}

Authentication required, returns the user.

Accepted fields: email , username , password , image , bio

Get profile
GET /api/profiles/:username

Authentication optional, returns a profile.

Follow user
POST /api/profiles/:username/follow

Authentication required, returns profile.


No additional parameters required

Unfollow user
DELETE /api/profiles/:username/follow

Authentication required, returns profile.

No additional parameters required

List articles
GET /api/articles

Returns most recent articles globally by default.

Query parameters
Provide tag , author or favorited query parameter to filter results.

Filter by tag ?tag=AngularJS

Filter by author ?author=jake

Favorited by user ?favorited=jake

Limit number of articles (default is 20) ?limit=20

Offset/skip number of articles (default is 0) ?offset=0

Authentication optional, will return multiple articles, ordered by most recent first.
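
The filter and pagination parameters can be combined in a single request. Assuming a server on the default local port, this would fetch the second page of ten articles tagged dragons:

```shell
curl -s "http://localhost:4000/api/articles?tag=dragons&limit=10&offset=10"
```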

Feed articles
GET /api/articles/feed

Can also take limit and offset query parameters like list articles.

Authentication required, will return multiple articles created by followed users, ordered by
most recent first.

Get article
GET /api/articles/:slug

No authentication required, will return single article.

Create Article
POST /api/articles
Example request body:

{
"article": {
"title": "How to train your dragon",
"description": "Ever wonder how?",
"body": "You have to believe",
"tagList": ["reactjs", "angularjs", "dragons"]
}
}

Authentication required, will return an article.

Required fields: title , description , body

Optional fields: tagList as an array of Strings

Update Article
PUT /api/articles/:slug

Example request body:

{
"article": {
"title": "Did you train your dragon?"
}
}

Authentication required, returns the updated article.

Optional fields: title , description , body

The slug also gets updated when the title is changed.

Delete article
DELETE /api/articles/:slug

Authentication required

Add comments to an article
POST /api/articles/:slug/comments

Example request body:


{
"comment": {
"body": "His name was my name too."
}
}

Authentication required, returns the created comment.

Required fields: body

Get comments from an article
GET /api/articles/:slug/comments

Authentication optional, returns multiple comments.

Delete comment
DELETE /api/articles/:slug/comments/:id

Authentication required

Favourite article
POST /api/articles/:slug/favorite

Authentication required, returns the article.

No additional parameters required

Unfavourite article
DELETE /api/articles/:slug/favorite

Authentication required, returns the article.

No additional parameters required

Get tags
GET /api/tags

No authentication required, returns a list of tags.

Notes

1 CRUD is an abbreviation of Create, Read, Update, and Delete.↩

2 A pure function always evaluates the same result value given the same argument value.↩

3 JSON Web Tokens are an open, industry standard method for representing claims securely
between two parties.↩

4 Putting contexts in context↩

5 Contexts in Phoenix v1.3↩

6 Commanded is MIT licensed, a permissive free software license. This allows reuse within
proprietary software, and for commercial purposes.↩

7 A universally unique identifier (UUID) is a 128-bit number used to identify information in
computer systems.↩

8 Domain specific language↩

9 A regular expression (regex or regexp) is a sequence of characters that define a search
pattern in a string.↩

10 bcrypt is a password hashing function designed by Niels Provos and David Mazières,
based on the Blowfish cipher.↩
