Lacinia - GraphQL for Clojure¶
Lacinia is a library for implementing Facebook’s GraphQL specification in idiomatic Clojure.
GraphQL is a way for clients to efficiently obtain data from servers.
Compared to traditional REST approaches, GraphQL ensures that clients can access exactly the data that they need (and no more), and do so with fewer round-trips to the server.
This is especially useful for mobile clients, where bandwidth is always at a premium.
In addition, GraphQL is self-describing; the shape of the data that can be exposed, and the queries by which that data can be accessed, are all accessible using GraphQL queries. This allows for sophisticated, adaptable clients, such as the in-browser GraphQL IDE GraphiQL.
Although GraphQL is quite adept at handling requests from client web browsers and responding with JSON, it is also exceptionally useful for allowing backend systems to communicate.
Overview¶
GraphQL consists of two main parts:
- A server-side schema that defines the available queries and types of data that may be returned.
- A client query language that allows the client to specify what query to execute, and what data to return.
The GraphQL specification goes into detail about the format of the client query language, and the expected behavior of the server.
This library, Lacinia, is an implementation of the key component of the server, in idiomatic Clojure.
Schema¶
The GraphQL specification includes a language to define the server-side schema; the type keyword is used to introduce a new kind of object.
In Lacinia, the schema is Clojure data: a map of maps, where the top-level keys indicate the type of data being defined:
{:enums
{:Episode
{:description "The episodes of the original Star Wars trilogy."
:values [:NEWHOPE :EMPIRE :JEDI]}}
:interfaces
{:Character
{:fields {:id {:type String}
:name {:type String}
:appearsIn {:type (list :Episode)}
:friends {:type (list :Character)}}}}
:objects
{:Droid
{:implements [:Character]
:fields {:id {:type String}
:name {:type String}
:appearsIn {:type (list :Episode)}
:friends {:type (list :Character)
:resolve :friends}
:primaryFunction {:type (list String)}}}
:Human
{:implements [:Character]
:fields {:id {:type String}
:name {:type String}
:appearsIn {:type (list :Episode)}
:friends {:type (list :Character)}
:home_planet {:type String}}}
:Query
{:fields
{:hero {:type (non-null :Character)
:args {:episode {:type :Episode}}}
:human {:type (non-null :Human)
:args {:id {:type String
:default-value "1001"}}}
:droid {:type :Droid
:args {:id {:type String
:default-value "2001"}}}}}}}
Here, we are defining Human and Droid objects.
These have a lot in common, so we define a shared Character interface.
But how do we access that data? That’s accomplished using one of three queries:
- hero
- human
- droid
In this example, each query returns a single instance of the matching object. Often, a query will return a list of matching objects.
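For example, a client might invoke the hero query like this (the selected fields are illustrative):

```graphql
query {
  hero(episode: EMPIRE) {
    id
    name
    appearsIn
  }
}
```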
Compiling the Schema¶
The schema defines the shape of the data that can be queried, but leaves out where that data comes from. Unlike an object/relational mapping layer, where we might discuss database tables and rows, GraphQL (and by extension, Lacinia) has no idea where the data comes from.
That’s the realm of the field resolver function. Since EDN files are just data, we simply attach the actual functions after the EDN data is read into memory.
The schema starts as a data structure; we need to add in the field resolvers and then compile the result.
(ns org.example.schema
(:require
[clojure.edn :as edn]
[clojure.java.io :as io]
[com.walmartlabs.lacinia.schema :as schema]
[com.walmartlabs.lacinia.util :as util]
[org.example.db :as db]))
(defn star-wars-schema
[]
(-> (io/resource "star-wars-schema.edn")
slurp
edn/read-string
(util/inject-resolvers {:Query/hero db/resolve-hero
:Query/human db/resolve-human
:Query/droid db/resolve-droid
:Human/friends db/resolve-friends
:Droid/friends db/resolve-friends})
schema/compile))
The com.walmartlabs.lacinia.util/inject-resolvers function identifies objects and fields within those objects, and adds the resolver function. With those functions in place, the schema can be compiled for execution.
Compilation performs a number of checks, applies defaults, merges in introspection data about the schema,
and performs a number of other operations to ready the schema for use.
The structure passed into compile is quite complex, so it is always validated using clojure.spec.
Parsing GraphQL IDL Schemas¶
Lacinia also offers support for parsing schemas defined in the GraphQL Interface Definition Language and transforming them into the Lacinia schema data structure.
See GraphQL IDL Schema Parsing for details.
Executing Queries¶
With that in place, we can now execute queries.
(require
'[com.walmartlabs.lacinia :refer [execute]]
'[org.example.schema :refer [star-wars-schema]])
(def compiled-schema (star-wars-schema))
(execute compiled-schema
"query { human(id: \"1001\") { name }}"
nil nil)
=> {:data {:human #ordered/map([:name "Darth Vader"])}}
The query string is parsed and matched against the queries defined in the schema.
The two nils are the query variables and an application context, respectively.
In GraphQL, queries can pass arguments (such as id), and queries identify exactly which fields of the matching objects are to be returned.
This query can be stated as: “just provide the name of the human with id 1001”.
This is a successful query; it returns a result map [2] with a :data key.
A failed query would return a map with an :errors key.
A query can even be partially successful, returning as much data as it can, along with errors.
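As a sketch of what a failure looks like (the :message key is standard; other keys, such as :locations, may also appear depending on the error — the exact values here are hypothetical):

```clojure
;; A hypothetical failure result map; the message and location
;; values depend on the specific query and schema.
{:errors [{:message "Cannot query field `height' on type `Human'."
           :locations [{:line 1 :column 25}]}]}
```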
Inside :data is a key corresponding to the query, :human, whose value is the single matching human. Other queries might return a list of matches.
Since we requested just a slice of a full human object, just the human’s name, the map has just a single :name key.
[1] | This shouldn’t be strictly necessary (JSON and EDN don’t normally care about key order, and keys can appear in arbitrary order), but having consistent ordering makes writing tests involving GraphQL queries easier: you can typically check the textual, not parsed, version of the result map directly against an expected string value. |
[2] | In GraphQL’s specification, this is referred to as the “response”; in practice, this result data forms the body of a response map (when using Ring or Pedestal). Lacinia uses the terms result map or result data to keep these ideas distinct. |
Tutorial¶
Pre-Requisites¶
You should have a basic understanding of GraphQL, which you can pick up from this documentation, and from the GraphQL home page.
You should be familiar with, but by no means an expert in, Clojure.
You should have a recent build of Clojure, including the clj command.
You should have an editor or IDE ready to go, set up for editing Clojure code.
A skim of the Lacinia reference documentation (the rest of this manual, outside of this tutorial) is also helpful, or you can follow links provided as we go.
The later chapters use a database stored in a Docker container [1]; you should download and install Docker and ensure that you can run the docker command.
With those basics installed and ready, we can build an empty project and work from there, but first we’ll talk about the application we will be building.
[1] | A Docker container is the Inception of computers; a container is essentially a light-weight virtual machine that runs inside your computer. To the PostgreSQL server we’ll be running inside the container, it will appear as if the entire computer is running Linux, just as if Linux and PostgreSQL were installed on a bare-metal computer. Docker images are smaller and less demanding than full operating system virtual machines; in fact, frequently you will run several interconnected containers together. Docker includes infrastructure for downloading images from a central repository. Ultimately, it’s faster and easier to get PostgreSQL running inside a container than to install the database onto your computer. |
Domain¶
Our goal will be to provide a GraphQL interface to data about board games (one of the author’s hobbies), as a limited version of Board Game Geek.
Board Game Geek itself is a huge resource, with decades of information about games, game designers and publishers, tracking which users own which games, game ratings, forums for discussing games, and far, far more. We’ll focus on just a couple of simple elements of the full design.
The basic types in the final system are as follows:
[Diagram: BoardGame has many GameRatings (1:n); BoardGame has many-to-many relationships with Publisher and Designer (n:m); each GameRating belongs to a single Member (n:1).]
A BoardGame may be published by multiple Publisher companies (the Publisher may be different for different countries, or may simply vary over time).
A BoardGame may have any number of Designers.
Users of Clojure Game Geek, represented as type Member, may provide their personal ratings for board games.
Even this tiny sliver of functionality is sufficiently meaty to give us a taste of building full applications. We’ll start by creating an empty project, in the next chapter.
Creating the Project¶
In this first step of the tutorial, we’ll create the initial empty project, and set up the initial dependencies.
Let’s get started.
The clj command is used to start Clojure projects, but it’s a bit of a Swiss-army knife; it can also be used to launch arbitrary Clojure tools. Importantly, clj also understands dependencies and repositories, so it will download any libraries as they are needed.
We’re going to install a Clojure tool, clj-new:
> clj -Ttools install com.github.seancorfield/clj-new '{:git/tag "v1.2.399"}' :as clj-new
Cloning: https://github.com/seancorfield/clj-new.git
Checking out: https://github.com/seancorfield/clj-new.git at c82384e437a2dfa03b050b204dd2a2008c02a6c7
clj-new: Installed com.github.seancorfield/clj-new v1.2.399
With this tool installed, we can then create a new Clojure application project:
> clj -Tclj-new app :name my/clojure-game-geek
Downloading: org/apache/maven/resolver/maven-resolver-spi/1.3.3/maven-resolver-spi-1.3.3.pom from central
Downloading: org/apache/maven/resolver/maven-resolver-transport-http/1.3.3/maven-resolver-transport-http-1.3.3.pom from central
Downloading: org/apache/maven/maven-resolver-provider/3.6.1/maven-resolver-provider-3.6.1.pom from central
Downloading: org/apache/maven/resolver/maven-resolver-api/1.3.3/maven-resolver-api-1.3.3.pom from central
Downloading: org/apache/maven/resolver/maven-resolver-util/1.3.3/maven-resolver-util-1.3.3.pom from central
Downloading: org/apache/maven/resolver/maven-resolver-connector-basic/1.3.3/maven-resolver-connector-basic-1.3.3.pom from central
Downloading: org/apache/maven/resolver/maven-resolver-impl/1.3.3/maven-resolver-impl-1.3.3.pom from central
Downloading: org/apache/maven/resolver/maven-resolver-transport-file/1.3.3/maven-resolver-transport-file-1.3.3.pom from central
Downloading: org/apache/maven/maven/3.6.1/maven-3.6.1.pom from central
Downloading: stencil/stencil/0.5.0/stencil-0.5.0.pom from clojars
Downloading: org/clojure/core.cache/0.6.3/core.cache-0.6.3.pom from central
Downloading: org/codehaus/plexus/plexus-utils/3.2.0/plexus-utils-3.2.0.pom from central
Downloading: org/slf4j/jcl-over-slf4j/1.7.25/jcl-over-slf4j-1.7.25.pom from central
Downloading: quoin/quoin/0.1.2/quoin-0.1.2.pom from clojars
Downloading: scout/scout/0.1.0/scout-0.1.0.pom from clojars
Downloading: org/clojure/data.priority-map/0.0.2/data.priority-map-0.0.2.pom from central
Downloading: quoin/quoin/0.1.2/quoin-0.1.2.jar from clojars
Downloading: scout/scout/0.1.0/scout-0.1.0.jar from clojars
Downloading: stencil/stencil/0.5.0/stencil-0.5.0.jar from clojars
Generating a project called clojure-game-geek based on the 'app' template.
All those downloads will only occur the first time the tool is run. The name of the project, my/clojure-game-geek, is used to define the main namespace; feel free to change the value if you like (but then certain paths in the tutorial will also change).
The clj-new tool creates a directory, clojure-game-geek, and populates it with a good starting point for a basic Clojure application:
> cd clojure-game-geek
> tree .
.
├── CHANGELOG.md
├── LICENSE
├── README.md
├── build.clj
├── deps.edn
├── doc
│ └── intro.md
├── pom.xml
├── resources
├── src
│ └── my
│ └── clojure_game_geek.clj
└── test
└── my
└── clojure_game_geek_test.clj
This is a good point to load this new, empty project into your IDE of choice.
You’ll want to review the generated README.md file.
clj-new sets things up inside deps.edn to support sources under a src directory, tests under a test directory, two different ways to run the project’s code, a way to run tests, and some support for building and deploying the project.
{:paths ["src" "resources"]
:deps {org.clojure/clojure {:mvn/version "1.11.1"}}
:aliases
{:run-m {:main-opts ["-m" "my.clojure-game-geek"]}
:run-x {:ns-default my.clojure-game-geek
:exec-fn greet
:exec-args {:name "Clojure"}}
:build {:deps {io.github.seancorfield/build-clj
{:git/tag "v0.8.2" :git/sha "0ffdb4c"
;; since we're building an app uberjar, we do not
;; need deps-deploy for clojars.org deployment:
:deps/root "slim"}}
:ns-default build}
:test {:extra-paths ["test"]
:extra-deps {org.clojure/test.check {:mvn/version "1.1.1"}
io.github.cognitect-labs/test-runner
{:git/tag "v0.5.0" :git/sha "48c3c67"}}}}}
We’re going to ignore most of that, but add a dependency on the latest version of Lacinia.
{:paths ["src" "resources"]
:deps {org.clojure/clojure {:mvn/version "1.11.1"}
com.walmartlabs/lacinia {:mvn/version "1.2-alpha-4"}}
:aliases
{:run-m {:main-opts ["-m" "my.clojure-game-geek"]}
:run-x {:ns-default my.clojure-game-geek
:exec-fn greet
:exec-args {:name "Clojure"}}
:build {:deps {io.github.seancorfield/build-clj
{:git/tag "v0.8.2" :git/sha "0ffdb4c"
;; since we're building an app uberjar, we do not
;; need deps-deploy for clojars.org deployment:
:deps/root "slim"}}
:ns-default build}
:test {:extra-paths ["test"]
:extra-deps {org.clojure/test.check {:mvn/version "1.1.1"}
io.github.cognitect-labs/test-runner
{:git/tag "v0.5.0" :git/sha "48c3c67"}}}}}
Lacinia has just a few dependencies of its own:
- Antlr is used to parse GraphQL queries and schemas.
- org.flatland/ordered provides the ordered map type, used to ensure that response keys and values are in the client-specified order, as per the GraphQL spec.
Initial Schema¶
At this stage, we’re still just taking baby steps, and getting our bearings.
By the end of this stage, we’ll have a minimal schema and be able to execute our first query.
Schema EDN File¶
We’re going to define an initial schema for our application that matches the domain.
Our initial schema is just for the BoardGame entity, and a single operation to retrieve a game by its id:
{:objects
{:BoardGame
{:description "A physical or virtual board game."
:fields
{:id {:type (non-null ID)}
:name {:type (non-null String)}
:summary {:type String
:description "A one-line summary of the game."}
:description {:type String
:description "A long-form description of the game."}
:minPlayers {:type Int
:description "The minimum number of players the game supports."}
:maxPlayers {:type Int
:description "The maximum number of players the game supports."}
:playTime {:type Int
:description "Play time, in minutes, for a typical game."}}}
:Query
{:fields
{:gameById
{:type :BoardGame
:description "Access a BoardGame by its unique id, if it exists."
:args
{:id {:type ID}}}}}}}
A Lacinia schema is an EDN file.
It is a map of maps; the top-level keys identify the type of definition: :objects, :interfaces, :enums, and so forth.
Each of these top level keys defines its own structure for the map
it contains.
Query is a special object whose fields define the GraphQL queries that a client can execute.
This schema defines a single query, gameById, that returns an object as defined by the BoardGame type.
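A client query against this schema might look like the following (the id value and selected fields are illustrative):

```graphql
query {
  gameById(id: "1234") {
    name
    summary
    minPlayers
    maxPlayers
  }
}
```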
A schema is declarative: it defines what operations are possible, and what types and fields exist, but has nothing to say about where any of the data comes from. In fact, Lacinia has no opinion about that either! GraphQL is a contract between a consumer and a provider for how to request and present data - it’s not any form of database layer, object relational mapper, or anything similar.
Instead, Lacinia handles the parsing of a client query, and guides the execution of that query, ultimately invoking application-specific callback hooks: field resolvers. Field resolvers are the only source of actual data. Ultimately, field resolvers are simple Clojure functions, but those can’t, and shouldn’t, be expressed inside an EDN file.
Later, we’ll see how to connect fields, such as gameById, to a field resolver.
We’ve made liberal use of the :description property in the schema.
These descriptions are intended for developers who will make use of your GraphQL interface.
Descriptions are the equivalent of docstrings on Clojure functions, and we’ll see them show up later when we discuss GraphiQL.
It’s an excellent habit to add descriptions early, rather than try to go back and add them in later.
We’ll add more fields, more object types, relationships between types, and more operations in later chapters.
We’ve also demonstrated the use of a few Lacinia conventions in our schema:
- Built-in scalar types, such as ID, String, and Int, are referenced as symbols. [1]
- Schema-defined types, such as :BoardGame, are referenced as keywords.
- Fields are lower-case names, and types are CamelCase.
In addition, all GraphQL names (for fields, types, and so forth) must contain only alphanumerics
and the underscore.
The dash character is, unfortunately, not allowed.
If we tried to name the query query-by-id, Lacinia would throw a clojure.spec validation exception when we attempted to compile the schema. [2]
In Lacinia, there are base types, such as String and :BoardGame, and wrapped types, such as (non-null String).
The two wrappers are non-null (a value must be present) and list (the type is a list of values, not a single value).
These can even be combined!
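For example, the two wrappers can nest; this hypothetical field (not part of our schema) is a non-null list whose elements are themselves non-null:

```clojure
;; A list that must be present, whose elements must each be present:
{:tags {:type (non-null (list (non-null String)))}}
```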
Notice that the return type of the gameById query is :BoardGame and not (non-null :BoardGame).
This is because we can’t guarantee that a game can be resolved: the id provided in the client query may fail to match any game in our database.
If the client provides an invalid id, then the result will be nil, and that’s not considered an error.
In any case, this single BoardGame entity is a good starting point.
schema namespace¶
With the schema defined, the next step is to write code to load the schema into memory, and make it operational for queries:
(ns my.clojure-game-geek.schema
"Contains custom resolvers and a function to provide the full schema."
(:require [clojure.java.io :as io]
[com.walmartlabs.lacinia.util :as util]
[com.walmartlabs.lacinia.schema :as schema]
[clojure.edn :as edn]))
(defn resolver-map
[]
{:Query/gameById (fn [context args value]
nil)})
(defn load-schema
[]
(-> (io/resource "cgg-schema.edn")
slurp
edn/read-string
(util/inject-resolvers (resolver-map))
schema/compile))
This code loads the schema EDN file, injects field resolvers into the schema, then compiles the schema. The compilation step is necessary before it is possible to execute queries. Compilation reorganizes the schema, computes various defaults, performs verifications, and does a number of other necessary steps.
The inject-resolvers function updates the schema, adding :resolve keys to fields. The keys of the map identify a type and a field, and the value is the resolver function.
The field resolver in this case is just a temporary placeholder; it ignores all the arguments passed to it, and simply returns nil. Like all field resolver functions, it accepts three arguments: a context map, a map of field arguments, and a container value. We’ll discuss what these are and how to use them shortly.
user namespace¶
A key advantage of Clojure is REPL-oriented [3] development: we want to be able to run our code through its paces almost as soon as we’ve written it - and when we change code, we want to be able to try out the changed code instantly.
Clojure, by design, is almost uniquely suited for this interactive style of development. Features of Clojure exist just to support REPL-oriented development, and it’s one of the ways in which using Clojure will vastly improve your productivity!
We can add a bit of scaffolding to the user namespace, specific to our needs in this project.
When you launch a REPL, it always starts in this namespace.
The user.clj file needs to be on the classpath, but shouldn’t be packaged when we eventually build a Jar from our project. We need to introduce a new alias in deps.edn for this.
An alias is used to extend the base dependencies with more information about running the project; this includes extra source paths, extra dependencies, and extra configuration about what function to run at startup.
We’re going to start by adding a :dev alias:
{:paths ["src" "resources"]
:deps {org.clojure/clojure {:mvn/version "1.11.1"}
com.walmartlabs/lacinia {:mvn/version "1.2-alpha-4"}}
:aliases
{:run-m {:main-opts ["-m" "my.clojure-game-geek"]}
:run-x {:ns-default my.clojure-game-geek
:exec-fn greet
:exec-args {:name "Clojure"}}
:build {:deps {io.github.seancorfield/build-clj
{:git/tag "v0.8.2" :git/sha "0ffdb4c"
;; since we're building an app uberjar, we do not
;; need deps-deploy for clojars.org deployment:
:deps/root "slim"}}
:ns-default build}
:dev {:extra-paths ["dev-resources"]}
:test {:extra-paths ["test"]
:extra-deps {org.clojure/test.check {:mvn/version "1.1.1"}
io.github.cognitect-labs/test-runner
{:git/tag "v0.5.0" :git/sha "48c3c67"}}}}}
We can now define the user namespace in the dev-resources folder; this ensures that it is not included with the rest of our application when we eventually package and deploy the application.
(ns user
(:require
[my.clojure-game-geek.schema :as s]
[com.walmartlabs.lacinia :as lacinia]))
(def schema (s/load-schema))
(defn q
[query-string]
(lacinia/execute schema query-string nil nil))
The key function is q (for query), which invokes com.walmartlabs.lacinia/execute.
We’ll use that to test GraphQL queries against our schema and see the results directly in the REPL: no web browser necessary!
With all that in place, we can launch a REPL and try it out:
> clj -M:dev
Clojure 1.11.1
user=> (q "{ gameById(id: \"foo\") { id name summary }}")
{:data #ordered/map ([:gameById nil])}
user=>
The clj -M:dev command indicates that a REPL should be started that includes the :dev alias; this is what adds dev-resources to the classpath, and the user namespace is then loaded from dev-resources/user.clj.
We get an odd result when executing the query; not a plain map but that strange #ordered/map business.
This is because the value for the :data key makes use of an ordered map: a map that always orders its keys in the exact order that they are added to the map.
That’s part of the GraphQL specification: the order in which fields appear in the query dictates the order in which they appear in the result. Clojure’s standard map implementations don’t always keep keys in the order they are added.
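A quick REPL sketch of the difference, using the flatland.ordered.map namespace from the org.flatland/ordered dependency:

```clojure
(require '[flatland.ordered.map :refer [ordered-map]])

;; Keys come back in insertion order, regardless of how they hash:
(-> (ordered-map)
    (assoc :name "Zertz")
    (assoc :id "1234")
    keys)
;; => (:name :id)
```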
In any case, this result is equivalent to {:data {:gameById nil}}.
That’s as it should be: the resolver was unable to resolve the provided id to a BoardGame, so it returned nil.
This is not an error … remember that we defined the type of the gameById query to allow nulls, just for this specific situation.
However, Lacinia still returns a map with the operation name and the selection for that operation.
Summary¶
We’ve defined an exceptionally simple schema in EDN, but still have managed to load it into memory and compile it. We’ve also used the REPL to execute a query against the schema and seen the initial (and quite minimal) result.
In the next chapter, we’ll build on this modest start, introducing more schema types, and a few helpers to keep our code clean and easily testable.
[1] | Internally, everything is converted to keywords, so if you prefer to use symbols everywhere, nothing will break. Conversion to keywords is one part of the schema compilation process. |
[2] | Because the input schema format is so complex, it is always validated using clojure.spec. This helps to ensure that minor typos or other gaffes are caught early rather than causing you great confusion later. |
[3] | Read Eval Print Loop: you type in an expression, and Clojure evaluates and prints the result. This is an innovation that came early to Lisps, and is integral to other languages such as Python, Ruby, and modern JavaScript. Stuart Halloway has a talk, Running with Scissors: Live Coding With Data, that goes into a lot more detail on how important and useful the REPL is. |
Placeholder Game Data¶
It would be nice to do some queries that actually return some data!
One option would be to fire up a database, define some tables, and load some data in.
… but that would slow us down, and not teach us anything about Lacinia and GraphQL.
Instead, we’ll create an EDN file with some test data in it, and wire that up to the schema. We can fuss with database access and all that later in the tutorial.
cgg-data.edn¶
{:games
[{:id "1234"
:name "Zertz"
:summary "Two player abstract with forced moves and shrinking board"
:minPlayers 2
:maxPlayers 2}
{:id "1235"
:name "Dominion"
:summary "Created the deck-building genre; zillions of expansions"
:minPlayers 2}
{:id "1236"
:name "Tiny Epic Galaxies"
:summary "Fast dice-based sci-fi space game with a bit of chaos"
:minPlayers 1
:maxPlayers 4}
{:id "1237"
:name "7 Wonders: Duel"
:summary "Tense, quick card game of developing civilizations"
:minPlayers 2
:maxPlayers 2}]}
This file defines just a few games I’ve recently played. It will take the place of an external database. Later, we can add more data for the other entities and their relationships.
Although the keys here are camel-case (like Java or JavaScript) and not kebab-case (like Clojure and other Lisps), they are still valid Clojure keywords and, more importantly, they match the field names in the GraphQL schema. Lacinia doesn’t do any trickery here: field names in the schema are matched directly to corresponding keyword keys in the value maps.
Later in this tutorial, we’ll actually connect our application up to an external database.
Resolver¶
Inside our schema namespace, we need to read the data and provide a resolver that can access it.
(ns my.clojure-game-geek.schema
"Contains custom resolvers and a function to provide the full schema."
(:require [clojure.java.io :as io]
[com.walmartlabs.lacinia.util :as util]
[com.walmartlabs.lacinia.schema :as schema]
[clojure.edn :as edn]))
(defn resolve-game-by-id
[games-map context args value]
(let [{:keys [id]} args]
(get games-map id)))
(defn resolver-map
[]
(let [cgg-data (-> (io/resource "cgg-data.edn")
slurp
edn/read-string)
games-map (->> cgg-data
:games
(reduce #(assoc %1 (:id %2) %2) {}))]
{:Query/gameById (partial resolve-game-by-id games-map)}))
(defn load-schema
[]
(-> (io/resource "cgg-schema.edn")
slurp
edn/read-string
(util/inject-resolvers (resolver-map))
schema/compile))
You can see a bit of the philosophy of Lacinia inside the load-schema function: Lacinia strives to provide only what is most essential, or truly useful and universal.
Lacinia explicitly does not provide a single function to read, parse, inject resolvers, and compile an EDN file in a single call.
That may seem odd – it feels like every application will just cut-and-paste something virtually identical to load-schema.
In fact, not all schemas will come directly from a single EDN file.
Because the schema is Clojure data it can be constructed, modified, merged, and otherwise transformed
right up to the point that it is compiled.
By starting with a pipeline like the one inside load-schema, it becomes easy to inject your own application-specific steps leading up to schema/compile; in larger applications, that flexibility becomes essential.
Back to the schema; the resolver itself is the resolve-game-by-id function.
It is provided with a map of games, plus the standard triumvirate of resolver function arguments: context, field arguments, and container value.
Field resolvers are passed a map of the field arguments (from the client query). This map contains keyword keys, and values of varying types (because field arguments have a type in the GraphQL schema).
We use a bit of destructuring to extract the id [1].
The data in the map is already in a form that matches the GraphQL schema, so it’s just a matter of get-ing it out of the games map.
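In other words, resolution reduces to a plain map lookup; here is a sketch with a hypothetical, trimmed-down games-map:

```clojure
;; A cut-down version of games-map, for illustration only.
(def games-map
  {"1236" {:id "1236" :name "Tiny Epic Galaxies" :minPlayers 1}})

(get games-map "1236")
;; => {:id "1236", :name "Tiny Epic Galaxies", :minPlayers 1}

(get games-map "no-such-id")
;; => nil - reported by Lacinia as a null field, not an error
```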
Inside resolver-map, we read the sample game data, then use typical Clojure data manipulation to get it into the form that we want: we convert a seq of BoardGame maps into a map of maps, keyed on the :id of each BoardGame.
The partial function is a real workhorse in Clojure code; it takes an existing function and a set of initial arguments to that function, and returns a new function that collects the remaining arguments needed by the original function.
This returned function will accept the standard field resolver arguments – context, args, and value – and pass games-map plus those three arguments to resolve-game-by-id.
This is one common example of the use of higher-order functions. It’s not as complicated as the term might lead you to believe: just that functions can be arguments to, and return values from, other functions.
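To see partial in isolation, with an ordinary function:

```clojure
;; add-ten collects its remaining arguments and passes them,
;; along with 10, to +.
(def add-ten (partial + 10))

(add-ten 5 7)
;; => 22
```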
Running Queries¶
We’re finally almost ready to run queries … but first, let’s get rid of that #ordered/map business.
(ns user
(:require [my.clojure-game-geek.schema :as s]
[com.walmartlabs.lacinia :as lacinia]
[clojure.walk :as walk])
(:import (clojure.lang IPersistentMap)))
(def schema (s/load-schema))
(defn simplify
"Converts all ordered maps nested within the map into standard hash maps, and
sequences into vectors, which makes for easier constants in the tests, and eliminates ordering problems."
[m]
(walk/postwalk
(fn [node]
(cond
(instance? IPersistentMap node)
(into {} node)
(seq? node)
(vec node)
:else
node))
m))
(defn q
[query-string]
(-> (lacinia/execute schema query-string nil nil)
simplify))
This simplify function finds all the ordered maps and converts them into ordinary maps.
It also finds any lists and converts them to vectors.
With that in place, we’re ready to reload our code [2], and then run some queries:
(q "{ gameById(id: \"anything\") { id name summary }}")
=> {:data {:gameById nil}}
This hasn’t changed [3], except that, because of simplify, the final result is just standard maps, which are easier to look at in the REPL.
However, we can also get real data back from our query:
(q "{ gameById(id: \"1236\") { id name summary minPlayers }}")
=>
{:data {:gameById {:id "1236",
:name "Tiny Epic Galaxies",
:summary "Fast dice-based sci-fi space game with a bit of chaos",
:minPlayers 1}}}
Success! Lacinia has parsed our query string and executed it against our compiled schema. At the correct time, it dropped into our resolver function, which supplied the data that Lacinia then sliced and diced to compose the result map.
You should be able to devise and execute other simple queries at this point.
Summary¶
We’ve extended our schema and field resolvers with test data and are getting some actual data back when we execute a query.
Next up, we’ll continue extending the schema, and start discussing relationships between GraphQL types.
[1] | This is overkill for this very simple case, but it’s nice to demonstrate techniques that are likely to be used in real applications. |
[2] | How to reload your code is going to be specific to your IDE; Cursive, for example, provides a command to load a file into the REPL. If you are new to Clojure or not using Cursive, this is a big area to dive into; you can start with the Programming at the REPL guide. |
[3] | This REPL output is a bit different from earlier examples; we’ve switched from the standard clj REPL to the Cursive REPL; the latter pretty-prints the returned values. |
Adding Designers¶
So far, we’ve been working with just a single object type, BoardGame.
Let’s see what we can do when we add the Designer object type to the mix.
Initially, we’ll define each Designer in terms of an id, a name, and an optional home page URL.
{:games
[{:id "1234"
:name "Zertz"
:summary "Two player abstract with forced moves and shrinking board"
:designers #{"200"}
:minPlayers 2
:maxPlayers 2}
{:id "1235"
:name "Dominion"
:summary "Created the deck-building genre; zillions of expansions"
:designers #{"204"}
:minPlayers 2}
{:id "1236"
:name "Tiny Epic Galaxies"
:summary "Fast dice-based sci-fi space game with a bit of chaos"
:designers #{"203"}
:minPlayers 1
:maxPlayers 4}
{:id "1237"
:name "7 Wonders: Duel"
:summary "Tense, quick card game of developing civilizations"
:designers #{"201" "202"}
:minPlayers 2
:maxPlayers 2}]
:designers
[{:id "200"
:name "Kris Burm"
:url "http://www.gipf.com/project_gipf/burm/burm.html"}
{:id "201"
:name "Antoine Bauza"
:url "http://www.antoinebauza.fr/"}
{:id "202"
:name "Bruno Cathala"
:url "http://www.brunocathala.com/"}
{:id "203"
:name "Scott Almes"}
{:id "204"
:name "Donald X. Vaccarino"}]}
If this were a relational database, we’d likely have a join table between BoardGame and Designer, but that can come later. For now, we have a set of designer ids inside each BoardGame.
Schema Changes¶
{:objects
{:BoardGame
{:description "A physical or virtual board game."
:fields
{:id {:type (non-null ID)}
:name {:type (non-null String)}
:summary {:type String
:description "A one-line summary of the game."}
:description {:type String
:description "A long-form description of the game."}
:designers {:type (non-null (list :Designer))
:description "Designers who contributed to the game."}
:minPlayers {:type Int
:description "The minimum number of players the game supports."}
:maxPlayers {:type Int
:description "The maximum number of players the game supports."}
:playTime {:type Int
:description "Play time, in minutes, for a typical game."}}}
:Designer
{:description "A person who may have contributed to a board game design."
:fields
{:id {:type (non-null ID)}
:name {:type (non-null String)}
:url {:type String
:description "Home page URL, if known."}
:games {:type (non-null (list :BoardGame))
:description "Games designed by this designer."}}}
:Query
{:fields
{:gameById
{:type :BoardGame
:description "Access a BoardGame by its unique id, if it exists."
:args
{:id {:type ID}}}}}}}
We’ve added a designers field to BoardGame, and added a new Designer type.
In Lacinia, we use a wrapper, list, around a type, to denote a list of that type.
In the EDN, the list wrapper is applied using the syntax of a function call in Clojure code.
A second wrapper, non-null, is used when a value must be present, and not null (or nil in Clojure).
By default, all values can be nil, and that flexibility is encouraged, so non-null is rarely used.
Here we’ve defined the designers field as (non-null (list :Designer)).
This is somewhat overkill (the world won’t end if the result map contains a nil instead of an empty list), but it demonstrates that the list and non-null modifiers can nest properly.
We could go further: (non-null (list (non-null :Designer))) … but that’s adding far more complexity than value.
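For reference, here are the four ways the two modifiers can combine; the field names are purely illustrative:

```clojure
{:fields
 {:a {:type (list :Designer)}                         ; list may be nil, elements may be nil
  :b {:type (non-null (list :Designer))}              ; the list itself may not be nil
  :c {:type (list (non-null :Designer))}              ; elements may not be nil
  :d {:type (non-null (list (non-null :Designer)))}}} ; neither may be nil
```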
We need a field resolver for the designers field, to convert from what’s in our data (a set of designer ids) into what we are promising in the schema: a list of Designer objects.
Likewise, we need a field resolver in the Designer entity to figure out which BoardGames are associated with the designer.
Code Changes¶
(ns my.clojure-game-geek.schema
"Contains custom resolvers and a function to provide the full schema."
(:require [clojure.java.io :as io]
[com.walmartlabs.lacinia.util :as util]
[com.walmartlabs.lacinia.schema :as schema]
[clojure.edn :as edn]))
(defn resolve-game-by-id
[games-map context args value]
(let [{:keys [id]} args]
(get games-map id)))
(defn resolve-board-game-designers
[designers-map context args board-game]
(->> board-game
:designers
(map designers-map)))
(defn resolve-designer-games
[games-map context args designer]
(let [{:keys [id]} designer]
(->> games-map
vals
(filter #(-> % :designers (contains? id))))))
(defn entity-map
[data k]
(reduce #(assoc %1 (:id %2) %2)
{}
(get data k)))
(defn resolver-map
[]
(let [cgg-data (-> (io/resource "cgg-data.edn")
slurp
edn/read-string)
games-map (entity-map cgg-data :games)
designers-map (entity-map cgg-data :designers)]
{:Query/gameById (partial resolve-game-by-id games-map)
:BoardGame/designers (partial resolve-board-game-designers designers-map)
:Designer/games (partial resolve-designer-games games-map)}))
(defn load-schema
[]
(-> (io/resource "cgg-schema.edn")
slurp
edn/read-string
(util/inject-resolvers (resolver-map))
schema/compile))
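The entity-map helper converts a sequence of entity maps into a map keyed on each entity’s :id, which makes the by-id lookups in the resolvers cheap. Here it is in isolation, with abbreviated sample data:

```clojure
;; The same helper as in the namespace above: builds an id -> entity map.
(defn entity-map
  [data k]
  (reduce #(assoc %1 (:id %2) %2)
          {}
          (get data k)))

(def data
  {:designers [{:id "200" :name "Kris Burm"}
               {:id "203" :name "Scott Almes"}]})

(entity-map data :designers)
;; => {"200" {:id "200", :name "Kris Burm"},
;;     "203" {:id "203", :name "Scott Almes"}}
```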
As with all field resolvers [1], resolve-board-game-designers is passed the containing resolved value (a BoardGame map, in this case) and, in turn, resolves the next step down: here, a list of Designers.
This is an important point: the data from your external source does not have to be in the shape described by your schema … you just must be able to transform it into that shape. Field resolvers come into play both when you need to fetch data from an external source, and when you need to reshape that data to match the schema.
For example, in our BoardGame values, the :designers key is a set of designer ids, but in our schema the BoardGame designers field is a list of Designer objects.
The resolve-board-game-designers resolver function dynamically reshapes the BoardGame value’s data into the shape mandated by the GraphQL schema.
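Stripped of the Lacinia machinery, that reshaping is just a map lookup over the id set (sample data abbreviated):

```clojure
(def designers-map
  {"201" {:id "201" :name "Antoine Bauza"}
   "202" {:id "202" :name "Bruno Cathala"}})

(def board-game
  {:id "1237" :name "7 Wonders: Duel" :designers #{"201" "202"}})

;; A Clojure map acts as a function of its keys, so mapping
;; designers-map over the id set performs the lookups directly.
(->> board-game
     :designers
     (map designers-map))
;; => the two designer maps (in the set's iteration order)
```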
GraphQL doesn’t make any guarantees about order of values in a list field; when it matters, it falls on us to add documentation to describe the order, or even provide field arguments to let the client specify the order.
The inverse of resolve-board-game-designers is resolve-designer-games.
It starts with a Designer and uses the Designer’s id as a filter to find BoardGames whose :designers set contains the id.
Testing It Out¶
After reloading code in the REPL, we can exercise these new types and relationships:
(q "{ gameById(id: \"1237\") { name designers { name }}}")
=> {:data {:gameById {:name "7 Wonders: Duel",
:designers [{:name "Antoine Bauza"}
{:name "Bruno Cathala"}]}}}
For the first time, we’re seeing the “graph” in GraphQL.
An important part of GraphQL is that your query must always extend to scalar fields; if you select a field that is a compound type, such as BoardGame/designers, Lacinia will report an error instead:
(q "{ gameById(id: \"1237\") { name designers }}")
=>
{:errors [{:message "Field `designers' (of type `Designer') must have at least one selection.",
:locations [{:line 1, :column 25}]}]}
Notice how the :data key is not present here … that indicates that the error occurred during the parse and prepare phases, before execution began in earnest.
To really demonstrate navigation, we can go from BoardGame to Designer and back:
(q "{ gameById(id: \"1234\") { name designers { name games { name }}}}")
=> {:data {:gameById {:name "Zertz",
:designers [{:name "Kris Burm",
:games [{:name "Zertz"}]}]}}}
Summary¶
Lacinia provides the mechanism to create relationships between entities, such as between BoardGame and Designer. It still falls on the field resolvers to provide that data for such linkages.
With that in place, the same com.walmartlabs.lacinia/execute function that gives us data about a single entity can traverse the graph and return data from a variety of entities, organized however you need it.
Next up, we’ll take what we have and make it easy to access via HTTP.
[1] | Root resolvers, such as for the gameById query operation, are the
exception: they are passed a nil value. |
Lacinia Pedestal¶
Working from the REPL is important, but ultimately GraphQL exists to provide a web-based API. Fortunately, it is very easy to get your Lacinia application up on the web, on top of the Pedestal web tier, using the lacinia-pedestal library.
In addition, for free, we get GraphQL’s own REPL: GraphiQL.
Add Dependencies¶
{:paths ["src" "resources"]
:deps {org.clojure/clojure {:mvn/version "1.11.1"}
com.walmartlabs/lacinia {:mvn/version "1.2-alpha-4"}
com.walmartlabs/lacinia-pedestal {:mvn/version "1.1"}
io.aviso/logging {:mvn/version "1.0"}}
:aliases
{:run-m {:main-opts ["-m" "my.clojure-game-geek"]}
:run-x {:ns-default my.clojure-game-geek
:exec-fn greet
:exec-args {:name "Clojure"}}
:build {:deps {io.github.seancorfield/build-clj
{:git/tag "v0.8.2" :git/sha "0ffdb4c"
;; since we're building an app uberjar, we do not
;; need deps-deploy for clojars.org deployment:
:deps/root "slim"}}
:ns-default build}
:dev {:extra-paths ["dev-resources"]}
:test {:extra-paths ["test"]
:extra-deps {org.clojure/test.check {:mvn/version "1.1.1"}
io.github.cognitect-labs/test-runner
{:git/tag "v0.5.0" :git/sha "48c3c67"}}}}}
We’ve added two libraries: lacinia-pedestal
and io.aviso/logging
.
The former brings in quite a few dependencies, including Pedestal, and the underlying Jetty layer that Pedestal builds upon.
The io.aviso/logging
library sets up
Logback as the logging library.
Clojure and Java are both rich with web and logging frameworks; Pedestal and Logback are simply particular choices that we’ve made and prefer; many other people are using Lacinia on the web without using Logback or Pedestal.
Some Configuration¶
For best results, we can configure Logback; this keeps startup and request handling from being very chatty:
<configuration scan="true" scanPeriod="1 seconds">
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%-5level %logger - %msg%n</pattern>
</encoder>
</appender>
<root level="warn">
<appender-ref ref="STDOUT"/>
</root>
</configuration>
This configuration hides log events below the warning level (that is, debug and info events). If any warnings or errors do occur, minimal output is sent to the console.
A logback-test.xml configuration takes precedence over the production logback.xml configuration we will eventually supply.
User Namespace¶
We’ll add more scaffolding to the user
namespace, to make it possible to start and stop
the Pedestal server.
(ns user
(:require [my.clojure-game-geek.schema :as s]
[com.walmartlabs.lacinia :as lacinia]
[com.walmartlabs.lacinia.pedestal2 :as lp]
[io.pedestal.http :as http]
[clojure.java.browse :refer [browse-url]]
[clojure.walk :as walk])
(:import (clojure.lang IPersistentMap)))
(def schema (s/load-schema))
(defn simplify
"Converts all ordered maps nested within the map into standard hash maps, and
sequences into vectors, which makes for easier constants in the tests, and eliminates ordering problems."
[m]
(walk/postwalk
(fn [node]
(cond
(instance? IPersistentMap node)
(into {} node)
(seq? node)
(vec node)
:else
node))
m))
(defn q
[query-string]
(-> (lacinia/execute schema query-string nil nil)
simplify))
(defonce server nil)
(defn start-server
[_]
(let [server (-> (lp/default-service schema nil)
http/create-server
http/start)]
(browse-url "http://localhost:8888/ide")
server))
(defn stop-server
[server]
(http/stop server)
nil)
(defn start
[]
(alter-var-root #'server start-server)
:started)
(defn stop
[]
(alter-var-root #'server stop-server)
:stopped)
This new code is almost entirely boilerplate for Pedestal and for Lacinia-Pedestal.
The core function is com.walmartlabs.lacinia.pedestal2/default-service [1], which is passed the compiled schema and a map of options, and returns a Pedestal service map that is then used to create the Pedestal server.
By default, incoming GraphQL POST requests are handled at the /api path, and the default port is 8888. We’ll get to the details later.
The /ide path (which is opened at startup), and related JavaScript and CSS resources, can only be accessed when GraphiQL is enabled.
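GraphiQL aside, the /api endpoint can be exercised with any HTTP client. For example, with the server running, and assuming the default port and path, something like this should work:

```shell
curl http://localhost:8888/api \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ gameById(id: \"1237\") { name } }"}'
```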
Starting The Server¶
With the above scaffolding in place, it is just a matter of starting the REPL and evaluating (start).
At this point, your web browser should open to the GraphiQL application:
[screenshot: the GraphiQL IDE in the browser]
Tip
It’s really worth following along with this section, especially if you haven’t played with GraphiQL before. GraphiQL assists you with formatting, provides pop-up help, flags errors in your query, and supplies automatic input completion. It can even pretty print your query. It makes for quite the demo!
Running Queries¶
We can now type a query into the large text area on the left and then click the right arrow button (or type Command+Enter), and see the server response as pretty-printed JSON on the right:
[screenshot: a query and its JSON response in GraphiQL]
Notice that the URL bar in the browser has updated: it contains the full query string.
This means that you can bookmark a query you like for later (though it’s easier to access prior queries using the History button).
Importantly, you can copy that URL and provide it to other developers. They can start up the application on their workstations and see exactly what you see, a real boon for describing and diagnosing problems.
This approach works even better when you keep a GraphQL server running on a shared staging server. On split [2] teams, the developers creating the application can easily explore the interface exposed by the GraphQL server, even before writing their first line of client-side code.
Trust me, they love that.
You’ll notice that the returned map is in JSON format, not EDN, and that it includes a lot more information in the extensions key. This is optional tracing information, where Lacinia identifies how it spent time processing the request. This is an example of something that’s automatic when using default-service that you’ll definitely want to turn off in production.
Documentation Browser¶
The < Docs
button on the right opens the documentation browser:
[screenshot: the GraphiQL documentation browser]
The documentation browser is invaluable: it allows you to navigate around your schema, drilling down to objects, fields, and types to see a summary of each declaration, as well as documentation - those :description values we added way back at the beginning.
Take some time to learn what GraphiQL can do for you.
Summary¶
It takes very little effort, just a dependency change and a little boilerplate code, to expose our little application to the web, and along the way, we gain access to the powerful GraphiQL IDE.
Next up, we’ll look into reorganizing our code for later growth by adding a layer of components atop our code.
[1] | Why pedestal2? The initial version of lacinia-pedestal had a slightly different approach to setting up Pedestal that proved to be problematic; it also supported some outdated ideas about how to process incoming requests. For compatibility, the original namespace, com.walmartlabs.lacinia.pedestal, was left functionally as-is, but a new namespace, pedestal2, was created to address the concerns. |
[2] | That is, where one team or set of developers just does the user interface, and the other team just does the server side (including Lacinia). Part of the value proposition for GraphQL is how clean and uniform this split can be. |
Refactoring to Components¶
Before we add the next bit of functionality to our application, it’s time to take a small detour, into the use of Stuart Sierra’s Component library. [1]
As Clojure programs grow, the namespaces, and relationships between those
namespaces, grow in number and complexity.
In our previous example, we saw that the logic to
start the Jetty instance was strewn across the user
namespace.
This isn’t a problem in our toy application, but as a real application grows, we’d start to see some issues and concerns:
- A single ‘startup’ namespace (maybe with a
-main
method) imports every single other namespace. - Potential for duplication or conflict between the real startup code and the test startup code. [2]
- Is there a good way to stop things, say, between tests?
- Is there a way to mock parts of the system (for testing purposes)?
- We really want to avoid a proliferation of global variables. Ideally, none!
Component is a simple, no-nonsense way to achieve the above goals. It gives you a clear way to organize your code, and it does things in a fully functional way: no globals, no update-in-place, and easy to reason about.
The building-block of Component is, unsurprisingly, components. These components are simply ordinary Clojure maps – though for reasons we’ll discuss shortly, Clojure record types are more typically used.
The components are formed into a system, which again is just a map. Each component has a unique, well-known key in the system map.
Components may have dependencies on other components. That’s where the true value of the library comes into play.
Components may have a lifecycle; if they do, they implement the Lifecycle protocol, containing the methods start and stop.
This is why many components are implemented as Clojure records … records can implement a protocol, but simple maps can’t.
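A tiny illustration of that distinction, using a stand-in protocol (the names here are illustrative, not Component’s actual API):

```clojure
;; Lifecycle* stands in for Component's Lifecycle protocol.
(defprotocol Lifecycle*
  (start* [this]))

;; A record can implement the protocol inline:
(defrecord Connection [state]
  Lifecycle*
  (start* [this]
    (assoc this :state :open)))

(:state (start* (map->Connection {})))
;; => :open

;; A plain map has no such implementation; (start* {}) would throw
;; an IllegalArgumentException.
```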
Rather than get into the minutiae, let’s see how it all fits together in our Clojure Game Geek application.
Add Dependencies¶
{:paths ["src" "resources"]
:deps {org.clojure/clojure {:mvn/version "1.11.1"}
com.walmartlabs/lacinia {:mvn/version "1.2-alpha-4"}
com.walmartlabs/lacinia-pedestal {:mvn/version "1.1"}
com.stuartsierra/component {:mvn/version "1.1.0"}
io.aviso/logging {:mvn/version "1.0"}}
:aliases
{:run-m {:main-opts ["-m" "my.clojure-game-geek"]}
:run-x {:ns-default my.clojure-game-geek
:exec-fn greet
:exec-args {:name "Clojure"}}
:build {:deps {io.github.seancorfield/build-clj
{:git/tag "v0.8.2" :git/sha "0ffdb4c"
;; since we're building an app uberjar, we do not
;; need deps-deploy for clojars.org deployment:
:deps/root "slim"}}
:ns-default build}
:dev {:extra-paths ["dev-resources"]}
:test {:extra-paths ["test"]
:extra-deps {org.clojure/test.check {:mvn/version "1.1.1"}
io.github.cognitect-labs/test-runner
{:git/tag "v0.5.0" :git/sha "48c3c67"}}}}}
We’ve added the component
library.
System Map¶
We’re starting quite small, with just two components in our system:
![digraph {
server [label=":server"]
schema [label=":schema-provider"]
server -> schema
}](_images/graphviz-ee719bc98fc45a751518f83f796355d5f7334a34.png)
The :server
component is responsible for setting up the Pedestal service,
which requires a compiled Lacinia schema.
The :schema-provider
component exposes that schema as its :schema
key.
Later, we’ll be adding additional components for other logic, such as database connections, thread pools, authentication/authorization checks, caching, and so forth. But it’s easier to start small.
What does it mean for one component to depend on another? Dependencies are acted upon when the system is started (and again when the system is stopped).
The dependencies influence the order in which each component is started.
Here, :schema-provider is started before :server, as :server depends on :schema-provider.
Secondly, the started version of a dependency is assoc-ed into the dependant component.
After :schema-provider starts, the started version of the component will be assoc-ed as the :schema-provider key of the :server component.
Once a component has its dependencies assoc-ed in, and is itself started (more on that in a moment), it may be assoc-ed into further components.
The Component library embraces Clojure’s core concept of identity vs. state; the identity of the component is its key in the system map … its state is a series of transformations of the initial map.
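Stripped down to plain maps and assoc, the start sequence described above looks like this; a hand-simulation, not the library’s actual implementation:

```clojure
;; 1. Starting :schema-provider produces a new, started component value:
(def schema-provider {:schema nil})
(def started-provider (assoc schema-provider :schema :compiled-schema))

;; 2. The started dependency is assoc-ed into the dependant component,
;;    under the dependency's system key:
(def server (assoc {:server nil} :schema-provider started-provider))

;; 3. When :server starts, it reaches the schema through that key:
(get-in server [:schema-provider :schema])
;; => :compiled-schema
```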
:schema-provider component¶
The my.clojure-game-geek.schema
namespace has been extended to provide
the :schema-provider
component.
(ns my.clojure-game-geek.schema
"Contains custom resolvers and a function to provide the full schema."
(:require [clojure.java.io :as io]
[com.stuartsierra.component :as component]
[com.walmartlabs.lacinia.util :as util]
[com.walmartlabs.lacinia.schema :as schema]
[clojure.edn :as edn]))
(defn resolve-game-by-id
[games-map context args value]
(let [{:keys [id]} args]
(get games-map id)))
(defn resolve-board-game-designers
[designers-map context args board-game]
(->> board-game
:designers
(map designers-map)))
(defn resolve-designer-games
[games-map context args designer]
(let [{:keys [id]} designer]
(->> games-map
vals
(filter #(-> % :designers (contains? id))))))
(defn entity-map
[data k]
(reduce #(assoc %1 (:id %2) %2)
{}
(get data k)))
(defn resolver-map
[]
(let [cgg-data (-> (io/resource "cgg-data.edn")
slurp
edn/read-string)
games-map (entity-map cgg-data :games)
designers-map (entity-map cgg-data :designers)]
{:Query/gameById (partial resolve-game-by-id games-map)
:BoardGame/designers (partial resolve-board-game-designers designers-map)
:Designer/games (partial resolve-designer-games games-map)}))
(defn load-schema
[]
(-> (io/resource "cgg-schema.edn")
slurp
edn/read-string
(util/inject-resolvers (resolver-map))
schema/compile))
(defrecord SchemaProvider [schema]
component/Lifecycle
(start [this]
(assoc this :schema (load-schema)))
(stop [this]
(assoc this :schema nil)))
The significant changes are at the bottom of the namespace. There’s a new record, SchemaProvider, that implements the Lifecycle protocol.
Lifecycle is optional; trivial components may not need it.
In our case, we use the start method as an opportunity to load and compile the Lacinia schema.
When you implement a protocol, you must implement all the methods of the protocol.
In Component’s Lifecycle protocol, you typically will undo in stop whatever you did in start.
For example, a Component that manages a database connection will open it in start and close it in stop.
Here we just get rid of the compiled schema, [3] but it is also common and acceptable for a stop method to just return this if the component doesn’t have external resources, such as a database connection, to manage.
:server component¶
Next, we’ll add the my.clojure-game-geek.server namespace to provide the :server component.
(ns my.clojure-game-geek.server
(:require [com.stuartsierra.component :as component]
[com.walmartlabs.lacinia.pedestal2 :as lp]
[io.pedestal.http :as http]))
(defrecord Server [schema-provider server]
component/Lifecycle
(start [this]
(assoc this :server (-> schema-provider
:schema
(lp/default-service nil)
http/create-server
http/start)))
(stop [this]
(http/stop server)
(assoc this :server nil)))
Much of the code previously in the user namespace has moved here.
You can see how the components work together inside the start method.
The Component library has assoc-ed the :schema-provider component into the :server component, so it’s possible to get the :schema key and build the Pedestal server from it.
start and stop methods often have side-effects. This is explicit here, with the call to http/stop before clearing the :server key.
stop is especially important in this component: without the call to http/stop, the system would shut down, but the Jetty instance would continue to listen on port 8888. This kind of side-effect is exactly what the Lifecycle protocol is used for.
system namespace¶
We’ll create a new my.clojure-game-geek.system
namespace just to put together the Component system map.
(ns my.clojure-game-geek.system
(:require [com.stuartsierra.component :as component]
[my.clojure-game-geek.schema :as schema]
[my.clojure-game-geek.server :as server]))
(defn new-system
[]
(assoc (component/system-map)
:server (component/using (server/map->Server {})
[:schema-provider])
:schema-provider (schema/map->SchemaProvider {})))
The call to component/using
establishes the dependency between the components.
You can imagine that, as the system grows larger, so will this namespace. But at the same time, the namespaces for the individual components will only need to know about the namespaces of components they directly depend upon.
user namespace¶
Next, we’ll look at changes to the user
namespace:
(ns user
(:require [com.stuartsierra.component :as component]
[my.clojure-game-geek.system :as system]
[com.walmartlabs.lacinia :as lacinia]
[clojure.java.browse :refer [browse-url]]
[clojure.walk :as walk])
(:import (clojure.lang IPersistentMap)))
(defn simplify
"Converts all ordered maps nested within the map into standard hash maps, and
sequences into vectors, which makes for easier constants in the tests, and eliminates ordering problems."
[m]
(walk/postwalk
(fn [node]
(cond
(instance? IPersistentMap node)
(into {} node)
(seq? node)
(vec node)
:else
node))
m))
(defonce system (system/new-system))
(defn q
[query-string]
(-> system
:schema-provider
:schema
(lacinia/execute query-string nil nil)
simplify))
(defn start
[]
(alter-var-root #'system component/start-system)
(browse-url "http://localhost:8888/ide")
:started)
(defn stop
[]
(alter-var-root #'system component/stop-system)
:stopped)
The user namespace has shrunk; previously it was responsible for loading the schema, and for creating and starting the Pedestal service; the code for all that has shifted to the individual components.
Instead, the user namespace uses the my.clojure-game-geek.system/new-system function to create a system map, and can use start-system and stop-system on that system map: no direct knowledge of loading schemas or starting and stopping Pedestal is present any longer.
The user namespace previously had vars for both the schema and the Pedestal service. Now it only has a single var, for the Component system map.
Interestingly, as our system grows in later chapters, the user namespace will likely not change at all; just the system map it gets from new-system will expand.
The only wrinkle here is in the q function; since there’s no longer a local schema var, it is necessary to pull the :schema-provider component from the system map, and extract the schema from that component.
Summary¶
Even with just two components, using the Component library simplifies our code, and lays the groundwork for rapidly expanding the behaviour of the application.
In the next chapter, we’ll look at adding new queries and types to the schema, in preparation to adding our first mutations.
[1] | Stuart Sierra provides a really good explanation of Component in his Clojure/West 2014 talk. |
[2] | We’ve been sloppy so far, in that we haven’t even thought about testing. That will change later in the tutorial. |
[3] | You might be tempted to use a dissoc here, but if you
dissoc a declared key of a record, the result is an ordinary
map, which can break tests that rely on repeatedly starting and stopping
the system. |
[4] | This is just one approach; another would be to provide a function
that assoc -ed the component into the system map. |
Adding Members and Ratings¶
We’re now starting an arc towards adding our first mutations.
We’re going to extend our schema to add Members (the name for a user of the Clojure Game Geek web site), and GameRatings … how a member has rated a game, on a scale of one to five.
Each Member can rate any BoardGame, but can only rate any single game once.
Schema Changes¶
First, let’s add new fields, types, and queries to support these new features:
{:objects
{:BoardGame
{:description "A physical or virtual board game."
:fields
{:id {:type (non-null ID)}
:name {:type (non-null String)}
:summary {:type String
:description "A one-line summary of the game."}
:ratingSummary {:type (non-null :GameRatingSummary)
:description "Summarizes member ratings for the game."}
:description {:type String
:description "A long-form description of the game."}
:designers {:type (non-null (list :Designer))
:description "Designers who contributed to the game."}
:minPlayers {:type Int
:description "The minimum number of players the game supports."}
:maxPlayers {:type Int
:description "The maximum number of players the game supports."}
:playTime {:type Int
:description "Play time, in minutes, for a typical game."}}}
:GameRatingSummary
{:description "Summary of ratings for a single game."
:fields
{:count {:type (non-null Int)
:description "Number of ratings provided for the game. Ratings are 1 to 5 stars."}
:average {:type (non-null Float)
:description "The average value of all ratings, or 0 if never rated."}}}
:Member
{:description "A member of Clojure Game Geek. Members can rate games."
:fields
{:id {:type (non-null ID)}
:name {:type (non-null String)
:description "Unique name of the member."}
:ratings {:type (list :GameRating)
:description "List of games and ratings provided by this member."}}}
:GameRating
{:description "A member's rating of a particular game."
:fields
{:game {:type (non-null :BoardGame)
:description "The Game rated by the member."}
:rating {:type (non-null Int)
:description "The rating as 1 to 5 stars."}}}
:Designer
{:description "A person who may have contributed to a board game design."
:fields
{:id {:type (non-null ID)}
:name {:type (non-null String)}
:url {:type String
:description "Home page URL, if known."}
:games {:type (non-null (list :BoardGame))
:description "Games designed by this designer."}}}
:Query
{:fields
{:gameById
{:type :BoardGame
:description "Access a BoardGame by its unique id, if it exists."
:args
{:id {:type ID}}}
:memberById
{:type :Member
:description "Access a ClojureGameGeek Member by their unique id, if it exists."
:args
{:id {:type (non-null ID)}}}}}}}
For a particular BoardGame, you can get just a simple summary of the ratings: the total number, and a simple average.
We’ve added a new top-level entity, Member. From a Member, you can get a detailed list of all the games that member has rated.
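The count and average behind a GameRatingSummary are a small reduction over the ratings data. Here is a sketch of the computation, with abbreviated sample data, mirroring the rating-summary resolver we’ll add below:

```clojure
(def ratings
  [{:member-id "37" :game-id "1234" :rating 3}
   {:member-id "1410" :game-id "1234" :rating 5}
   {:member-id "1410" :game-id "1236" :rating 4}])

;; Count and average of a single game's ratings; the average is 0 for
;; a game that has never been rated, matching the schema's description.
(defn rating-summary-for
  [ratings game-id]
  (let [rs (->> ratings
                (filter #(= game-id (:game-id %)))
                (map :rating))
        n (count rs)]
    {:count n
     :average (if (zero? n)
                0
                (/ (apply + rs) (float n)))}))

(rating-summary-for ratings "1234")
;; => {:count 2, :average 4.0}
```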
Data Changes¶
We’ll model these ratings in our test data, much as we would a many-to-many relationship within a SQL database:
{:games
[{:id "1234"
:name "Zertz"
:summary "Two player abstract with forced moves and shrinking board"
:designers #{"200"}
:minPlayers 2
:maxPlayers 2}
{:id "1235"
:name "Dominion"
:summary "Created the deck-building genre; zillions of expansions"
:designers #{"204"}
:minPlayers 2}
{:id "1236"
:name "Tiny Epic Galaxies"
:summary "Fast dice-based sci-fi space game with a bit of chaos"
:designers #{"203"}
:minPlayers 1
:maxPlayers 4}
{:id "1237"
:name "7 Wonders: Duel"
:summary "Tense, quick card game of developing civilizations"
:designers #{"201" "202"}
:minPlayers 2
:maxPlayers 2}]
:members
[{:id "37"
:name "curiousattemptbunny"}
{:id "1410"
:name "bleedingedge"}
{:id "2812"
:name "missyo"}]
:ratings
[{:member-id "37" :game-id "1234" :rating 3}
{:member-id "1410" :game-id "1234" :rating 5}
{:member-id "1410" :game-id "1236" :rating 4}
{:member-id "1410" :game-id "1237" :rating 4}
{:member-id "2812" :game-id "1237" :rating 4}
{:member-id "37" :game-id "1237" :rating 5}]
:designers
[{:id "200"
:name "Kris Burm"
:url "http://www.gipf.com/project_gipf/burm/burm.html"}
{:id "201"
:name "Antoine Bauza"
:url "http://www.antoinebauza.fr/"}
{:id "202"
:name "Bruno Cathala"
:url "http://www.brunocathala.com/"}
{:id "203"
:name "Scott Almes"}
{:id "204"
:name "Donald X. Vaccarino"}]}
New Resolvers¶
Our schema changes introduced a few new field resolvers, which we must implement:
(ns my.clojure-game-geek.schema
"Contains custom resolvers and a function to provide the full schema."
(:require [clojure.java.io :as io]
[com.stuartsierra.component :as component]
[com.walmartlabs.lacinia.util :as util]
[com.walmartlabs.lacinia.schema :as schema]
[clojure.edn :as edn]))
(defn resolve-element-by-id
[element-map]
(fn [context args value]
(let [{:keys [id]} args]
(get element-map id))))
(defn resolve-board-game-designers
[designers-map context args board-game]
(->> board-game
:designers
(map designers-map)))
(defn resolve-designer-games
[games-map context args designer]
(let [{:keys [id]} designer]
(->> games-map
vals
(filter #(-> % :designers (contains? id))))))
(defn entity-map
[data k]
(reduce #(assoc %1 (:id %2) %2)
{}
(get data k)))
(defn rating-summary
[ratings]
(fn [_ _ board-game]
(let [id (:id board-game)
ratings' (->> ratings
(filter #(= id (:game-id %)))
(map :rating))
n (count ratings')]
{:count n
:average (if (zero? n)
0
(/ (apply + ratings')
(float n)))})))
(defn member-ratings
[ratings-map]
(fn [_ _ member]
(let [id (:id member)]
(filter #(= id (:member-id %)) ratings-map))))
(defn game-rating->game
[games-map]
(fn [_ _ game-rating]
(get games-map (:game-id game-rating))))
(defn resolver-map
[]
(let [cgg-data (-> (io/resource "cgg-data.edn")
slurp
edn/read-string)
games-map (entity-map cgg-data :games)
designers-map (entity-map cgg-data :designers)
members-map (entity-map cgg-data :members)
ratings (:ratings cgg-data)]
{:Query/gameById (resolve-element-by-id games-map)
:Query/memberById (resolve-element-by-id members-map)
:BoardGame/designers (partial resolve-board-game-designers designers-map)
:BoardGame/ratingSummary (rating-summary ratings)
:Designer/games (partial resolve-designer-games games-map)
:Member/ratings (member-ratings ratings)
:GameRating/game (game-rating->game games-map)}))
(defn load-schema
[]
(-> (io/resource "cgg-schema.edn")
slurp
edn/read-string
(util/inject-resolvers (resolver-map))
schema/compile))
(defrecord SchemaProvider [schema]
component/Lifecycle
(start [this]
(assoc this :schema (load-schema)))
(stop [this]
(assoc this :schema nil)))
We’ve generalized resolve-game-by-id
into resolve-element-by-id
so that we could
re-use the logic for the memberById
query. This is another example
of a higher order function, in that it is a function that is passed in a map
and returns a new function that closes [1] over the provided element map (a map of BoardGames in
one case, a map of Members in the other).
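In miniature, the factory/closure pattern looks like this (the definition is repeated so the sketch stands alone, and the element map is a tiny hypothetical example):

```clojure
;; The factory closes over element-map; the returned function is the
;; actual field resolver, with the standard (context args value) signature.
(defn resolve-element-by-id
  [element-map]
  (fn [context args value]
    (get element-map (:id args))))

;; Build a resolver from a hypothetical map of games keyed by id:
(def game-by-id
  (resolve-element-by-id {"1234" {:id "1234" :name "Zertz"}}))

;; Invoke it the way Lacinia would:
(game-by-id nil {:id "1234"} nil)
;; => {:id "1234", :name "Zertz"}
```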
We’ve introduced three new resolvers, rating-summary
, member-ratings
, and game-rating->game
.
These new resolvers are implemented using the same style as
resolve-element-by-id
; each function acts as a factory, returning the actual
field resolver function. For these factory functions, no use of partial
is needed.
This new pattern is closer to what we’ll end up with in a later tutorial chapter, when we see how to use a Component as a field resolver.
It’s worth emphasising again that field resolvers don’t just access data, they can transform it.
The ratingSummary
field resolver is an example of that; there’s no database entity directly
corresponding to the schema type :GameRatingSummary
, but the field resolver can build that information directly.
There doesn’t even have to be a special type or record … just a standard Clojure map
with the correctly named keys.
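For instance, the heart of the ratingSummary computation is ordinary Clojure over a seq of rating values (the numbers below are invented for illustration):

```clojure
;; Build a :GameRatingSummary value from a seq of ratings; no special
;; type or record is needed, just a map with :count and :average keys.
(defn summarize
  [ratings]
  (let [n (count ratings)]
    {:count n
     :average (if (zero? n)
                0
                (/ (apply + ratings)
                   (float n)))}))

(summarize [3 5 4])
;; => {:count 3, :average 4.0}

(summarize [])
;; => {:count 0, :average 0}
```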
Testing it Out¶
Back at the REPL, we can test out the new functionality. We need the server started after the component refactoring:
(start)
=> :started
First, select the rating summary data for a game:
(q "{ gameById(id: \"1237\") { name ratingSummary { count average }}}")
=> {:data {:gameById {:name "7 Wonders: Duel", :ratingSummary {:count 3, :average 4.333333333333333}}}}
We can also lookup a member, and find all the games they’ve rated:
(q "{ memberById(id: \"1410\") { name ratings { game { name } rating }}}")
=>
{:data {:memberById {:name "bleedingedge",
:ratings [{:game {:name "Zertz"}, :rating 5}
{:game {:name "Tiny Epic Galaxies"}, :rating 4}
{:game {:name "7 Wonders: Duel"}, :rating 4}]}}}
In fact, leveraging the “graph” in GraphQL, we can compare a member’s ratings to the averages:
(q "{ memberById(id: \"1410\") { name ratings { game { name ratingSummary { average }} rating }}}")
=>
{:data {:memberById {:name "bleedingedge",
:ratings [{:game {:name "Zertz", :ratingSummary {:average 4.0}}, :rating 5}
{:game {:name "Tiny Epic Galaxies", :ratingSummary {:average 4.0}}, :rating 4}
{:game {:name "7 Wonders: Duel", :ratingSummary {:average 4.333333333333333}}, :rating 4}]}}}
Summary¶
We’re beginning to pick up the pace, working with our application’s simple skeleton to add new types and relationships to the queries.
Next up, we’ll add new components to manage an in-memory database.
[1] | This is a computer science term that means that the value, element-map ,
will be in-scope inside the returned function after the resolve-element-by-id
function returns; the returned function is a closure and, yes, that’s part of
the basis for the name Clojure. |
Mutable Database¶
We’re still not quite ready to implement our first mutation … because we’re storing our data in an immutable map. Once again, we’re not going to take on running an external database; instead, we’ll put our immutable map inside an Atom. We’ll also do some refactoring that will make our eventual transition to an external database that much easier.
System Map¶
In the previous versions of the application, the database data was an immutable map, and all the logic
for traversing that map was inside the my.clojure-game-geek.schema
namespace.
With this change, we’re breaking things apart: there’ll be a new namespace, and a new component,
to encapsulate the database itself.
Component dependencies: :server -> :schema-provider -> :db
db namespace¶
(ns my.clojure-game-geek.db
(:require [clojure.edn :as edn]
[clojure.java.io :as io]
[com.stuartsierra.component :as component]))
(defrecord ClojureGameGeekDb [data]
component/Lifecycle
(start [this]
(assoc this :data (-> (io/resource "cgg-data.edn")
slurp
edn/read-string
atom)))
(stop [this]
(assoc this :data nil)))
(defn find-game-by-id
[db game-id]
(->> db
:data
deref
:games
(filter #(= game-id (:id %)))
first))
(defn find-member-by-id
[db member-id]
(->> db
:data
deref
:members
(filter #(= member-id (:id %)))
first))
(defn list-designers-for-game
[db game-id]
(let [designers (:designers (find-game-by-id db game-id))]
(->> db
:data
deref
:designers
(filter #(contains? designers (:id %))))))
(defn list-games-for-designer
[db designer-id]
(->> db
:data
deref
:games
(filter #(-> % :designers (contains? designer-id)))))
(defn list-ratings-for-game
[db game-id]
(->> db
:data
deref
:ratings
(filter #(= game-id (:game-id %)))))
(defn list-ratings-for-member
[db member-id]
(->> db
:data
deref
:ratings
(filter #(= member-id (:member-id %)))))
This namespace does two things:
- Defines a component in terms of a record and a constructor function
- Provides an API for database access focused upon that component
At this point, the Component is nothing more than a home for the :data
Atom.
That Atom is created and initialized inside the start
lifecycle method.
All of those data access functions follow.
This code employs a few reasonable conventions:
- The find- prefix is for functions that get data by primary key, and may return nil if not found
- The list- prefix is like find-, but returns a seq of matches
- The :db component is always the first parameter, as db
Later, when we add some mutations, we’ll define further functions and new naming and coding conventions.
The common trait for all of these is the (-> db :data deref ...)
code; in other words,
reach into the component, access the :data
property (the Atom) and deref the Atom to get the
immutable map.
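That pattern is easy to see in isolation (the definition is repeated here, with a stand-in map that mimics the shape of the ClojureGameGeekDb component):

```clojure
;; A stand-in for the :db component: a map holding an Atom under :data.
(def db {:data (atom {:games [{:id "1234" :name "Zertz"}
                              {:id "1237" :name "7 Wonders: Duel"}]})})

;; Reach into the component, deref the Atom to get the immutable map,
;; then query that map.
(defn find-game-by-id
  [db game-id]
  (->> db
       :data
       deref
       :games
       (filter #(= game-id (:id %)))
       first))

(find-game-by-id db "1237")
;; => {:id "1237", :name "7 Wonders: Duel"}
```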
Looking forward to when we do have an external database … these functions will change, but
their signatures (their parameters and return values) will not.
Any code that invokes these functions, for example the field resolver functions defined in
my.clojure-game-geek.schema
, will work, unchanged, after we swap in the external database
implementation.
system namespace¶
We need to introduce the new :db
component, and wire it into the system.
(ns my.clojure-game-geek.system
(:require [com.stuartsierra.component :as component]
[my.clojure-game-geek.schema :as schema]
[my.clojure-game-geek.server :as server]
[my.clojure-game-geek.db :as db]))
(defn new-system
[]
(assoc (component/system-map)
:db (db/map->ClojureGameGeekDb {})
:server (component/using (server/map->Server {})
[:schema-provider])
:schema-provider (component/using
(schema/map->SchemaProvider {})
[:db])))
As promised previously, namespaces that use the system (such as the user
namespace)
don’t change at all. Likewise, the :server
component (and the my.clojure-game-geek.server
namespace) don’t have to change, even though the schema used by the component has changed drastically.
schema namespace¶
The schema namespace has shrunk, and improved:
(ns my.clojure-game-geek.schema
"Contains custom resolvers and a function to provide the full schema."
(:require [clojure.java.io :as io]
[com.stuartsierra.component :as component]
[com.walmartlabs.lacinia.util :as util]
[com.walmartlabs.lacinia.schema :as schema]
[my.clojure-game-geek.db :as db]
[clojure.edn :as edn]))
(defn game-by-id
[db]
(fn [_ args _]
(db/find-game-by-id db (:id args))))
(defn member-by-id
[db]
(fn [_ args _]
(db/find-member-by-id db (:id args))))
(defn board-game-designers
[db]
(fn [_ _ board-game]
(db/list-designers-for-game db (:id board-game))))
(defn designer-games
[db]
(fn [_ _ designer]
(db/list-games-for-designer db (:id designer))))
(defn rating-summary
[db]
(fn [_ _ board-game]
(let [ratings (map :rating (db/list-ratings-for-game db (:id board-game)))
n (count ratings)]
{:count n
:average (if (zero? n)
0
(/ (apply + ratings)
(float n)))})))
(defn member-ratings
[db]
(fn [_ _ member]
(db/list-ratings-for-member db (:id member))))
(defn game-rating->game
[db]
(fn [_ _ game-rating]
(db/find-game-by-id db (:game-id game-rating))))
(defn resolver-map
[component]
(let [{:keys [db]} component]
{:Query/gameById (game-by-id db)
:Query/memberById (member-by-id db)
:BoardGame/designers (board-game-designers db)
:BoardGame/ratingSummary (rating-summary db)
:Designer/games (designer-games db)
:Member/ratings (member-ratings db)
:GameRating/game (game-rating->game db)}))
(defn load-schema
[component]
(-> (io/resource "cgg-schema.edn")
slurp
edn/read-string
(util/inject-resolvers (resolver-map component))
schema/compile))
(defrecord SchemaProvider [db schema]
component/Lifecycle
(start [this]
(assoc this :schema (load-schema this)))
(stop [this]
(assoc this :schema nil)))
Now all of the resolver functions are following the factory style, but they’re largely just wrappers
around the functions from the my.clojure-game-geek.db
namespace.
And we still don’t have any tests (the shame!), but we can exercise a lot of the system from the REPL:
(q "{ memberById(id: \"1410\") { name ratings { game { name ratingSummary { count average } designers { name games { name }}} rating }}}")
=>
{:data {:memberById {:name "bleedingedge",
:ratings [{:game {:name "Zertz",
:ratingSummary {:count 2, :average 4.0},
:designers [{:name "Kris Burm", :games [{:name "Zertz"}]}]},
:rating 5}
{:game {:name "Tiny Epic Galaxies",
:ratingSummary {:count 1, :average 4.0},
:designers [{:name "Scott Almes", :games [{:name "Tiny Epic Galaxies"}]}]},
:rating 4}
{:game {:name "7 Wonders: Duel",
:ratingSummary {:count 3, :average 4.333333333333333},
:designers [{:name "Antoine Bauza", :games [{:name "7 Wonders: Duel"}]}
{:name "Bruno Cathala", :games [{:name "7 Wonders: Duel"}]}]},
:rating 4}]}}}
Summary¶
Adding a new component to manage mutable (but still in-memory) data is straightforward, and we’ve added a new API that will remain stable when we start to use an external database.
With the mutable database ready to go, we can introduce our first mutation.
Game Rating Mutation¶
We’re finally ready to add our first mutation, which will be used to add a GameRating.
Our goal is a mutation which allows a member of ClojureGameGeek to apply a rating to a game. We must cover two cases: one where the member is adding an entirely new rating for a particular game, and one where the member is revising a prior rating of the game.
Along the way, we’ll also start to see how to handle errors; errors tend to be more common when implementing mutations than with queries.
It is implicit that queries are idempotent (they can be repeated, returning the same results, and don’t change server-side state), whereas mutations are expected to make changes to server-side state as a side-effect. However, that side-effect is essentially invisible to Lacinia, as it will occur during the execution of a field resolver function.
The difference in how Lacinia executes a query and a mutation is razor thin. When the incoming query document contains only a single top level operation, as is the case in all the examples so far in this tutorial, then there is no difference at all.
GraphQL allows a single request to contain multiple operations of the same type: multiple queries or multiple mutations.
When possible, Lacinia will execute multiple query operations in parallel [1]. Multiple mutations are always executed serially.
We’ll consider the changes here back-to-front, starting with our database (which is still just a map inside an Atom).
Database Layer Changes¶
(ns my.clojure-game-geek.db
(:require [clojure.edn :as edn]
[clojure.java.io :as io]
[com.stuartsierra.component :as component]))
(defrecord ClojureGameGeekDb [data]
component/Lifecycle
(start [this]
(assoc this :data (-> (io/resource "cgg-data.edn")
slurp
edn/read-string
atom)))
(stop [this]
(assoc this :data nil)))
(defn find-game-by-id
[db game-id]
(->> db
:data
deref
:games
(filter #(= game-id (:id %)))
first))
(defn find-member-by-id
[db member-id]
(->> db
:data
deref
:members
(filter #(= member-id (:id %)))
first))
(defn list-designers-for-game
[db game-id]
(let [designers (:designers (find-game-by-id db game-id))]
(->> db
:data
deref
:designers
(filter #(contains? designers (:id %))))))
(defn list-games-for-designer
[db designer-id]
(->> db
:data
deref
:games
(filter #(-> % :designers (contains? designer-id)))))
(defn list-ratings-for-game
[db game-id]
(->> db
:data
deref
:ratings
(filter #(= game-id (:game-id %)))))
(defn list-ratings-for-member
[db member-id]
(->> db
:data
deref
:ratings
(filter #(= member-id (:member-id %)))))
(defn ^:private apply-game-rating
[game-ratings game-id member-id rating]
(->> game-ratings
(remove #(and (= game-id (:game-id %))
(= member-id (:member-id %))))
(cons {:game-id game-id
:member-id member-id
:rating rating})))
(defn upsert-game-rating
"Adds a new game rating, or changes the value of an existing game rating."
[db game-id member-id rating]
(-> db
:data
(swap! update :ratings apply-game-rating game-id member-id rating)))
Our goal here is not efficiency (this is throwaway code); it’s to provide clear and concise code. Efficiency comes later.
To that end, the meat of the upsert, the apply-game-rating
function,
simply removes any prior row, and then adds a new
row with the provided rating value.
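The upsert semantics are easy to demonstrate in isolation (the definition is repeated so the sketch stands alone; the sample rows are invented for the example):

```clojure
(defn apply-game-rating
  [game-ratings game-id member-id rating]
  (->> game-ratings
       (remove #(and (= game-id (:game-id %))
                     (= member-id (:member-id %))))
       (cons {:game-id game-id
              :member-id member-id
              :rating rating})))

;; Member "1410" re-rates game "1236": their old row is removed and a new
;; row is prepended; other members' ratings are untouched.
(apply-game-rating [{:game-id "1236" :member-id "1410" :rating 4}
                    {:game-id "1236" :member-id "37" :rating 5}]
                   "1236" "1410" 3)
;; => ({:game-id "1236", :member-id "1410", :rating 3}
;;     {:game-id "1236", :member-id "37", :rating 5})
```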
Schema Changes¶
Our only change to the schema is to introduce the new mutation.
{:objects
{:BoardGame
{:description "A physical or virtual board game."
:fields
{:id {:type (non-null ID)}
:name {:type (non-null String)}
:summary {:type String
:description "A one-line summary of the game."}
:ratingSummary {:type (non-null :GameRatingSummary)
:description "Summarizes member ratings for the game."}
:description {:type String
:description "A long-form description of the game."}
:designers {:type (non-null (list :Designer))
:description "Designers who contributed to the game."}
:minPlayers {:type Int
:description "The minimum number of players the game supports."}
:maxPlayers {:type Int
:description "The maximum number of players the game supports."}
:playTime {:type Int
:description "Play time, in minutes, for a typical game."}}}
:GameRatingSummary
{:description "Summary of ratings for a single game."
:fields
{:count {:type (non-null Int)
:description "Number of ratings provided for the game. Ratings are 1 to 5 stars."}
:average {:type (non-null Float)
:description "The average value of all ratings, or 0 if never rated."}}}
:Member
{:description "A member of Clojure Game Geek. Members can rate games."
:fields
{:id {:type (non-null ID)}
:name {:type (non-null String)
:description "Unique name of the member."}
:ratings {:type (list :GameRating)
:description "List of games and ratings provided by this member."}}}
:GameRating
{:description "A member's rating of a particular game."
:fields
{:game {:type (non-null :BoardGame)
:description "The Game rated by the member."}
:rating {:type (non-null Int)
:description "The rating as 1 to 5 stars."}}}
:Designer
{:description "A person who may have contributed to a board game design."
:fields
{:id {:type (non-null ID)}
:name {:type (non-null String)}
:url {:type String
:description "Home page URL, if known."}
:games {:type (non-null (list :BoardGame))
:description "Games designed by this designer."}}}
:Query
{:fields
{:gameById
{:type :BoardGame
:description "Access a BoardGame by its unique id, if it exists."
:args
{:id {:type ID}}}
:memberById
{:type :Member
:description "Access a ClojureGameGeek Member by their unique id, if it exists."
:args
{:id {:type (non-null ID)}}}}}
:Mutation
{:fields
{:rateGame
{:type :BoardGame
:description "Establishes a rating of a board game, by a Member."
:args
{:gameId {:type (non-null ID)}
:memberId {:type (non-null ID)}
:rating {:type (non-null Int)
:description "Game rating as number between 1 and 5."}}}}}}}
Mutation is another special GraphQL object, much like Query. Its fields define what mutations are available in the schema.
Mutations nearly always include field arguments to define what will be affected by the mutation, and how. Here we have to provide field arguments to identify the game, the member, and the new rating.
Just as with queries, it is necessary to define what value will be resolved by the mutation; typically, when a mutation modifies a single object, that object is resolved, in its updated state.
Here, resolving a GameRating didn’t seem to provide value, and we arbitrarily decided to instead resolve the BoardGame … we could have just as easily resolved the Member instead. The right choice is usually driven by client requirements.
GraphQL doesn’t have a way to describe error cases comparable to how it defines types: every field resolver may return errors instead of, or in addition to, an actual value. We attempt to document the kinds of errors that may occur as part of the operation’s documentation.
Code Changes¶
Finally, we knit together the schema changes and the database changes
in the schema
namespace.
(ns my.clojure-game-geek.schema
"Contains custom resolvers and a function to provide the full schema."
(:require
[clojure.java.io :as io]
[com.walmartlabs.lacinia.util :as util]
[com.walmartlabs.lacinia.schema :as schema]
[com.walmartlabs.lacinia.resolve :refer [resolve-as]]
[com.stuartsierra.component :as component]
[my.clojure-game-geek.db :as db]
[clojure.edn :as edn]))
(defn game-by-id
[db]
(fn [_ args _]
(db/find-game-by-id db (:id args))))
(defn member-by-id
[db]
(fn [_ args _]
(db/find-member-by-id db (:id args))))
(defn rate-game
[db]
(fn [_ args _]
(let [{game-id :gameId
member-id :memberId
rating :rating} args
game (db/find-game-by-id db game-id)
member (db/find-member-by-id db member-id)]
(cond
(nil? game)
(resolve-as nil {:message "Game not found."
:status 404})
(nil? member)
(resolve-as nil {:message "Member not found."
:status 404})
(not (<= 1 rating 5))
(resolve-as nil {:message "Rating must be between 1 and 5."
:status 400})
:else
(do
(db/upsert-game-rating db game-id member-id rating)
game)))))
(defn board-game-designers
[db]
(fn [_ _ board-game]
(db/list-designers-for-game db (:id board-game))))
(defn designer-games
[db]
(fn [_ _ designer]
(db/list-games-for-designer db (:id designer))))
(defn rating-summary
[db]
(fn [_ _ board-game]
(let [ratings (map :rating (db/list-ratings-for-game db (:id board-game)))
n (count ratings)]
{:count n
:average (if (zero? n)
0
(/ (apply + ratings)
(float n)))})))
(defn member-ratings
[db]
(fn [_ _ member]
(db/list-ratings-for-member db (:id member))))
(defn game-rating->game
[db]
(fn [_ _ game-rating]
(db/find-game-by-id db (:game-id game-rating))))
(defn resolver-map
[component]
(let [db (:db component)]
{:Query/gameById (game-by-id db)
:Query/memberById (member-by-id db)
:Mutation/rateGame (rate-game db)
:BoardGame/designers (board-game-designers db)
:BoardGame/ratingSummary (rating-summary db)
:GameRating/game (game-rating->game db)
:Designer/games (designer-games db)
:Member/ratings (member-ratings db)}))
(defn load-schema
[component]
(-> (io/resource "cgg-schema.edn")
slurp
edn/read-string
(util/inject-resolvers (resolver-map component))
schema/compile))
(defrecord SchemaProvider [db schema]
component/Lifecycle
(start [this]
(assoc this :schema (load-schema this)))
(stop [this]
(assoc this :schema nil)))
(defn new-schema-provider
[]
{:schema-provider (-> {}
map->SchemaProvider
(component/using [:db]))})
It all comes together in the rate-game
function;
we first check that the gameId
and memberId
passed in
are valid (that is, they map to actual BoardGames and Members).
The resolve-as
function is essential here: the first parameter is the
value to resolve and is often nil when there are errors.
The second parameter is an error map. [2]
resolve-as
returns a wrapper object around the resolved value
(which is nil in these examples) and the error map.
Lacinia will later pull out the error map, add additional details,
and add it to the :errors
key of the result map.
These examples also show the use of the :status
key in the
error map.
lacinia-pedestal will look for such values in the result map, and
will set the HTTP status of the response to any value it finds
(if there’s more than one, the HTTP status will be the maximum).
The :status
keys are stripped out of the error maps before
the response is sent to the client. [3]
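That status-selection rule can be sketched as follows (a hypothetical illustration of the behavior described above, not lacinia-pedestal’s actual code):

```clojure
;; Pick the HTTP status for the response: the maximum :status found in
;; any error map, defaulting to 200 when none is present.
(defn response-status
  [errors]
  (reduce max 200 (keep :status errors)))

(response-status [{:message "Game not found." :status 404}
                  {:message "Rating must be between 1 and 5." :status 400}])
;; => 404

(response-status [{:message "No status here."}])
;; => 200
```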
At the REPL¶
Let’s start by seeing the initial state of things, using the default database:
(q "{ memberById(id: \"1410\") { name ratings { game { id name } rating }}}")
=>
{:data {:memberById {:name "bleedingedge",
:ratings [{:game {:id "1234", :name "Zertz"}, :rating 5}
{:game {:id "1236", :name "Tiny Epic Galaxies"}, :rating 4}
{:game {:id "1237", :name "7 Wonders: Duel"}, :rating 4}]}}}
Ok, so maybe we’ve soured on Tiny Epic Galaxies for the moment:
(q "mutation { rateGame(memberId: \"1410\", gameId: \"1236\", rating: 3) { ratingSummary { count average }}}")
=> {:data {:rateGame {:ratingSummary {:count 1, :average 3.0}}}}
(q "{ memberById(id: \"1410\") { name ratings { game { id name } rating }}}")
=>
{:data {:memberById {:name "bleedingedge",
:ratings [{:game {:id "1236", :name "Tiny Epic Galaxies"}, :rating 3}
{:game {:id "1234", :name "Zertz"}, :rating 5}
{:game {:id "1237", :name "7 Wonders: Duel"}, :rating 4}]}}}
Dominion is a personal favorite, so let’s rate that:
(q "mutation { rateGame(memberId: \"1410\", gameId: \"1235\", rating: 4) { name ratingSummary { count average }}}")
=> {:data {:rateGame {:name "Dominion", :ratingSummary {:count 1, :average 4.0}}}}
We can also see what happens when the query contains mistakes:
(q "mutation { rateGame(memberId: \"1410\", gameId: \"9999\", rating: 4) { name ratingSummary { count average }}}")
=>
{:data {:rateGame nil},
:errors [{:message "Game not found.",
:locations [{:line 1, :column 12}],
:path [:rateGame],
:extensions {:status 404, :arguments {:memberId "1410", :gameId "9999", :rating 4}}}]}
Although the rate-game
field resolver just returned a simple error map (with keys :message
and :status
),
Lacinia has enhanced the map, identifying the location (within the query document), the query path
(which indicates which operation or nested field was involved), and the arguments passed to
the field resolver function. It has also moved any keys it doesn’t recognize, in this case :status
and :arguments
, to an embedded :extensions
map.
In Lacinia, there’s a difference between a resolver error, from using resolve-as
, and an overall failure parsing
or executing the query.
If the rating
argument is omitted from the query, we can see a significant difference:
(q "mutation { rateGame(memberId: \"1410\", gameId: \"9999\") { name ratingSummary { count average }}}")
=>
{:errors [{:message "Exception applying arguments to field `rateGame': Not all non-nullable arguments have supplied values.",
:locations [{:line 1, :column 12}],
:extensions {:field-name :Mutation/rateGame, :missing-arguments [:rating]}}]}
Here, the result map contains only the :errors
key; the :data
key is missing.
A similar error would occur if the type of value provided to a field argument is incompatible:
(q "mutation { rateGame(memberId: \"1410\", gameId: \"9999\", rating: \"Great!\") { name ratingSummary { count average }}}")
=>
{:errors [{:message "Exception applying arguments to field `rateGame': For argument `rating', unable to convert \"Great!\" to scalar type `Int'.",
:locations [{:line 1, :column 12}],
:extensions {:field-name :Mutation/rateGame,
:argument :Mutation/rateGame.rating,
:value "Great!",
:type-name :Int}}]}
Summary¶
And now we have mutations! The basic structure of our application is nearly fully formed, but we can’t go to production with an in-memory database. In the next chapter, we’ll start work on storing the database data in an actual SQL database.
[1] | Parallel execution is optional in Lacinia, and requires application changes to support it. |
[2] | Each map must contain, at a minimum, a :message key. |
[3] | The very idea of changing the HTTP response status is somewhat antithetical to some GraphQL developers and this behavior is optional, but on by default. |
External Database, Phase 1¶
We’ve gone pretty far with our application so far, but it’s time to make that big leap, and convert things over to an actual database. We’ll be running PostgreSQL inside a Docker container.
We’re definitely going to be taking two steps backward before taking further steps forward, but the majority of the changes
will be in the my.clojure-game-geek.db
namespace; the majority of the application, including the
field resolvers, will be unaffected.
Dependency Changes¶
{:paths ["src" "resources"]
:deps {org.clojure/clojure {:mvn/version "1.11.1"}
com.walmartlabs/lacinia {:mvn/version "1.2-alpha-4"}
com.walmartlabs/lacinia-pedestal {:mvn/version "1.1"}
org.clojure/java.jdbc {:mvn/version "0.7.12"}
org.postgresql/postgresql {:mvn/version "42.5.1"}
com.mchange/c3p0 {:mvn/version "0.9.5.5"}
com.stuartsierra/component {:mvn/version "1.1.0"}
io.aviso/logging {:mvn/version "1.0"}}
:aliases
{:run-m {:main-opts ["-m" "my.clojure-game-geek"]}
:run-x {:ns-default my.clojure-game-geek
:exec-fn greet
:exec-args {:name "Clojure"}}
:build {:deps {io.github.seancorfield/build-clj
{:git/tag "v0.8.2" :git/sha "0ffdb4c"
;; since we're building an app uberjar, we do not
;; need deps-deploy for clojars.org deployment:
:deps/root "slim"}}
:ns-default build}
:dev {:extra-paths ["dev-resources"]}
:test {:extra-paths ["test"]
:extra-deps {org.clojure/test.check {:mvn/version "1.1.1"}
io.github.cognitect-labs/test-runner
{:git/tag "v0.5.0" :git/sha "48c3c67"}}}}}
This adds several new dependencies for accessing a PostgreSQL database:
- The Clojure java.jdbc library
- The PostgreSQL driver that plugs into the library
- A Java library, c3p0, that is used for connection pooling
Database Initialization¶
We’ll be using Docker’s compose functionality to start and stop the container.
version: '3'
services:
db:
ports:
- 25432:5432
image: postgres:15.1-alpine
environment:
POSTGRES_PASSWORD: supersecret
The image
key identifies the name of the image to download from hub.docker.com. Postgres requires you to provide a server-level password as well; that’s specified in the service’s environment.
The port mapping is part of the magic of Docker … the PostgreSQL server, inside the container,
will listen for requests on its normal port, 5432, but our code, running on the host operating system,
can reach the server as port 25432 on localhost
.
To start working with the database, we’ll let Docker start it:
> docker compose up -d
[+] Running 2/2
⠿ Network clojure-game-geek_default Created 0.0s
⠿ Container clojure-game-geek-db-1 Started 0.3s
You’ll see Docker download the necessary Docker images the first time you execute this.
The -d
argument detaches the container from the terminal, otherwise PostgreSQL would write output to your terminal, and would shut down if you hit Ctrl-C.
Later, you can shut down this detached container with docker compose down
.
There’s also a bin/psql.sh script to launch a SQL command prompt for the database (not shown here).
After starting the container, it is necessary to create the cggdb
database and populate it with initial data, using
the setup-db.sh
script:
#!/usr/bin/env bash
docker exec -i --user postgres clojure-game-geek-db-1 createdb cggdb
docker exec -i --user postgres clojure-game-geek-db-1 psql cggdb -a <<__END
create user cgg_role password 'lacinia';
grant create on schema public to cgg_role;
__END
docker exec -i clojure-game-geek-db-1 psql -Ucgg_role cggdb -a <<__END
drop table if exists designer_to_game;
drop table if exists game_rating;
drop table if exists member;
drop table if exists board_game;
drop table if exists designer;
CREATE OR REPLACE FUNCTION mantain_updated_at()
RETURNS TRIGGER AS \$\$
BEGIN
NEW.updated_at = now();
RETURN NEW;
END;
\$\$ language 'plpgsql';
create table member (
member_id int generated by default as identity primary key,
name text not null,
created_at timestamp not null default current_timestamp,
updated_at timestamp not null default current_timestamp);
create trigger member_updated_at before update
on member for each row execute procedure
mantain_updated_at();
create table board_game (
game_id int generated by default as identity primary key,
name text not null,
summary text,
min_players integer,
max_players integer,
created_at timestamp not null default current_timestamp,
updated_at timestamp not null default current_timestamp);
create trigger board_game_updated_at before update
on board_game for each row execute procedure
mantain_updated_at();
create table designer (
designer_id int generated by default as identity primary key,
name text not null,
uri text,
created_at timestamp not null default current_timestamp,
updated_at timestamp not null default current_timestamp);
create trigger designer_updated_at before update
on designer for each row execute procedure
mantain_updated_at();
create table game_rating (
game_id int references board_game(game_id),
member_id int references member(member_id),
rating integer not null,
created_at timestamp not null default current_timestamp,
updated_at timestamp not null default current_timestamp,
primary key (game_id, member_id));
create trigger game_rating_updated_at before update
on game_rating for each row execute procedure
mantain_updated_at();
create table designer_to_game (
designer_id int references designer(designer_id),
game_id int references board_game(game_id),
primary key (designer_id, game_id));
insert into board_game (game_id, name, summary, min_players, max_players) values
(1234, 'Zertz', 'Two player abstract with forced moves and shrinking board', 2, 2),
(1235, 'Dominion', 'Created the deck-building genre; zillions of expansions', 2, null),
(1236, 'Tiny Epic Galaxies', 'Fast dice-based sci-fi space game with a bit of chaos', 1, 4),
(1237, '7 Wonders: Duel', 'Tense, quick card game of developing civilizations', 2, 2);
alter table board_game alter column game_id restart with 1300;
insert into member (member_id, name) values
(37, 'curiousattemptbunny'),
(1410, 'bleedingedge'),
(2812, 'missyo');
alter table member alter column member_id restart with 2900;
insert into designer (designer_id, name, uri) values
(200, 'Kris Burm', 'http://www.gipf.com/project_gipf/burm/burm.html'),
(201, 'Antoine Bauza', 'http://www.antoinebauza.fr/'),
(202, 'Bruno Cathala', 'http://www.brunocathala.com/'),
(203, 'Scott Almes', null),
(204, 'Donald X. Vaccarino', null);
alter table designer alter column designer_id restart with 300;
insert into designer_to_game (designer_id, game_id) values
(200, 1234),
(201, 1237),
(204, 1235),
(203, 1236),
(202, 1237);
insert into game_rating (game_id, member_id, rating) values
(1234, 37, 3),
(1234, 1410, 5),
(1236, 1410, 4),
(1237, 1410, 4),
(1237, 2812, 4),
(1237, 37, 5);
__END
The DDL for the cggdb
database includes a pair of timestamp columns, created_at
and updated_at
, in most tables.
Defaults and database triggers ensure that these are maintained by PostgreSQL.
Primary Keys¶
There’s a problem with the data model we’ve used in prior chapters: the primary keys.
We’ve been using simple numeric strings as primary keys, because it was convenient. Literally, we just made up those values. But eventually, we’re going to be writing data to the database, including new Board Games, new Publishers, and new Members.
With the change to using PostgreSQL, we’ve switched to using numeric primary keys. Not only are these more space efficient, but we have set up PostgreSQL to allocate them automatically. We’ll circle back to this issue when we add mutations to create new entities.
In the meantime, our database schema uses numeric primary keys, so we’ll need to make changes to the GraphQL schema to match [1]; the id fields have changed type from ID (which, in GraphQL, is a kind of opaque string) to Int (a 32-bit signed integer).
{:objects
{:BoardGame
{:description "A physical or virtual board game."
:fields
{:id {:type (non-null Int)}
:name {:type (non-null String)}
:summary {:type String
:description "A one-line summary of the game."}
:ratingSummary {:type (non-null :GameRatingSummary)
:description "Summarizes member ratings for the game."}
:description {:type String
:description "A long-form description of the game."}
:designers {:type (non-null (list :Designer))
:description "Designers who contributed to the game."}
:minPlayers {:type Int
:description "The minimum number of players the game supports."}
:maxPlayers {:type Int
:description "The maximum number of players the game supports."}
:playTime {:type Int
:description "Play time, in minutes, for a typical game."}}}
:GameRatingSummary
{:description "Summary of ratings for a single game."
:fields
{:count {:type (non-null Int)
:description "Number of ratings provided for the game. Ratings are 1 to 5 stars."}
:average {:type (non-null Float)
:description "The average value of all ratings, or 0 if never rated."}}}
:Member
{:description "A member of Clojure Game Geek. Members can rate games."
:fields
{:id {:type (non-null Int)}
:name {:type (non-null String)
:description "Unique name of the member."}
:ratings {:type (list :GameRating)
:description "List of games and ratings provided by this member."}}}
:GameRating
{:description "A member's rating of a particular game."
:fields
{:game {:type (non-null :BoardGame)
:description "The Game rated by the member."}
:rating {:type (non-null Int)
:description "The rating as 1 to 5 stars."}}}
:Designer
{:description "A person who may have contributed to a board game design."
:fields
{:id {:type (non-null Int)}
:name {:type (non-null String)}
:url {:type String
:description "Home page URL, if known."}
:games {:type (non-null (list :BoardGame))
:description "Games designed by this designer."}}}
:Query
{:fields
{:gameById
{:type :BoardGame
:description "Access a BoardGame by its unique id, if it exists."
:args
{:id {:type Int}}}
:memberById
{:type :Member
:description "Access a ClojureGameGeek Member by their unique id, if it exists."
:args
{:id {:type (non-null Int)}}}}}
:Mutation
{:fields
{:rateGame
{:type :BoardGame
:description "Establishes a rating of a board game, by a Member."
:args
{:gameId {:type (non-null Int)}
:memberId {:type (non-null Int)}
:rating {:type (non-null Int)
:description "Game rating as number between 1 and 5."}}}}}}}
In addition, the id field on the BoardGame, Member, and Publisher objects has been renamed: to game_id, member_id, and publisher_id, respectively.
This will be handy when performing joins across tables.
org.clojure/java.jdbc¶
The java.jdbc library is the standard approach to accessing a typical SQL database from Clojure code. java.jdbc can access, in a uniform manner, any database for which there is a Java JDBC driver.
The clojure.java.jdbc namespace contains a number of functions for accessing a database, including functions for executing arbitrary queries, and specialized functions for performing inserts, updates, and deletes.
For all of those functions, the first parameter is a database spec: a map of data used to connect to the database and perform the desired query or other operation.
In a trivial case, the spec identifies the Java JDBC driver class and provides the extra information needed to build a JDBC URL, including details such as the database host, the user name and password, and the name of the database.
That is fine for initial prototyping, but it means a JDBC connection is created and destroyed every time a query is executed. In production, opening a new connection for each operation has unacceptable performance, so we’ll jump right in with a database connection pooling library, C3P0.
java.jdbc supports this with the :datasource key in the spec: a class in C3P0 implements the javax.sql.DataSource interface, making it compatible with java.jdbc.
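As a sketch of the two kinds of spec (the map below is illustrative; the connection values match this chapter's Docker setup, and driver-spec is a hypothetical name):

```clojure
;; A map-based spec: java.jdbc opens and closes a connection per operation.
;; The values here match this tutorial's Docker setup; adjust for your environment.
(def driver-spec
  {:dbtype   "postgresql"
   :dbname   "cggdb"
   :host     "localhost"
   :port     25432
   :user     "cgg_role"
   :password "lacinia"})

;; With pooling, the spec is simply a map containing a :datasource key whose
;; value implements javax.sql.DataSource (such as C3P0's ComboPooledDataSource):
;; {:datasource some-combo-pooled-data-source}
```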
my.clojure-game-geek.db¶
In prior chapters, the :db component was just a wrapper around a Clojure Atom; starting here, we’re going to revise the component to be a wrapper around a pool of connections to the PostgreSQL database running in the Docker container.
Our goal in this chapter is to update just one basic query to use the database: the query that retrieves a board game by its unique id. We’ll make just the changes necessary for that one query before moving on.
(ns my.clojure-game-geek.db
(:require [clojure.java.jdbc :as jdbc]
[clojure.set :as set]
[com.stuartsierra.component :as component])
(:import (com.mchange.v2.c3p0 ComboPooledDataSource)))
(defn- pooled-data-source
[host dbname user password port]
(doto (ComboPooledDataSource.)
(.setDriverClass "org.postgresql.Driver")
(.setJdbcUrl (str "jdbc:postgresql://" host ":" port "/" dbname))
(.setUser user)
(.setPassword password)))
(defrecord ClojureGameGeekDb [^ComboPooledDataSource datasource]
component/Lifecycle
(start [this]
(assoc this :datasource (pooled-data-source "localhost" "cggdb" "cgg_role" "lacinia" 25432)))
(stop [this]
(.close datasource)
(assoc this :datasource nil)))
The requires for the db namespace have changed; we’re using the clojure.java.jdbc namespace to connect to the database and execute queries, and also making use of the ComboPooledDataSource class, which provides pooled connections.
The ClojureGameGeekDb record has changed; it now has a datasource field, which holds the connection pool for the PostgreSQL database. The start method now creates the connection pool, and the stop method shuts it down.
For the time being, we’ve hardwired the connection details (hostname, username, password, and port) for our Docker container.
A later chapter will discuss approaches to configuration.
Also note that we’re connecting to port 25432 on localhost; Docker forwards that port to container port 5432, the port the PostgreSQL server listens on.
By the time the start method completes, the :db component is in the correct shape to be passed as a clojure.java.jdbc database spec: it will have a :datasource key.
find-game-by-id¶
That leaves the revised implementation of the find-game-by-id function, the only data access function so far rewritten to use the database.
It simply constructs and executes the SQL query.
(defn- remap-board-game
[row-data]
(set/rename-keys row-data {:game_id :id
:min_players :minPlayers
:max_players :maxPlayers
:created_at :createdAt
:updated_at :updatedAt}))
(defn find-game-by-id
[component game-id]
(-> (jdbc/query component
["select game_id, name, summary, min_players, max_players, created_at, updated_at
from board_game where game_id = ?" game-id])
first
remap-board-game))
With clojure.java.jdbc, the query is a vector consisting of a SQL query string followed by zero or more query parameters. Each ? character in the query string corresponds to a query parameter, based on position.
The query function returns a seq of matching rows. By default, each selected row is converted into a Clojure map, and the column names are converted from strings into keywords.
For an operation like this one, which returns at most one map, we use first.
Further, we remap the keys from their database snake_case names to their GraphQL camelCase names, where necessary. This could be done in the query using the SQL AS keyword, but that makes the SQL harder to read and write, and the remapping is easy to do in Clojure code.
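As a quick illustration of that remapping, using the same key mapping as remap-board-game:

```clojure
(require '[clojure.set :as set])

;; rename-keys leaves keys alone when they are absent from the mapping,
;; so columns like :name pass through unchanged.
(set/rename-keys {:game_id 1234 :name "Zertz" :min_players 2}
                 {:game_id     :id
                  :min_players :minPlayers
                  :max_players :maxPlayers})
;; → {:id 1234, :name "Zertz", :minPlayers 2}
```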
If no rows match, then the seq will be empty, and first will return nil.
That’s a perfectly good way to identify that the provided Board Game id was not valid.
At the REPL¶
Starting a new REPL, we can give the new code and schema a test:
(start)
=> :started
(q "{ gameById(id: 1234) { id name summary minPlayers maxPlayers }}")
=>
{:data {:gameById {:id 1234,
:name "Zertz",
:summary "Two player abstract with forced moves and shrinking board",
:minPlayers 2,
:maxPlayers 2}}}
Great! That works!
Meanwhile, all the other my.clojure-game-geek.db namespace functions, which expect to operate against a map inside an Atom, are now broken.
We’ll fix them in the next couple of chapters.
Summary¶
We now have our application working against a live PostgreSQL database and one operation actually works. However, we’ve been sloppy about a key part of our application development: we’ve entirely been testing at the REPL.
In the next chapter, we’ll belatedly write some tests, then convert the rest of the db namespace to use the database.
[1] | This kind of change is highly incompatible: it could easily break clients that expect fields and arguments to still be of type ID. It should only be considered before a schema is released in any form. |
Testing, Phase 1¶
We are very far along for code that has no tests; before we get much further, let’s fix that.
First, we need to reorganize a couple of things, to make testing easier.
HTTP Port¶
Let’s save ourselves some frustration: when we run our tests, we can’t know if there is a REPL-started system running or not. There’s no problem with two complete system maps running at the same time, and even hitting the same database, all within a single process … that’s why we like the Component library, as it helps us avoid unnecessary globals.
Unfortunately, we still have one global conflict: the HTTP port for inbound requests. Only one of the systems can bind to the default 8888 port, so let’s make sure our tests use a different port.
In previous examples, we’ve always initialized a component record from an empty map, but that is not strictly necessary. Instead, we can start with a map that provides some configuration, then perform additional work, inside the start method, to make the component fully operational within the system.
(ns my.clojure-game-geek.server
(:require [com.stuartsierra.component :as component]
[com.walmartlabs.lacinia.pedestal2 :as lp]
[io.pedestal.http :as http]))
(defrecord Server [schema-provider server port]
component/Lifecycle
(start [this]
(assoc this :server (-> schema-provider
:schema
(lp/default-service {:port port})
http/create-server
http/start)))
(stop [this]
(http/stop server)
(assoc this :server nil)))
So the Server record now has three fields:
- schema-provider, an injected dependency
- port, containing configuration supplied from outside
- server, set up inside the start method
When we set up the system for production or local REPL development, we use a standard port, such as 8080. When we set up the system for testing, we’ll use a different port, to prevent conflicts.
system namespace¶
So where does the port come from? We can peel back the onion a bit to the my.clojure-game-geek.system namespace; that’s a great place to supply the port and set a default:
(ns my.clojure-game-geek.system
(:require [com.stuartsierra.component :as component]
[my.clojure-game-geek.schema :as schema]
[my.clojure-game-geek.server :as server]
[my.clojure-game-geek.db :as db]))
(defn new-system
([]
(new-system nil))
([opts]
(let [{:keys [port]
:or {port 8888}} opts]
(assoc (component/system-map)
:db (db/map->ClojureGameGeekDb {})
:server (component/using (server/map->Server {:port port})
[:schema-provider])
:schema-provider (component/using
(schema/map->SchemaProvider {})
[:db])))))
new-system now has two arities; in the second, a map of options is passed in, and those options are used to specify the :port in the map for the Server record.
Over time, we’ll likely add further options, and a fully fledged application may require a more sophisticated approach to configuration.
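The default port comes from the :or clause in the map destructuring; as a stand-alone sketch (effective-port is a hypothetical helper, not part of the project):

```clojure
;; Minimal illustration of map destructuring with an :or default,
;; the same pattern new-system uses for :port.
(defn effective-port
  [opts]
  (let [{:keys [port]
         :or   {port 8888}} opts]
    port))

(effective-port nil)          ;; → 8888
(effective-port {:port 8989}) ;; → 8989
```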
Simplify Utility¶
To keep our tests simple, we’ll want to use the simplify utility function discussed earlier.
Here, we create a new namespace for test utilities, and move the simplify function from the user namespace to the test-utils namespace:
(ns my.clojure-game-geek.test-utils
(:require [clojure.walk :as walk])
(:import (clojure.lang IPersistentMap)))
(defn simplify
"Converts all ordered maps nested within the map into standard hash maps, and
sequences into vectors, which makes for easier constants in the tests, and eliminates ordering problems."
[m]
(walk/postwalk
(fn [node]
(cond
(instance? IPersistentMap node)
(into {} node)
(seq? node)
(vec node)
:else
node))
m))
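As a stand-alone illustration of what simplify does (using map? in place of the IPersistentMap check, with a lazy seq standing in for a query result):

```clojure
(require '[clojure.walk :as walk])

;; postwalk turns seqs into vectors and maps into plain hash maps,
;; so expected values in tests can be written as plain literals.
(def simplified
  (walk/postwalk
    (fn [node]
      (cond
        (map? node) (into {} node)
        (seq? node) (vec node)
        :else node))
    {:data {:items (map inc [1 2 3])}}))
;; simplified → {:data {:items [2 3 4]}}
```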
This is located in the dev-resource folder, so that it won’t be treated as a namespace containing tests to execute.
Over time, we’re likely to add a number of little tools here to make tests more clear and concise.
Integration or Unit? Yes¶
When it comes to testing, your first thought should be at what level of granularity testing should occur. Unit testing is generally testing the smallest possible bit of code; in Clojure terms, testing a single function, ideally isolated from everything else.
Integration testing is testing at a higher level, testing how several elements of the system work together.
Our application is layered as follows:
External Client → (HTTP) → Pedestal → Lacinia → Field Resolver function → my.clojure-game-geek.db function → PostgreSQL
In theory, we could test each layer separately; that is, we could test the my.clojure-game-geek.db functions against a database (or even some mockup of a database), then test the field resolver functions against the db functions, and so on.
In practice, building a Lacinia application is an exercise in integration; the individual bits of code are often quite small and simple, but there can be issues with how these bits of code interact.
I prefer a modest amount of integration testing using a portion of the full stack.
There’s no point in testing a block of database code, only to discover that the results don’t work with the field resolver functions calling that code. Likewise, for nominal success cases, there’s no point in testing the raw database code if the exact same code will be exercised when testing the field resolver functions.
There’s still a place for more focused testing, especially when chasing down failure scenarios and other edge cases.
For our first test, we’ll do some integration testing; our tests will start at the Lacinia step from the diagram above, and work all the way down to the database instance (running in our Docker container).
With that in mind, we want to start up the schema connected to its field resolvers, and ensure that the resolvers can access the database via the :db component.
The easiest way to do this is to start up a full, new system, and then extract the necessary components from the running system map.
Later, as we build up more code in our application outside of Lacinia, such as request authentication and authorization, we may want to exercise our code by sending HTTP requests in from the tests, rather than bypassing HTTP entirely, as we will do in the meantime.
First Test¶
Our first test will replicate a bit of the manual testing we’ve already done in the REPL: reading an existing board game by its primary key.
(ns my.clojure-game-geek.system-test
(:require [clojure.test :refer [deftest is]]
[com.stuartsierra.component :as component]
[com.walmartlabs.lacinia :as lacinia]
[my.clojure-game-geek.test-utils :refer [simplify]]
[my.clojure-game-geek.system :as system]))
(defn- test-system
"Creates a new system suitable for testing, and ensures that
the HTTP port won't conflict with a default running system."
[]
(system/new-system {:port 8989}))
(defn- q
"Extracts the compiled schema and executes a query."
[system query variables]
(-> system
(get-in [:schema-provider :schema])
(lacinia/execute query variables nil)
simplify))
(deftest can-read-board-game
(let [system (component/start-system (test-system))]
(try
(is (= {:data {:gameById {:name "Zertz"
:summary "Two player abstract with forced moves and shrinking board"
:maxPlayers 2
:minPlayers 2
:playTime nil}}}
(q system
"{ gameById(id: 1234) { name summary minPlayers maxPlayers playTime }}"
nil)))
(finally
(component/stop-system system)))))
We’re making use of the standard clojure.test library.
The test-system function builds a standard system, but overrides the HTTP port, as discussed above.
We use that function to create and start a system for our first test. This first test is a bit verbose; later we’ll refactor some of the code out of it, to make writing additional tests easier.
Importantly, we create a new system, start it, run tests and check expectations, and then stop the system, all within the test. Starting a system is not a heavyweight operation, so starting a new system for each individual test is not problematic [1].
The use of (try ... finally), however, is vitally important.
If a test errors (throws an exception), we need to ensure that the system started by the test is, in fact, shut down; otherwise the started Jetty threads will continue to run, keeping port 8989 bound, and thereby preventing later tests from even starting.
The test itself is quite simple: we execute a query and ensure the correct query response. Because we control the initial test data [2], we know what at least a couple of rows in our database look like.
It’s quite easy to craft a tiny GraphQL query and execute it; execution of that query will flow through Lacinia, to our field resolvers, to the database access code, and ultimately to the database, just like in the diagram. Because queries are expected to be side-effect free, simply checking the query result is sufficient - as we’ll see, testing mutations is a bit more involved.
Running the Tests¶
We’ve written the tests, but now it’s time to execute them.
There’s a number of ways to run Clojure tests.
Let’s look at running them with the command line first.
We have to make a small change to the build.clj file generated from a template earlier in the tutorial, because our tests require the :dev alias to be active.
(ns build
(:refer-clojure :exclude [test])
(:require [org.corfield.build :as bb]))
(def lib 'net.clojars.my/clojure-game-geek)
(def version "0.1.0-SNAPSHOT")
(def main 'my.clojure-game-geek)
(defn test "Run the tests." [opts]
(bb/run-tests (assoc opts :aliases [:dev])))
(defn ci "Run the CI pipeline of tests (and build the uberjar)." [opts]
(-> opts
(assoc :lib lib :version version :main main)
(bb/run-tests)
(bb/clean)
(bb/uber)))
Without this change, you would see namespace loading errors when the tests were executed, because the my.clojure-game-geek.test-utils namespace wouldn’t be on the classpath.
To actually execute the tests, simply enter clj -T:build test:
> clj -T:build test
Running task for: test, dev
WARNING: update-vals already refers to: #'clojure.core/update-vals in namespace: clojure.tools.analyzer.utils, being replaced by: #'clojure.tools.analyzer.utils/update-vals
WARNING: update-keys already refers to: #'clojure.core/update-keys in namespace: clojure.tools.analyzer.utils, being replaced by: #'clojure.tools.analyzer.utils/update-keys
WARNING: update-vals already refers to: #'clojure.core/update-vals in namespace: clojure.tools.analyzer, being replaced by: #'clojure.tools.analyzer.utils/update-vals
WARNING: update-keys already refers to: #'clojure.core/update-keys in namespace: clojure.tools.analyzer, being replaced by: #'clojure.tools.analyzer.utils/update-keys
WARNING: update-vals already refers to: #'clojure.core/update-vals in namespace: clojure.tools.analyzer.passes, being replaced by: #'clojure.tools.analyzer.utils/update-vals
WARNING: update-vals already refers to: #'clojure.core/update-vals in namespace: clojure.tools.analyzer.passes.uniquify, being replaced by: #'clojure.tools.analyzer.utils/update-vals
Running tests in #{"test"}
Testing my.clojure-game-geek.system-test
Ran 1 tests containing 1 assertions.
0 failures, 0 errors.
>
But who wants to do that all the time [3]?
Clojure startup time is somewhat slow: before your tests can run, large numbers of Java classes must be loaded, and significant amounts of Clojure code, both from our application and from any libraries, must be read, parsed, and compiled.
Fortunately, Clojure was created with a REPL-oriented development workflow in mind. This is a fast-feedback cycle, where you can run tests, diagnose failures, make code corrections, and re-run the tests in a matter of seconds - sometimes as fast as you can type! Generally, the slowest part of the loop is the part that executes inside your grey matter.
Because the Clojure code base is already loaded and running, even a change that affects many namespaces can be reloaded in milliseconds.
If you are using an IDE, you will be able to run tests directly in a running REPL. In Cursive, Ctrl-Shift-T runs all tests in the current namespace, and Ctrl-Alt-Cmd-T runs just the test under the cursor. Cursive is even smart enough to properly reload all modified namespaces before executing the tests.
Similar commands exist for whichever editor you are using. Being able to load code and run tests in a fraction of a second is incredibly liberating if you are used to a more typical grind of starting a new process every time you want to run some tests [4] .
Database Issues¶
These tests assume the database is running locally, and has been initialized.
What if it’s not? It might look like this:
> clj -T:build test
Running task for: test, dev
WARNING: update-vals already refers to: #'clojure.core/update-vals in namespace: clojure.tools.analyzer.utils, being replaced by: #'clojure.tools.analyzer.utils/update-vals
WARNING: update-keys already refers to: #'clojure.core/update-keys in namespace: clojure.tools.analyzer.utils, being replaced by: #'clojure.tools.analyzer.utils/update-keys
WARNING: update-vals already refers to: #'clojure.core/update-vals in namespace: clojure.tools.analyzer, being replaced by: #'clojure.tools.analyzer.utils/update-vals
WARNING: update-keys already refers to: #'clojure.core/update-keys in namespace: clojure.tools.analyzer, being replaced by: #'clojure.tools.analyzer.utils/update-keys
WARNING: update-vals already refers to: #'clojure.core/update-vals in namespace: clojure.tools.analyzer.passes, being replaced by: #'clojure.tools.analyzer.utils/update-vals
WARNING: update-vals already refers to: #'clojure.core/update-vals in namespace: clojure.tools.analyzer.passes.uniquify, being replaced by: #'clojure.tools.analyzer.utils/update-vals
Running tests in #{"test"}
Testing my.clojure-game-geek.system-test
WARN com.mchange.v2.resourcepool.BasicResourcePool - com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@60429885 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
org.postgresql.util.PSQLException: Connection to localhost:25432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:319)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:247)
at org.postgresql.Driver.makeConnection(Driver.java:434)
at org.postgresql.Driver.connect(Driver.java:291)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:175)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:220)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:206)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:203)
...
Ran 1 tests containing 1 assertions.
0 failures, 1 errors.
Execution error (ExceptionInfo) at org.corfield.build/run-task (build.clj:324).
Task failed for: test, dev
Full report at:
/var/folders/yg/vytvxpw500520vzjlc899dlm0000gn/T/clojure-7528489387806836542.edn
Because of the connection pooling, this actually takes quite some time to fail, and produces hundreds (!) of lines of exception output, which has been largely elided here.
If you see a huge swath of tests failing, the first thing to do is double check external dependencies, such as the database running inside the Docker container.
Summary¶
We’ve created just one test, and managed to get it to run.
That’s a great start.
Next up, we’ll flesh out our tests, fix the many outdated functions in the my.clojure-game-geek.db namespace, and do some refactoring to ensure that our tests are concise, readable, and efficient.
[1] | Another common approach is to create a system for each test namespace, using a test fixture that starts the system before any tests execute and shuts it down afterwards. |
[2] | An improved approach might be to create a fresh database namespace for each test, or each test namespace, and create and populate the tables with fresh test data each time. This might be very important when attempting to run these tests inside a Continuous Integration server. |
[3] | On my laptop, it takes 53 seconds to run the tests from the command line. |
[4] | Downside: you’ll probably read a lot less Twitter while developing. |
External Database, Phase 2¶
Let’s get the rest of the functions in the my.clojure-game-geek.db namespace working again, and add tests for them.
We’ll do a little refactoring as well, to make both the production code and the tests clearer and simpler.
Logging¶
It’s always a good idea to know exactly what SQL queries are executing in your application; you’ll never figure out what’s slowing down your application if you don’t know what queries are even executing.
(ns my.clojure-game-geek.db
(:require [clojure.java.jdbc :as jdbc]
[io.pedestal.log :as log]
[clojure.string :as string]
[clojure.set :as set]
[com.stuartsierra.component :as component])
(:import (com.mchange.v2.c3p0 ComboPooledDataSource)))
(defn- pooled-data-source
[host dbname user password port]
(doto (ComboPooledDataSource.)
(.setDriverClass "org.postgresql.Driver")
(.setJdbcUrl (str "jdbc:postgresql://" host ":" port "/" dbname))
(.setUser user)
(.setPassword password)))
(defrecord ClojureGameGeekDb [^ComboPooledDataSource datasource]
component/Lifecycle
(start [this]
(assoc this :datasource (pooled-data-source "localhost" "cggdb" "cgg_role" "lacinia" 25432)))
(stop [this]
(.close datasource)
(assoc this :datasource nil)))
(defn- query
[component statement]
(let [[sql & params] statement]
(log/debug :sql (string/replace sql #"\s+" " ")
:params params))
(jdbc/query component statement))
(defn- execute!
[component statement]
(let [[sql & params] statement]
(log/debug :sql (string/replace sql #"\s+" " ")
:params params))
(jdbc/execute! component statement))
We’ve introduced our own versions of clojure.java.jdbc/query and clojure.java.jdbc/execute! that log the SQL and parameters before continuing on to the standard implementation.
Because of how we format the SQL in our code, it is useful to convert the embedded newlines and indentation into single spaces.
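For example, the whitespace collapsing behaves like this:

```clojure
(require '[clojure.string :as string])

;; Collapse the newlines and indentation of a multi-line SQL string
;; into single spaces, exactly as the logging wrappers above do.
(def compact-sql
  (string/replace "select game_id, name
  from board_game
  where game_id = ?"
                  #"\s+" " "))
;; compact-sql → "select game_id, name from board_game where game_id = ?"
```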
A bit about logging: in a typical Java, or even Clojure, application the focus of logging is on a textual message for the user to read. Different developers approach this in different ways … everything from the inscrutably cryptic to the overly verbose. Yet, across that spectrum, there is always an assumption that some user is reading the log.
The io.pedestal/pedestal.log library introduces a different idea: logs as a stream of data … a sequence of maps.
That’s what we see in the call to log/debug: just the keys and values that are interesting.
When logged, it may look like:
DEBUG my.clojure-game-geek.db - {:sql "select game_id, name, summary, min_players, max_players, created_at, updated_at from board_game where game_id = ?", :params (1234), :line 32}
That’s the debug level and namespace, then the map of keys and values (io.pedestal.log adds the :line key).
The useful and interesting details are present and unambiguously formatted, since the output is not formatted specifically for a user to read.
This can be a very powerful concept; these logs can even be read back into memory, converted back into data, and operated on with all the map, reduce, and filter power that Clojure provides. [1]
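As a small sketch (the log line below is invented): once the level/namespace prefix is stripped, what remains reads straight back in as data:

```clojure
(require '[clojure.edn :as edn])

;; edn/read-string parses the map without evaluating anything.
(def log-data
  (edn/read-string "{:sql \"select ... from board_game\", :params (1234), :line 32}"))

(:line log-data)   ;; → 32
(:params log-data) ;; → (1234)
```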
After years of sweating the details of formatting (and capitalizing, and quoting, and punctuating) human-readable error messages, it is a joy to just throw whatever data is useful into the log, and not care about all those human-oriented formatting details.
This is, of course, all possible because all data in Clojure can be printed out nicely and read back in again.
By comparison, data values or other objects in Java only have useful debugging output if their class provides an override of the default toString() method.
When it comes time to execute a query, little has changed, except that the call is now to the local query function, not the one provided by clojure.java.jdbc:
(defn find-game-by-id
[component game-id]
(-> (query component
["select game_id, name, summary, min_players, max_players, created_at, updated_at
from board_game where game_id = ?" game-id])
first
remap-board-game))
logback-test.xml¶
We can enable logging, just for testing purposes, in our logback-test.xml:
<configuration scan="true" scanPeriod="1 seconds">
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%-5level %logger - %msg%n</pattern>
</encoder>
</appender>
<root level="warn">
<appender-ref ref="STDOUT"/>
</root>
<logger name="my.clojure-game-geek.db" level="DEBUG"/>
</configuration>
Adding a <logger> element provides an override for the my.clojure-game-geek.db namespace, so that DEBUG-level calls will be output instead of omitted (the debug level is lower than the default warn level).
Conveniently, Logback will pick up this change to the configuration file without a restart.
Re-running tests¶
If we switch back to the my.clojure-game-geek.system-test namespace and re-run the tests, the debug output will be mixed into the test tool output:
Loading src/my/clojure_game_geek/db.clj... done
Loading test/my/clojure_game_geek/system_test.clj... done
Running tests in my.clojure-game-geek.system-test
DEBUG my.clojure-game-geek.db - {:sql "select game_id, name, summary, min_players, max_players, created_at, updated_at from board_game where game_id = ?", :params (1234), :line 31}
Ran 1 test containing 1 assertion.
No failures.
More code updates¶
The remaining functions in my.clojure-game-geek.db can be rewritten to make use of the local query and execute! functions, and to operate on the real database:
(defn- remap-member
[row-data]
(set/rename-keys row-data {:member_id :id
:created_at :createdAt
:updated_at :updatedAt}))
(defn- remap-designer
[row-data]
(set/rename-keys row-data {:designer_id :id
:created_at :createdAt
:updated_at :updatedAt}))
(defn- remap-rating
[row-data]
(set/rename-keys row-data {:member_id :member-id
:game_id :game-id
:created_at :createdAt
:updated_at :updatedAt}))
(defn find-member-by-id
[component member-id]
(-> (query component
["select member_id, name, created_at, updated_at
from member
where member_id = ?" member-id])
first
remap-member))
(defn list-designers-for-game
[component game-id]
(->> (query component
["select d.designer_id, d.name, d.uri, d.created_at, d.updated_at
from designer d
inner join designer_to_game j on (d.designer_id = j.designer_id)
where j.game_id = ?
order by d.name" game-id])
(map remap-designer)))
(defn list-games-for-designer
[component designer-id]
(->> (query component
["select g.game_id, g.name, g.summary, g.min_players, g.max_players, g.created_at,
g.updated_at
from board_game g
inner join designer_to_game j on (g.game_id = j.game_id)
where j.designer_id = ?
order by g.name" designer-id])
(map remap-board-game)))
(defn list-ratings-for-game
[component game-id]
(->> (query component
["select game_id, member_id, rating, created_at, updated_at
from game_rating
where game_id = ?" game-id])
(map remap-rating)))
(defn list-ratings-for-member
[component member-id]
(->> (query component
["select game_id, member_id, rating, created_at, updated_at
from game_rating
where member_id = ?" member-id])
(map remap-rating)))
(defn upsert-game-rating
"Adds a new game rating, or changes the value of an existing game rating.
Returns nil."
[component game-id member-id rating]
(execute! component
["insert into game_rating (game_id, member_id, rating)
values (?, ?, ?)
on conflict (game_id, member_id) do update set rating = ?"
game-id member-id rating rating])
nil)
The majority of this is quite straightforward, except for the
upsert-game-rating
function, which makes use of the SQL on conflict
clause
to handle the case where a rating already exists for a particular
game and member - what starts as an insert is converted to an update.
Summary¶
With the database enabled, it was relatively straightforward to convert the old in-memory code to make use of the real database, assuming you are up to speed on SQL. Most importantly, none of these changes affected the calling code, the field resolvers, at all.
In the next chapter, we’ll focus on testing the code we’ve just added.
[1] | I’ve used this on another project where a bug manifested only at a large scale of operations; by hooking into Logback and capturing the logged maps, it was possible to quickly filter through megabytes of output to find the clues that revealed how the bug occurred. |
Tutorial Wrapup¶
That’s as far as we’ve made it with the tutorial so far.
Looking forward to more updates, someday :-).
Our goal with this tutorial is to build up the essentials of a full application implemented in Lacinia, starting from nothing.
Unlike many of the snippets used elsewhere in the Lacinia documentation, this will be something you can fork and experiment with yourself.
Along the way, we hope you’ll learn quite a bit about not just Lacinia and GraphQL, but about building Clojure applications in general.
You can pull down the full source for the tutorial from GitHub: https://github.com/walmartlabs/clojure-game-geek
Fields¶
Fields are the basic building block of GraphQL data.
Objects and interfaces are composed of fields. Queries, mutations, and subscriptions are also special kinds of fields.
Fields are functions. Or, more specifically, fields are a kind of operation that begins with some data, adds in other details (such as field arguments provided in the query), and produces new data that can be incorporated into the overall result.
Field Definition¶
A field definition occurs in the schema to describe the type and other details of a field. A field definition is a map with specific keys.
Field Type¶
The main key in a field definition is :type
, which is required.
This is the type of value that may be returned by the field resolver, and
is specified in terms of the type DSL.
Types DSL¶
Types are the essence of fields; they can represent scalar values (simple values, such as string or numbers), composite objects with their own fields, or lists of either scalars or objects.
In the schema, a type can be:
- A keyword corresponding to an object, interface, enum, or union
- A scalar type (built in, or schema defined)
- A non-nillable version of any of the above:
(non-null X)
- A list of any of the above:
(list X)
The built-in scalar types:
- String
- Float
- Int
- Boolean
- ID
Field Resolver¶
The :resolve
key in the field definition identifies the field resolver function, used to provide the actual data. The value, being a function, is usually
provided at runtime.
This data, the resolved value, is never directly returned to the client; this is because in GraphQL, the client query identifies which fields from the resolved value are selected (and often, renamed) to form the result value.
When a specific resolver is not provided for a field, Lacinia will provide a simple default: it is assumed that the containing field’s resolved value is a map containing a key exactly matching the field’s name.
The field’s resolver is passed the resolved value of the containing field, object, query, or mutation.
The return value may be a scalar type, or a structured type, as defined by the
field’s :type
.
For composite (non-scalar) types, the client query must include a nested set of fields to be returned in the result map. The query is a tree, and the leaves of that tree must always be simple scalar values.
Arguments¶
A field may define arguments using the :args
key; this is a map from argument name to
an argument definition.
A field uses arguments to modify what data, and in what order, is to be returned. For example, arguments could set boundaries on a query based on date or price, or determine sort order.
Argument definitions define a value for :type
, and may optionally provide a :description
.
Arguments do not have resolvers, as they represent explicit data from the client
passed to the field.
Arguments may also have a :default-value
.
The default value is supplied to the field resolver when the request does not itself supply
a value for the argument.
An argument that is not specified in the query, and does not have a default value, will be omitted from the argument map passed to the field resolver.
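As a sketch of how this plays out in a resolver (the field, argument, and function names here are hypothetical, not part of any example schema above):

```clojure
;; Hypothetical resolver demonstrating argument handling: :limit has
;; no default value in the schema, so it is simply absent from the
;; args map when the client omits it.
(defn resolve-keywords
  [context args value]
  (let [limit (:limit args 10)] ; application-level fallback when omitted
    (take limit (:keywords value))))
```

Supplying a :default-value in the schema is generally preferable to an in-resolver fallback like this, because the default then appears in introspection.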
Description¶
A field may include a :description
key; the value is a string exposed through Introspection.
Deprecation¶
A field may include a :deprecated
key; this identifies that the field
is deprecated.
Objects¶
A schema object defines a single type of data that may be queried. This is often a mapping from data obtained from a database or other external store to the data exposed by GraphQL.
GraphQL supports objects, interfaces, unions, and enums.
A simple object defines a set of fields, each with a specific type.
Object definitions are under the :objects
key of the schema.
{:objects
{:Product
{:fields {:id {:type ID}
:name {:type String}
:sku {:type String}
:keywords {:type (list String)}}}}}
This defines a schema containing only a single schema object [1], Product, with four fields:
- id - an identifier
- name - a string
- sku - a string
- keywords - a list of strings
Field Definitions¶
An object definition contains a :fields
key, whose value is a map from field name to
field definition. Field names are keywords.
Interface Implementations¶
An object may implement zero or more interfaces.
This is described using the :implements
key, whose value is a list of keywords identifying interfaces.
Objects can only implement interfaces: there’s no concept of inheritance from other objects.
An object definition must include all the fields from all implemented interfaces; failure to do so will cause an exception to be thrown when the schema is compiled.
In some cases, a field defined in an object may be more specific than a field from an inherited interface; for example, the field type in the interface may itself be an interface; the field type in the object must be that exact interface or an object that implements that interface.
In our Star Wars themed example schema, we see that the :Character
interface defines the :friends
field
as type (list :Character)
. So, in the generic case, the friends of a character can be either Humans or Droids.
Perhaps in a darker version of the Star Wars universe, Humans can not be friends with Droids.
In that case, the :friends
field of the :Human
object would be type
(list :Human)
rather than the more egalitarian (list :Character)
.
This appears to be a type conflict, as the type of the :friends
field differs between :Human
and :Character
In fact, this does not violate type constraints, because a Human is always a Character.
Object Description¶
An object definition may include a :description
key; the value is a string exposed through Introspection.
When an object implements an interface, it may omit the :description
of inherited fields, and of arguments of inherited fields, to inherit the descriptions from the interface.
[1] | A schema that fails to define either queries or mutations is useful only as an example. |
Interfaces¶
GraphQL supports the notion of interfaces, collections of fields and their arguments.
To keep things simple, interfaces can not extend other interfaces. Likewise, objects can implement multiple interfaces, but can not extend other objects.
Interfaces are valid types; they can be specified as the return type of a query or mutation, or as the type of a field.
{:interfaces
{:Named
{:fields {:name {:type String}}}}
:objects
{:Person
{:implements [:Named]
:fields {:name {:type String}
:age {:type Int}}}
:Business
{:implements [:Named]
:fields {:name {:type String}
:employee_count {:type Int}}}}}
An interface definition may include a :description
key; the value is a string exposed through Introspection.
The description on an interface field, or on an argument of an interface field, will be inherited by the object field (or argument) unless overridden. This helps to eliminate duplication of documentation between an interface and the object implementing the interface.
The object definition must include all the fields of all implemented interfaces.
Tip
When a field or operation type is an interface, the field resolver may return any of a number of different concrete object types, and Lacinia has no way to determine which; this information must be explicitly provided.
Enums¶
GraphQL supports enumerated types, types whose value is limited to an explicit list.
{:enums
{:Episode
{:description "The episodes of the original Star Wars trilogy."
:values [:NEWHOPE :EMPIRE :JEDI]}}}
Enum values may be defined as strings, keywords, or symbols. Internally, the enum values are converted to keywords.
Enum values must be unique, otherwise an exception is thrown when compiling the schema.
Enum values must be GraphQL Names: they may contain only letters, numbers, and underscores.
Enums are case sensitive; by convention they are in all upper-case.
When an enum type is used as an argument, the value provided to the field resolver function will be a keyword, regardless of whether the enum values were defined using strings, keywords, or symbols.
Field resolvers are required to return a keyword, and that keyword must match one of the values in the enum.
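For example, a resolver for a field of type :Episode might look like this sketch (the function and field are hypothetical; only the enum value is from the example schema):

```clojure
;; Hypothetical resolver for a field of type :Episode; the resolved
;; value is a keyword matching one of the enum's defined values.
(defn resolve-favorite-episode
  [context args value]
  :EMPIRE)
```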
As with many other elements in GraphQL, a description may be provided for the enum (for use with Introspection).
To provide a description for individual enum values, a different form must be used:
{:enums
{:Episode
{:description "The episodes of the original Star Wars trilogy."
:values [{:enum-value :NEWHOPE :description "The first one you saw."}
{:enum-value :EMPIRE :description "The good one."}
{:enum-value :JEDI :description "The one with the killer teddy bears."}]}}}
The :description
key is optional.
You may include the :deprecated
key used to mark a single value as deprecated.
You may mix-and-match the two forms.
Parse and Serialize¶
Normally, when using enums, you must match your application’s data model to the GraphQL model; for enums that means that you will receive (via field arguments) enum values as simple keywords. Your resolver code must provide enums as strings, keywords, or symbols that match one of the defined values for the enum.
However, in Clojure we often use namespaced keywords in our application model, or other representations of enum values specific to your application. Starting in Lacinia 0.36.0, it is possible to control the mapping between the GraphQL model (the simple keywords) and your application model.
Much like scalars, you may optionally provide :parse
and :serialize
functions for enums, but the
intention is slightly different.
For an enum, the :parse
function is passed a valid enum keyword and returns a value used by the application’s
data model … most often, this is a namespaced keyword.
The default :parse
function is identity; the GraphQL value is the same as the application value,
an unqualified keyword.
The :serialize
function is the opposite; it converts from the application model back to
a GraphQL value, which is then verified to be valid for the enum.
The function com.walmartlabs.lacinia.util/inject-enum-transformers is an easy way to add the :parse
and :serialize
functions
to the schema.
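A minimal sketch of such a transformer map, assuming the application uses hypothetical myapp.episode namespaced keywords internally:

```clojure
;; Hypothetical transformer map for the :Episode enum: :parse converts
;; the GraphQL value (a simple keyword) to a namespaced keyword used
;; by the application; :serialize converts back to the simple keyword.
(def enum-transformers
  {:Episode
   {:parse (fn [v] (keyword "myapp.episode" (name v)))
    :serialize (fn [v] (keyword (name v)))}})

;; Applied to the schema before compiling, e.g.:
;; (com.walmartlabs.lacinia.util/inject-enum-transformers schema enum-transformers)
```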
Unions¶
A union is a type whose value may be any one of a list of possible object types:
{:unions
{:SearchResult
{:members [:Person :Photo]}}
:objects
{:Person
{:fields {:name {:type String}
:age {:type Int}}}
:Photo
{:fields {:imageURL {:type String}
:title {:type String}
:height {:type Int}
:width {:type Int}}}
:Query
{:fields
{:search
{:type (list :SearchResult)
:args {:term String}}}}}}
A union definition must include a :members
key, a sequence of object types.
The above example identifies the SearchResult union type as either a Person (with fields name and age), or a Photo (with fields imageURL, title, height, and width).
Unions must define at least one type; each member type must be an object type (members may not reference scalar types, interfaces, or other unions).
When a client makes a union request, they must use the fragment spread syntax to identify what is to be returned based on the runtime type of object:
{ search (term:"ewok") {
... on Person { name }
... on Photo { imageURL title }
}}
This breaks down what will be returned in the result map based on the type of the value produced
by the search
query. Sometimes there will be a name
key in the result, and other times
an imageURL
and title
key.
This may vary result by result even within a single request:
{:data
{:search
[{:name "Nik-Nik"}
{:imageURL "http://www.telegraph.co.uk/content/dam/film/ewok-xlarge.jpg"
:title "an Ewok in The Return of the Jedi"}
]}}
Tip
When a field or operation type is a union, the field resolver may return any of a number of different concrete object types, and Lacinia has no way to determine which; this information must be explicitly provided.
Queries¶
Queries are responsible for generating the initial resolved values that will be picked apart to form the result map.
Other than that, queries are just the same as any other field. Queries have a type, and accept arguments.
Queries are defined as the fields of a special object, the Query object.
{:objects
{:Query
{:fields
{:hero
{:type (non-null :Character)
:args {:episode {:type :Episode}}}
:human
{:type (non-null :Human)
:args {:id {:type String
:default-value "1001"}}}}}}}
The field resolver for a query is passed nil as the value (the third parameter). Outside of this, the query field resolver is the same as any field resolver anywhere else in the schema.
In the GraphQL specification, it is noted that queries are idempotent; if the query document includes multiple queries, they are allowed to execute in parallel.
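A sketch of a resolver for the human query above, using a hypothetical in-memory map in place of a real data store:

```clojure
;; Hypothetical in-memory data set, keyed by the human's id.
(def humans-by-id
  {"1001" {:id "1001" :name "Luke Skywalker"}
   "1002" {:id "1002" :name "Darth Vader"}})

(defn resolve-human
  [context args value]
  ;; value is nil, because this resolver is for an operation;
  ;; :id comes from the query (or the argument's default value).
  (get humans-by-id (:id args)))
```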
:queries key¶
The above is the “modern” way to define queries; an older approach is still supported.
Queries may instead be defined using the :queries
key of the schema.
{:queries
{:hero
{:type (non-null :Character)
:args {:episode {:type :Episode}}}
:human
{:type (non-null :Human)
:args {:id {:type String
:default-value "1001"}}}}}
Mutations¶
Mutations parallel queries, except that the root field resolvers may make changes to underlying data in addition to exposing data.
The field resolver for a mutation will, as with a query, be passed nil as its value argument (the third argument). A mutation is expected to perform some state changing operation, then return a value that indicates the new state; this value will be recursively resolved and selected, just as with a query.
Mutations are defined as fields of the Mutation object.
When a single request includes more than one mutation, the mutations must execute in the client-specified order. This is different from queries, which allow for each root query to run in parallel.
Typically, mutations are only allowed when the incoming request is explicitly an HTTP POST. However, that is beyond the scope of Lacinia (it doesn’t know about the HTTP request, just the query string extracted from the HTTP request).
Subscriptions¶
Subscriptions are GraphQL’s approach to server-side push. The description is a bit abstract, as the specification keeps all options open on how subscriptions are to be implemented.
With subscriptions, a client can establish a long-lived connection to a server, and will receive new data on the connection as it becomes available to the server.
Common use cases for subscriptions are updating a conversation page as new messages are added, updating a dashboard as interesting events about a system occur, or monitoring the progress of some long-lived process.
Overview¶
The specification discusses a source stream and a response stream.
Lacinia implements the source stream as a callback function. The response stream is largely the responsibility of the web tier.
Lacinia invokes a streamer function once, to initialize the subscription stream.
The streamer is provided with a source stream callback function; as new values are available they are passed to this callback.
Typically, the streamer will create a thread, core.async process, or other long-lived construct to feed values to the source stream.
Whenever the source stream callback is passed a value, Lacinia will execute the subscription as a query, which will generate a new response (with the standard :data and/or :errors keys). The response will be converted as necessary and streamed to the client, forming the response stream.
The streamer must return a function that will be invoked to perform cleanup. This cleanup function typically stops whatever process was started earlier.
Subscriptions are operations, like queries or mutations. They are defined as fields of the Subscription object.
Streamer¶
The streamer is responsible for initiating and managing the source stream.
The streamer is provided as the :stream
key in the subscription definition.
{:objects
{:LogEvent
{:fields
{:severity {:type String}
:message {:type String}}}
:Subscription
{:fields
{:logs
{:type :LogEvent
:args {:severity {:type String}}
:stream :stream-logs}}}}}
Streamers parallel field resolvers, and a function, com.walmartlabs.lacinia.util/inject-streamers, is provided to replace keywords in the schema with actual functions.
A streamer is passed three values:
- The application context
- The field arguments
- The source stream callback
The first two are the same as a field resolver; the third is a function that accepts a single value.
The streamer should perform whatever operations are necessary for it to set up the stream of values; typically this is registering as a listener for updates to some form of publish/subscribe system.
As new values are published, the streamer must pass those values to the source stream callback.
Further, the streamer must return a function to clean up the stream when the subscription is terminated.
(defn log-message-streamer
[context args source-stream]
;; Create an object for the subscription.
(let [subscription (create-log-subscription)]
(on-publish subscription
(fn [log-event]
(-> log-event :payload source-stream)))
;; Return a function to cleanup the subscription
#(stop-log-subscription subscription)))
A lot of this example is hypothetical; it presumes create-log-subscription
will return a value
that can be used with on-publish
and stop-log-subscription
.
A real implementation might use Clojure core.async, subscribe to a JMS queue, or an almost
unbounded number of other options.
Regardless, the streamer provides the stream of source values, by making successive calls to the provided source stream callback function, and it provides a way to cleanup the subscription, by returning a cleanup function.
The subscription stays active until either the client closes the connection, or
until nil
is passed to the source stream callback.
In either case, the cleanup callback will then be invoked.
Invoking the streamer¶
To set up the subscription, the streamer must be invoked with the parsed query and the source stream callback, using com.walmartlabs.lacinia.executor/invoke-streamer.
(require
  '[com.walmartlabs.lacinia.executor :as executor]
  '[com.walmartlabs.lacinia.parser :as parser]
  '[com.walmartlabs.lacinia.constants :as constants])
(let [prepared-query (-> schema
(parser/parse-query query)
(parser/prepare-with-query-variables variables))
source-stream-callback (fn [data]
;; Do something with the data
;; e.g. send it to a websocket client
)
cleanup-fn (executor/invoke-streamer
{constants/parsed-query-key prepared-query} source-stream-callback)]
;; Do something with the cleanup-fn e.g. call it when a websocket connection is closed
)
Typically, subscriptions are used with websockets, so this example could be adapted to receive a message with a query and variables from a connected websocket client. Then any messages received by the source stream callback can be pushed to the client.
Timing¶
The source stream callback will return immediately. It must return nil.
The provided value will be used to generate a GraphQL result map, which will be streamed to the client. Typically, the result map will be generated asynchronously, on another thread.
Implementations of the source stream callback may set different guarantees on when or if values in the source stream are converted to responses in the response stream.
Likewise, when the subscription is closed (by either the client or by the streamer itself), the callback will be invoked asynchronously.
Notes¶
The value passed to the source stream callback is normally a plain, non-nil value.
It may be a wrapped value (e.g., via com.walmartlabs.lacinia.resolve/with-error). This will be handled inside com.walmartlabs.lacinia.executor/execute-query (which is invoked with the value passed to the callback).
For historical reasons, it may also be a ResolverResult; it is the implementation’s job to obtain
the resolved value from this result before calling execute-query
; this is handled by lacinia-pedestal, for example.
Resolver¶
Unlike a query or mutation, the field resolver for a subscription always starts with a specific value, provided by the streamer, via the source stream callback.
Because of this, the resolver is optional: if not provided, a default resolver is used, one that simply returns the source stream value.
However, it still makes sense to implement a resolver in some cases.
Both the resolver and the streamer receive the same map of arguments: it is reasonable that some may be used by the streamer (for example, to filter which values go into the source stream), and some by the resolver (to control the selections on the source value).
Deprecation¶
Schemas grow and change over time. GraphQL values backwards compatibility quite highly, so changes to a schema are typically additive: introducing novel fields, types, and enum values.
However, when the implementation of a field is just wrong, it can be kept for compatibility, but deprecated.
Both fields and enums can include a :deprecated
key.
Remember that queries, mutations, and subscriptions are (under the covers) just fields, so they can be
deprecated as well.
The value for the key can either be true
, or a string description of the reason the field is deprecated.
Typically, the description indicates an alternative field to use instead.
Deprecation does not affect execution of the field in any way; the deprecation flag and reason simply show up in introspection.
When using the Schema Definition Language, elements may be marked with the
@deprecated
directive.
Directives¶
Directives provide a way to describe additional options to the GraphQL executor.
“Directive” is a GraphQL term; in practice, directives are much like metadata in Clojure, or annotations in Java.
Directives allow Lacinia to change the incoming query based on additional criteria. For example, we can use directives to include or skip a field if certain criteria are met.
Currently Lacinia supports just the two standard query directives: @skip
and @include
, but future versions
may include more.
Warning
Directive support is currently in transition towards some support for custom directives.
Directives in Schema IDL¶
When using GraphQL SDL Parsing, the directive keyword allows new directives to be defined. Directive definitions can be defined for executable elements (such as a field selection in a query document), or for type system elements (such as an object or field definition in the schema).
Directives may also be defined in an EDN schema; the root :directive-defs
element is a map of directive types
to directive definitions. A directive definition defines the types of any arguments, as well as a set of locations.
{:directive-defs
{:access
{:locations #{:field-definition}
:args {:role {:type (non-null String)}}}}
:objects
{:Query
{:fields
{:ultimateAnswer
{:type String
:directives [{:directive-type :access
:directive-args {:role "deep-thought"}}]}}}}}
This defines a field definition directive, @access
, and applies it to the ultimateAnswer
query field.
Directive Validation¶
Directives are validated:
- Directives may have arguments, and the argument types must be rooted in known scalar types.
- Directives may be placed on schema elements (objects and input objects, unions, enums and enum values, fields, etc.). Directives placed on an element are verified to be applicable to the location.
Warning
Directive support is evolving quickly; full support for directives, including argument type validation, is forthcoming, as is an API to identify schema and executable directives.
The goal of the current stage is to support parsing of SDL schemas that include directive definitions and directives on elements.
Introspection hasn’t caught up to these changes; custom directives are not identified, nor are directives on elements.
@deprecated directive¶
The @deprecated
directive is supported.
This enables Deprecation of object fields and enum values.
Field Resolvers¶
Field resolvers are how Lacinia goes beyond data modelling to actually providing access to data.
Field resolvers are attached to fields, including the root objects Query
, Mutation
, and Subscription
.
It is only inside field resolvers that a Lacinia application can connect to a database or
an external system: field resolvers are where the data actually comes from.
In essence, the top-level operations perform the initial work in a request, getting the root object (or collection of objects).
Field resolvers in nested fields are responsible for extracting and transforming that root data. In some cases, a field resolver may need to perform additional queries against a back-end data store.
Overview¶
Each operation (query, mutation, or subscription) will have a root field resolver. Every field inside the operation or other object will have a field resolver: if an explicit one is not provided, Lacinia creates a default one.
A field resolver’s responsibility is to resolve a value; in the simplest case, this is accomplished by just returning the value.
As you might guess, the processing of queries into result map data is quite recursive. The initial operation’s field resolver is passed nil as the container resolved value.
The root field resolver will return a map [1]; as directed by the client’s query, the fields of this object will be selected and the top-level object passed to the field resolvers for the fields in the selection.
This continues down, where at each layer of nested fields in the query, the containing field’s resolved value is passed to each field’s resolver function, along with the global context, and the arguments specific to that field.
The rules of field resolvers:
- An operation will resolve to a map of keys and values (or resolve to a sequence of such maps). The fields requested in the client’s query will be used to resolve nested selections.
- Each field is passed its containing field’s resolved value. It then returns a resolved value, which itself may be passed to its sub-fields.
Tip
It is possible to preview nested selections in a field resolver, which can be used to implement some important optimizations.
Meanwhile, the selected data from the resolved value is added to the result map.
If the value is a scalar type, it is added as-is.
Otherwise, the value is a structured type, and the query must provide nested selections.
Field Resolver Arguments¶
A field resolver is passed three arguments:
- The application context.
- The field’s arguments.
- The containing field’s resolved value.
Application Context¶
The application context is a map passed to the field resolver. It allows some global state to be passed down into field resolvers; the context is initially provided to com.walmartlabs.lacinia/execute. The initial application context may even be nil.
Many resolvers can simply ignore the context.
Warning
Lacinia will frequently add its own keys to the context; these will be namespaced keywords. Please do not attempt to make use of these keys unless they are explicitly documented. Undocumented keys are not part of the Lacinia API and are subject to change without notice.
Field Arguments¶
This is a map of arguments provided in the query. The arguments map has keyword keys; the value types are as determined by definition of the field argument.
If the argument value is expressed as a query variable, the variable will be resolved to a simple value when the field resolver is invoked.
Container’s Resolved Value¶
As mentioned above, the execution of a query is highly recursive. The operation, as specified in the query document, executes first; its resolver is passed nil for the container resolved value.
The operation’s resolved value is passed to the field resolver for each field nested in the operation.
For scalar types, the field resolver can simply return the selected value.
For structured types, the field resolver returns a resolved value; the query must contain nested selections. These selections will trigger further fields, whose resolvers will be passed the resolved value.
For example, you might have a lineItem
query of type LineItem
, and LineItem
might include a field,
product
of type Product
.
A query {lineItem(id:"12345") { product }}
is not valid: it is not possible to return a Product directly,
you must select specific fields within Product: {lineItem(id:"12345") { product { name upc price }}}
.
Tip
Generally, we expect the individual values to be Clojure maps (or records). Lacinia supports other types, though that creates a bit of a burden on the developer to provide the necessary resolvers.
Resolving Collections¶
When an operation or field resolves as a collection, things are only slightly different.
The nested selections are applied to each resolved value in the collection.
Default Field Resolver¶
In the majority of cases, there is a direct mapping from a field name (in the schema) to a key of the resolved value.
When a resolver for a field is not explicitly specified, a default resolver is provided automatically; this default resolver simply expects the container resolved value to be a map containing a key that exactly matches the field name.
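The default resolver behaves roughly like this sketch (the actual implementation differs):

```clojure
;; Rough sketch of the default resolver's behavior: it looks up the
;; field's name as a key in the containing resolved value.
(defn default-resolver
  [field-name]
  (fn [context args value]
    (get value field-name)))
```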
It is even possible to customize this default field resolver, as an option passed to com.walmartlabs.lacinia.schema/compile.
[1] | Or, in practice, a sequence of maps. In theory, an operation type could be a scalar, but use cases for this are rare. |
Injecting Resolvers¶
Schemas start as EDN files, which has the advantage that symbols do not have to be quoted (useful when using the list and non-null qualifiers on types).
However, EDN is data, not code, which makes it nonsensical to define field resolvers directly in the schema file.
One option is to use assoc-in to attach resolvers after reading the EDN, but before invoking com.walmartlabs.lacinia.schema/compile.
This can become quite cumbersome in practice.
Instead, the standard approach is to use com.walmartlabs.lacinia.util/inject-resolvers, [1] which is a concise way of matching fields to resolvers, adding the :resolve key to the nested field definitions.
(ns org.example.schema
  (:require
    [clojure.edn :as edn]
    [clojure.java.io :as io]
    [com.walmartlabs.lacinia.schema :as schema]
    [com.walmartlabs.lacinia.util :as util]
    [org.example.db :as db]))

(defn star-wars-schema
  []
  (-> (io/resource "star-wars-schema.edn")
      slurp
      edn/read-string
      (util/inject-resolvers {:Query/hero db/resolve-hero
                              :Query/human db/resolve-human
                              :Query/droid db/resolve-droid
                              :Human/friends db/resolve-friends
                              :Droid/friends db/resolve-friends})
      schema/compile))
The keys in the map passed to inject-resolvers use the namespace to identify the object (such as Human or Query) and the local name to identify the field within the object (hero and so forth).
The inject-resolvers step occurs before the schema is compiled.
[1] An older approach is still supported, via the function com.walmartlabs.lacinia.util/attach-resolvers, but inject-resolvers is preferred, as it is simpler.
Explicit Types¶
For structured types, Lacinia needs to know what type of data is returned by the field resolver, so that it can, as necessary, process query fragments.
When the type of field is a concrete object type, Lacinia automatically tags the value with the schema type.
When the type of a field is an interface or union, it is necessary for the field resolver to explicitly tag the value with its object type.
Using tag-with-type¶
The function com.walmartlabs.lacinia.schema/tag-with-type exists for this purpose. The tag value is a keyword matching an object definition.
When a field returns a list of an interface, or a list of a union, then each individual resolved value must be tagged with its concrete type. It is allowed and expected that different values in the collection will have different concrete types.
Generally, type tagging is just metadata added to a map (or Clojure record type).
However, Lacinia also supports tagging arbitrary objects that don’t support Clojure metadata; in that case, tag-with-type returns a wrapper type. When using Java types, make sure that tag-with-type is the last thing a field resolver does.
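As a sketch (the resolve-hero function and the hard-coded droid value are hypothetical), a resolver for a field whose declared type is the Character interface might tag its value like this:

```clojure
(require '[com.walmartlabs.lacinia.schema :as schema])

;; Hypothetical resolver for a field of interface type :Character;
;; the value is tagged with its concrete object type (:Droid) so
;; that Lacinia can process fragments correctly.
(defn resolve-hero
  [context args _]
  (schema/tag-with-type {:id "2001"
                         :name "R2-D2"}
                        :Droid))
```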
Using record types¶
As an alternative to tag-with-type, it is possible to associate an object with a Java class; typically this is a record type created using defrecord.
The :tag key of the object definition must be set to the class name (as a symbol).
{:unions
 {:Searchable
  {:members [:Business :Employee]}}

 :objects
 {:Business
  {:fields
   {:id {:type ID}
    :name {:type String}}
   :tag com.example.data.Business}

  :Employee
  {:fields
   {:id {:type ID}
    :employer {:type :Business}
    :givenName {:type String}
    :familyName {:type String}}
   :tag com.example.data.Employee}

  :Query
  {:fields
   {:businesses {:type (list :Business)}
    :search {:type (list :Searchable)}}}}}
This only works if the field resolver functions return the corresponding record types, rather than ordinary Clojure maps.
In the above example, the field resolvers would need to invoke the map->Business or map->Employee constructor functions as appropriate.
Tip
The :tag value is a Java class name, not a namespaced Clojure name.
That means no slash character, and dashes in the namespace must be converted to underscores.
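A minimal sketch of field resolvers that return record types matching the :tag values above (the resolver name and sample data are hypothetical):

```clojure
(ns com.example.data)

;; Record classes whose names match the :tag values in the schema.
(defrecord Business [id name])
(defrecord Employee [id employer givenName familyName])

;; Hypothetical resolver for the search field; each value's class
;; identifies its concrete type, so no tag-with-type call is needed.
(defn resolve-search
  [context args _]
  [(map->Business {:id "b-1" :name "Acme Anvils"})
   (map->Employee {:id "e-1" :givenName "Ada" :familyName "Lovelace"})])
```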
Container type¶
When a field resolver is invoked, the context value for key :com.walmartlabs.lacinia/container-type-name will be the name of the concrete type (a keyword) for the resolved value passed into the resolver.
This will be nil for top-level operations.
When the type of the containing field is a union or interface, this value will be the specific concrete object type of the actual resolved value.
Exceptions¶
Field resolvers are responsible for catching any exceptions that occur.
Uncaught exceptions are not converted to errors; they are caught, wrapped in a new exception to identify the field name, field arguments, query path, and query location, but then allowed to bubble up out of Lacinia entirely.
This is not desirable: better to return a partial result along with errors.
Field resolvers should catch exceptions and use ResolverResults
to communicate errors back to Lacinia for inclusion in the :errors
key of the result.
Failure to catch exceptions is even more damaging when using async field resolvers, as this can cause query execution to entirely halt, due to resolver result promises never being delivered.
Using ResolverResult¶
In the simplest case, a field resolver will do its job, resolving a value, simply by returning the value.
However, there are several other scenarios:
- There may be errors associated with the field which must be communicated back to Lacinia (for the top-level :errors key in the result map)
- A field resolver may want to introduce changes into the application context as a way of communicating to deeply nested field resolvers
- A field resolver may operate asynchronously, and want to return a promise for data that will be available in the future
Field resolvers should not throw exceptions; instead, if there is a problem generating the resolved value, they should use the com.walmartlabs.lacinia.resolve/resolve-as function to return a ResolverResult value.
When using resolve-as, you may pass the error map as the second parameter (which is optional).
The first parameter is the resolved value, which may be nil.
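For example, a resolver might catch a failure and report it via resolve-as (the find-product function is hypothetical):

```clojure
(require '[com.walmartlabs.lacinia.resolve :refer [resolve-as]])

;; Hypothetical resolver: on success, resolve the product value;
;; on failure, resolve nil along with an error map that will appear
;; under the top-level :errors key of the result.
(defn resolve-product
  [context args _]
  (try
    (resolve-as (find-product (:id args)))
    (catch Exception e
      (resolve-as nil {:message (str "Unable to fetch product: "
                                     (.getMessage e))}))))
```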
Errors will be exposed as the top-level :errors key of the execution result.
Error maps must contain, at a minimum, a :message key with a value of type String.
You may specify other keys and values as you wish, but these values will be part of the ultimate result map, so they should be both concise and safe for the transport medium. Generally, this means not to include values that can’t be converted into JSON values.
In the result map, error maps are transformed; they contain the :message key, as well as :locations, :path, and (sometimes) :extensions.
{:data {:hero {:friends [nil nil nil]}}
 :errors [{:message "Non-nullable field was null."
           :locations [{:line 1
                        :column 20}]
           :path [:hero :friends :arch_enemy]}]}
The :locations key identifies where in the query document, as a line and column address, the error occurred.
Its value is an array (normally, a single value) of location maps; each location map has :line and :column keys.
The :path key associates the error with a location in the result data; this is a seq of the names of fields (or aliases for fields).
Some elements of the path may be numeric indices into sequences, for fields of type list.
These indices are zero based.
Any additional keys in the error map are collected into the :extensions key (which is only present when the error map has such keys).
The order in which errors appear in the :errors key of the result map is not specified; however, Lacinia does remove duplicate errors.
Tagging Resolvers¶
If you write a function that always returns a ResolverResult, you should set the tag of the function to be com.walmartlabs.lacinia.resolve/ResolverResult. Doing so enables an optimization inside Lacinia - it can skip the code that checks to see if the function did in fact return a ResolverResult and wrap it in a ResolverResult if not.
Unfortunately, because of how Clojure handles function meta-data, you need to write your function as follows:
(require '[com.walmartlabs.lacinia.resolve :refer [resolve-as ResolverResult]])
(def tagged-resolver
  ^ResolverResult (fn [context args value]
                    (resolve-as ...)))
This places the type tag on the function, not on the symbol (as normally happens with defn).
It doesn’t matter whether the function invokes resolve-as or resolve-promise, but returning nil or a bare value from a field resolver tagged with ResolverResult will cause runtime exceptions, so be careful.
Default Resolvers¶
When a field does not have an explicit resolver in the schema, a default resolver is provided by Lacinia (this is just one of the many operations that occur when compiling a schema); ultimately, every field has a resolver, but the vast majority are these default resolvers.
The default resolver maps the field name directly to a map key; a field named userName will default to a keyword key, :userName; it does no conversions beyond that, so there is no magic mapping from userName to :user-name, for example.
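Conceptually (this is a sketch, not Lacinia's actual implementation), the default resolver behaves like:

```clojure
;; A sketch of the default resolver's behavior: look up the field
;; name, as a keyword, in the container's resolved value.
(defn default-resolver
  [field-name]
  (fn [context args value]
    (get value field-name)))

;; The field name must match the key exactly; no kebab-case mapping.
((default-resolver :userName) nil nil {:userName "thx1138"})
;; => "thx1138"
((default-resolver :userName) nil nil {:user-name "thx1138"})
;; => nil
```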
Nested Resolver Results¶
Normally, a field resolver for a non-scalar field returns a map; the map is recursively selected by any nested fields (a field with a list type will return a seq of raw values).
It is allowed that nested values are themselves ResolverResult instances; this is an alternative to defining resolvers for the nested fields, and only really makes sense for a ResolverResultPromise (an asynchronous result).
This has advantages: there’s no need to define additional resolvers, and in some cases, no need to pass extra state in the context needed by the nested resolvers.
The disadvantage is that the field in question may not be selected in the query, in which case the asynchronous work being performed will simply be discarded.
FieldResolver Protocol¶
In the majority of cases, a field resolver is simply a function that accepts the three parameters: context, args, and value.
However, when structuring large systems using Component (or any similar approach), this can be inconvenient, as it does not make it possible to structure field resolvers as components.
The com.walmartlabs.lacinia.resolve/FieldResolver protocol addresses this: it defines a single method, resolve-value.
This method is the analog of an ordinary field resolver function.
The support for this protocol is baked directly into com.walmartlabs.lacinia.schema/compile.
Just like field resolver functions, FieldResolver instances can resolve a value directly by returning it, or can return a ResolverResult.
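A sketch of a component-style resolver implementing the protocol (the UsersComponent record and find-user function are hypothetical):

```clojure
(require '[com.walmartlabs.lacinia.resolve :as resolve])

;; A component holding a database connection that doubles as a
;; field resolver by implementing the FieldResolver protocol.
(defrecord UsersComponent [db-connection]
  resolve/FieldResolver
  (resolve-value [_ context args value]
    ;; find-user is a hypothetical data-access function.
    (find-user db-connection (:id args))))
```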
Asynchronous Field Resolvers¶
Lacinia supports asynchronous field resolvers: resolvers that run in parallel within a single request.
This can be very desirable: different fields within the same query may operate on different databases or other backend data sources, for example. Queries to these backends can easily execute in parallel, resulting in better throughput for the overall GraphQL query execution than executing serially.
Alternately, a single request may invoke multiple top-level operations which, again, can execute in parallel.
It’s very easy to convert a normal synchronous field resolver into an asynchronous field resolver: instead of returning a normal value, an asynchronous field resolver returns a special kind of ResolverResult, a ResolverResultPromise.
Such a promise is created by the com.walmartlabs.lacinia.resolve/resolve-promise function.
The field resolver function returns immediately, but will typically perform some work in a background thread.
When the resolved value is ready, the deliver! method can be invoked on the promise.
(require
  '[com.walmartlabs.lacinia.resolve :as resolve])

(defn ^:private get-user
  [connection user-id]
  ...)

(defn resolve-user
  [context args _]
  (let [{:keys [id]} args
        {:keys [connection]} context
        result (resolve/resolve-promise)]
    (.start (Thread.
              #(try
                 (resolve/deliver! result (get-user connection id))
                 (catch Throwable t
                   (resolve/deliver! result nil
                                     {:message (str "Exception: " (.getMessage t))})))))
    result))
The promise is created and returned from the
field resolver function.
In addition, as a side effect, a thread is started to perform some work.
When the work is complete, the deliver!
method on the promise will inform
Lacinia, at which point Lacinia can start to execute selections on the resolved value
(in this example, the user data).
On normal queries, Lacinia will execute as much as it can in parallel. This is controlled by how many of your field resolvers return a promise rather than a direct result.
Despite the order of execution, Lacinia ensures that the order of keys in the result map matches the order in the query.
For mutations, the top-level operations execute serially. That is, Lacinia will execute one top-level operation entirely before starting the next top-level operation.
Timeouts¶
Lacinia does not enforce any timeouts on the field resolver functions, or the promises they return.
If a field resolver fails to deliver! to a promise, then Lacinia will block indefinitely.
It’s quite reasonable for a field resolver to enforce some kind of timeout on its own, and deliver nil and an error message when a timeout occurs.
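A sketch of a resolver that enforces its own timeout (the fetch-user function and the 100 ms limit are hypothetical):

```clojure
(require '[com.walmartlabs.lacinia.resolve :as resolve])

(defn resolve-user-with-timeout
  [context args _]
  (let [result (resolve/resolve-promise)]
    (future
      ;; deref with a timeout: yields ::timeout if fetch-user
      ;; (hypothetical) does not complete within 100 ms.
      (let [value (deref (future (fetch-user args)) 100 ::timeout)]
        (if (= ::timeout value)
          (resolve/deliver! result nil {:message "User fetch timed out."})
          (resolve/deliver! result value))))
    result))
```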
Exceptions¶
Uncaught exceptions in an asynchronous resolver are especially problematic: they mean that ResolverResultPromises are never delivered.
In the example above, any thrown exception is converted to an error map.
Warning
Not catching exceptions will lead to promises that are never delivered, which will cause Lacinia to block indefinitely.
Thread Pools¶
The on-deliver! callback does not execute on the main thread; it is always invoked in a worker thread within a thread pool.
It is recommended that an application-specific ThreadPoolExecutor be provided as the :executor option when compiling the schema. If omitted, then a default executor will be provided.
Note
The :executor option was added in Lacinia 1.2.
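A sketch of supplying an executor at schema compile time (input-schema is defined elsewhere; the pool size of 10 is an arbitrary, application-specific choice):

```clojure
(require '[com.walmartlabs.lacinia.schema :as schema])
(import 'java.util.concurrent.Executors)

;; A fixed pool of 10 worker threads; on-deliver! callbacks will
;; be invoked on threads from this pool.
(def compiled-schema
  (schema/compile input-schema
                  {:executor (Executors/newFixedThreadPool 10)}))
```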
Nested Selections¶
To review: executing a GraphQL query is highly recursive, where a field resolver will be responsible for obtaining data from an external resource or data store, and nested fields will refine the root data and make selections.
Consider a simple GraphQL query:
{
  hero {
    name
    friends { name }
    ... on Human {
      homePlanet
    }
  }
}
The query can be visualized as a tree of selections:
(Figure: the selections tree for this query, showing hero, name, friends, and the "... on Human" fragment selecting homePlanet; dashed edges map each selection to the corresponding field of the Query, Character, and Human types in the schema.)
Nodes in the selections tree relate to fields in the schema.
Remember that the type of the hero query is the interface type Character (and so the value can be either a Human or a Droid).
In execution order, resolution occurs top to bottom, so the hero selection occurs first, then (potentially in parallel) friends, homePlanet, and (hero) name.
These last two are leaf nodes, because they are scalar values.
The list of Characters (from the friends field) then has its name field selected.
The result map is then constructed from bottom to top.
Accessing the Selection¶
From inside a field resolver, the function com.walmartlabs.lacinia.executor/selection may be invoked to return a com.walmartlabs.lacinia.protocols/FieldSelection instance; from this instance it is possible to identify directives on the field, navigate to nested selections, or even navigate to the field type and identify directives there.
Previewing Selections¶
Tip
This API is a bit older than com.walmartlabs.lacinia.executor/selection, but is a bit easier to use.
Also, selection includes the field selection itself; these APIs identify only nested selections below the current field.
A field resolver can “preview” what fields will be selected below it in the selections tree. This is a tool frequently used to optimize data retrieval operations.
As an example, let’s assume a starting configuration where the hero field resolver fetches just the basic data for a hero (id, name, homePlanet, etc.) and the friends resolver does a second query against the database to fetch the list of friends.
That’s two database queries. Perhaps we can optimize things by getting rid of the friends resolver, and doing a join to fetch the hero and friends at the same time.
The hero resolver can then just ensure there’s a :friends key in the map (with the fetched friend values), and the default field resolver for the friends field will access it.
That’s simpler, but costly when friends is not part of the query.
The function com.walmartlabs.lacinia.executor/selects-field? can help here:
(require
  '[com.walmartlabs.lacinia.executor :as executor])

(defn resolve-hero
  [context args _]
  (if (executor/selects-field? context :Character/friends)
    (fetch-hero-with-friends args)
    (fetch-hero args)))
Here, inside the application context (provided to the resolve-hero function) is information about the selections, and selects-field? can determine if a particular field appears anywhere below hero in the selection tree.
selects-field? identifies fields even inside nested or named fragments; for the query above, (executor/selects-field? context :Human/homePlanet) would return true.
It is also possible to get all the fields that will be selected, using selections-seq.
This is a lazy, breadth-first navigation of all fields in the selection tree.
In the sequence of field names, any fragments are collapsed into their containing fields.
This level of detail may be insufficient, in which case the function selections-tree
can be used.
This function builds a recursive structure that identifies the entire tree structure. For the above query, it would return the following structure:
{:Character/friends [{:selections {:Character/name [nil]}}]
 :Human/homePlanet [nil]
 :Character/name [nil]}
Each key in the map identifies a specific field by the qualified field name, such as :Character/friends.
The value is a vector of how that field is used; it is a vector because the same field may appear in the selection multiple times, using aliases.
This shows, for example, that :Character/name is used in two different ways (inside the hero query itself, and within the friends field).
For fields with arguments, an :args key is present, with the exact values that will be supplied to the nested field’s resolver.
For fields with an alias, an :alias key will be present; the value is a keyword for the alias (as provided in the client query).
Fields without arguments, sub-selections, or an alias are represented as nil.
Application Context¶
The application context passed to your field resolvers is normally set by the initial call to com.walmartlabs.lacinia/execute. Lacinia uses the context for its own book-keeping (the keys it places into the map are namespaced to avoid collisions) but otherwise the same map is passed to all field resolvers.
In specific cases, it is useful to allow a field resolver to modify the application context, with the change exposed just to the fields nested below, but to any depth.
For example, in this query:
{
  products(search: "fuzzy") {
    category {
      name
      product {
        upc
        name
        highlightedName
      }
    }
  }
}
Here, the search term is provided to the products field, but is again needed by the highlightedName field, to highlight the parts of the name that match the search term.
The resolver for the products field can communicate this information “down tree” to the resolver for the highlightedName field, by using the com.walmartlabs.lacinia.resolve/with-context function.
(require '[com.walmartlabs.lacinia.resolve :as resolve])

(defn resolve-products
  [_ args _]
  (let [search-term (:search args)]
    (-> (perform-product-search args)
        (resolve/with-context {::search-term search-term}))))

(defn resolve-highlighted-name
  [context _ product]
  (let [{:keys [::search-term]} context]
    (-> product :name (add-highlight search-term))))
The map provided to with-context will be merged into the application context before any nested resolvers are invoked.
In this way, the new key, ::search-term, is only present in the context for field resolvers below the products field.
Some field resolvers return lists of values; the entire list can be wrapped in this way, or individual values within the list may be wrapped.
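For example, a sketch of wrapping individual values so each nested resolver sees a per-item context key (the ::rank key and resolver name are hypothetical):

```clojure
(require '[com.walmartlabs.lacinia.resolve :as resolve])

;; Each product in the list carries its own context addition;
;; resolvers below see ::rank for their particular product.
(defn resolve-ranked-products
  [_ args _]
  (map-indexed
    (fn [i product]
      (resolve/with-context product {::rank i}))
    (perform-product-search args)))
```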
Tip
Remember that query execution is top to bottom, then the final result map is assembled from the leaves back up to the roots.
When using lacinia-pedestal, the default behavior is to capture the Ring request map and supply it in the application context under the :request key.
Extensions¶
Lacinia makes it possible to add extension data to the result map from within field resolvers.
This extension data is exposed as the :extensions key on the result map (alongside the more familiar :data and :errors keys).
The GraphQL specification allows for extensions but leaves them entirely up to the application to define and use; there’s no validation at all. One example of extension data is tracing information that Lacinia can optionally include in the result.
More general extension data is introduced using modifier functions defined in the com.walmartlabs.lacinia.resolve namespace.
The with-extensions function is used to introduce data into the result.
with-extensions is provided with a function, and additional optional arguments, and is used to modify the extension data to a new state.
A hypothetical example: perhaps you would like to identify the total amount of time spent performing database access, and return a total in the :total-db-access-ms extension key.
You might instrument each resolver that accesses the database to calculate the elapsed time.
(require '[com.walmartlabs.lacinia.resolve :refer [with-extensions]])

(defn resolve-user
  [context args value]
  (let [start-ms (System/currentTimeMillis)
        user-data (get-user-data (:id args))
        elapsed-ms (- (System/currentTimeMillis) start-ms)]
    (with-extensions user-data
      update :total-db-access-ms (fnil + 0) elapsed-ms)))
The call to with-extensions adds the elapsed time for this call to the key (the fnil function lets the + function treat nil as 0).
This data is exposed in the final result map:
{:data
 {:user
  {:id "thx1138"
   :name "Thex"}}
 :extensions
 {:total-db-access-ms 3}}
You should be aware that field resolvers may run in an unpredictable order, especially when asynchronous field resolvers are involved.
Complex update logic may be problematic if one field resolver expects to modify extension data introduced by a different field resolver.
Sticking with assoc-in or update-in is a good bet.
Warnings¶
The with-error modifier function adds an error map to a result; in general, errors are serious: Lacinia adds errors when it can’t parse the GraphQL document, for example.
Lacinia adds a less dramatic level, a warning, via the with-warning modifier.
with-warning adds an error map to the :warnings map stored in the :extensions key.
What a client does with errors and warnings is up to the application. In general, errors should indicate a failure of the overall request, and an interactive client might display an alert dialog related to the request and response.
An interactive client may present warnings differently, perhaps adding an icon to the user view to indicate that there were non-fatal issues executing the query.
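A sketch of attaching a warning to an otherwise successful result (the find-product function and the warning message are hypothetical):

```clojure
(require '[com.walmartlabs.lacinia.resolve :refer [with-warning]])

;; The value resolves normally; the warning surfaces under the
;; :warnings map within the :extensions key of the result.
(defn resolve-product-with-warning
  [context args _]
  (-> (find-product (:id args))
      (with-warning {:message "Price data may be up to 24 hours old."})))
```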
Examples¶
Formatting a Date¶
This is a common scenario: you have a Date or Timestamp value and it needs to be in a specific format in the result map.
In this example, the field resolver will extract the key from the container’s resolved value, and format it:
(defn resolve-updated-at
  [context args resolved-value]
  (->> resolved-value
       :updated-at
       (format "%tm-%<td-%<tY")))
...
(inject-resolvers {:MyType/updatedAt resolve-updated-at})
This example is tied to a specific key (:updated-at) and a specific format.
If this kind of transformation will apply to many different fields, you could easily create a function factory:
(defn resolve-date-field
  [k fmt]
  (fn [context args resolved-value]
    (format fmt (get resolved-value k))))

...

(inject-resolvers {:MyType/updatedAt (resolve-date-field :updated-at "%tm-%<td-%<tY")})
Accessing a Java Instance Method¶
In some cases, you may find yourself exposing JavaBeans as schema objects. This fights against the grain of Lacinia, which expects schema objects to be Clojure maps.
It would be tedious to write a custom field resolver function for each and every Java instance method that needs to be invoked. Instead, we can use a factory function:
(defn- make-accessor
  [method-sym]
  (let [method-name (name method-sym)
        arg-types (make-array Class 0)
        args (make-array Object 0)]
    (fn [value]
      (let [c (class value)
            method (.getMethod c method-name arg-types)]
        (.invoke method value args)))))

(defn resolve-method
  [method-sym]
  (let [f (make-accessor method-sym)]
    (fn [_ _ value]
      (f value))))

;; Later, when injecting resolvers ...

(-> ...
    (utils/inject-resolvers {:MyObject/myField (resolve-method 'myField)})
    (schema/compile))
This won’t be the most efficient approach, since it has to look up a method on each use and then invoke that method using Java reflection, but it may be suitable for light use, or as the basis for a more efficient implementation.
Note
More examples forthcoming.
Input Objects¶
In some cases, it is desirable for a query to include arguments with more complex data than a single value. A typical example would be passing a bundle of values as part of a mutation operation (rather than an unmanageable number of individual field arguments).
Input objects are defined like ordinary objects, with a number of restrictions:
- Field types are limited to scalars, enums, and input objects (or list and non-null wrappers around those types)
- There are no field resolvers for input objects; these are values passed in their entirety from the client to the server
- Input objects do not implement interfaces
Input objects are defined as keys of the top-level :input-objects key in the schema.
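A sketch of an input object used as a mutation argument (the :CustomerInput name and its fields are hypothetical):

```clojure
;; Hypothetical schema fragment: a :CustomerInput input object
;; passed in its entirety from the client as a mutation argument.
;; The resolver for createCustomer would be injected separately.
{:input-objects
 {:CustomerInput
  {:fields {:givenName {:type String}
            :familyName {:type String}
            :email {:type (non-null String)}}}}

 :mutations
 {:createCustomer
  {:type :Customer
   :args {:customer {:type (non-null :CustomerInput)}}}}}
```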
Custom Scalars¶
Defining custom scalars may allow users to better model their domain.
To define a custom scalar, you must provide implementations, in your schema, for two transforming callback functions:
- parse: parses query arguments and coerces them into their scalar types according to the schema.
- serialize: serializes a scalar to a value that will be included in the result of the query or mutation.
In other words, a scalar is serialized to another type, typically a string, as part of executing a query and generating results. In some cases, such as field arguments, the reverse may be true: the client will provide the serialized version of a value, and the parse operation will convert it back to the appropriate type.
Both a parse function and a serialize function must be defined for each scalar type. These callback functions are passed a value and perform necessary coercions and validations.
Neither callback is ever passed nil.
Dates are a common example of this, as dates are not supported directly in JSON, but are typically encoded as some form of string.
Here is an example that defines and uses a custom :Date scalar type:
(require
  '[clojure.spec.alpha :as s]
  '[com.walmartlabs.lacinia :as g]
  '[com.walmartlabs.lacinia.schema :as schema])

(import
  java.text.SimpleDateFormat
  java.util.Date)

(def date-formatter
  "Used by custom scalar :Date."
  (SimpleDateFormat. "yyyy-MM-dd"))

(def schema
  (schema/compile
    {:scalars
     {:Date
      {:parse #(when (string? %)
                 (try
                   (.parse date-formatter %)
                   (catch Throwable _
                     nil)))
       :serialize #(try
                     (.format date-formatter %)
                     (catch Throwable _
                       nil))}}

     :queries
     {:today
      {:type :Date
       :resolve (fn [ctx args v] (Date.))}

      :echo
      {:type :Date
       :args {:input {:type :Date}}
       :resolve (fn [ctx {:keys [input]} v] input)}}}))
(g/execute schema "{today}" nil nil)
=> {:data {:today "2018-11-21"}}
(g/execute schema "{ echo(input: \"2018-11-22\") }" nil nil)
=> {:data {:echo "2018-11-22"}}
(g/execute schema "{ echo(input: \"thanksgiving\") }" nil nil)
=>
{:errors [{:message "Exception applying arguments to field `echo': For argument `input', unable to convert \"thanksgiving\" to scalar type `Date'.",
:locations [{:line 1, :column 3}],
:extensions {:field :echo, :argument :input, :value "thanksgiving", :type-name :Date}}]}
Warning
This is just a simplified example used to illustrate the broad strokes. It is not thread safe, because the SimpleDateFormat class is not thread safe.
Parsing¶
The parse callback is provided a value that originates in either the GraphQL query document, or in the variables map.
The values passed to the callback may be strings, numbers, or even maps (with keyword keys). It is expected that the parse function will do any necessary conversions and validations, or indicate an invalid value.
Serializing¶
Serializing is often the same as parsing (in fact, it is not uncommon to use one function for both roles).
The serialize callback is passed whatever value was selected from a field and coerces it to an appropriate value for the response (typically, either a string, or another value that can be encoded into JSON).
Handling Invalid Values¶
Especially when parsing an input string into a value, there can be problems, such as invalid data included in the request.
Values may not always be parsable or serializable: a faulty client may pass incorrect data into Lacinia to be parsed, or a programming error may cause a mismatch when serializing.
The simplest way to indicate a parse or serialize failure is to simply return nil. Lacinia will create a generic error map and add that to the response.
Alternately, a parse or serialize callback can throw an exception; the message of the exception can provide more details about what failed, and the ex-data of the exception will be merged into the error map.
For example, a Date scalar may use a java.time.format.DateTimeFormatter to parse a string, and may catch an exception from the call to parse and supply a more user-friendly exception detailing the expected format.
Scalars and Variables¶
When using variables, the scalar parser will be provided not with a string per se, but with a Clojure value: a native Long, Double, or Boolean. In this case, the parser is not so much parsing as validating and transforming.
For example, the built-in Int parser handles strings and all kinds of numbers (including non-integers). It also ensures that Int values are, as identified in the specification, limited to signed 32 bit numbers.
Attaching Scalar Transformers¶
As with field resolvers, the pair of transformers for each scalar have no place in an EDN file as they are functions. Instead, the transformers can be attached after reading the schema from an EDN file, using the function com.walmartlabs.lacinia.util/attach-scalar-transformers.
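A sketch of attaching transformers after reading the EDN file (the placeholder keywords :parse-date and :serialize-date, the parse-date/serialize-date functions, and the schema.edn resource name are all hypothetical; in the EDN file, the scalar's :parse and :serialize values would be those placeholder keywords):

```clojure
(require '[clojure.edn :as edn]
         '[clojure.java.io :as io]
         '[com.walmartlabs.lacinia.schema :as schema]
         '[com.walmartlabs.lacinia.util :as util])

(def compiled-schema
  (-> (io/resource "schema.edn")
      slurp
      edn/read-string
      ;; Replaces the :parse-date / :serialize-date placeholders in
      ;; the EDN with the actual transformer functions.
      (util/attach-scalar-transformers {:parse-date parse-date
                                        :serialize-date serialize-date})
      schema/compile))
```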
Root Object Names¶
Top-level query, mutation, and subscription operations are represented in Lacinia as fields on special objects.
These objects are the “operation root objects”.
The default names of these objects are Query, Mutation, and Subscription.
When compiling an input schema, these objects will be created if they do not already exist.
The :roots key of the input schema is used to override the default names. Inside :roots, you may specify keys :query, :mutation, or :subscription, with the value being the name of the corresponding object. There is rarely a need to rename the objects from their default values, however.
If the objects already exist, then any fields on the objects are automatically available operations. In the larger GraphQL world (beyond Lacinia), this is the typical way of defining operations.
Beyond that, the operations from the :queries, :mutations, and :subscriptions maps of the input schema will be merged into the fields of the corresponding root object; this is supported, but not preferred.
Name collisions are not allowed; a schema compile exception is thrown if an operation (via :queries, etc.) conflicts with a field of the corresponding root object.
Unions¶
It is allowed for the root type to be a union. This can be a handy way to organize many different operations.
In this case, Lacinia creates a new object and merges the fields of the members of the union, along with any operations from the input schema.
Again, the field names must not conflict.
The new object becomes the root operation object.
Resolver Tracing¶
When scaling a service, it is invaluable to know where queries spend their execution time. When enabled, Lacinia collects performance tracing information compatible with Apollo GraphQL.
Timing collection is enabled by passing the context through the com.walmartlabs.lacinia.tracing/enable-tracing function:
(require '[com.walmartlabs.lacinia :as lacinia]
[com.walmartlabs.lacinia.tracing :as tracing])
(def star-wars-schema ...)
(lacinia/execute
star-wars-schema "
{
luke: human(id: \"1000\") { friends { name }}
leia: human(id: \"1003\") { name }
}"
nil
(tracing/enable-tracing nil))
=>
{:data {:luke {:friends [{:name "Han Solo"}
{:name "Leia Organa"}
{:name "C-3PO"}
{:name "R2-D2"}]},
:leia {:name "Leia Organa"}},
:extensions {:tracing {:version 1,
:startTime "2020-08-31T22:14:25.401Z",
:endTime "2020-08-31T22:14:25.449Z",
:duration 47430231,
:parsing {:startOffset 68824, :duration 38932608},
:validation {:startOffset 39099642, :duration 1941960},
:execution {:resolvers [{:path [:luke],
:parentType :Query,
:fieldName :human,
:returnType "Human!",
:startOffset 42476480,
:duration 303264}
{:path [:luke :friends],
:parentType :Human,
:fieldName :friends,
:returnType "[Character]",
:startOffset 43183550,
:duration 185802}
{:path [:luke :friends 0 :name],
:parentType :Human,
:fieldName :name,
:returnType "String",
:startOffset 43669784,
:duration 16145}
{:path [:luke :friends 1 :name],
:parentType :Human,
:fieldName :name,
:returnType "String",
:startOffset 44205401,
:duration 4629}
{:path [:luke :friends 2 :name],
:parentType :Droid,
:fieldName :name,
:returnType "String",
:startOffset 44346489,
:duration 4563}
{:path [:luke :friends 3 :name],
:parentType :Droid,
:fieldName :name,
:returnType "String",
:startOffset 44477160,
:duration 3971}
{:path [:leia],
:parentType :Query,
:fieldName :human,
:returnType "Human!",
:startOffset 46609256,
:duration 130413}
{:path [:leia :name],
:parentType :Human,
:fieldName :name,
:returnType "String",
:startOffset 46866059,
:duration 7833}]}}}}
Note that tracing is an execution option, not a schema compilation option; it’s just a matter of setting up the application context (via enable-tracing) before parsing and executing the query.
When the field resolvers are asynchronous, you’ll often see that the startOffset and duration of multiple fields represent overlapping time periods, which is exactly what you want.
Generally, resolver tracing maps are added to the list in order of completion, but the exact order is not guaranteed.
Enabling tracing adds a lot of overhead to execution and parsing, both because of the essential cost of collecting the tracing information and because certain optimizations are disabled when tracing is enabled; for example, default field resolvers (where no explicit resolver is provided) are normally heavily optimized to avoid unnecessary function calls and object creation.
Warning
Tracing should never be enabled in production. This can be accomplished by removing the com.walmartlabs.lacinia.pedestal2/enable-tracing-interceptor
interceptor from the pipeline (when using lacinia-pedestal).
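A sketch of one way to do that; this assumes the default interceptor pipeline from lacinia-pedestal’s pedestal2 namespace, and `production?` is a hypothetical application flag:

```clojure
(require '[com.walmartlabs.lacinia.pedestal2 :as lp])

;; Build the interceptor list, dropping the tracing interceptor when
;; running in production. Exact pipeline assembly depends on how your
;; service is configured; this is only a sketch.
(defn graphql-interceptors
  [compiled-schema production?]
  (cond->> (lp/default-interceptors compiled-schema nil)
    production? (remove #(= % lp/enable-tracing-interceptor))))
```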
Apollo GraphQL Federation¶
GraphQL federation is a concept, spearheaded by Apollo GraphQL (a Node-based JavaScript project), whereby multiple GraphQL schemas can be combined behind a single gateway service. It’s a useful concept, as it allows different teams to stand up their own schemas and services, written in any language, and dynamically combine them into a single “super” schema.
Each service’s schema can evolve independently (as long as that evolution is backwards compatible), and each service can deploy on its own cycle. The gateway becomes the primary entrypoint for all clients, and it knows how to break service-spanning queries apart and build an overall query plan.
Lacinia has been extended, starting in 0.38.0, to support acting as an implementing service; there is no plan at this time to act as a gateway.
Warning
At this time, only a schema defined with the Schema Definition Language can be extended to act as a service implementation.
Essentially, federation allows a set of services to each provide their own types, queries, and mutations, and organizes things so that each service can provide additional fields to the types provided by the other services.
The Apollo GraphQL documentation includes a basic example, where a users service exposes a User type (and related queries), a products service exposes a Product type, and a reviews service exposes a Review type.
Without federation, these individual services are useful, but limited. A smart client could know about all three services, and send a series of requests to each, to build up a model of, say, a particular user and the products that user has reviewed.
For example:
- Query the users service for the user, providing the user’s unique id
- Query the reviews service to get a list of reviews for that specific user (again, passing the user’s unique id)
- Query the products service for details (name, price, etc.) for each product reviewed by the user
… but this is a lot to heap on the client developers; each client will have to manage three sets of GraphQL endpoints, and know exactly which fields are needed to bridge relationships between the different services.
Instead, federation allows the Apollo GraphQL gateway to merge together the three individual services into one composite service.
The client only needs access to the single gateway endpoint, and is free to make complex queries that span from User to Review to Product seamlessly; the gateway service is responsible for communicating with the implementing services and merging together the final response.
A GraphQL type (or interface) that can span services this way is called an entity.
In federation terms, the User entity is internal to the users service, and external to the other two services. The users service defines all the fields of the User entity, and can add new fields whenever necessary while staying backwards compatible, just as with a traditional GraphQL schema.
In the other schemas, the User type is external; just a stub for User is defined in the schemas for the products and reviews services. The full type and the stub must agree on the fields that define the primary key for the entity. However, the stub can be extended by the other services to add new fields, surfacing data and relationships owned by the particular service.
Internal Entities¶
Defining an entity that is internal is quite straightforward; it is almost the same as in traditional GraphQL.
type User @key(fields: "id") {
id: String!
name: String!
}
type Query {
userById(id:String!) : User
}
When federation is enabled, a number of new directives are automatically available, including @key, which defines the primary key, or primary keys, for the entity.
The above example would be the schema for a users service that can be extended by other services.
External Entities¶
The previous example showed an internal entity that can be extended; this example shows a different service providing its own internal entity, but also extending the User entity.
type User @extends @key(fields: "id") {
id: String! @external
favoriteProducts: [Product]
}
type Product @key(fields: "upc") {
upc: String!
name: String!
price: Int!
}
type Query {
productByUpc(upc: String!) : Product
}
Note the use of the @extends directive; this indicates that User (in the products service) is a stub for the full User entity in the users service.
You must ensure that the external User includes the same @key directive (or directives) and the same primary key fields; here id must be present, since it is part of the primary key.
The @external directive indicates that the field is provided by another service (the users service).
The favoriteProducts field on User is an addition provided by this service, the products service. Like any other field, a resolver must be provided for it. We’ll see how that works shortly.
Notice that this service adds the productByUpc query to the Query object; the Apollo GraphQL gateway merges together all the queries defined by all the implementing services.
Again, the point of the gateway is that it mixes and matches from all the implementing services; clients should only go through the gateway since that’s the only place where this merged view of all the individual schemas exists.
The gateway is capable of building a plan that involves multiple steps to satisfy a client query.
For example, consider this gateway query:
{
userById(id: "124c41") {
name
favoriteProducts { upc name price }
}
}
The gateway will start with a query to the users service; it will invoke the userById query there and will select both the name field (as specified in the client query) and the id field (since that’s specified in the @key directive on the User entity).
A second query will be sent to the products service. This query is used to get those favorite products; but to understand exactly how that works, we must first discuss representations.
Representations¶
A representation is a map that can be transferred from one implementing service to another, within the same federation. This is necessary to allow work started in one service to continue in another; consider the query:
{
userById(id: "124c41") {
name
favoriteProducts { upc name price }
}
}
The gateway will query the User/favoriteProducts field on the products service as the second step of this query … but where does the User come from?
After the gateway performs the initial query on the users service, it builds a representation of the specific User to pass to the products service, using information from the @key directive:
{"__typename": "User",
"id": "124c41"}
This representation is JSON, and is passed to an implementing service’s _entities query, which is automatically added to the implementing service’s schema by Lacinia:
scalar _Any
scalar _FieldSet
# a union of all types that use the @key directive
union _Entity
extend type Query {
_entities(representations: [_Any!]!): [_Entity]!
}
The _Entity union will contain all entities, internal or external, in the local schema; for the products service, this will be User (external) and Product (internal).
The _entities query exists to convert some number of representations (here, as the scalar type _Any) into entities (either stub entities or full entities).
The gateway sends a request that passes the representations in, and uses fragments to extract the data needed by the original client query:
query($representations:[_Any!]!) {
_entities(representations:$representations) {
... on User {
favoriteProducts {upc name price}
}
}
}
So, in the products service, the _entities resolver converts the representation into a stub User object, containing just enough information so that the favoriteProducts resolver can perform whatever database query it uses.
The response from the products service is merged together with the response from the users service, and a final response can be returned to the gateway service’s client.
Service Implementation¶
At this point, we’ve discussed what goes into each implementing service’s schema, and a bit about how each service is responsible for resolving representations; let’s finally see how this all fits together with Lacinia.
Below is a sketch of how this comes together in the products service:
(ns products.server
(:require
[io.pedestal.http :as http]
[clojure.java.io :as io]
[com.walmartlabs.lacinia.pedestal2 :as lp]
[com.walmartlabs.lacinia.parser.schema :refer [parse-schema]]
[com.walmartlabs.lacinia.schema :as schema]
[com.walmartlabs.lacinia.util :as util]))
(defn resolve-users-external
[_ _ reps]
(for [{:keys [id]} reps]
(schema/tag-with-type {:id id}
:User)))
(defn get-product-by-upc
[context upc]
;; Perform DB query here, return map with :upc, :name, :price
)
(defn get-favorite-products-for-user
[context user-id]
;; Perform DB query here, return seq of maps with :upc, :name, :price
)
(defn resolve-products-internal
[context _ reps]
(for [{:keys [upc]} reps
:let [product (get-product-by-upc context upc)]]
(schema/tag-with-type product :Product)))
(defn resolve-product-by-upc
[context {:keys [upc]} _]
(get-product-by-upc context upc))
(defn resolve-favorite-products
[context _ user]
(let [{:keys [id]} user]
(get-favorite-products-for-user context id)))
(defn products-schema
[]
(-> "products.gql"
io/resource
slurp
(parse-schema {:federation {:entity-resolvers {:Product resolve-products-internal
:User resolve-users-external}}})
(util/inject-resolvers {:Query/productByUpc #'resolve-product-by-upc
:User/favoriteProducts #'resolve-favorite-products})
schema/compile))
(defn start
[]
(-> (products-schema)
lp/default-service
http/create-server
http/start))
The resolve-users-external function is used to convert a seq of User representations into a seq of User entity stubs; it is called from the resolver for the _entities query, whose type is a list of the _Entity union, so each value must be tagged with the :User type.
resolve-products-internal does the same for Product representations, but since this is the products service, the expected behavior is to perform a query against an external data store and ensure the results match the structure of the Product entity.
resolve-product-by-upc is the resolver function for the productByUpc query. Since the field type is Product, there’s no need to tag the value.
resolve-favorite-products is the resolver function for the User/favoriteProducts field. It is passed the User stub (provided by resolve-users-external); it extracts the id and passes it to get-favorite-products-for-user.
The remainder is bare-bones scaffolding to read, parse, and compile the schema and build a Pedestal service endpoint around it.
Pay careful attention to the call to com.walmartlabs.lacinia.parser.schema/parse-schema; the presence of the :federation option is critical. It adds the necessary base types and directives before parsing the schema definition, and then adds the _entities query and _Entity union afterwards, among other things.
The :entity-resolvers map is also critical; it maps from a type name to an entity resolver, and this information is used to build the field resolver function for the _entities query.
Warning
A lot of details are left out of this, such as initializing the database and storing the database connection in the application context, where functions like get-product-by-upc can access it.
This is only a sketch to help you connect the dots.
Introspection¶
Introspection is a key part of GraphQL: the schema is self-describing.
Introspection data is derived directly from the schema.
Often, a :description
key is added to the schema to provide additional help.
Introspection is necessary to support the in-browser GraphiQL IDE.
Introspection can also be leveraged by smart clients, such as custom in-browser or mobile applications, to help deal with schema evolution. A smart client can use introspection to determine whether a particular field exists before including that field in a query request; this helps decouple introducing a new field on the server from deploying the new client that makes use of it.
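For example, a client can use the standard __type introspection field to ask which fields a type exposes (here, the Human type from earlier examples):

```graphql
{
  __type(name: "Human") {
    name
    fields { name }
  }
}
```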
clojure.spec¶
Lacinia makes use of clojure.spec; specifically, the arguments to com.walmartlabs.lacinia.schema/compile and com.walmartlabs.lacinia.parser.schema/parse-schema are always validated with spec.
This is especially useful for compile, as the data structure passed in for compilation is complex and deeply nested.
However, the exceptions thrown by clojure.spec can be challenging to read.
The use of Expound is recommended; it does a much better job of formatting that wealth of data for a person to read.
For example, it omits all the extraneous detail, making it much easier to find where the problem exists:
-- Spec failed --------------------
{:objects
{:Henry
{:fields
{:higgins
{:type ...,
:resolve ...,
:deprecated 7.0}}}}}
^^^
should satisfy
true?
or
string?
Further, Lacinia includes an extra namespace, not loaded by default: com.walmartlabs.lacinia.expound. This namespace simply defines spec messages for some of the trickier specs defined by Lacinia.
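A sketch of wiring this up, assuming Expound is on the classpath:

```clojure
(require '[clojure.spec.alpha :as s]
         '[expound.alpha :as expound]
         'com.walmartlabs.lacinia.expound)  ; loads Lacinia's spec messages

;; Route spec explain output through Expound's printer, so schema
;; compilation failures are reported in the readable format shown above.
(set! s/*explain-out* expound/printer)
```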
GraphQL SDL Parsing¶
As noted in the overview, Lacinia schemas are represented as Clojure data. However, Lacinia also contains a facility to transform schemas written in the GraphQL Schema Definition Language into the form usable by Lacinia. This is exposed by the function com.walmartlabs.lacinia.parser.schema/parse-schema.
The Lacinia schema definition includes things which are not available in the SDL, such as resolvers, subscription streamers, custom scalar parsers/serializers, and documentation.
To add these, parse-schema takes two arguments: a string containing the SDL schema definition, and a map of resolvers, streamers, scalar functions, and documentation to attach to the schema:
{:resolvers {:field-name resolver-fn}
:streamers {:field-name stream-fn}
:scalars {:scalar-name {:parse parse-spec
:serialize serialize-spec}}
:documentation {:type-name doc-str
:type-name/field-name doc-str
:type-name/field-name.arg-name doc-str}}
Example¶
enum episode {
NEWHOPE
EMPIRE
JEDI
}
type Character {
name: String!
episodes: [episode]
}
input CharacterArg {
name: String!
episodes: [episode]!
}
type Query {
findAllInEpisode(episode: episode!) : [Character]
}
type Mutation {
addCharacter(character: CharacterArg!) : Boolean
}
(parse-schema (slurp (clojure.java.io/resource "schema.txt"))
              {:resolvers {:Query/findAllInEpisode :find-all-in-episode
                           :Mutation/addCharacter :add-character}
               :documentation {:Character "A Star Wars character"
                               :Character/name "Character name"
                               :Query/findAllInEpisode "Find all characters in the given episode"
                               :Query/findAllInEpisode.episode "Episode for which to find characters."}})
{:objects
{:Character {:fields
{:name {:type (non-null String)
:description "Character name"}
:episodes {:type (list :episode)}}
:description "A Star Wars character"}
:Query {:fields
{:findAllInEpisode
{:type (list :Character)
:args
{:episode
{:type (non-null :episode)
:description "Episode for which to find characters."}}
:resolve :find-all-in-episode
:description "Find all characters in the given episode"}}}
:Mutation {:fields
{:addCharacter
{:type Boolean
:args {:character {:type (non-null :CharacterArg)}}
:resolve :add-character}}}}
:input-objects
{:CharacterArg {:fields
{:name {:type (non-null String)}
:episodes {:type (non-null (list :episode))}}}}
:enums
{:episode {:values [{:enum-value :NEWHOPE}
{:enum-value :EMPIRE}
{:enum-value :JEDI}]}}
:roots
{:query :Query
:mutation :Mutation}}
The :documentation key uses a naming convention on the keys, which become paths into the Lacinia input schema.
:Character/name applies to the name field of the Character object.
:Query/findAllInEpisode.episode applies to the episode argument of the findAllInEpisode field of the Query object.
Tip
Attaching documentation this way is less necessary since release 0.29.0, which added support for embedded schema documentation.
Alternately, the documentation map can be parsed from a Markdown file using com.walmartlabs.lacinia.parser.docs/parse-docs.
The same key structure can be used to document input objects and interfaces.
Unions may be documented, but do not contain fields.
Enums may be documented, as well as enum values (e.g., :Episode/JEDI).
As is normal with the Schema Definition Language, the available queries, mutations, and subscriptions (not shown in this example) are defined on ordinary schema objects, and the schema element identifies which objects are used for which purposes.
The :roots map inside the Lacinia schema is equivalent to the schema element in the SDL.
Warning
Schema extensions are defined in the GraphQL specification, but not yet implemented.
Sample Projects¶
- boardgamegeek-graphql-proxy
- Howard Lewis Ship created this simple proxy to expose part of the BoardGameGeek database as GraphQL, using Lacinia. It was used for examples in his Clojure/West 2017 talk: Power to the (Mobile) People: Clojure and GraphQL.
- leaderboard-api
- A simple API to track details about games and high scores. Built on top of Compojure and PostgreSQL. See this blog post by the author.
- Event sourcing tutorial
- This project consists of multiple components creating a bank simulation. The graphql-endpoint leverages Kafka to do queries, mutations and subscriptions. Also part of the project is a frontend using re-graph.
- Fullstack Learning Project
- A port of The Fullstack Tutorial for GraphQL, ported to Clojure and Lacinia.
- Hacker News GraphQL
- A version of Hacker News implemented using GraphQL and Datomic on the backend, and re-frame on the front end.
- Lacinia LDAP backend
- A sample library for querying an LDAP/Active Directory using GraphQL
- Lacinia Qliksense backend
- A sample library for querying a Qliksense server (Repository API) using GraphQL
Clojure 1.9¶
Lacinia targets Clojure 1.9, and makes specific use of clojure.spec.
To use Lacinia with Clojure 1.8, modify your project.clj to include clojure-future-spec:
:dependencies [[org.clojure/clojure "1.8.0"]
[com.walmartlabs/lacinia "x.y.z"]
[clojure-future-spec ""]
...]
Other Resources¶
- A quick introduction to Lacinia, which goes into detail about the problems it solves for us at Walmart.
- A talk at Clojure/West 2017: Power to the (Mobile) People: Clojure and GraphQL is available on YouTube.
- A good introductory blog post by James Borden.
- A talk at Clojure/Conj 2017: The Power of Lacinia & Hystrix in Production is available on YouTube.
- Blog post: The Case for Clojure and GraphQL: Replacing Django.
- Blog post: Through the Looking Graph: Experiences adopting GraphQL on a Clojure/script project.
Contributing to Lacinia¶
We hope to build a community around Lacinia and extensions and enhancements to it.
Licensing¶
Lacinia is licensed under the terms of the Apache Software License 2.0. Any contributions made by the community, including patches, pull requests, or any content provided in an issue, represents a transfer of intellectual property to Walmartlabs, for the sole purpose of maintaining and improving Lacinia.
Process¶
Our process is light: once a pull request (PR) comes in, core committers will review the code and provide feedback.
After at least two core committers provide the traditional feedback LGTM (looks good to me), we’ll merge to master.
It is the submitter’s responsibility to keep a PR up to date when it has conflicts with master.
Please do not change the version number in VERSION.txt
; the core committers will handle version number changes.
Generally, we advance the version number immediately after a release.
Please close PRs that are not ready to merge, or need other refinements, and re-open when ready.
As Lacinia’s community grows, we’ll extend this process as needed … but we want to keep it light.
Issue Tracking¶
We currently use the GitHub issue tracker for Lacinia. We expect that to be sufficient for the meantime.
Coding Conventions¶
Please follow the existing code base.
We prefer defn ^:private to defn-.
Keep as much as possible private; the more public API there is, the more there is to support, release after release.
Occasionally it is not reasonable to refactor a common implementation function out of a public namespace into a private namespace in order to share the function between namespaces. In that case, add the :no-doc metadata. This will prevent the var from appearing in the generated API documentation, and signify that the function is intended to be private (and therefore subject to change without notice).
Where possible, apply :added metadata to newly created namespaces or newly added functions (or vars). When a new namespace is introduced, only the namespace needs the :added metadata, not the individual functions (or vars).
We indent with spaces, and follow default indentation patterns.
We value documentation. Lacinia docstrings are formatted with Markdown. Tests can also be great documentation.
Tests¶
We are test driven. We expect patches to include tests.
We may reject patches or pull requests that arrive without tests.
Documentation¶
Patches that change behavior and invalidate existing documentation will be rejected. Such patches should also update the documentation.
Ideally, patches that introduce new functionality will also include documentation changes.
Documentation is generated using Sphinx, which is not difficult to set up, but does require Python.
Backwards Compatibility¶
We respect backwards compatibility.
We make it a top priority to ensure that anyone who writes an application using Lacinia will be free from upgrade headaches, even as Lacinia gains more features.
We have, so far, expressly not documented the internal structure of compiled schema or parsed query, so that we can be free to be fluid in making future improvements.
Using this library¶
This library aims to maintain feature parity with the official reference JavaScript implementation and to be fully compliant with the GraphQL specification.
Lacinia can be plugged into any Clojure HTTP pipeline. The companion library lacinia-pedestal provides full HTTP support, including GraphQL subscriptions, for Pedestal.
Overview¶
A GraphQL server starts with a schema of exposed types.
This GraphQL schema is described as an EDN data structure:
{:enums
{:Episode
{:description "The episodes of the original Star Wars trilogy."
:values [:NEWHOPE :EMPIRE :JEDI]}}
:interfaces
{:Character
{:fields {:id {:type String}
:name {:type String}
:appearsIn {:type (list :Episode)}
:friends {:type (list :Character)}}}}
:objects
{:Droid
{:implements [:Character]
:fields {:id {:type String}
:name {:type String}
:appearsIn {:type (list :Episode)}
:friends {:type (list :Character)
:resolve :friends}
:primaryFunction {:type (list String)}}}
:Human
{:implements [:Character]
:fields {:id {:type String}
:name {:type String}
:appearsIn {:type (list :Episode)}
:friends {:type (list :Character)}
:home_planet {:type String}}}
:Query
{:fields
{:hero {:type (non-null :Character)
:args {:episode {:type :Episode}}}
:human {:type (non-null :Human)
:args {:id {:type String
:default-value "1001"}}}
:droid {:type :Droid
:args {:id {:type String
:default-value "2001"}}}}}}}
The schema defines all the data that could possibly be queried by a client.
To make this schema useful, field resolvers must be added to it. These functions are responsible for doing the real work (querying databases, communicating with other servers, and so forth). These are attached to the schema after it is read from an EDN file.
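A sketch of that step; the resource name and the resolver functions (get-hero, get-human, get-droid) are hypothetical, and assume the EDN above is stored as a classpath resource:

```clojure
(require '[clojure.edn :as edn]
         '[clojure.java.io :as io]
         '[com.walmartlabs.lacinia.schema :as schema]
         '[com.walmartlabs.lacinia.util :as util])

;; Placeholder resolvers; a real application would query a database
;; or another service here.
(defn get-hero [context args value] nil)
(defn get-human [context args value] nil)
(defn get-droid [context args value] nil)

(def compiled-schema
  (-> (io/resource "star-wars-schema.edn")  ; hypothetical resource name
      slurp
      edn/read-string
      (util/inject-resolvers {:Query/hero  get-hero
                              :Query/human get-human
                              :Query/droid get-droid})
      schema/compile))
```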
The client uses the GraphQL query language to specify exactly what data should be returned in the result map:
{
hero {
id
name
friends {
name
}
}
}
This translates to “run the hero query; return the default hero’s id, name, and friends; return just the name of each friend.”
Lacinia will return this as Clojure data:
{:data
{:hero
{:id "2001"
:name "R2-D2"
:friends [{:name "Luke Skywalker"},
{:name "Han Solo"},
{:name "Leia Organa"}]}}}
This is because R2-D2 is, of course, considered the hero of the Star Wars trilogy.
This Clojure data can be trivially converted into JSON or other formats when Lacinia is used as part of an HTTP server application.
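A sketch of that conversion, assuming the popular Cheshire library for JSON encoding:

```clojure
(require '[cheshire.core :as json])

(def result
  {:data {:hero {:id "2001"
                 :name "R2-D2"}}})

;; Encode the Lacinia result map as a JSON string for the HTTP response.
(json/generate-string result)
;; => "{\"data\":{\"hero\":{\"id\":\"2001\",\"name\":\"R2-D2\"}}}"
```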
A key takeaway: GraphQL is a contract between a client and a server; it doesn’t know or care where the data comes from; that’s the province of the field resolvers. That’s great news: it means Lacinia is equally adept at pulling data out of a single database as it is at integrating and organizing data from multiple backend systems.