Dreamforce 2015 Video: Tour of Heroku + Salesforce Integration Methods

This year at Dreamforce I presented a session that walked through a few of the ways to integrate Heroku apps with Salesforce. Here is the session description:

Combining customer-facing apps on Heroku with employee-facing apps on Salesforce enables a whole new generation of connected and intelligent experiences. There are four primary ways to do this integration: Heroku Connect, Canvas, Apex / Process Callouts, and the Salesforce REST APIs. Using code and architectural examples, we’ll walk through these different methods. You will walk away knowing when you should use each and how to use them.

Check out the video recording of the session.

To dive into these methods, here are the “Further Learning” resources for each method:

I hope this is helpful. Let me know if you have any questions.

Smoothing the Cloud & Local Roundtrip Developer Experience

Getting started with new technologies is usually a huge pain. Often I stumble around for hours trying to get an app’s toolchain set up correctly. Instructions are usually like:

Things get worse when I lead workshops for hundreds of enterprise developers where many are on Windows machines and not very comfortable with cmd.exe.

Experiencing this pain over and over is what led me to create Typesafe Activator as a smooth way to get started with Play Framework, Akka, and Scala. Developers have been thrilled with how easy it is to take their first step with Activator, but I never finished polishing the experience of the second step: App Deployment.

Over the past few months I’ve been working on a set of tools that make the roundtrip between deployment and local development super smooth with zero-CLI and zero-install. Check out a demo:

Here is a summary of the “from scratch” experience:

  1. Deploy the Click, Deploy, Develop app on the cloud
  2. Download the app’s source
  3. Run gulp from a file explorer to download Node, the app’s dependencies, and Atom, and then launch the Node / Express server and the Atom code editor
  4. Open the local app in a browser: http://localhost:5000
  5. Make a change in Atom to the app.js file
  6. Test the changes locally
  7. Login to Heroku via Atom
  8. Deploy the changes via Atom

That is one smooth roundtrip!

For more detailed docs on this flow, check out the Click, Deploy, Develop project.

Great dev experience starts with the simplest thing that can possibly work and has layered escape hatches to more complexity.

That kind of developer experience (DX) is something I’ve tried to do with this toolchain. It builds on top of tools that can be used directly by advanced users. Underneath the smooth DX is just a normal Node.js / Express app, a Gulp build, and the Atom code editor. Here are the pieces that I’ve built to polish the DX, creating the zero-CLI and zero-install experience:

I hope that others find this useful for helping to give new users a great roundtrip developer experience. Let me know what you think.

Note: Currently gulp-atom-downloader does not support Linux because there isn’t a standalone zip download of Atom for Linux. Hopefully we can get that resolved soon.

Comparing Application Deployment: 2005 vs. 2015

Over the past 10 years the ways we build and deliver applications have changed significantly. It seems like much of this change has happened overnight, but don’t worry, it is perfectly normal to look up and feel disoriented in the 2015 deployment landscape.

This article compares deployment in 2005 with “modern” deployment so that all the new terms and techniques will make sense. Forewarning: My background is primarily Java / JVM so I will use that terminology but try to make the ideas polyglot.

2005 = Multi-App Containers / App Servers / Monolithic Apps
2015 = Microservices / Docker Containers / Containerless Apps

Back in 2005 many of us worked on projects that resulted in a WAR file – a zip file containing a Java web application and its library dependencies. That web application would be deployed alongside other web applications into a single app server sometimes called a “container” because it contained and ran one or more applications. The app server provided a bunch of common services to the web apps like an HTTP server, a service directory, and shared libraries. Unfortunately deploying multiple apps in a single container created high friction for scaling, deployment, and resource usage. App servers were supposed to isolate an app from its underlying system dependencies in order to avoid “it works on my machine” problems but things often didn’t work that smoothly due to differing system dependencies and configuration that lived outside of the app server / container.

In 2015 apps are being deployed as self-contained units, meaning the app includes everything it needs to run on top of a standard set of system dependencies. The granularity of the self-contained unit differs depending on the deployment paradigm. In the Java / JVM world a “containerless” app is a zip file that includes everything the app needs on top of the JVM. Most modern JVM frameworks have switched to this containerless approach including Play Framework, Dropwizard, and Spring Boot. A few years ago I wrote in more detail about how app servers are fading away in the move from monolithic middleware to microservices and cloud services.
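To make “containerless” concrete, here is a minimal sketch using only the JDK’s built-in HTTP server (Scala 2.12+ for the lambda-as-handler; a real app would get this from Play, Dropwizard, or Spring Boot instead):

import com.sun.net.httpserver.{HttpExchange, HttpServer}
import java.net.InetSocketAddress

// containerless: the HTTP server is a library inside the app,
// not an app server that the app gets deployed into
object Main extends App {
  val port = sys.env.getOrElse("PORT", "8080").toInt
  val server = HttpServer.create(new InetSocketAddress(port), 0)
  server.createContext("/", (exchange: HttpExchange) => {
    val body = "hello from a self-contained app".getBytes("UTF-8")
    exchange.sendResponseHeaders(200, body.length)
    val os = exchange.getResponseBody
    os.write(body)
    os.close()
  })
  server.start()
}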

For a more complete and portable self-contained unit, system-level container technologies like Docker and LXC bundle the app with its system dependencies. Instead of deploying a bunch of apps into a single container, a single app is added to a Docker image and deployed on one or more servers. On Heroku a “Slug” file is similar to a Docker image.
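As an illustration, a Dockerfile for such an app can be very small (the base image tag and start script name here are assumptions; `sbt stage` is one way to produce the self-contained directory):

# a sketch: bundle a containerless JVM app with its system dependencies
FROM java:8-jre
# the output of `sbt stage` contains the app, its libs, and a start script
COPY target/universal/stage /app
WORKDIR /app
EXPOSE 9000
CMD ["bin/my-app", "-Dhttp.port=9000"]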

Microservices play a role in this new landscape because deployment across microservices is independent, whereas with traditional app servers individual app deployment often involved restarting the whole server. This was one reason for the snail’s pace of deployment in enterprises – deployments were incredibly risky and had to be coordinated months in advance across numerous teams. Hot deployment was a promise that was never realized for production apps. Microservices enable individual teams to deploy at will and as often as they want. Microservices require the ability to quickly provision, deploy, and scale services which may have only a single responsibility. These requirements fit well with the infrastructure provided by containerless apps running on Docker(ish) Containers.

2005 = Manual Deployment
2015 = Continuous Delivery / Continuous Deployment

The app servers of 2005 that ran multiple monolithic apps combined with manual load balancer configurations made application upgrades risky and painful so deployments were usually done sparingly in designated maintenance windows. Back then it was pretty much unheard of to have a deployment pipeline that fully automated delivery from an SCM to production.

Today Continuous Delivery and Continuous Deployment enable developers to get code to staging and production sometimes as often as tens or even hundreds of times a day. Scalable deployment pipelines range from the simple “git push heroku master” to a more risk averse pipeline that includes pull requests, Continuous Integration, staging auto-deployment, manual promotion to production, and possibly Canary Releases & Feature Flags. These pipelines enable organizations to move fast and distribute risk across many small releases.

In order for Continuous Delivery to work well there are a few ancillary requirements:

  • Release rollbacks must be instant and easy because sometimes things are going to break, and getting back to a working state must be quick and painless.
  • Patch releases must be able to make it from SCM to production (through a continuous delivery pipeline) in minutes.
  • Load balancers must be able to handle automatic switching between releases.
  • Database schema changes should be decoupled from app releases otherwise releases and rollbacks can be blocked.
  • App-tier servers should be stateless with state living in external data stores otherwise state will be frequently lost and/or inconsistent.

2005 = Persistent Servers / “Pray it never goes down”
2015 = Immutable Infrastructure / Ephemeral Servers

When a server crashed in 2005 stuff usually broke. Some used session replication and server affinity but sessions were lost and bringing up new instances usually took quite a bit of manual work. Often changes were made to production systems via SSH making it difficult to accurately reproduce a production environment. Logging was usually done to local disk making it hard to see what was going on across servers and load balancers.

Servers in 2015 are disposable, immutable, and ephemeral forcing us to plan for them to go down. Tools like Netflix’s Chaos Monkey randomly shut down servers to make sure we are preparing for crashes. Load balancers and management backplanes work together to start and stop new instances in an instant enabling rapid scaling both up and down. By being immutable we can no longer fix production issues by SSHing into a server but now environments are easily reproducible. Logging services route STDOUT to an external service enabling us to see the log stream in real time, across the whole system.

2005 = Ops Team
2015 = DevOps

In 2005 there was a team that would take your WAR file (or other deployable artifact) and be responsible for deploying it, managing it, and monitoring it. This was nice because developers didn’t have to wear pagers but ultimately the Ops team often couldn’t do much if there was a production issue at 3am. The biggest downside of this was that Ops became all about risk mitigation causing a tremendous slowdown in software delivery.

Modern technical organizations of all sizes are ditching the Ops velocity killer and making developers responsible for the stuff they put into production. Services like New Relic, VictorOps, and Slack help developers stay on top of their new operational responsibilities. The DevOps culture also directly incentivizes devs not to deploy things that will end up waking them or a team member up at 3am. A core indicator of a DevOps culture is whether a new team member can get code to production on their first day. Doing that one thing right means doing so many other things right, like:

  • 3 Step Dev Setup: Provision the system, Checkout the code, and Run the App
  • SCM / Team Review (e.g. GitHub Flow)
  • Continuous Integration & Continuous Deployment / Delivery
  • Monitoring and Notifications

DevOps can sound very scary to traditional enterprise developers like myself. But from experience I can attest that wearing a pager (metaphorically) and assuming the direct risk of my deployments has made me a much better developer. The quality of my code and my feelings of fulfillment have increased with my new level of ownership over what is in production.

Learn More

I’ve just touched the surface of many of the deployment changes over the past 10 years but hopefully you now have a better understanding of some of the terminology you might be hearing at conferences and on blogs. For more details on these and related topics, check out The Twelve-Factor App and my blog Java Doesn’t Suck – You’re Just Using it Wrong. Let me know what you think!

Huge thanks to Jason Hand and Joe Kutner for reviewing this blog post.

Redirecting and Chunking Around Heroku’s 30 Second Request Timeout

In most cases a web request shouldn’t take more than 30 seconds to return a response so it is for good reason that Heroku has a 30 second request timeout. But there are times when things just take a while. There are different methods for dealing with this. Where possible, the best solution is to offload the job from the web request queue and have a background job queue that can be scaled separately. If the requestor needs the result then it can either poll for it or be pushed the value when the background job is complete. Yet there are some cases where this is overkill. For instance, if a web request takes a while but the user interaction must remain blocked (e.g. a modal spinner) until the request is complete, then setting up background jobs for slow requests can be unnecessary.

Let’s look at two different methods for handling long (> 30 seconds) web requests on Heroku. On Heroku the request must start returning some data within 30 seconds or the load balancer will give up. One way to deal with this is to continually wait 25ish seconds for the result and then redirect the request to do the same thing again. The other option is to periodically dump empty chunks into the response until the actual response can be returned. Each of these methods has tradeoffs so let’s look at each in more detail. I’ll be using Play Framework and Scala for the examples but both of these methods could be implemented in most frameworks.

Redirect Polling

The Redirect Polling method of dealing with long web requests sends a redirect every 25 seconds until the result is available. Try it out! The downside of this approach is that HTTP clients usually have a maximum number of redirects that they will allow, which limits the total amount of time this method can take. The upside is that the actual response status can be based on the result.

Ideally the web framework is Reactive / Non-Blocking so that threads are only used when there is active I/O. In some cases the underlying reason for the long request is another service that is slow. In that case the web request could be fully Reactive, thus preserving resources that would traditionally be wasted in waiting states.

To implement Redirect Polling (Reactively) in Play Framework and Scala I’ll use Akka as a place to run a long job off of the web request thread. The Actor job could be something that is computationally taxing or a long network request. By using Akka Actors I have a simple way to deal with job distribution, failure, and thread pool assignment & management. Here is my very simple Akka Actor job that takes 60 seconds to complete (full source):

case object GetJobResult

class LongJob extends Actor {
  lazy val jobFuture: Future[String] = Promise.timeout("done!", 60.seconds)
  override def receive = {
    case GetJobResult => jobFuture.pipeTo(sender())
  }
}

When this Actor receives a GetJobResult message, it creates a job that in 60 seconds returns a String using a Scala Future. That String is sent (piped) to the sender of the message.

Here is a web request handler that does the Redirect Polling while waiting for a result from the Actor (full source):

def redir(maybeId: Option[String]) = Action.async {
  val (actorRefFuture, id) = maybeId.fold {
    // no id so create a job
    val id = UUID.randomUUID().toString
    (Future.successful(actorSystem.actorOf(Props[LongJob], id)), id)
  } { id =>
    // an id was specified so look up the existing job
    (actorSystem.actorSelection(s"user/$id").resolveOne(1.second), id)
  }

  actorRefFuture.flatMap { actorRef =>
    actorRef.ask(GetJobResult)(Timeout(25.seconds)).mapTo[String].map { result =>
      // received the result so stop the actor and return a 200
      actorSystem.stop(actorRef)
      Ok(result)
    } recover {
      // did not receive the result in time so redirect
      case e: TimeoutException => Redirect(routes.Application.redir(Some(id)))
    }
  } recover {
    // did not find the actor specified by the id
    case e: ActorNotFound => InternalServerError("Result no longer available")
  }
}

This request handler uses an optional query string parameter (id) as the identifier of the job. Here is the logic for the request handler:

  1. If the id is not specified then a new LongJob Actor instance is created using a new id. Otherwise the Actor is resolved based on its id.
  2. If either a new Actor was created or an existing Actor was found, then the Actor is asked for its result and given 25 seconds to return it. Otherwise an error is returned.
  3. If the result is received within the timeout, the result is returned in a 200 response. Otherwise a redirect response is returned that includes the id in the query string.

This is really just automatic polling for a result using redirects. It would be nice if HTTP had some semantics around the HTTP 202 response code for doing this kind of thing.

Empty Chunking

In the Empty Chunking method of allowing a request to take more than 30 seconds, the web server sends HTTP/1.1 chunks every few seconds until the actual response is ready. Try it out! The downside of this method is that the HTTP response status code must be returned before the actual result is available. The upside is that a single web request can stay open for as long as it needs. To use this method a web framework needs to support chunked responses and ideally is Reactive / Non-Blocking so that threads are only used when there is active I/O.
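For reference, the raw response looks something like this (each chunk is a hex length followed by its bytes; here a few single-space chunks arrive before the real result):

HTTP/1.1 200 OK
Transfer-Encoding: chunked

1
 
1
 
5
done!
0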

This method doesn’t require an Actor like the Redirect Polling method. A Future could be used instead but I wanted to keep the job piece the same for both methods. Here is a web request handler that does the empty chunking (full source):

def chunker = Action {
  val actorRef = actorSystem.actorOf(Props[LongJob])
  val futureResult = actorRef.ask(GetJobResult)(Timeout(2.minutes)).mapTo[String]
  futureResult.onComplete(_ => actorSystem.stop(actorRef)) // stop the actor

  val enumerator = Enumerator.generateM {
    // output spaces until the future is complete
    if (futureResult.isCompleted) Future.successful(None)
    else Promise.timeout(Some(" "), 5.seconds)
  } andThen {
    // return the result
    Enumerator.flatten(futureResult.map(Enumerator(_)))
  } andThen {
    // close the chunked response
    Enumerator.eof
  }

  Ok.chunked(enumerator)
}

This web request handler does the following when a request comes in:

  1. An instance of the LongJob Actor is created.
  2. The Actor instance (actorRef) is asked for the result of the GetJobResult message and given two minutes to return a result, which is mapped to a String.
  3. An onComplete handler stops the Actor instance after a result is received or the request has timed out.
  4. An Enumerator is created that outputs spaces every five seconds until the result has been received or timed out, at which point the result is output and the Enumerator completes.
  5. An HTTP 200 response is returned that is set up to chunk the output of the Enumerator.

That is it! I’ve used this method in a number of places with Scala and Java in Play Framework making them fully Reactive. This logic could be wrapped into something more reusable. Let me know if you need that or need a Java example.

Wrapping Up

As you have seen it is pretty easy to have traditional web requests that take longer than 30 seconds on Heroku. While this is not ideal for background jobs it can be an easy way to deal with situations where it is overkill to implement “queue and push” for long requests. The full source for the Redirect Polling and Empty Chunking methods is on GitHub. Let me know if you have any questions.

Intro to Multi-Sensory Applications

Recently Christophe Coenraets and I put together some thoughts on what we are calling “Multi-Sensory Applications” – a new way to think about how we build more deeply connected and engaging software. These new types of applications go way beyond typical CRUD apps by composing a fabric of inputs (senses) and weaving them together through transducers. Here is a short demo of a very simple MSA that I built to show how IoT devices can be connected with back-office business processes:

If you’d like to dive further into the architecture and code for this demo, check out a blog I wrote: Building Multi-Sensory Apps that Connect IoT to Business Processes.

This is only the beginning of a series of blogs and example apps that Christophe and I will be building to illustrate how software is becoming more connected and engaging. My next MSA example app will dive into the Big Data, Lambda Architecture, and Machine Learning aspects of Multi-Sensory Applications. So stay tuned!

Refactoring to Microservices

Right now there is a ton of hype and pushback around Microservices. Most of the current debate revolves around when Microservices make sense, with smart people arguing all across the spectrum. As with all architectural topics the right answer is “it depends”, so you should never blindly choose Microservices without understanding your goals and how they align with Microservices.

Using the open source WebJars project as an example I’d like to walk through a process of deciding where to use Microservices and then refactor part of the webjars.org app to a Microservice. First a little background on WebJars… WebJars are JavaScript & CSS libraries packaged into Jar files and published on Maven Central for easy consumption by JVM build tools. The webjars.org site is a Play Framework + Scala app that provides search, publishing, and file service for the jsDelivr CDN.

Here is my checklist for determining whether a piece of functionality should be broken out into a separate Microservice:

  1. The piece of functionality does NOT have shared mutable state.
    When using a Microservice a copy of the data will be shared. Mutating that copy will likely not propagate those changes back to the original source and all of the other possible copies of the data. While shared mutable state is common in many OO apps, this makes it very hard to switch to Microservices. Functional Programming on the other hand encourages immutable data which makes it much easier to switch to Microservices where copies can be mutated but it is clear that those mutations do not act on the original or other copies.
  2. The piece of functionality has independent operational or computational needs.
    If SLAs, scaling, or deployment needs vary between different pieces of functionality then Microservices might make sense. For example, if one piece of a system requires five nines but rarely changes while another piece does not have an SLA requirement and changes multiple times a day, Microservices make sense. Likewise you shouldn’t need to scale up every part of a system just because one piece of functionality has significant computation needs.
  3. The piece of functionality has cross-platform clients.
    While sharing code across platforms (e.g. JVM, Ruby, Node.js, etc) is sometimes possible, it is often easier and more maintainable to just expose the needed piece of functionality as a Microservice so that any platform can use it. For example, webjars.org uses a bower-as-a-service Microservice that runs in Node.js because it uses the Bower NPM package. The webjars.org app is a cross-platform (JVM) client to the Node.js Microservice.

The whole webjars.org app is functional and uses immutable data so there isn’t any shared mutable state that would make it hard to break pieces of functionality out into Microservices. In Play Framework a controller is really just a stateless function that takes a request and returns a response. This means that any of the web endpoints can be easily moved without impacting the system.
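For example, a minimal Play controller function (names here are illustrative) holds no state between requests:

// effectively a pure function from Request to Result
def hello = Action { request =>
  Ok(s"you requested ${request.path}")
}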

One possible candidate for a Microservice in webjars.org is a utility that converts SemVer-style version ranges to Maven-style version ranges. The SemVer.convertSemVerToMaven() function is not side-effecting so it could easily become a Microservice. But at this time the utility does not have independent operational or computational needs and it also does not have any clients other than the webjars.org app. If the functionality were needed outside of webjars.org then it could easily be turned into a library but a Microservice would definitely be overkill.
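To give a feel for what that utility does, here is a hedged sketch of just one case; the real SemVer.convertSemVerToMaven() handles many more range forms:

// illustrative only: a SemVer caret range like "^1.2.3" (>= 1.2.3 and < 2.0.0)
// maps to the Maven range "[1.2.3,2)"
def caretToMaven(range: String): Option[String] = {
  if (range.startsWith("^")) {
    val version = range.tail
    version.split('.').headOption.flatMap { major =>
      scala.util.Try(major.toInt).toOption.map(m => s"[$version,${m + 1})")
    }
  }
  else {
    None
  }
}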

Another candidate for a Microservice in webjars.org is a web endpoint that serves a file from a WebJar. The Application.file controller function is stateless and does not use shared mutable state so it could easily become a Microservice. This function is what provides the content for WebJars on the jsDelivr CDN. When a request for a WebJar file on jsDelivr is received, if the CDN does not have the asset it gets it from webjars.org, so each WebJar file URL on jsDelivr is backed by a corresponding file URL on webjars.org.

The operational and computational needs of this piece of functionality are pretty different from the rest of the webjars.org app. Let’s compare the needs:

            The webjars.org File Service                      Rest of webjars.org
SLA         If it goes down then many production sites break  No production uptime requirements
Scaling     Most load is handled by the CDN but sometimes     Very light load
            load spikes when caches are stale or invalidated
Deployment  Rarely changes                                    Changes a few times a week

So this seems like a great candidate for a Microservice! Here are the steps I used to break out this functionality into a Microservice.

Step 1) Create a new Play + Scala app

I used Typesafe Activator to create a new Play Framework + Scala app:

activator new webjars-file-service play-scala

Here is the commit from that starting place:

Step 2) Clean up the build and copy the code into the new project

I copied and pasted the parts of the code that I wanted to move to the Microservice into the new project. Here is the full change set:

There was very minimal refactoring between the original source and the new Microservice. Everything worked great locally so it was time to deploy the Microservice.

Step 3) Create a new Heroku app and setup GitHub auto-deployment

I created a new app on Heroku:

heroku create webjars-file-service

Instead of doing the usual git push heroku master I setup auto-deployment so that whenever I push to GitHub, Heroku deploys the changes. Check out a screencast of how to do that:

Now that the webjars-file-service is deployed let’s try it out:

Everything is working great so let’s switch webjars.org over to the new Microservice.

Step 4) Make the webjars.org app use the new Microservice

To make webjars.org use the new Microservice I removed the actual logic, but I didn’t want to break any clients that were using the endpoints, so I added redirects for the actual file service functionality; for the file listing functionality I added a utility that wraps the new webjars-file-service Microservice. Along the way I had to do a small refactor of some Memcache-related functionality. Here is the full change set:
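As a rough sketch, the redirect side of that change can be as simple as a controller function that forwards to the new service (the route shape and host below are assumptions, not the actual webjars.org code):

// forward WebJar file requests to the new Microservice
def file(groupId: String, artifactId: String, version: String, path: String) = Action {
  MovedPermanently(s"https://webjars-file-service.herokuapp.com/files/$groupId/$artifactId/$version/$path")
}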

After pushing the changes to GitHub, Codeship verified that the tests passed, and Heroku deployed webjars.org.

This whole process only took a few hours and so far everything has been working great! Because the file service functionality in webjars.org did not have shared mutable state it was incredibly easy to move to a Microservice, which enables me to handle its unique operational and computational needs.

The decision to move something to a Microservice is always full of “it depends” factors. Microservices are certainly not a silver bullet, especially when dealing with code bases that have shared mutable state. Like any tool, Microservices can be a powerful way to help you, or they can be the chainsaw you use to cut down the tree that falls on you. Handle with care!

NPM Packages in Maven Central with NPM WebJars

A few months ago I launched Bower WebJars which provides a way for anyone to deploy Bower packages into Maven Central through WebJars. Since then 539 packages have been deployed! Today I’ve added NPM WebJars which is built on the same foundation as Bower WebJars but for NPM packages.

Give it a try and let me know how it goes. If you are curious about the changes to make this happen, check out the pull request.

Auto-Deploy GitHub Repos to Heroku

My favorite new feature on Heroku is the GitHub Integration which enables auto-deployment of GitHub repos. Whenever a change is made on GitHub the app can be automatically redeployed on Heroku. You can even tell Heroku to wait until the CI tests pass before doing the deployment. I now use this on almost all of my Heroku apps because it allows me to move faster and do less thinking (which I’m fond of).

For apps like jamesward.com I just enable deployment straight to production. But for apps that need a less risky setup I have a full Continuous Delivery pipeline that looks like this:

  1. Push to GitHub
  2. CI Validates the build
  3. Heroku deploys changes to staging
  4. Manual testing / validation of staging
  5. Using Heroku Pipelines, promote staging to production

I’m loving the flexibility and simplicity of this new feature! Check out a quick screencast to see how to setup and use Heroku GitHub auto-deployment:

Notice that none of this required a command line! How cool is that?!?

Reactive Postgres with Play Framework & ScalikeJDBC

Lately I’ve built a few apps that have relational data. Instead of trying to shoehorn that data into a NoSQL model I decided to use the awesome Heroku Postgres service, but I didn’t want to lose out on the Reactiveness that most of the NoSQL data stores support. I discovered ScalikeJDBC-Async which uses postgresql-async, a Reactive (non-blocking), JDBC-ish, Postgres driver. With those libraries I was able to keep my data relational and my app Reactive all the way down. Let’s walk through how to do it in a Play Framework app. (TL;DR: Jump to the full source.)

If you want to start from scratch, create a new Play app from the Play Scala Seed.

The minimum dependencies needed in the build.sbt file are:

libraryDependencies ++= Seq(
  "org.postgresql"       %  "postgresql"                    % "9.3-1102-jdbc41",
  "com.github.tototoshi" %% "play-flyway"                   % "1.2.0",
  "com.github.mauricio"  %% "postgresql-async"              % "0.2.16",
  "org.scalikejdbc"      %% "scalikejdbc-async"             % "0.5.5",
  "org.scalikejdbc"      %% "scalikejdbc-async-play-plugin" % "0.5.5"
)

The play-flyway library handles schema evolutions using Flyway. It is a great alternative to Play’s JDBC module because it just does evolutions, and only one-way evolutions (i.e. no downs). But because play-flyway doesn’t use the postgresql-async driver, it needs the standard postgresql JDBC driver as well.

The scalikejdbc-async-play-plugin library manages the lifecycle of the connection pool used by scalikejdbc-async in a Play app.

To use play-flyway and scalikejdbc-async-play-plugin a conf/play.plugins file must tell Play about the plugins:
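Something like the following, assuming the plugin class names and priorities from each library’s docs:

1500:com.github.tototoshi.play2.flyway.Plugin
10000:scalikejdbc.async.PlayPlugin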


A first evolution script in conf/db/migration/default/V1__create_tables.sql will create a table named bar that will hold a list of bars for our little sample app:
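A minimal version of that script, matching the Bar model below (the id is generated by the database):

CREATE TABLE bar (
    id BIGSERIAL PRIMARY KEY,
    name VARCHAR NOT NULL
);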


You will of course need a Postgres database to proceed. You can either install one locally or create a free one on the Heroku Postgres cloud service. Then update the conf/application.conf file to point to the database:
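For example (the local connection details are placeholders for your own database):

db.default.driver=org.postgresql.Driver
db.default.url="jdbc:postgresql://localhost:5432/bars"
db.default.url=${?DATABASE_URL}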


The last line above overrides the database connection URL if there is a DATABASE_URL environment variable set (which is the case if your app is running on Heroku).

To run this app locally you can start the Play app by starting the Activator UI or from the command line with:

activator ~run

When you first open your app in the browser, the play-flyway plugin should detect that evolutions need to be applied and ask you to apply them. Once applied you will be ready to create a simple database object and a few reactive request handlers.

Here is a Bar database object named app/models/Bar.scala that uses scalikejdbc-async for reactive creation and querying of Bars:

package models

import play.api.libs.json.Json
import scalikejdbc._
import scalikejdbc.async._
import scalikejdbc.async.FutureImplicits._
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

case class Bar(id: Long, name: String)

object Bar extends SQLSyntaxSupport[Bar] {

  implicit val jsonFormat = Json.format[Bar]

  override val columnNames = Seq("id", "name")

  lazy val b = Bar.syntax

  def db(b: SyntaxProvider[Bar])(rs: WrappedResultSet): Bar = db(b.resultName)(rs)

  def db(b: ResultName[Bar])(rs: WrappedResultSet): Bar = Bar(rs.long(b.id), rs.string(b.name))

  def create(name: String)(implicit session: AsyncDBSession = AsyncDB.sharedSession): Future[Bar] = {
    val sql = withSQL(insert.into(Bar).namedValues(column.name -> name).returningId)
    sql.updateAndReturnGeneratedKey().map(id => Bar(id, name))
  }

  def findAll(implicit session: AsyncDBSession = AsyncDB.sharedSession): Future[List[Bar]] = {
    withSQL(select.from[Bar](Bar as b)).map(Bar.db(b))
  }
}

The db functions perform the mapping from SQL results to the Bar case class.

The create function takes a Bar name and returns a Future[Bar] by doing a non-blocking insert using the ScalikeJDBC Query DSL. When the insert has completed the primary key is returned and a new Bar instance is created and returned.

The findAll method uses the ScalikeJDBC Query DSL to select all of the Bars from the database, returning a Future[List[Bar]].

Now that we have a reactive database object, let’s expose these through reactive request handlers. First set up the routes in the conf/routes file:

GET        /bars                   controllers.Application.getBars
POST       /bars                   controllers.Application.createBar

Define the controller functions in the app/controllers/Application.scala file:

def getBars = Action.async {
  Bar.findAll.map { bars =>
    Ok(Json.toJson(bars)) // 200 response with the JSON serialized list of bars
  }
}
def createBar = Action.async(parse.urlFormEncoded) { request =>
  Bar.create(request.body("name").head).map { bar =>
    Ok(Json.toJson(bar)) // 200 response with the JSON serialized new Bar
  }
}
Both functions use Action.async, which holds a function that takes a request and returns a future response (Future[Result]). By returning a Future[Result] Play is able to make requests to the controller function non-blocking. The getBars controller function calls Bar.findAll and then transforms the Future[List[Bar]] into a Future[Result], a 200 response containing the JSON serialized list of bars. The createBar controller function parses the request, creates the Bar, and then transforms the Future[Bar] into a Future[Result] once the Bar has been created.

From the non-blocking perspective, here is what a request to the getBars controller function looks like:

  1. Web request made to /bars
  2. Thread allocated to web request
  3. Database request made for the SQL select
  4. Thread allocated to the database request
  5. Web request thread is deallocated (but the connection remains open)
  6. Database request thread is deallocated (but the connection remains open)
  7. Database response handler reallocates a thread
  8. SQL result is transformed to List[Bar]
  9. Database response thread is deallocated
  10. Web response handler reallocates a thread
  11. Web response is created from the list of bars
  12. Web response thread is deallocated

So everything is now reactive all the way down: while the web request is waiting on the database to respond, no threads are allocated to the request.

Try it yourself with curl:

$ curl -X POST -d "name=foo" http://localhost:9000/bars
$ curl http://localhost:9000/bars

Grab the full source and let me know if you have any questions. Thanks!