Smoothing the Cloud & Local Roundtrip Developer Experience

Getting started with new technologies is usually a huge pain. Often I stumble around for hours trying to get an app’s toolchain set up correctly. Instructions usually go something like:

Things get worse when I lead workshops for hundreds of enterprise developers where many are on Windows machines and not very comfortable with cmd.exe.

Experiencing this pain over and over is what led me to create Typesafe Activator as a smooth way to get started with Play Framework, Akka, and Scala. Developers have been thrilled with how easy taking their first step with Activator is, but I never finished polishing the experience of the second step: App Deployment.

Over the past few months I’ve been working on a set of tools that make the roundtrip between deployment and local development super smooth with zero-CLI and zero-install. Check out a demo:

Here is a summary of the “from scratch” experience:

  1. Deploy the Click, Deploy, Develop app on the cloud
  2. Download the app’s source
  3. Run gulp from a file explorer to download Node, the app’s dependencies, and Atom and then launch the Node / Express server and the Atom code editor
  4. Open the local app in a browser: http://localhost:5000
  5. Make a change in Atom to the app.js file
  6. Test the changes locally
  7. Login to Heroku via Atom
  8. Deploy the changes via Atom

That is one smooth roundtrip!

For more detailed docs on this flow, check out the Click, Deploy, Develop project.

Great dev experience starts with the simplest thing that can possibly work and has layered escape hatches to more complexity.

That kind of developer experience (DX) is something I’ve tried to do with this toolchain. It builds on top of tools that can be used directly by advanced users. Underneath the smooth DX is just a normal Node.js / Express app, a Gulp build, and the Atom code editor. Here are the pieces that I’ve built to polish the DX, creating the zero-CLI and zero-install experience:

I hope that others find this useful for helping to give new users a great roundtrip developer experience. Let me know what you think.

Note: Currently gulp-atom-downloader does not support Linux because there isn’t a standalone zip download of Atom for Linux. Hopefully we can get that resolved soon.

Redirecting and Chunking Around Heroku’s 30 Second Request Timeout

In most cases a web request shouldn’t take more than 30 seconds to return a response so it is for good reason that Heroku has a 30 second request timeout. But there are times when things just take a while. There are different methods for dealing with this. Where possible, the best solution is to offload the job from the web request queue and have a background job queue that can be scaled separately. If the requestor needs the result then it can either poll for it or be pushed the value when the background job is complete. Yet there are some cases where this is overkill. For instance, if a web request takes a while but the user interaction must remain blocked (e.g. a modal spinner) until the request is complete, then setting up background jobs for slow requests can be unnecessary.

Let’s look at two different methods for handling long (> 30 seconds) web requests on Heroku. On Heroku the request must start returning some data within 30 seconds or the load balancer will give up. One way to deal with this is to continually wait 25ish seconds for the result and then redirect the request to do the same thing again. The other option is to periodically dump empty chunks into the response until the actual response can be returned. Each of these methods has tradeoffs so let’s look at each in more detail. I’ll be using Play Framework and Scala for the examples but both of these methods could be handled in most frameworks.

Redirect Polling

The Redirect Polling method of dealing with long web requests continuously sends a redirect every 25 seconds until the result is available. Try it out! The downside of this approach is that HTTP clients usually have a maximum number of redirects that they will allow which limits the total amount of time this method can take. The upside is that the actual response status can be based on the result.

Ideally the web framework is Reactive / Non-Blocking so that threads are only used when there is active I/O. In some cases the underlying reason for the long request is another service that is slow. In that case the web request could be fully Reactive, thus preserving resources that would traditionally be wasted in waiting states.

To implement Redirect Polling (Reactively) in Play Framework and Scala I’ll use Akka as a place to run a long job off of the web request thread. The Actor job could be something that is computationally taxing or a long network request. By using Akka Actors I have a simple way to deal with job distribution, failure, and thread pool assignment & management. Here is my very simple Akka Actor job that takes 60 seconds to complete (full source):

case object GetJobResult

class LongJob extends Actor {
  lazy val jobFuture: Future[String] = Promise.timeout("done!", 60.seconds)
  override def receive = {
    case GetJobResult => jobFuture.pipeTo(sender())
  }
}

When this Actor receives a GetJobResult message, it creates a job that in 60 seconds returns a String using a Scala Future. That String is sent (piped) to the sender of the message.
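Play’s Promise.timeout can be approximated with the standard library alone. Here is a rough sketch of the idea (the Delayed helper, its names, and the use of java.util.Timer are mine, not Play’s — Play uses its own scheduler internally):

```scala
import java.util.{Timer, TimerTask}
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._

object Delayed {
  private val timer = new Timer(true) // daemon timer thread

  // Complete a Future with `value` after `delay` without parking a thread
  // in sleep(); this serves the same purpose as Play's Promise.timeout.
  def apply[T](value: T, delay: FiniteDuration): Future[T] = {
    val p = Promise[T]()
    timer.schedule(new TimerTask { def run(): Unit = p.success(value) }, delay.toMillis)
    p.future
  }
}

object DelayedDemo {
  def main(args: Array[String]): Unit =
    // the calling thread is free until the timer fires
    println(Await.result(Delayed("done!", 100.millis), 1.second))
}
```

The key point is that no thread sits blocked for the duration; the Future is completed asynchronously when the timer fires.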

Here is a web request handler that does the Redirect Polling while waiting for a result from the Actor (full source):

def redir(maybeId: Option[String]) = Action.async {
  val (actorRefFuture, id) = maybeId.fold {
    // no id so create a job
    val id = UUID.randomUUID().toString
    (Future.successful(actorSystem.actorOf(Props[LongJob], id)), id)
  } { id =>
    (actorSystem.actorSelection(s"user/$id").resolveOne(1.second), id)
  }

  actorRefFuture.flatMap { actorRef =>
    actorRef.ask(GetJobResult)(Timeout(25.seconds)).mapTo[String].map { result =>
      // received the result
      Ok(result)
    } recover {
      // did not receive the result in time so redirect
      case e: TimeoutException => Redirect(routes.Application.redir(Some(id)))
    }
  } recover {
    // did not find the actor specified by the id
    case e: ActorNotFound => InternalServerError("Result no longer available")
  }
}

This request handler uses an optional query string parameter (id) as the identifier of the job. Here is the logic for the request handler:

  1. If the id is not specified then a new LongJob Actor instance is created using a new id. Otherwise the Actor is resolved based on its id.
  2. If either a new Actor was created or an existing Actor was found, then the Actor is asked for its result and given 25 seconds to return it. Otherwise an error is returned.
  3. If the result is received within the timeout, the result is returned in a 200 response. Otherwise a redirect response is returned that includes the id in the query string.

This is really just automatic polling for a result using redirects. It would be nice if HTTP had some semantics around the HTTP 202 response code for doing this kind of thing.
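To make the redirect-cap tradeoff concrete, here is a stdlib-only toy model of the round trip (the names and numbers are mine; curl, for example, defaults to a maximum of 50 redirects when following them with -L):

```scala
import scala.annotation.tailrec

object RedirectPollingModel {
  // Toy model of the server's choice on each request: if the job has
  // finished, answer with the result (a 200); otherwise answer with a
  // redirect back to the same URL, which costs the client one of its
  // allowed redirects.
  @tailrec
  def follow(pollsNeeded: Int, maxRedirects: Int): Either[String, String] =
    if (pollsNeeded <= 0) Right("done!")                   // 200 with the result
    else if (maxRedirects <= 0) Left("too many redirects") // client gives up
    else follow(pollsNeeded - 1, maxRedirects - 1)         // redirect -> try again
}

object RedirectPollingDemo {
  def main(args: Array[String]): Unit = {
    // a 60 second job polled every 25 seconds resolves in 3 requests
    println(RedirectPollingModel.follow(pollsNeeded = 3, maxRedirects = 50))
    // but a client capped at 2 redirects never sees the result
    println(RedirectPollingModel.follow(pollsNeeded = 3, maxRedirects = 2))
  }
}
```

The total time this method can cover is therefore bounded by the client’s redirect limit times the per-request wait.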

Empty Chunking

In the Empty Chunking method of allowing a request to take more than 30 seconds the web server sends HTTP/1.1 chunks every few seconds until the actual response is ready. Try it out! The downside of this method is that the HTTP response status code must be returned before the actual request’s result is available. The upside is that a single web request can stay open for as long as it needs. To use this method a web framework needs to support chunked responses and ideally is Reactive / Non-Blocking so that threads are only used when there is active I/O.

This method doesn’t require an Actor like the Redirect Polling method. A Future could be used instead but I wanted to keep the job piece the same for both methods. Here is a web request handler that does the empty chunking (full source):

def chunker = Action {
  val actorRef = actorSystem.actorOf(Props[LongJob])
  val futureResult = actorRef.ask(GetJobResult)(Timeout(2.minutes)).mapTo[String]
  futureResult.onComplete(_ => actorSystem.stop(actorRef)) // stop the actor
  val enumerator = Enumerator.generateM {
    // output spaces until the future is complete
    if (futureResult.isCompleted) Future.successful(None)
    else Promise.timeout(Some(" "), 5.seconds)
  } andThen {
    // return the result
    Enumerator.flatten(futureResult.map(Enumerator(_)))
  }
  Ok.chunked(enumerator)
}

This web request handler does the following when a request comes in:

  1. An instance of the LongJob Actor is created
  2. The Actor instance (actorRef) is asked for the result of the GetJobResult message and given two minutes to receive a result which is mapped to a String.
  3. An onComplete handler stops the Actor instance after a result is received or request has timed out.
  4. An Enumerator is created that outputs spaces every five seconds until the result has been received or timed out, at which time the result is output and the stream completes.
  5. An HTTP 200 response is returned that is set up to chunk the output of the Enumerator

That is it! I’ve used this method in a number of places with Scala and Java in Play Framework making them fully Reactive. This logic could be wrapped into something more reusable. Let me know if you need that or need a Java example.
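As a sanity check on the arithmetic, the shape of the chunked stream can be modeled with plain Scala (the function and parameter names here are mine, a toy model rather than the Play code above):

```scala
object EmptyChunkingModel {
  // While the job runs we emit one keep-alive space per interval,
  // then the real result as the final chunk.
  def chunks(jobMillis: Int, intervalMillis: Int, result: String): List[String] =
    List.fill(jobMillis / intervalMillis)(" ") :+ result
}

object EmptyChunkingDemo {
  def main(args: Array[String]): Unit = {
    // a 60 second job with a 5 second keep-alive interval:
    // 12 spaces, then the body
    val out = EmptyChunkingModel.chunks(60000, 5000, "done!")
    println(out.count(_ == " ")) // number of keep-alive chunks
    println(out.last)            // the actual result
  }
}
```

Unlike Redirect Polling, nothing here bounds the total duration except the ask timeout you choose, but every keep-alive chunk is sent after the 200 status line has already gone out.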

Wrapping Up

As you have seen it is pretty easy to have traditional web requests that take longer than 30 seconds on Heroku. While this is not ideal for background jobs it can be an easy way to deal with situations where it is overkill to implement “queue and push” for long requests. The full source for the Redirect Polling and Empty Chunking methods is on GitHub. Let me know if you have any questions.

Auto-Deploy GitHub Repos to Heroku

My favorite new feature on Heroku is the GitHub Integration which enables auto-deployment of GitHub repos. Whenever a change is made on GitHub the app can be automatically redeployed on Heroku. You can even tell Heroku to wait until the CI tests pass before doing the deployment. I now use this on almost all of my Heroku apps because it allows me to move faster and do less thinking (which I’m fond of).

For some apps I just enable deployment straight to production. But for apps that need a less risky setup I have a full Continuous Delivery pipeline that looks like this:

  1. Push to GitHub
  2. CI Validates the build
  3. Heroku deploys changes to staging
  4. Manual testing / validation of staging
  5. Using Heroku Pipelines, promote staging to production

I’m loving the flexibility and simplicity of this new feature! Check out a quick screencast to see how to set up and use Heroku GitHub auto-deployment:

Notice that none of this required a command line! How cool is that?!?

Jekyll on Heroku

Jekyll is a simple static-content compiler popularized by GitHub Pages. If you use Jekyll in a GitHub repo a static website will automatically be created for you by running Jekyll on your content sources (e.g. Markdown). That works well but there are cases where it is nice to deploy a Jekyll site on Heroku. After trying (and failing) to follow many of the existing blogs about running Jekyll on Heroku, I cornered my coworker Terence Lee and got some help. Turns out it is pretty simple.

Not interested in the details? Skip right to a diff that makes a Jekyll site deployable on Heroku or start from scratch:

Deploy on Heroku

Here are the step by step instructions:

  1. Add a Gemfile in the Jekyll project root containing:

    source ''
    ruby '2.1.2'
    gem 'jekyll'
    gem 'kramdown'
    gem 'rack-jekyll'
    gem 'rake'
    gem 'puma'
  2. Run: bundle install
  3. Create a Procfile telling Heroku how to serve the web site with Puma:

    web: bundle exec puma -t 8:32 -w 3 -p $PORT
  4. Create a Rakefile which tells Heroku’s slug compiler to build the Jekyll site as part of the assets:precompile Rake task:

    namespace :assets do
      task :precompile do
        puts `bundle exec jekyll build`
      end
    end
  5. Add the following lines to the _config.yml file:

    gems: ['kramdown']
    exclude: ['', 'Gemfile', 'Gemfile.lock', 'vendor', 'Procfile', 'Rakefile']
  6. Add a file containing:

    require 'rack/jekyll'
    require 'yaml'
    run Rack::Jekyll.new

That is it! When you do the usual git push heroku master deployment, the standard Ruby Buildpack will kick off the Jekyll compiler and when your app runs, Puma will serve the static assets. If you are starting from scratch just clone my jekyll-heroku repo and you should have everything you need.

To run Jekyll locally using the dependencies in the project, run:

bundle exec jekyll serve --watch

Let me know how it goes.

New Adventures with Play, Scala, and Akka at Typesafe

Today I’m heading out on a new adventure at Typesafe, the company behind Play Framework, Scala, and Akka!

The past year and a half at Heroku has been really amazing. Not only have I enjoyed teaching others about Heroku, I’ve enjoyed my own frequent use of Heroku. It says something when a technology switch makes one never want to go back to the way it was done before. This is the experience that I (and many others) have had with Heroku. I can’t imagine going back to managing servers and painful deployments. I’m certainly a Heroku Evangelist for Life, but it’s time for a new adventure.

Typesafe is assembling a platform that is quickly becoming the foundation for next generation software. Working with the Typesafe technologies has been a delight and has opened my eyes to the future of scalable software. To build more scalable applications we need better programming models for asynchronous and concurrent code. Web applications are also transitioning to a modern Client/Server architecture where the Client is executed in the browser via JavaScript and the Server is RESTful JSON services.

The Typesafe Stack brings together a scalable foundation and a modern web framework to form a platform that is perfect for the next generation of software. I’m incredibly excited to continue learning Play, Scala, and Akka while also helping other developers do the same!

Working at Heroku has been a magnificent adventure and I’m sad to see it end. I will certainly continue showing off Heroku and using it myself, especially combined with the Typesafe Stack!

Heroku and Play Next Week in Chicago and New York

Next week I’ll be in Chicago and New York for a few presentations about Heroku, Play Framework, and running Java apps on the cloud.

Hope to see you there!

Run Revel Apps on Heroku

UPDATE: There have been some updates to my Revel Heroku Buildpack that make it work better and with newer versions of Revel. Check out the details.

Revel is a Play-like web framework for Go. I’m new to the Go programming language but I’ve heard good things. So I thought I’d take Revel for a spin and get it working on Heroku. Luckily there is already a Go Buildpack and a great article on how to use it.

To get Revel working on Heroku I had to make a few changes to Revel and create a modified buildpack. But it all works! Let’s walk through the steps you can follow to deploy Revel apps on Heroku.

First let’s get a simple Revel app working locally.
Step 1) Install Go
Step 2) Install Mercurial
Step 3) Install Git
Step 4) Create a new directory that will contain everything:

mkdir ~/go

Step 5) This new “go” directory will be your “GOPATH” so set that environment variable:

export GOPATH=~/go

Step 6) In the newly created “go” directory, get the original Revel source and my changes:

go get
go get

Step 7) Build the “revel” command:

go build -o bin/revel

Step 8) Grab my “hellorevel” sample app:

go get

Step 9) Start the local Revel server:

bin/revel run

Step 10) In your browser navigate to: http://localhost:9000

You should see a “hello, Revel” message. Let’s walk through this simple app so you can get an idea of how it works.

The request you just made is mapped to code through the “conf/routes” file which contains:

# Routes
# This file defines all application routes (Higher priority routes first)
# ~~~~
GET     /                                       Application.Index

This tells Revel to handle HTTP GET requests to “/” with the “Application.Index” controller.

The “Application.Index” function is defined in the “app/controllers/app.go” file and contains:

package controllers

import "github.com/robfig/revel"

type Application struct {
    *rev.Controller
}

func (c Application) Index() rev.Result {
    message := "hello, Revel"
    return c.Render(message)
}

The “Index()” function creates a “message” string of “hello, Revel” and then returns the rendered template, which receives the “message” as a parameter.

The “app/views/Application/Index.html” template is used to get the HTML for the “Application.Index()” controller and contains:

<!DOCTYPE html>
<html>
  <body>
    {{.message}}
  </body>
</html>

This uses Go templates and uses the “message” parameter.

Feel free to make changes to the template and controller to see how Revel auto-recompiles the source.

Ok, now that you have the application running locally, let’s deploy it on Heroku.

Step 1) Signup for a free Heroku account
Step 2) Install the Heroku Toolbelt
Step 3) Login to Heroku from the command line (this should also setup your SSH keys if you haven’t done so already):

heroku login

Step 4) Enter the directory for the “hellorevel” app and create a new application on Heroku which will use the Revel Buildpack:

cd ~/go/src/
heroku create --buildpack

Step 5) Upload the app to Heroku using Git:

git push heroku master

This will upload the application source, pull in all of the dependencies, and then deploy the application on Heroku. That process will look something like this:

jamesw@T420s:~/go/src/$ git push heroku master
Counting objects: 25, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (19/19), done.
Writing objects: 100% (25/25), 2.46 KiB, done.
Total 25 (delta 2), reused 0 (delta 0)
-----> Heroku receiving push
-----> Fetching custom git buildpack... done
-----> Revel app detected
-----> Installing Go 1.0.3... done
       Installing Virtualenv...running virtualenv done
       Installing Mercurial... done
-----> Copying src to .go/src/pkg/_app
-----> Getting and building Revel
-----> Discovering process types
       Procfile declares types -> (none)
       Default types for Revel -> web
-----> Compiled slug size: 31.0MB
-----> Launching... done, v4 deployed to Heroku
 * [new branch]      master -> master

Step 6) Check out your Revel app on the cloud by running:

heroku open

That’s it! Deployment of Revel apps couldn’t be easier! Let me know if you have any questions or feedback on this. Thanks!

Heroku & Play Framework at JavaOne 2012

This year at JavaOne I’ll be presenting two sessions and participating in one BOF:

BOF4149 – Web Framework Smackdown 2012 – Monday @ 8:30pm

Much has changed since the first Web framework smackdown, at JavaOne 2005. Or has it? The 2012 edition of this popular panel discussion surveys the current landscape of Web UI frameworks for the Java platform. The 2005 edition featured JSF, Webwork, Struts, Tapestry, and Wicket. The 2012 edition features representatives of the current crop of frameworks, with a special emphasis on frameworks that leverage HTML5 and thin-server architecture. Java Champion Markus Eisele leads the lively discussion with panelists James Ward (Play), Graeme Rocher (Grails), Edward Burns (JSF), and Santiago Pericas-Geertsen (Avatar).

CON3845 – Introduction to the Play Framework – Tuesday @ 8:30am

The Play Framework is a lightweight, stateless Web framework for Java and Scala applications. It’s built on Java NIO, so it’s highly scalable. This session gives you an introduction to building Web applications with the Play Framework. You will learn how to set up routes and create controllers and views, plus how to deploy Play Framework applications in the cloud.

CON3842 – Client/Server Applications with HTML5 and Java – Thursday @ 11am

The Web application landscape is rapidly shifting back to a client/server architecture. This time around, the client is JavaScript, HTML, and Cascading Style Sheets in the browser. The tools and deployment techniques for these types of applications are abundant and fragmented. This session shows you how to pull together jQuery, LESS, and Twitter Bootstrap to build the client. The server can be anything that talks HTTP, but this session uses JAX-RS. You will also learn how to deploy client/server Web applications in the cloud with a content delivery network (CDN) for the client and a PaaS for the server.

I hope to see you there!

Atlanta Presentation: Practicing Continuous Delivery

Tomorrow I’ll be presenting Practicing Continuous Delivery on the Cloud at the Atlanta No Fluff Just Stuff conference. Here is the session description:

This session will teach you best practices and patterns for doing Continuous Delivery / Continuous Deployment in Cloud environments. You will learn how to handle schema migrations, maintain dev/prod parity, manage configuration and scaling. This session will use Heroku as an example platform but the patterns could be implemented anywhere.

This has become my favorite session to present. So if you are going to be at Atlanta NFJS, then I hope to see you there!