The 6 Minute Cloud/Local Dev Roundtrip with Spring Boot

Great developer experiences allow you to go from nothing to something amazing in under ten minutes. So I’m always trying to see how much I can minimize getting-started experiences. My latest attempt is to deploy a Spring Boot app on Heroku, download the source to a developer’s machine, set up & run the app locally, make & test changes, and then redeploy those changes — all in under ten minutes (assuming a fast internet connection). Here is that experience in about six minutes:

To try it yourself, start at the hello-springboot GitHub repo. Let me know how it goes!

Pulling Go Code Colorado Data into Salesforce

This weekend I’m at the Go Code Colorado Challenge Weekend event in Durango. The purpose of Go Code Colorado 2016 is for teams to build something useful for businesses using one or more of the Colorado Public Datasets. Some teams are using Salesforce for the back-office / business process side of the app they are building. So I decided to see if I could pull a Colorado Public Dataset into Salesforce. Turns out it’s super easy! Just follow these steps:

  1. Sign up for a Salesforce Developer Edition
  2. Create a new External Data Source with the following field values:

    External Data Source = Colorado Public Data
    Name = Colorado_Public_Data
    Type = Lightning Connect: OData 2.0
    URL = https://data.colorado.gov/OData.svc
    Special Compatibility = Socrata


  3. Save the new External Data Source and then hit “Validate and Sync” to fetch the metadata for the services.
  4. Select one or more tables from the list. A good table to test with is the “Occupational Employment Statistics” dataset.
    Sync the table and you should see a new “External Object” in the list of External Objects.
  5. The data is now available in Salesforce. An easy way to see the dataset is to create a tab in the Salesforce UI. On the Custom Tabs Setup page create a new Custom Object Tab for the “Occupational Employment Statistics” object and select a Tab Style:
    Complete the creation of the tab (select Next, Next, Save).
  6. Select the “Occupational Employment Statistics” tab (which might be in a drop-down menu depending on the width of your browser window).
    Next to the View selector (set to “All”), hit “Go!” to fetch the data from the Colorado Public Data source. You’ll now see the records.
    Note: The columns displayed in this view can be customized in the External Object’s Search Layout.
    Selecting a record’s ID will display the record details.

That’s it! Now you can build all sorts of business processes and other employee-facing interactions around the public data.
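If you want to poke at the raw OData feed outside of Salesforce, here is a minimal Scala sketch using only the standard library (the dataset resource name below is a placeholder, so substitute the one for the table you synced):

import scala.io.Source

object ColoradoOData extends App {
  // the OData 2.0 service document lists the datasets that are available
  val serviceDoc = Source.fromURL("https://data.colorado.gov/OData.svc").mkString
  println(serviceDoc)

  // fetch the first few rows of a dataset ("SomeDataset" is a placeholder resource name)
  val rows = Source.fromURL("https://data.colorado.gov/OData.svc/SomeDataset?$top=5").mkString
  println(rows)
}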

Good luck to all of the Go Code Colorado teams!

Quick Force Java – Getting Started with Salesforce REST in Java

Recently I blogged about a toolchain that quickly gets you going with the Salesforce REST APIs. I believe developers should be able to get started with new technologies without having to install tons of stuff and struggle for days. That blog used Quick Force Node for those who want to use JavaScript / Node.js. I’ve had a number of requests for a Java version of this toolchain so I created Quick Force Java.

Check out a screencast that shows how to start with nothing, deploy a Salesforce REST app on Heroku, set up OAuth, set up a local dev environment, make & test changes to the app, and then deploy those changes back to the cloud (all in under 12 minutes):

Try out Quick Force Java and let me know how it goes!

Salesforce REST APIs – From Zero to Cloud to Local Dev in Minutes

When getting acquainted with new technologies I believe that users shouldn’t have to spend more than 15 minutes getting something simple up and running. I wanted to apply this idea to building an app on the Salesforce REST APIs so I built Quick Force (Node). In about 12 minutes you can deploy a Node.js app on Heroku that uses the Salesforce REST APIs, set up OAuth, pull the app down to your local machine, make and test changes, and then redeploy those changes. Check out a video walkthrough:

Ok, now give it a try yourself by following the instructions in Quick Force (Node)!

I hope this will be the quickest and easiest way you’ve gotten started with the Salesforce REST APIs. Let me know how it goes!

FYI: This *should* work on Windows but I haven’t tested it there yet. So if you have any problems please let me know.

Winter Tech Forum 2016 – My Favorite Developer Conference!

I’ve been to a TON of developer conferences, and by a landslide my favorite is the Winter Tech Forum (which used to be the Java Posse Roundup). Here is why: learning for me is experiential.

Typical eyes-forward conferences are like being a passenger on a sail boat. I can watch what is happening but I could definitely not become the captain based on my experience as a passenger. This is what makes WTF different; every attendee is a captain (or maybe a skipper if you are new). The whole conference is the experiences that the attendees want to have. Sometimes that means we write code together, explore new technologies, or discuss ideas. Those experiences have made a significant impact on my technical skills. We also eat together and play together which has helped me build some amazing relationships.

This might sound a little crazy until you actually experience it. Which I highly encourage you to do! This year’s WTF is Feb 29 – March 4 in Crested Butte, CO and will be followed by a new Developer Retreat event that also looks to be awesome. I hope to see you there!

Dreamforce 2015 Video: Tour of Heroku + Salesforce Integration Methods

This year at Dreamforce I presented a session that walked through a few of the ways to integrate Heroku apps with Salesforce. Here is the session description:

Combining customer-facing apps on Heroku with employee-facing apps on Salesforce enables a whole new generation of connected and intelligent experiences. There are four primary ways to do this integration: Heroku Connect, Canvas, Apex / Process Callouts, and the Salesforce REST APIs. Using code and architectural examples, we’ll walk through these different methods. You will walk away knowing when you should use each and how to use them.

Check out the video recording of the session.

To dive into these methods, here are the “Further Learning” resources for each method:

I hope this is helpful. Let me know if you have any questions.

Smoothing the Cloud & Local Roundtrip Developer Experience

Getting started with new technologies is usually a huge pain. Often I stumble around for hours trying to get an app’s toolchain set up correctly. Instructions are usually a long list of tools to install and commands to run before the app will even start.

Things get worse when I lead workshops for hundreds of enterprise developers where many are on Windows machines and not very comfortable with cmd.exe.

Experiencing this pain over and over is what led me to create Typesafe Activator as a smooth way to get started with Play Framework, Akka, and Scala. Developers have been thrilled with how easy it is to take their first step with Activator, but I never finished polishing the experience of the second step: App Deployment.

Over the past few months I’ve been working on a set of tools that make the roundtrip between deployment and local development super smooth with zero-CLI and zero-install. Check out a demo:

Here is a summary of the “from scratch” experience:

  1. Deploy the Click, Deploy, Develop app on the cloud
  2. Download the app’s source
  3. Run gulp from a file explorer to download Node, the app’s dependencies, and Atom, and then launch the Node / Express server and the Atom code editor
  4. Open the local app in a browser: http://localhost:5000
  5. Make a change in Atom to the app.js file
  6. Test the changes locally
  7. Log in to Heroku via Atom
  8. Deploy the changes via Atom

That is one smooth roundtrip!

For more detailed docs on this flow, check out the Click, Deploy, Develop project.

Great dev experience starts with the simplest thing that can possibly work and has layered escape hatches to more complexity.

That kind of developer experience (DX) is something I’ve tried to do with this toolchain. It builds on top of tools that can be used directly by advanced users. Underneath the smooth DX is just a normal Node.js / Express app, a Gulp build, and the Atom code editor. Here are the pieces that I’ve built to polish the DX, creating the zero-CLI and zero-install experience:

I hope that others find this useful for helping to give new users a great roundtrip developer experience. Let me know what you think.

Note: Currently gulp-atom-downloader does not support Linux because there isn’t a standalone zip download of Atom for Linux. Hopefully we can get that resolved soon.

Comparing Application Deployment: 2005 vs. 2015

Note: Check out the Latvian Translation.

Over the past 10 years the ways we build and deliver applications have changed significantly. It seems like much of this change has happened overnight but don’t worry, it is perfectly normal to look up and feel disoriented in the 2015 deployment landscape.

This article compares deployment in 2005 with “modern” deployment so that all the new terms and techniques will make sense. Forewarning: my background is primarily Java / JVM so I will use that terminology but try to make the ideas polyglot.

2005 = Multi-App Containers / App Servers / Monolithic Apps
2015 = Microservices / Docker Containers / Containerless Apps

Back in 2005 many of us worked on projects that resulted in a WAR file – a zip file containing a Java web application and its library dependencies. That web application would be deployed alongside other web applications into a single app server sometimes called a “container” because it contained and ran one or more applications. The app server provided a bunch of common services to the web apps like an HTTP server, a service directory, and shared libraries. Unfortunately deploying multiple apps in a single container created high friction for scaling, deployment, and resource usage. App servers were supposed to isolate an app from its underlying system dependencies in order to avoid “it works on my machine” problems but things often didn’t work that smoothly due to differing system dependencies and configuration that lived outside of the app server / container.

In 2015 apps are being deployed as self-contained units, meaning the app includes everything it needs to run on top of a standard set of system dependencies. The granularity of the self-contained unit differs depending on the deployment paradigm. In the Java / JVM world a “containerless” app is a zip file that includes everything the app needs on top of the JVM. Most modern JVM frameworks have switched to this containerless approach including Play Framework, Dropwizard, and Spring Boot. A few years ago I wrote in more detail about how app servers are fading away in the move from monolithic middleware to microservices and cloud services.
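To make the “containerless” idea concrete, here is a minimal sketch (not taken from any of those frameworks; it just uses the JDK’s built-in HttpServer) of a JVM app that brings its own HTTP server instead of being deployed into one:

import com.sun.net.httpserver.{HttpExchange, HttpHandler, HttpServer}
import java.net.InetSocketAddress

// a self-contained app: the HTTP server is part of the app, not provided by an app server
object ContainerlessApp extends App {
  val port = sys.env.getOrElse("PORT", "8080").toInt
  val server = HttpServer.create(new InetSocketAddress(port), 0)

  server.createContext("/", new HttpHandler {
    override def handle(exchange: HttpExchange): Unit = {
      val body = "hello from a containerless app".getBytes("UTF-8")
      exchange.sendResponseHeaders(200, body.length)
      exchange.getResponseBody.write(body)
      exchange.close()
    }
  })

  server.start()
}

Package that (plus its dependencies and a start script) into a zip and you have the whole deployable unit.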

For a more complete and portable self-contained unit, system-level container technologies like Docker and LXC bundle the app with its system dependencies. Instead of deploying a bunch of apps into a single container, a single app is added to a Docker image and deployed on one or more servers. On Heroku a “Slug” file is similar to a Docker image.

Microservices play a role in this new landscape because deployment across microservices is independent, whereas with traditional app servers individual app deployment often involved restarting the whole server. This was one reason for the snail’s pace of deployment in enterprises – deployments were incredibly risky and had to be coordinated months in advance across numerous teams. Hot deployment was a promise that was never realized for production apps. Microservices enable individual teams to deploy at will and as often as they want. Microservices require the ability to quickly provision, deploy, and scale services which may have only a single responsibility. These requirements fit well with the infrastructure provided by containerless apps running on Docker(ish) Containers.

2005 = Manual Deployment
2015 = Continuous Delivery / Continuous Deployment

The app servers of 2005 that ran multiple monolithic apps combined with manual load balancer configurations made application upgrades risky and painful so deployments were usually done sparingly in designated maintenance windows. Back then it was pretty much unheard of to have a deployment pipeline that fully automated delivery from an SCM to production.

Today Continuous Delivery and Continuous Deployment enable developers to get code to staging and production sometimes as often as tens or even hundreds of times a day. Scalable deployment pipelines range from the simple “git push heroku master” to a more risk-averse pipeline that includes pull requests, Continuous Integration, staging auto-deployment, manual promotion to production, and possibly Canary Releases & Feature Flags. These pipelines enable organizations to move fast and distribute risk across many small releases.

In order for Continuous Delivery to work well there are a few ancillary requirements:

  • Release rollbacks must be instant and easy because sometimes things will break, and getting back to a working state must be quick and painless.
  • Patch releases must be able to make it from SCM to production (through a continuous delivery pipeline) in minutes.
  • Load balancers must be able to handle automatic switching between releases.
  • Database schema changes should be decoupled from app releases otherwise releases and rollbacks can be blocked.
  • App-tier servers should be stateless with state living in external data stores otherwise state will be frequently lost and/or inconsistent.

2005 = Persistent Servers / “Pray it never goes down”
2015 = Immutable Infrastructure / Ephemeral Servers

When a server crashed in 2005 stuff usually broke. Some used session replication and server affinity but sessions were lost and bringing up new instances usually took quite a bit of manual work. Often changes were made to production systems via SSH, making it difficult to accurately reproduce a production environment. Logging was usually done to local disk, making it hard to see what was going on across servers and load balancers.

Servers in 2015 are disposable, immutable, and ephemeral, forcing us to plan for them to go down. Tools like Netflix’s Chaos Monkey randomly shut down servers to make sure we are preparing for crashes. Load balancers and management backplanes work together to start and stop new instances in an instant, enabling rapid scaling both up and down. Because servers are immutable we can no longer fix production issues by SSHing into them, but environments are now easily reproducible. Logging services route STDOUT to an external service, enabling us to see the log stream in real time across the whole system.

2005 = Ops Team
2015 = DevOps

In 2005 there was a team that would take your WAR file (or other deployable artifact) and be responsible for deploying it, managing it, and monitoring it. This was nice because developers didn’t have to wear pagers but ultimately the Ops team often couldn’t do much if there was a production issue at 3am. The biggest downside of this was that Ops became all about risk mitigation causing a tremendous slowdown in software delivery.

Modern technical organizations of all sizes are ditching the Ops velocity killer and making developers responsible for the stuff they put into production. Services like New Relic, VictorOps, and Slack help developers stay on top of their new operational responsibilities. The DevOps culture also directly incentivizes devs not to deploy things that will end up waking them or a team member up at 3am. A core indicator of a DevOps culture is whether a new team member can get code to production on their first day. Doing that one thing right means doing so many other things right, like:

  • 3 Step Dev Setup: Provision the system, Checkout the code, and Run the App
  • SCM / Team Review (e.g. GitHub Flow)
  • Continuous Integration & Continuous Deployment / Delivery
  • Monitoring and Notifications

DevOps can sound very scary to traditional enterprise developers like myself. But from experience I can attest that wearing a pager (metaphorically) and assuming the direct risk of my deployments has made me a much better developer. The quality of my code and my feelings of fulfillment have increased with my new level of ownership over what is in production.

Learn More

I’ve just touched the surface of many of the deployment changes over the past 10 years but hopefully you now have a better understanding of some of the terminology you might be hearing at conferences and on blogs. For more details on these and related topics, check out The Twelve-Factor App and my blog Java Doesn’t Suck – You’re Just Using it Wrong. Let me know what you think!

Huge thanks to Jason Hand and Joe Kutner for reviewing this blog post.

Redirecting and Chunking Around Heroku’s 30 Second Request Timeout

In most cases a web request shouldn’t take more than 30 seconds to return a response, so it is for good reason that Heroku has a 30-second request timeout. But there are times when things just take a while. There are different methods for dealing with this. Where possible, the best solution is to offload the job from the web request queue and have a background job queue that can be scaled separately. If the requestor needs the result then it can either poll for it or be pushed the value when the background job is complete. Yet there are some cases where this is overkill. For instance, if a web request takes a while but the user interaction must remain blocked (e.g. a modal spinner) until the request is complete, then setting up background jobs for slow requests can be unnecessary.

Let’s look at two different methods for handling long (> 30 seconds) web requests on Heroku. On Heroku the request must start returning some data within 30 seconds or the load balancer will give up. One way to deal with this is to continually wait about 25 seconds for the result and then redirect the request to do the same thing again. The other option is to periodically dump empty chunks into the response until the actual response can be returned. Each of these methods has tradeoffs, so let’s look at each in more detail. I’ll be using Play Framework and Scala for the examples but both of these methods could be implemented in most frameworks.

Redirect Polling

The Redirect Polling method of dealing with long web requests continuously sends a redirect every 25 seconds until the result is available. Try it out! The downside of this approach is that HTTP clients usually have a maximum number of redirects that they will allow which limits the total amount of time this method can take. The upside is that the actual response status can be based on the result.

Ideally the web framework is Reactive / Non-Blocking so that threads are only used when there is active I/O. In some cases the underlying reason for the long request is another service that is slow. In that case the web request could be fully Reactive, thus preserving resources that would traditionally be wasted in waiting states.

To implement Redirect Polling (Reactively) in Play Framework and Scala I’ll use Akka as a place to run a long job off of the web request thread. The Actor job could be something that is computationally taxing or a long network request. By using Akka Actors I have a simple way to deal with job distribution, failure, and thread pool assignment & management. Here is my very simple Akka Actor job that takes 60 seconds to complete (full source):

// imports assume Play 2.3/2.4 and Akka 2.3 (the versions this code was written against)
import akka.actor.Actor
import akka.pattern.pipe
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.libs.concurrent.Promise
import scala.concurrent.Future
import scala.concurrent.duration._

class LongJob extends Actor {

  // a result that becomes available 60 seconds after it is first requested
  lazy val jobFuture: Future[String] = Promise.timeout("done!", 60.seconds)

  override def receive = {
    // pipe the eventual result back to whoever asked for it
    case GetJobResult => jobFuture.pipeTo(sender())
  }

}

case object GetJobResult

When this Actor receives a GetJobResult message, it creates a job that in 60 seconds returns a String using a Scala Future. That String is sent (piped) to the sender of the message.

Here is a web request handler that does the Redirect Polling while waiting for a result from the Actor (full source):

def redir(maybeId: Option[String]) = Action.async {
 
  val (actorRefFuture, id) = maybeId.fold {
    // no id so create a job
    val id = UUID.randomUUID().toString
    (Future.successful(actorSystem.actorOf(Props[LongJob], id)), id)
  } { id =>
    (actorSystem.actorSelection(s"user/$id").resolveOne(1.second), id)
  }
 
  actorRefFuture.flatMap { actorRef =>
    actorRef.ask(GetJobResult)(Timeout(25.seconds)).mapTo[String].map { result =>
      // received the result
      actorSystem.stop(actorRef)
      Ok(result)
    } recover {
      // did not receive the result in time so redirect
      case e: TimeoutException => Redirect(routes.Application.redir(Some(id)))
    }
  } recover {
    // did not find the actor specified by the id
    case e: ActorNotFound => InternalServerError("Result no longer available")
  }
 
}

This request handler uses an optional query string parameter (id) as the identifier of the job. Here is the logic for the request handler:

  1. If the id is not specified then a new LongJob Actor instance is created using a new id. Otherwise the Actor is resolved based on its id.
  2. If either a new Actor was created or an existing Actor was found, then the Actor is asked for its result and given 25 seconds to return it. Otherwise an error is returned.
  3. If the result is received within the timeout, the result is returned in a 200 response. Otherwise a redirect response is returned that includes the id in the query string.

This is really just automatic polling for a result using redirects. It would be nice if HTTP had some semantics around the HTTP 202 response code for doing this kind of thing.

Empty Chunking

In the Empty Chunking method of allowing a request to take more than 30 seconds, the web server sends HTTP/1.1 chunks every few seconds until the actual response is ready. Try it out! The downside of this method is that the HTTP response status code must be returned before the actual request’s result is available. The upside is that a single web request can stay open for as long as it needs. To use this method a web framework needs to support chunked responses and ideally is Reactive / Non-Blocking so that threads are only used when there is active I/O.

This method doesn’t require an Actor like the Redirect Polling method. A Future could be used instead but I wanted to keep the job piece the same for both methods. Here is a web request handler that does the empty chunking (full source):

def chunker = Action {
  val actorRef = actorSystem.actorOf(Props[LongJob])
  val futureResult = actorRef.ask(GetJobResult)(Timeout(2.minutes)).mapTo[String]
  futureResult.onComplete(_ => actorSystem.stop(actorRef)) // stop the actor
 
  val enumerator = Enumerator.generateM {
    // output spaces until the future is complete
    if (futureResult.isCompleted) Future.successful(None)
    else Promise.timeout(Some(" "), 5.seconds)
  } andThen {
    // return the result
    Enumerator.flatten(futureResult.map(Enumerator(_)))
  }
 
  Ok.chunked(enumerator)
}

This web request handler does the following when a request comes in:

  1. An instance of the LongJob Actor is created
  2. The Actor instance (actorRef) is asked for the result of the GetJobResult message and given two minutes to produce a result, which is mapped to a String.
  3. An onComplete handler stops the Actor instance after a result is received or the request has timed out.
  4. An Enumerator is created that outputs spaces every five seconds until the result has been received or timed out, at which point the result is emitted and the Enumerator completes.
  5. An HTTP 200 response is returned that is set up to chunk the output of the Enumerator.

That is it! I’ve used this method in a number of places with Scala and Java in Play Framework, making them fully Reactive. This logic could be wrapped into something more reusable. Let me know if you need that or need a Java example.
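For example, here is a rough sketch of what a reusable wrapper might look like (assuming the same Play 2.3/2.4 iteratee APIs used above; KeepAlive is just a hypothetical name):

import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.libs.concurrent.Promise
import play.api.libs.iteratee.Enumerator
import play.api.mvc.{Result, Results}
import scala.concurrent.Future
import scala.concurrent.duration._

object KeepAlive {
  // wraps a long-running Future in a chunked 200 response, emitting a space at
  // each interval until the Future completes, then emitting the actual result
  def chunked(futureResult: Future[String], every: FiniteDuration = 5.seconds): Result = {
    val enumerator = Enumerator.generateM {
      if (futureResult.isCompleted) Future.successful(None)
      else Promise.timeout(Some(" "), every)
    } andThen Enumerator.flatten(futureResult.map(Enumerator(_)))

    Results.Ok.chunked(enumerator)
  }
}

With that in place the body of the chunker action above collapses to something like KeepAlive.chunked(futureResult).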

Wrapping Up

As you have seen, it is pretty easy to have traditional web requests that take longer than 30 seconds on Heroku. While this is not ideal for work that really belongs in a background job, it can be an easy way to deal with situations where implementing “queue and push” for long requests is overkill. The full source for the Redirect Polling and Empty Chunking methods is on GitHub. Let me know if you have any questions.