NPM Packages in Maven Central with NPM WebJars

A few months ago I launched Bower WebJars which provides a way for anyone to deploy Bower packages into Maven Central through WebJars. Since then 539 packages have been deployed! Today I’ve added NPM WebJars which is built on the same foundation as Bower WebJars but for NPM packages.

Give it a try and let me know how it goes. If you are curious about the changes to make this happen, check out the pull request.

Auto-Deploy GitHub Repos to Heroku

My favorite new feature on Heroku is the GitHub Integration which enables auto-deployment of GitHub repos. Whenever a change is made on GitHub the app can be automatically redeployed on Heroku. You can even tell Heroku to wait until the CI tests pass before doing the deployment. I now use this on almost all of my Heroku apps because it allows me to move faster and do less thinking (which I’m fond of).

For simple apps I just enable deployment straight to production. But for apps that need a less risky setup I have a full Continuous Delivery pipeline that looks like this:

  1. Push to GitHub
  2. CI Validates the build
  3. Heroku deploys changes to staging
  4. Manual testing / validation of staging
  5. Using Heroku Pipelines, promote staging to production

I’m loving the flexibility and simplicity of this new feature! Check out a quick screencast to see how to set up and use Heroku GitHub auto-deployment:

Notice that none of this required a command line! How cool is that?!?

Reactive Postgres with Play Framework & ScalikeJDBC

Lately I’ve built a few apps that have relational data. Instead of trying to shoehorn that data into a NoSQL model I decided to use the awesome Heroku Postgres service but I didn’t want to lose out on the Reactiveness that most of the NoSQL data stores support. I discovered ScalikeJDBC-Async which uses postgresql-async, a Reactive (non-blocking), JDBC-ish, Postgres driver. With those libraries I was able to keep my data relational and my app Reactive all the way down. Let’s walk through how to do it in a Play Framework app. (TL;DR: Jump to the full source.)

If you want to start from scratch, create a new Play app from the Play Scala Seed.

The minimum dependencies needed in the build.sbt file are:

libraryDependencies ++= Seq(
  "org.postgresql"       %  "postgresql"                    % "9.3-1102-jdbc41",
  "com.github.tototoshi" %% "play-flyway"                   % "1.2.0",
  "com.github.mauricio"  %% "postgresql-async"              % "0.2.16",
  "org.scalikejdbc"      %% "scalikejdbc-async"             % "0.5.5",
  "org.scalikejdbc"      %% "scalikejdbc-async-play-plugin" % "0.5.5"
)

The play-flyway library handles schema evolutions using Flyway. It is a great alternative to Play’s JDBC module because it focuses solely on evolutions and does them one-way (i.e. no downs). But because play-flyway doesn’t use the postgresql-async driver, it needs the standard postgresql JDBC driver as well.

The scalikejdbc-async-play-plugin library manages the lifecycle of the connection pool used by scalikejdbc-async in a Play app.

To use play-flyway and scalikejdbc-async-play-plugin a conf/play.plugins file must tell Play about the plugins:


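The file contents aren’t shown here. For reference, a plausible conf/play.plugins would look something like this (a sketch: the priority numbers are conventional, and the plugin class names should be checked against each library’s docs):

```
200:com.github.tototoshi.play2.flyway.Plugin
777:scalikejdbc.async.PlayPlugin
```

Each line is a priority followed by the plugin’s fully qualified class name; lower priorities load first.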
A first evolution script in conf/db/migration/default/V1__create_tables.sql will create a table named bar that will hold a list of bars for our little sample app:


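The script itself isn’t shown here; a minimal version that creates the bar table might look like this (the column types are assumptions for this sample app):

```sql
CREATE TABLE bar (
    id BIGSERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);
```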
You will of course need a Postgres database to proceed. You can either install one locally or create a free one on the Heroku Postgres cloud service. Then update the conf/application.conf file to point to the database:


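The configuration isn’t shown here; a sketch of the relevant conf/application.conf entries (the local database name is made up for illustration) would be:

```
db.default.driver=org.postgresql.Driver
db.default.url="jdbc:postgresql://localhost:5432/bars"
db.default.url=${?DATABASE_URL}
```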
The last line above overrides the database connection URL if there is a DATABASE_URL environment variable set (which is the case if your app is running on Heroku).

To run this app locally you can start the Play app from the Activator UI or from the command line with:

activator ~run

When you first open your app in the browser, the play-flyway plugin should detect that evolutions need to be applied and ask you to apply them. Once applied you will be ready to create a simple database object and a few reactive request handlers.

Here is a Bar database object named app/models/Bar.scala that uses scalikejdbc-async for reactive creation and querying of Bars:

package models

import play.api.libs.json.Json
import scalikejdbc._
import scalikejdbc.async._
import scalikejdbc.async.FutureImplicits._
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

case class Bar(id: Long, name: String)

object Bar extends SQLSyntaxSupport[Bar] {

  implicit val jsonFormat = Json.format[Bar]

  override val columnNames = Seq("id", "name")

  lazy val b = Bar.syntax

  def db(b: SyntaxProvider[Bar])(rs: WrappedResultSet): Bar = db(b.resultName)(rs)

  def db(b: ResultName[Bar])(rs: WrappedResultSet): Bar = Bar(
    rs.long(b.id),
    rs.string(b.name)
  )

  def create(name: String)(implicit session: AsyncDBSession = AsyncDB.sharedSession): Future[Bar] = {
    val sql = withSQL(insert.into(Bar).namedValues(column.name -> name).returningId)
    sql.updateAndReturnGeneratedKey().map(id => Bar(id, name))
  }

  def findAll(implicit session: AsyncDBSession = AsyncDB.sharedSession): Future[List[Bar]] = {
    withSQL(select.from[Bar](Bar as b)).map(Bar.db(b))
  }
}

The db functions perform the mapping from SQL results to the Bar case class.

The create function takes a Bar name and returns a Future[Bar] by doing a non-blocking insert using the ScalikeJDBC Query DSL. When the insert completes, the generated primary key is used to create a new Bar instance, which is returned.

The findAll method uses the ScalikeJDBC Query DSL to select all of the Bars from the database, returning a Future[List[Bar]].

Now that we have a reactive database object, let’s expose it through reactive request handlers. First set up the routes in the conf/routes file:

GET        /bars                   controllers.Application.getBars
POST       /bars                   controllers.Application.createBar

Define the controller functions in the app/controllers/Application.scala file:

import models.Bar
import play.api.libs.json.Json
import play.api.libs.concurrent.Execution.Implicits.defaultContext

def getBars = Action.async {
  Bar.findAll.map { bars =>
    Ok(Json.toJson(bars))
  }
}

def createBar = Action.async(parse.urlFormEncoded) { request =>
  Bar.create(request.body("name").head).map { bar =>
    Ok(Json.toJson(bar))
  }
}

Both functions use Action.async, which takes a function that turns a request into a future response (a Future[Result]). By returning a Future[Result], Play is able to make the controller function non-blocking. The getBars controller function calls Bar.findAll and then transforms the Future[List[Bar]] into a Future[Result]: a 200 response containing the JSON-serialized list of bars. The createBar controller function parses the request, creates the Bar, and then transforms the Future[Bar] into a Future[Result] once the Bar has been created.

From the non-blocking perspective, here is what a request to the getBars controller function looks like:

  1. Web request made to /bars
  2. Thread allocated to web request
  3. Database request made for the SQL select
  4. Thread allocated to the database request
  5. Web request thread is deallocated (but the connection remains open)
  6. Database request thread is deallocated (but the connection remains open)
  7. Database response handler reallocates a thread
  8. SQL result is transformed to List[Bar]
  9. Database response thread is deallocated
  10. Web response handler reallocates a thread
  11. Web response is created from the list of bars
  12. Web response thread is deallocated

So everything is now Reactive all the way down: there is a moment where the web request is waiting on the database to respond, but during that time no threads are allocated to the request.

Try it yourself with curl:

$ curl -X POST -d "name=foo" http://localhost:9000/bars
$ curl http://localhost:9000/bars

Grab the full source and let me know if you have any questions. Thanks!

Scaling the WebJars Project with On-Demand Bower Packages

WebJars has been my hobby project for almost 3 years and thanks to tons of help from the community the project now has almost 900 JavaScript libraries in Maven Central. In February 2015 there were over 500,000 downloads of WebJars! Up until now all of the WebJars have been manually created, deployed, and maintained. Today I’m happy to launch Bower WebJars, an on-demand service that allows users to deploy new WebJars from Bower packages.

If a WebJar doesn’t exist for the library you need, but it does exist in Bower, then select the Bower package & version and deploy it! Under the covers the Bower package is converted to a Maven package (including transitive dependency definitions), deployed to BinTray, and then synced to Maven Central. Check out the code.

Bower WebJars should work for most Bower packages but I’ve noticed a few packages that do not follow standard dependency versioning practices or do not have standard OSS licenses. So if you have any problems file an issue and we’ll see if there is something we need to fix on our side or if we can work with the package provider to resolve the issue.

We’ve started with packaging Bower libraries but it would be logical to also do this for NPM. We are discussing teaming up with the npmaven folks to do this. So stay tuned.

A huge thanks to Fred Simon (Founder at JFrog) and the rest of the folks behind BinTray. At the WTF Conference a few weeks ago, Fred helped me get a handle on the BinTray REST APIs which really kick-started this project. Since then the BinTray folks have been ridiculously helpful and instrumental in getting the Bower WebJar stuff working seamlessly.

I hope the new Bower WebJars are useful for you. Let me know what you think.

Salesforce Canvas Quick Start for Java Developers

Salesforce provides a variety of different ways to integrate external apps into the Salesforce UI. Canvas is an iframe-based approach for loading externally hosted UIs into pages on Salesforce. The nice thing about Canvas versus a plain iframe is that Canvas has a JavaScript bridge which enables secure communication between the external iframe and Salesforce. This communication happens in the context of the Salesforce user and doesn’t require the typical OAuth handshake. Because Canvas apps live outside of Salesforce they can be built with any language and run anywhere, including Heroku.

I’ve put together a quick start Canvas app that uses Java and Play Framework to get you going in minutes. You can either run this app on Heroku or on your local machine. Heroku requires the fewest steps to get everything set up, so I’ll cover that first.

Deploying a Canvas App on Heroku

Heroku is an app delivery platform that works with a variety of back-end programming languages and frameworks. I’ve created a Canvas-ready Java app using Play Framework and set it up for instant deployment. To get started, sign up for a free Heroku account (if needed), then launch your own copy of the salesforce-canvas-seed app.

Once the app has been deployed, open / view the app and follow the instructions to complete the setup process. You will create a new Connected App on Salesforce that is used to identify the external app. Once you’ve completed the instructions you will have a fully functional Canvas app, running on Heroku, written in Java.

Running a Canvas App Locally

Cloud deployment with Heroku is quick and easy but if you want to make changes to your app you should set up a local development environment. To do that, download the salesforce-canvas-seed Activator bundle, extract the zip, and from a command line in the project’s root directory, run (on Windows, omit the ./):

./activator -Dhttps.port=9443 ~run

This will start the app with HTTPS enabled. Connect to the app at https://localhost:9443. Your browser may warn that the certificate is not valid; ignore the warning and approve the connection. Then you will see instructions for setting up a new Connected App on Salesforce which will be used for local development. After following those instructions you will have a Canvas app running locally. Now you can begin making changes to the app.

The app/assets/javascripts/index.js file contains the client-side application while the app/views/index.scala.html file contains the HTML content for the application. The app you’ve just set up simply displays the logged-in user’s name, using some jQuery and the Canvas SDK. The HTML file contains:

<h1>Hello <span id='username'></span></h1>

The index.js file contains:

Sfdc.canvas(function() {
  // Save the token
  Sfdc.canvas.byId('username').innerHTML = window.signedRequestJson.context.user.fullName;
});

This JavaScript sets up the OAuth token for the Canvas SDK. Then the contents of the username tag are replaced with the current user’s name.

That should be everything you need to get started building against the Canvas SDK. For more information on that check out the Canvas Docs.

Let me know how it goes!

Introducing Force WebJars: Add JavaScript Libs to Salesforce With a Click

The typical method of adding JavaScript and CSS libraries (e.g. jQuery, Bootstrap, and AngularJS) to Salesforce environments is to locate a library’s download, download it, upload it to Salesforce, and then figure out the structure of the files so that you can use them from Visualforce. Using WebJars as a basis, I’ve created an easy way to add libraries to Salesforce, called Force WebJars.

Here is a quick demo:

Give it a try and let me know how it goes!

BTW: The source is on GitHub.

Goodbye Java Posse Roundup – Hello WTF 2015

For the past 8 (maybe 9?) years the absolute best yearly conference I’ve attended has been The Java Posse Roundup in Crested Butte, Colorado. At most conferences my favorite parts are the conversations at the bar and writing code with other attendees. The Java Posse Roundup has always been a conference just of those best parts.

Even though The Java Posse is no more, the Java Posse Roundup will live on as the Winter Tech Forum. I’m sure that many of the same developers will come since it has been a yearly pilgrimage for many. But hopefully the new name helps bring an even more diverse audience this year.

This year I’m looking forward to doing more with IoT and Salesforce. A few years ago Matt Raible and I built a shot ski so maybe this year we can build a smart shot ski that connects to an auto-reordering system. :)

Hope to see you there!

Introducing Gulp Launcher

Many developers already have the Node.js toolchain installed on their machines but when I lead workshops there are always a few who don’t. The process of installing Node build toolchains can take quite a bit of time for new users (especially on Windows). To simplify the process of getting the gulp toolchain set up, Bruce Eckel and I created gulp launcher. With a fresh system you can run gulp with only one download and one command:

$ gulp
Downloading jq 1.3 for x86_64 MAC
Downloading Node 0.10.33 for x86_64 MAC
... npm install output ...
[09:13:25] Using gulpfile ~/projects/gulp-starter/gulpfile.js
[09:13:25] Starting 'default'...
[09:13:25] Finished 'default' after 7.28 μs

On Windows the user can just double click on gulp.exe and it will run the default task for the gulp build.

If you want to try it out either download gulp (Linux, Mac, and Cygwin) or gulp.exe (Windows) from the latest release. Or if you want a ready-to-go starter gulp project, grab the gulp-starter project.

I’ve set up automated tests for the gulp launcher on Travis CI and AppVeyor but there isn’t a lot of real-world testing yet. So if you discover any problems please file an issue. Let me know how it goes. Thanks!

Java Doesn’t Suck – You’re Just Using it Wrong

Check out the Russian translation provided by Everycloudtech.

I’ve been building enterprise Java web apps since servlets were created. In that time the Java ecosystem has changed a lot but sadly many enterprise Java developers are stuck in some very painful and inefficient ways of doing things. In my travels I continue to see Java The Sucky Parts – but it doesn’t have to be that way. It is time for enterprises to move past the sucky ways they are using the Java platform. Here is a list of the suckiest parts of Java that I see most often and some recommendations for how to move past them.

10 Page Wikis to Setup Dev Environments Suck

Setting up a new development environment should take no more than 3 steps:

  1. Install the JDK
  2. Clone / checkout the SCM repo
  3. Run the build / start the app

Seriously. It can and should be that easy. Modern build tools like Gradle and sbt have launchers that you can drop right into your root source tree so that new developers can just run ./gradlew or ./activator (for sbt). The build should have everything needed to get the app up and running – including the server. The easiest way to do this is to go containerless with things like Play Framework and Dropwizard but if you are stuck in a container then consider things like Webapp Runner. One of the many problems with the container approach is the very high probability of running into the “it works on my machine” syndrome because environments easily differ when a critical dependency exists outside the realm of the build and SCM. How many wikis keep the server.xml changes up-to-date? Wiki-based configuration is a great way to cause pain.

What about service dependencies like databases and external web services – don’t developers need to set those things up and configure them? Not if your build can do it for them. Smart build systems should be able to provision the required services either locally or on the cloud. Docker has emerged as a great way to manage local environments that are a replica of the production system.

If your app needs a relational database then use an in-memory db like HSQLDB or cloud services like Heroku Postgres, RDS, Redis Labs, etc. However, one risk with most in-memory databases is that they differ from what is used in production. JPA / Hibernate try to hide this but sometimes bugs crop up due to subtle differences. So it is best to mimic the production services for developers, even down to the version of the database. Java-based databases like Neo4j work the same in-memory and out-of-process, minimizing risk while also making it easy to set up new development environments. External web services should either have a sandbox host that developers can use or be mocked.

Incongruent Deployment Environments Suck

To minimize risk when promoting builds from dev to staging to production, the only thing that should change between each environment is configuration. A deployable artifact should not change as it moves between environments. Continuous Integration systems should run the same build and tests that developers run. Have the CI system do automatic deployment to a testing or staging environment. A proper release pipeline makes it easy to promote a deployable artifact from staging to production.

I used to maintain a Java web app where the deployment process went like this:

  1. Build a WAR file
  2. SCP the WAR file to a server
  3. SSH to the server
  4. Extract the WAR file
  5. Edit the web.xml file so it contains new database connection info
  6. Restart the server

That setup isn’t the worst I’ve seen but it was always risky. It would have been much better to utilize environment variables so the only thing that changed between environments was those variables. Environment variables can be automatically read by the app so that the artifact stays exactly the same. In this setup reproducing an environment is super easy – just set the env vars.
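Reading such configuration from the environment is a one-liner in most languages. Here is a minimal Java sketch (the DATABASE_URL name follows Heroku’s convention; the fallback URL is made up for illustration):

```java
public class Config {
    // Read the database URL from the environment so the deployed artifact
    // never changes between environments; fall back to a local dev default.
    public static String databaseUrl() {
        String url = System.getenv("DATABASE_URL");
        return (url != null) ? url : "jdbc:postgresql://localhost:5432/myapp_dev";
    }

    public static void main(String[] args) {
        System.out.println(databaseUrl());
    }
}
```

Because the lookup happens at startup, the same artifact can be promoted from staging to production with nothing but different environment variables.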

Servers That Take More Than 30 Seconds to Start Suck

For developer productivity, and so that scaling up can happen instantly, servers should start up quickly. If your server takes more than 30 seconds to start then break the app into smaller pieces, adopting a Microservices architecture. Going containerless or having a one-app-per-container rule can really help reduce startup time. If your container takes a long time to start you should ask yourself: What are all those container services there for? Can the services be broken out into separate apps? Can they be removed or turned off?

If you need some ammunition to prove to your management that your startup times are killing your team’s productivity then use the stopwatch on your phone to count the total minutes per day wasted by waiting for the app to start. Bonus points if you calculate how much wasted money that translates to for yourself, your team, and your org. Double bonus points if you show a chart that defeats the “we spent a lot of money on this app server” sunk cost argument.

Manually Managed Dependencies Suck

It sucks if any of your library dependencies aren’t managed by a build tool. Manually copying Jar files into WEB-INF/lib is horribly error-prone. It makes it hard to correlate files to versions. Transitive dependencies are addressed by ClassNotFound errors. Dependencies are brittle. Knowing the libraries’ licenses is hard. Getting your IDE to pull the sources and JavaDocs for the libraries is tough.

So first… Use a build tool. It doesn’t matter if you choose Ant + Ivy, Maven, Gradle, or sbt. Just pick one and use it to automatically pull your dependencies from Maven Central or your own Artifactory / Nexus server. With WebJars you can even manage your JavaScript and CSS library dependencies. Then get fancy by automatically denying SCM check-ins that include Jar files.
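For example, pulling Bootstrap in as a managed WebJar dependency in a Maven pom.xml looks roughly like this (a sketch; check webjars.org for current coordinates and versions):

```xml
<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>bootstrap</artifactId>
    <version>3.3.4</version>
</dependency>
```

The build tool then resolves the files from Maven Central like any other dependency, version and license metadata included.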

Unversioned & Unpublished Libraries Suck

Enterprises usually have many libraries and services shared across apps and teams. To help make teams more productive and to enable managed dependencies these libraries should be versioned and published to internal artifact servers like Nexus and Artifactory. SNAPSHOT releases should be avoided since they break the guarantee of a reproducible build. Instead consider versioning based on your SCM information. For instance, the sbt-git plugin defaults the build version to the git hash, or, if there is a git tag at the current position, the tag is used instead. This makes published releases immutable so that library consumers know exactly the correlation between the version they are using and the point-in-time in the code.
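As a sketch, wiring this up with sbt-git takes only a couple of lines (the plugin version shown is illustrative; check the sbt-git docs for the current coordinates):

```scala
// project/plugins.sbt
addSbtPlugin("com.typesafe.sbt" % "sbt-git" % "0.8.5")

// build.sbt: let sbt-git derive the version from the git hash or tag
enablePlugins(GitVersioning)
```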

Long Development / Validation Cycles Really Suck

Billions of dollars a year are probably wasted while developers just wait to see / test their changes. Modern web frameworks like Play Framework and tools like JRebel can significantly reduce the time to see changes. If every change requires a rebuild of a WAR file or a restart of a container then you are wasting ridiculous amounts of money. Likewise, running tests should happen continuously. Testing a code change (via reloading the browser or running a test) should not take more time than an incremental compile. Web frameworks that display helpful compile and runtime errors in the browser post-refresh are also very helpful for reducing long manual testing cycles.

When I work on Play apps I am continuously rebuilding the source on file save, re-running the tests, and reloading the web page – all automatically. If your dev tools & frameworks can’t support this kind of workflow then it is time to modernize. I’ve used a lot of Java frameworks over the years and Play Framework definitely has the most mature and rapid change cycle support. But if you can’t switch to Play, consider JRebel with a continuous testing plugin for Maven or Gradle.

Monolithic Releases Suck

Unless you work for NASA there is no reason to have release cycles longer than two weeks. It is likely that the reason you have such long release cycles is because a manager somewhere is trying to reduce risk. That manager probably used to do waterfall and then switched to Agile but never changed the actual delivery model to one that is also more Agile. So you have your short sprints but the code doesn’t reach production for months because it would be too risky to release more often. The truth is that Continuous Delivery (CD) actually lowers the cumulative risk of releases. No matter how often you release, things will sometimes break. But with small and more frequent releases fixing that breakage is much easier. When a monolithic release goes south, there goes your weekend, week, or sometimes month. Besides… Releasing feels good. Why not do it all the time?

Moving to Continuous Delivery has a lot of parts and can take years to fully embrace (unless, like many startups today, you started with CD). Here are some of the most crucial elements of CD that you can implement one at a time:

  • Friction-less App Provisioning & Deployment: Every developer should be able to instantly provision & deploy a new app.
  • Microservices: Logically group services/apps into independent deployables. This makes it easy for teams to move forward at their own pace.
  • Rollbacks: Make rolling back to a previous version of the app as simple as flipping a switch. There is an obvious deployment side to this but there is also some policy that usually needs to go into place around schema changes.
  • Decoupled Schema & Code Changes: When schema changes and code changes depend on each other rollbacks are really hard. Decoupling the two isolates risk and makes it possible to go back to a previous version of an app without having to also figure out what schema changes need to be made at the same time.
  • Immutable Deployments: Knowing the correlation between what is deployed and an exact point-in-time in your SCM is essential to troubleshooting problems. If you ssh into a server and change something on a deployed system you significantly reduce your ability to reproduce and understand the problem.
  • Zero Intervention Deployments: The environment you are deploying to should own the app’s config. If you have to edit files or perform other manual steps post-deployment then your process is brittle. Deployment should be no more than copying a tested artifact to a server and starting its process.
  • Automate Deployment: Provisioning virtual servers, adding & removing servers behind load balancers, auto-starting server processes, and restarting dead processes should be automated.
  • Disposable Servers: Don’t let the Chaos Monkey cause chaos. Servers die. Prepare for it by having a stateless architecture and ephemeral disks. Put persistent state in external, persistent data stores.
  • Central Logging Service: Don’t use the local disk for logs because it prevents disposability and makes it really hard to search across multiple servers.
  • Monitor & Notify: Setup automated health checks, performance monitoring, and log monitoring. Know before your users when something goes wrong.

There are a ton of details to these that I won’t go into here. If you’d like to see me expand on any of these in a future blog, let me know in the comments.

Sticky Sessions and Server State Suck

Sticky sessions and server state are among the best ways to kill your performance and resilience. Session state (in the traditional Servlet sense) makes it really hard to do Continuous Delivery and scale horizontally. If you want a session cache use a real cache system – something that was designed to deal with multi-node use and failure, e.g. Memcache, Ehcache, etc. In-memory caches are fast but hard to invalidate in multi-node environments and are not durable across restarts – they have their place, like calculated / derived properties where invalidation and recalculation are easy.

Web apps should move state to the edges. UI-related state should live on the client (e.g. cookies, local storage, and in-memory) and in external data stores (e.g. SQL/NoSQL databases, Memcache stores, and distributed cache clusters). Keep those REST services 100% stateless or else the state monster will literally eat you in your sleep.

Useless Blocking Sucks

In traditional web apps a request comes in, fetches some data from a database, creates a webpage, and then returns it. In this model it was ok to give that full roundtrip a single thread that remained blocked for the entire duration of the request. In the modern world requests often stay open beyond the life of a single database call because either it is a push connection or because it is composing multiple back-end services together. This new world requires a different model for how the threads / blocking is managed. The modern model for dealing with this is called async & non-blocking or Reactive.

Most of the traditional Java networking libraries (Servlets, JDBC, Apache HTTP, etc) are blocking. So even if a connection is idle (like when a database connection is waiting for the query to return), a thread is still allocated. The blocking model limits parallelism, horizontal scalability, and the number of concurrent push connections. The Reactive model only uses threads when they are actively doing something. Ideally your application is Reactive all the way down to the underlying network events. When a request comes in it gets a thread, then if that request needs to get data from another system the thread handling the request can be returned to the pool while waiting for the data. Once the data has arrived a thread can be reallocated to the request so the response can be returned to the requestor.
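This allocate-a-thread-only-when-working model is easy to see with Java 8’s CompletableFuture, which ships with plain Java. Here is a self-contained sketch (the names are made up for illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ReactiveSketch {
    // Simulated non-blocking database call: it returns a future immediately
    // and a pool thread completes it later, so the caller's thread is free.
    static CompletableFuture<List<String>> findAllBars() {
        return CompletableFuture.supplyAsync(() -> Arrays.asList("foo", "bar"));
    }

    public static void main(String[] args) {
        // Compose the eventual "web response" without blocking on the
        // "database" call; the callback runs when the data arrives.
        CompletableFuture<String> response =
                findAllBars().thenApply(bars -> String.join(", ", bars));

        // Block only at the outermost edge, just to demonstrate the result.
        System.out.println(response.join());
    }
}
```

The thenApply callback is the Reactive handoff described above: no thread sits idle between the request and the data arriving.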

Java has a great foundation for Reactive with Java NIO. But unfortunately most of the traditional Java web frameworks, database drivers, and HTTP clients do not use it. Luckily a whole new landscape of Reactive libraries and frameworks is emerging that is built on NIO and Netty (a great NIO library). For example, Play Framework is a fully Reactive web framework which many people use with Reactive database libraries like Reactive Mongo.

To be Reactive means that you also need to have a construct for being asynchronous. The traditional way to do this in Java is with anonymous inner classes, like:

public static F.Promise<Result> index() {
    F.Promise<WS.Response> jw = WS.url("").get();
    return jw.map(new F.Function<WS.Response, Result>() {
        public Result apply(WS.Response response) throws Throwable {
            return ok(response.getBody());
        }
    });
}

Java 8 provides a much more concise syntax for asynchronous operations with Lambdas. The same Reactive request handler above with Java 8 & Lambdas is:

public static F.Promise<Result> foo() {
    F.Promise<WS.Response> jw = WS.url("").get();
    return jw.map(response -> ok(response.getBody()));
}

If your app does things in parallel and/or handles push connections then you really should be going Reactive. Check out my Building Reactive Apps presentation if you want to dive in deeper on this.

The Java Language Kinda Sucks

The Java Language has a lot of great aspects but due to its massive adoption and desire from its enterprise users for very gradual change, the language is showing its age. Luckily there are a ton of other options that run on the JVM. Here is a quick rundown of the most interesting options and my opinions on some positives and negatives:

  • Scala
    • Great
      • Likely the most widely adopted alternative language on the JVM
      • Fits well with Reactive and Big Data needs
      • Mature ecosystem for libraries, frameworks, support, etc
    • Good
      • Java interoperability is great but often not useful since the Java libraries aren’t built for Reactive and Scala idioms
      • Modern programming concepts with very powerful & flexible language
    • Bad
      • Language flexibility leads to significantly different ways of writing Scala sacrificing universal readability
      • Huge learning curve due to large number of features
  • Groovy
    • Great
      • Large ecosystem for libraries, frameworks, support, etc
      • Simple language with a few very useful features
    • Good
      • Interoperability with Java works and feels pretty natural
    • Bad
      • I prefer good type inference (like Scala) over Groovy’s dynamic and optional static typing
  • Clojure
    • Great
      • The elegance of a Lisp on the JVM
      • Mature ecosystem for libraries, frameworks, support, etc
    • Good
      • JavaScript target seems good but isn’t core
    • Bad
      • The lack of some OO constructs makes managing a large code base challenging
      • Dynamic typing
  • Kotlin
    • Great
      • Interoperability with Java seems natural
      • JavaScript target is first class
    • Good
      • IDE and build tooling seems decent but immature
      • Modern language features that aren’t overwhelming
    • Bad
      • Uncertain where it will be in 5 years – will it catch on and gain critical mass?

Starting a new / greenfield project can be an easy time to try a new language but most enterprises don’t do that often. For existing projects there are some frameworks and build tools that support mixing existing Java with alternative JVM languages better than others. Play Framework / sbt is the one I’ve used for this but I’m sure there are others that do this well. At the very least, writing just your new tests in an alternative JVM language can be a great place to start experimenting.

Java 8’s Lambdas are a nice upgrade to the Java language. Lambdas help reduce boilerplate and fit well with the Reactive model. But there are still a lot of other areas where the language is lacking. Now that I know Scala there are a few things I couldn’t live without that are still absent from Java: Type Inference, Pattern Matching, Case Classes, String Interpolation, and Immutability. It is also very nice to have Option and concurrency constructs baked into the core and library ecosystem.

Reality Check

If you are in a typical enterprise then maybe you are lucky and already doing most of this. As shocking as it may seem for some of us, this is really rare. Most of you are probably reading this and feeling sad because moving the enterprise monolith towards a lot of this stuff is really hard. As physics tells us, it is much harder to move large things than small things. But don’t lose heart! I’ve seen a number of stodgy enterprises slowly creep out of Java the Sucky Parts. Walmart Canada recently switched to Play Framework! My recommendation is to pick one of these sucky things and make it your goal to fix it over the next year. Often this requires buy-in from management which can be tough. Here is my suggestion… Spend a couple evenings or weekends working on implementing one of these items. Then show your manager what you did in your own time (that will convey how much you care) and then let them take the credit for the amazing new thing they thought of. Works every time. And if it doesn’t then there are tons of well paying startups who are already doing all of this stuff.

One last thing… Go read The Twelve-Factor App – it was the inspiration for a lot of this content.