Atlanta Presentation: Practicing Continuous Delivery

Tomorrow I’ll be presenting Practicing Continuous Delivery on the Cloud at the Atlanta No Fluff Just Stuff conference. Here is the session description:

This session will teach you best practices and patterns for doing Continuous Delivery / Continuous Deployment in Cloud environments. You will learn how to handle schema migrations, maintain dev/prod parity, and manage configuration and scaling. This session will use Heroku as an example platform but the patterns could be implemented anywhere.

This has become my favorite session to present. So if you are going to be at Atlanta NFJS, I hope to see you there!

Dreamforce 2011

I’m very excited to be presenting at Dreamforce (salesforce.com’s annual conference) this year! On Thursday, September 1, from 1:15 pm to 2:15 pm I will be presenting:

Developing Java Cloud Apps
The cloud makes it easy to deploy highly scalable apps in an instant. This session will walk you through the steps to build your first Java app for the cloud. You’ll also learn best practices for building mission-critical and horizontally scalable Java cloud apps.

Then on Friday, September 2, from 10:00 am to 11:00 am I will be hosting a panel discussion:

Fireside Chat: Java on the Cloud
Come join the Java on the cloud product managers, architects, and experts for a casual, unscripted chat to find out how Java developers can best take advantage of the cloud. The session will be a mix of preselected and audience-provided questions. So bring all your tough, interesting, and quirky questions to this Fireside Chat.

I hope to see you there!

Architectural Evolution: From Middleware to The Cloud

You’ve heard it said that “all things old are new again.” That statement can certainly be applied to the current Cloud hype. But each time the old becomes new, it gets a bit better because of what was learned the last time around. If we look back ten years at enterprise application development in Java, things were quite different than they are today. EJB was “the way” to build scalable systems from a vast abundance of components. But things didn’t work out as well as the vendors planned.

EJB Component Architecture

I remember back in the early days of enterprise Java everyone was talking about “Components.” Application complexity would be greatly reduced because there would be components for everything! Need to connect your app to Exchange? Well, there’s a component for that. Does your app need to send email? No problem, there are twenty components for that! Component marketplaces flourished with VC funding galore.

The official way to build reusable Java components became standardized as Enterprise JavaBeans (EJB). These “beans” could be accessed either locally or remotely! Vendors led us to believe this was the panacea of Lego-style application development: just grab pieces from wherever and hook them together. Hooking the components together required a heavyweight “Middleware” server. Here is what Monolithic Middleware with EJBs looks like:

But the EJB Component Architecture didn’t work. Billions of dollars were spent on components and the middleware to tie them all together. And now I bet you can’t find a single person who doesn’t regret going that route. Why? Three primary reasons…

  1. The programming model was too hard. The EJB programming model consisted of too much boilerplate code (“solved” through code-gen tools like XDoclet). EJBs also required configuration, which was often middleware-server-specific. The EJB Component Architecture created too many layers of indirection (Core J2EE Patterns anyone?).
  2. Scalability was too hard. EJBs can either run inside your container (using what is called a “Local Interface”) or somewhere else (a “Remote Interface”). Using Local Interfaces is fast but causes the middleware to run into memory limits, and scaling bloated app servers is challenging. Using Remote Interfaces leads to massive serialization and routing overhead, and whatever is on the remote end of the wire is still a pain to scale.
  3. Deployment was too hard. Remember the days when starting up an app server / middleware container took minutes not seconds?

If you need further proof that the middleware model didn’t work then just try to name one place you can still go to buy an EJB component today. Obviously we needed another way to compose the parts of an application.

POJO Component Architecture

SpringSource deserves a lot of credit for pulling us out of the EJB muck. They created a model where the application pieces are Plain Old Java Objects (POJOs) injected into an application. This led to better testability, much easier deployment, and a much better programming model. Essentially the revolution of Spring was to make all those app pieces injectable dependencies. This was a huge step forward. But there are still some limitations with this model that are currently being addressed by the next revolution. The three primary challenges with the POJO Component Architecture are:

  1. Isolation is too hard. It is now very easy to throw a bunch of components together into a single Web application ARchive (WAR). But at some point, all of these pieces stacked on top of each other make our application brittle and difficult to piece together. What do you do when the version of Hibernate you want to use requires a different version of an Apache Commons library than the version of XFire that you want to use? Or when two libraries that your app needs require conflicting dependencies? Sometimes isolating the pieces of an application is actually simpler than injecting them. And unfortunately, with POJOs you may not be able to easily switch from using a “Local Interface” to an external “Remote Interface” like you can with EJBs.
  2. Polyglot is too hard. The POJO components we use today in our systems are not inherently supportive of a Polyglot world where different parts of a system may be built using different technologies. Suppose your system has a rules engine and you want to access it from both a Java-based application and a Ruby-based application. Today the only way to do that is to proxy that component and expose it through an easily consumed serialization protocol (likely XML or JSON over HTTP). This will likely add unnecessary complexity to your system. When the high-level functional pieces of a system are technology-specific, the entire system may be forced to use that technology, or those pieces may exist multiple times to support the Polyglot nature of today’s systems.
  3. Scaling is still too hard. As we continue to stack more pieces on top of each other it becomes harder to stick with simple, lightweight share-nothing architectures where each piece is individually horizontally scalable.

Cloud Component Architecture

The emerging solution to the challenges we have faced with the EJB and POJO Component Architectures is the Cloud Component Architecture. Instead of bundling components for things like search indexing, distributed caching, SMTP, and NoSQL data storage into your application, those high-level functions can be consumed as Cloud Components. There are already numerous vendors providing “Component as a Service” products like MongoDB, Redis, CouchDB, Lucene Search, SMTP, and Memcache.
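
To make this concrete, here is a rough, hypothetical sketch of consuming a hosted key-value store as a Cloud Component from Node.js. It assumes the redis npm client and a provider-supplied REDIS_URL environment variable; both names are illustrative and not tied to any specific vendor.

// Hypothetical sketch: a hosted Redis used as a "Component as a Service".
// The redis client library and the REDIS_URL variable are assumptions;
// the point is the app only knows a connection URL, not a server it owns.
var redis = require('redis');

// Connection details come from the environment, not from the app.
var client = redis.createClient(process.env.REDIS_URL);

client.set('greeting', 'hello, cloud component');
client.get('greeting', function(err, value) {
  if (err) {
    console.error(err);
  } else {
    console.log(value); // -> hello, cloud component
  }
});

The app never installs, patches, or scales Redis itself; swapping providers (or pointing at a local instance for development) is just a change to one environment variable.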

SMTP / outbound email is a simple example where the Cloud Component Architecture makes a lot of sense. With the EJB and POJO Component Architectures I’d find an SMTP component that simply sends email, then configure my server so its messages wouldn’t be flagged as spam. I’d also need to deal with constant blacklisting challenges and a larger management surface. With a Cloud Component Architecture I could simply sign up with one of the SMTP as a Service providers like AuthSMTP or SendGrid and just use the Component as a Service.
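
As a minimal, hypothetical sketch (not any provider’s actual API), here is what sending mail through an SMTP-as-a-Service provider could look like from Node.js. It assumes the nodemailer package; the SMTP_HOST / SMTP_USERNAME / SMTP_PASSWORD environment variables are made-up names standing in for whatever credentials the provider gives you.

// Hypothetical sketch: outbound email as a Cloud Component.
// nodemailer and the SMTP_* variable names are assumptions for illustration.
var nodemailer = require('nodemailer');

var transport = nodemailer.createTransport({
  host: process.env.SMTP_HOST,        // the provider's SMTP endpoint
  port: 587,
  auth: {
    user: process.env.SMTP_USERNAME,
    pass: process.env.SMTP_PASSWORD
  }
});

transport.sendMail({
  from: 'app@example.com',
  to: 'user@example.com',
  subject: 'Hello from a Cloud Component',
  text: 'No mail server to run, patch, or un-blacklist.'
}, function(err) {
  if (err) console.error(err);
});

The application code stays tiny and the entire deliverability problem belongs to the provider.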

Here is what the new Cloud Component Architecture for application composition looks like:

The top six benefits of the Cloud Component Architecture are:

  1. Simple scalability. By making each functional piece of an application an independent and lightweight service, each piece can be horizontally scaled without impacting the overall application architecture or configuration. If you choose to use a vendor’s Component as a Service then the vendor handles the scalability of those pieces, and you only need to scale a very thin web layer. Composing Cloud Components also makes it easier to stick with a share-nothing architecture, which is much easier to scale than traditional architectures.
  2. Rapid composition. Cloud Components are flourishing! Most of the basic building blocks that applications need are now provided “as a Service” by vendors who maintain and enhance them. This is a much more erosion-resistant way to assemble applications than relying on the abandonware that is prevalent among many Java components. Many of the emerging Cloud Components also provide client libraries for multiple platforms and RESTful APIs to support easy composition in Polyglot systems.
  3. Reduced management surface. With Cloud Components you can reduce the number of pieces you must manage down to only the stuff that is unique to your app. Each Cloud Component you add doesn’t enlarge the management surface like it does in typical component models where you own the implementation of the component.
  4. Simple Deployment. One of the biggest benefits of using the Cloud is the ease of deployment. Partitioning the functional pieces of an application makes it thinner and easier to deploy. With Cloud Components you can also set up development and staging instances that make it easy to simulate the production environment. Then moving from one environment to another is simply a matter of configuration (see the sketch after this list).
  5. Better Security. In most application architectures today there is one layer of security. This would be like a bank without a vault. There are a few ways into the bank that are wrapped with security (doors with locks) but as soon as someone has found a way in, they have access to everything. With Cloud Components security can be more easily distributed to provide multiple layers of security.
  6. Manageable costs. With Cloud Components your costs can scale with your usage. This means it’s easy to get started and grow rather than make large up-front investments.
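
For benefit 4, here is a minimal sketch of what “simply a matter of configuration” can look like in a Node.js app. The variable names (DATABASE_URL, SEARCH_URL) and the local fallback values are purely illustrative; the point is that the same code runs in development, staging, and production, and only the environment changes.

var databaseUrl = process.env.DATABASE_URL || 'postgres://localhost/dev';
var searchUrl = process.env.SEARCH_URL || 'http://localhost:9200';

// The app composes itself from whatever the environment provides.
console.log('Using database: ' + databaseUrl);
console.log('Using search service: ' + searchUrl);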

The Cloud Component Architecture may seem similar in ways to the old EJB and POJO Component Architectures because it is similar! The wheel has not been reinvented, just improved. The dream of Lego-style application assembly is now being realized because we’ve come full circle on some old ideas from twenty years ago (CORBA anyone?). This time around those ideas are reality thanks to the evolution of many independent pieces like REST, Polyglot, and the Share-Nothing pattern. Cloud Components are the foundation of a new era of application development. My only question is… How long before we see the UDDI idea again? ;)

Getting Started with Node.js on The Cloud

In my new job at salesforce.com I’m incredibly excited about getting into Heroku, a Platform as a Service provider / Cloud Application Platform. In a future blog post I’ll provide more details on what Heroku is and how it works. But if you are like me, the first thing you want to do when learning a new technology is to take it for a test drive. I decided to take my Heroku test drive using the recently announced Node.js support. I’m new to Node.js, but at least I know JavaScript. Heroku also offers Ruby / Rails support but I don’t know Ruby – yet. So let me walk you through the steps I took (and that you can follow) to get started with Node.js on the Heroku Cloud.

Step 1) Sign up for Heroku

Step 2) Install the Heroku command line client

Step 3) Login to Heroku via the command line:

heroku auth:login

Step 4) Install git

Step 5) Install Node.js

Step 6) Create a Node.js app

I started by building a very simple “hello, world” Node.js app. In a new project directory I created two new files. First is the package.json file which specifies the app metadata and dependencies:

{
  "name": "hellonode",
  "version": "0.0.1",
  "dependencies": {
    "express": "2.5.11"
  },
  "engines": {
    "node": "0.8.4",
    "npm": "1.1.45"
  }
}

Then the actual app itself contained in a file named web.js:

var express = require('express');
 
var app = express.createServer(express.logger());
 
app.get('/', function(request, response) {
  response.send('hello, world');
});
 
var port = process.env.PORT || 3000;
console.log("Listening on " + port);
 
app.listen(port);

This app simply maps requests to “/” to a function that sends a simple string back in the response. You will notice that the port to listen on is first read from an environment variable, falling back to port 3000 if the variable isn’t set. This is important because Heroku tells our app which port to run on by setting that environment variable.

Step 7) Install the app dependencies with npm:

npm install .

This uses the package.json file to figure out what dependencies the app needs and then copies them into a “node_modules” directory.

Step 8) Try to run the app locally:

node web.js

You should see “Listening on 3000” to indicate that the Node.js app is running! Try to open it in your browser:
http://localhost:3000/

Hopefully you will see “hello, world”.

Step 9) Heroku uses a “Procfile” to determine how to actually run your app. Here I will just use a Procfile to tell Heroku what to run in the “web” process. But the Procfile is really the foundation for telling Heroku how to run your stuff. I won’t go into detail here since Adam Wiggins has done a great blog post about the purpose and use of a Procfile. Create a file named “Procfile” in the project directory with the following contents:

web: node web.js

This will instruct Heroku to run the web app using the node command and the web.js file as the main app. Heroku can also run workers (non-web apps) but for now we will just deal with web processes.
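
As a purely hypothetical illustration (this example app has no worker.js), a Procfile that declares both a web process and a background worker would simply list both process types:

web: node web.js
worker: node worker.js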

Step 10) In order to send the app to Heroku the files must be in a local git repository. To create the local git repo, run the following inside of your project directory:

git init

Now add the three files you’ve created to the git repo:

git add package.json Procfile web.js

Note: Make sure you don’t add the node_modules directory to the git repo! You can have git ignore it by creating a .gitignore file containing just “node_modules”.
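
For example, the entire .gitignore file can be a single line:

node_modules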

Commit the files to the local repo:

git commit -m "initial commit"

Step 11) Create an app on Heroku:

heroku create

A default / random app name is automatically assigned to your app.

Step 12) Now you can push your app to Heroku! Just run:

git push heroku master

You should see something like:

Counting objects: 8, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 729 bytes, done.
Total 6 (delta 2), reused 0 (delta 0)
 
-----> Heroku receiving push
-----> Node.js app detected
-----> Resolving engine versions
       Using Node.js version: 0.8.4
       Using npm version: 1.1.41
-----> Fetching Node.js binaries
-----> Vendoring node into slug
-----> Installing dependencies with npm
       npm WARN package.json hellonode@0.0.1 No README.md file found!
       npm http GET https://registry.npmjs.org/express/2.5.11
       npm http 200 https://registry.npmjs.org/express/2.5.11
       npm http GET https://registry.npmjs.org/express/-/express-2.5.11.tgz
       npm http 200 https://registry.npmjs.org/express/-/express-2.5.11.tgz
       npm http GET https://registry.npmjs.org/connect
       npm http GET https://registry.npmjs.org/mime/1.2.4
       npm http GET https://registry.npmjs.org/qs
       npm http GET https://registry.npmjs.org/mkdirp/0.3.0
       npm http 200 https://registry.npmjs.org/qs
       npm http 200 https://registry.npmjs.org/mkdirp/0.3.0
       npm http 200 https://registry.npmjs.org/mime/1.2.4
       npm http 200 https://registry.npmjs.org/connect
       npm http GET https://registry.npmjs.org/qs/-/qs-0.4.2.tgz
       npm http GET https://registry.npmjs.org/connect/-/connect-1.9.2.tgz
       npm http GET https://registry.npmjs.org/mkdirp/-/mkdirp-0.3.0.tgz
       npm http GET https://registry.npmjs.org/mime/-/mime-1.2.4.tgz
       npm http 200 https://registry.npmjs.org/qs/-/qs-0.4.2.tgz
       npm http 200 https://registry.npmjs.org/mime/-/mime-1.2.4.tgz
       npm http 200 https://registry.npmjs.org/connect/-/connect-1.9.2.tgz
       npm http 200 https://registry.npmjs.org/mkdirp/-/mkdirp-0.3.0.tgz
       npm WARN package.json connect@1.9.2 No README.md file found!
       npm http GET https://registry.npmjs.org/formidable
       npm http 200 https://registry.npmjs.org/formidable
       npm http GET https://registry.npmjs.org/formidable/-/formidable-1.0.11.tgz
       npm http 200 https://registry.npmjs.org/formidable/-/formidable-1.0.11.tgz
       express@2.5.11 node_modules/express
       ├── qs@0.4.2
       ├── mime@1.2.4
       ├── mkdirp@0.3.0
       └── connect@1.9.2 (formidable@1.0.11)
       npm WARN package.json hellonode@0.0.1 No README.md file found!
       npm WARN package.json connect@1.9.2 No README.md file found!
       express@2.5.11 /tmp/build_h2gd35fmkgzm/node_modules/express
       connect@1.9.2 /tmp/build_h2gd35fmkgzm/node_modules/express/node_modules/connect
       qs@0.4.2 /tmp/build_h2gd35fmkgzm/node_modules/express/node_modules/qs
       mime@1.2.4 /tmp/build_h2gd35fmkgzm/node_modules/express/node_modules/mime
       formidable@1.0.11 /tmp/build_h2gd35fmkgzm/node_modules/express/node_modules/connect/node_modules/formidable
       mkdirp@0.3.0 /tmp/build_h2gd35fmkgzm/node_modules/express/node_modules/mkdirp
       Dependencies installed
-----> Discovering process types
       Procfile declares types -> web
-----> Compiled slug size is 4.0MB
-----> Launching... done, v5
       http://fast-plateau-8288.herokuapp.com deployed to Heroku
 
To git@heroku.com:fast-plateau-8288.git
   26b4efc..6e333a2  master -> master

Now you should be able to connect to your app in the browser! An easy way to open the app in your browser is to run:

heroku open

To see your app logs (provisioning, management, scaling, and system out messages) run:

heroku logs

To see your app processes run:

heroku ps

And best of all, if you want to add more Dynos* just run:

heroku scale web=2

* Dynos are the isolated containers that run your web and other processes. They are managed by the Heroku Dyno Manifold. Learn more about Dynos.

That increases the number of Dynos running the app from one to two. Heroku will automatically distribute the load across those two Dynos, detect dead Dynos, restart them, and so on. That is seriously easy app scalability!

There is much more to Heroku and I’ll be continuing to write about it here. But in the meantime, check out all of the great docs in the Heroku Dev Center. And please let me know if you have any questions or problems. Thanks!

New Adventures on The Cloud

When I started doing professional software development almost 15 years ago, I was focused on the server-side. I started with Perl / CGI web apps – some of which are still in production today. Then I dove into Java web development with Java Web Server 1.0, Struts, JBoss, Tomcat, and many other game-changing technologies.

In 2004 I started getting into Macromedia Flex. I was amazed at how easy it was to retrieve and nicely render data from a Java back-end. In 2005 I began evangelizing Flex + Java. Following the acquisition of Macromedia by Adobe, Flex has really flourished. Adobe Flex is now the dominant RIA technology and it has been so fun to be a part of that!

Over the past seven years I’ve had so many great adventures on the client-side, but when a new opportunity on the server-side came my way I couldn’t pass it up. Starting June 6th I’ll be stepping back into the Java world to evangelize the Cloud for Salesforce.com. I’m excited to dive into some of the emerging Java/JVM technologies like Scala, Play Framework, and Clojure!

This change is certainly bittersweet for me. Flex continues to make app development easier. With things like Android support in Flex 4.5 and iOS support coming soon, the future of Flex is bright. I’ve been very privileged to be a part of the Flex community for the past seven years. This group of passionate and creative developers has taught me so many new things. Learning how to do runtime bytecode modification and co-creating Mixing Loom have certainly been among the highlights!

As I begin this new adventure on the Cloud I’m excited about what lies ahead for Flex and for the Cloud. Both continue to help us developers build better software. I’ve hopefully helped you learn how to build great UIs with Flex. Now I will help you learn how to build solid and scalable back-ends on the Cloud!

Dreamforce 2010 and Cloudstock

I’ll be speaking at Dreamforce again this year! I have two sessions that are going to be super fun! First is a panel called “Cloud Mobility: Taking Critical Business Functions With You, Whenever, Wherever” on Wednesday at 3:15 PM. Then on Thursday at 11:00 AM I’ll be co-presenting a session on “Building Rich User Interfaces with Adobe Flash Builder for Force.com” with Markus Spohn from Salesforce.com.

Preceding Dreamforce is the Cloudstock event where you can see some other great presentations related to Flex and RIAs. Lee Brimelow will be doing a presentation on “Flex and Flash Platform on the Cloud” that is guaranteed to entertain and educate. There will also be presentations from Nigel Pegg on Real-time Apps and Keith Sutton on “Adobe’s Cloud Offerings for Developers and Enterprises”.

It’s going to be a great week! I hope to see you at Dreamforce 2010!

Webinar Tomorrow: Building Client/Cloud Apps with Flex and Force.com

I will be co-presenting a free webinar tomorrow (September 28th, 2010) on building Client/Cloud Apps with Flex and Force.com. There are two times you can choose from:

  • September 28, 2010 | 6:00 a.m. PDT | 2:00 p.m. GMT | 6:30 p.m. IST
  • September 28, 2010 | 10:00 a.m. PDT

This session will walk through how you can get started building applications for the web, desktop, and mobile devices using Flex and Force.com. Salesforce.com and Adobe have worked together on an extension to the Flash Builder tool which enables developers to quickly build applications on top of the Force.com Cloud platform. I hope you can join me tomorrow! Sign up now!

Bay Area Event: Building RIAs using Flash Builder for Force.com

Salesforce.com is putting on a great event on August 25 in San Mateo, California where you can learn about how to build RIAs on the Cloud with Flash Builder for Force.com. This will be a great opportunity to meet the team that built the tool and learn how to use it! If you can’t make it then check out the article I recently published, “Building Client / Cloud Apps with Flash Builder for Force.com”. But if you are in the Bay Area and want to get up to speed quickly on building Client / Cloud apps then Register Now!