Introducing Gulp Launcher

Many developers already have the Node.js toolchain installed on their machines, but when I lead workshops there are always a few who don't. Installing the Node build toolchain can take quite a bit of time for new users (especially on Windows). To simplify getting the gulp toolchain set up, Bruce Eckel and I created gulp launcher. On a fresh system you can run gulp with only one download and one command:

$ gulp
Downloading jq 1.3 for x86_64 MAC
Downloading Node 0.10.33 for x86_64 MAC
... npm install output ...
[09:13:25] Using gulpfile ~/projects/gulp-starter/gulpfile.js
[09:13:25] Starting 'default'...
[09:13:25] Finished 'default' after 7.28 μs

On Windows, users can just double-click gulp.exe and it will run the default task of the gulp build.

If you want to try it out, download gulp (Linux, Mac, and Cygwin) or gulp.exe (Windows) from the latest release. Or, if you want a ready-to-go starter gulp project, grab the gulp-starter project.

I’ve set up automated tests for gulp launcher on Travis CI and AppVeyor, but there isn’t a lot of real-world testing yet, so if you discover any problems please file an issue. Let me know how it goes. Thanks!

Java Doesn’t Suck – You’re Just Using it Wrong

I’ve been building enterprise Java web apps since servlets were created. In that time the Java ecosystem has changed a lot, but sadly many enterprise Java developers are stuck in some very painful and inefficient ways of doing things. In my travels I continue to see Java The Sucky Parts – but it doesn’t have to be that way. It is time for enterprises to move past the sucky ways they use the Java platform. Here is a list of the suckiest parts of Java that I see most often, with some recommendations for how to move past them.

10 Page Wikis to Setup Dev Environments Suck

Setting up a new development environment should be no more than 3 steps:

  1. Install the JDK
  2. Clone / checkout the SCM repo
  3. Run the build / start the app

Seriously. It can and should be that easy. Modern build tools like Gradle and sbt have launchers that you can drop right into your root source tree so that new developers can just run ./gradlew or ./activator (for sbt). The build should have everything needed to get the app up and running – including the server. The easiest way to do this is to go containerless with things like Play Framework and Dropwizard (a minimal sketch of the idea follows), but if you are stuck in a container then consider things like Webapp Runner. One of the many problems with the container approach is the very high probability of running into “it works on my machine” syndrome, because environments easily diverge when a critical dependency exists outside the realm of the build and SCM. How many wikis keep the server.xml changes up-to-date? Wiki-based configuration is a great way to cause pain.
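
To make the containerless idea concrete, here is a minimal sketch using embedded Jetty (Jetty is my choice for illustration, assuming the jetty-server dependency is on the classpath – Play Framework and Dropwizard embody the same approach). The app owns its HTTP server, so step 3 really is just running the build:

import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

public class Main {
    public static void main(String[] args) throws Exception {
        // The port comes from the environment so any machine (or PaaS) can run it
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        Server server = new Server(port);
        server.setHandler(new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest, HttpServletRequest request,
                               HttpServletResponse response) throws IOException {
                response.setContentType("text/plain");
                response.getWriter().println("hello, world");
                baseRequest.setHandled(true);
            }
        });
        server.start(); // the app owns its server – nothing to install beyond the JDK
        server.join();
    }
}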

What about service dependencies like databases and external web services – don’t developers need to set those things up and configure them? Not if your build can do it for them. Smart build systems should be able to provision the required services either locally or on the cloud. Docker has emerged as a great way to manage local environments that are a replica of the production system.

If your app needs a relational database then use an in-memory database like HSQLDB or cloud services like Heroku Postgres, RDS, Redis Labs, etc. One risk with most in-memory databases, however, is that they differ from what is used in production. JPA / Hibernate try to hide this, but sometimes bugs crop up due to subtle differences. So it is best to mimic the production services for developers, even down to the version of the database. Java-based databases like Neo4j work the same in-memory and out-of-process, minimizing risk while also making it easy to set up new development environments. External web services should either have a sandbox host that developers can use, or they should be mocked.
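
For example, here is a minimal sketch of the in-memory option (assuming the hsqldb jar is on the classpath) – great for a clone-and-run first experience, with the caveat above that it won’t perfectly match production:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InMemoryDbSmokeTest {
    public static void main(String[] args) throws Exception {
        // jdbc:hsqldb:mem creates a throwaway in-memory database – nothing to install
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:devdb", "SA", "");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE contact (id INT PRIMARY KEY, name VARCHAR(100))");
            stmt.execute("INSERT INTO contact VALUES (1, 'Foo Bar')");
            try (ResultSet rs = stmt.executeQuery("SELECT name FROM contact")) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }
    }
}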

Incongruent Deployment Environments Suck

To minimize risk when promoting builds from dev to staging to production, the only thing that should change between each environment is configuration. A deployable artifact should not change as it moves between environments. Continuous Integration systems should run the same build and tests that developers run. Have the CI system do automatic deployment to a testing or staging environment. A proper release pipeline makes it easy to promote a deployable artifact from staging to production.

I used to maintain a Java web app where the deployment process went like this:

  1. Build a WAR file
  2. SCP the WAR file to a server
  3. SSH to the server
  4. Extract the WAR file
  5. Edit the web.xml file so it contains new database connection info
  6. Restart the server

That setup isn’t the worst I’ve seen, but it was always risky. It would have been much better to use environment variables so that the only thing that changed between environments was those variables. Environment variables can be read by the app at startup so the artifact stays exactly the same. In this setup reproducing an environment is super easy – just set the env vars (a sketch follows).
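
Here is a minimal sketch of the idea – DATABASE_URL is the Heroku-style convention, but the variable names are up to you:

import java.net.URI;

public class DatabaseConfig {
    public static void main(String[] args) {
        // Only this variable differs between dev, staging, and production;
        // the deployable artifact itself never changes
        String dbUrl = System.getenv("DATABASE_URL"); // e.g. postgres://user:pass@host:5432/name
        if (dbUrl == null) {
            dbUrl = "postgres://localhost:5432/myapp_dev"; // sane local default
        }
        URI dbUri = URI.create(dbUrl);
        System.out.println("Connecting to " + dbUri.getHost() + ":" + dbUri.getPort() + dbUri.getPath());
    }
}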

Servers That Take More Than 30 Seconds to Start Suck

For developer productivity, and so that scaling up can happen instantly, servers should start up quickly. If your server takes more than 30 seconds to start, break the app into smaller pieces by adopting a Microservices architecture. Going containerless or having a one-app-per-container rule can really help reduce startup time. If your container takes a long time to start you should ask yourself: What are all those container services there for? Can the services be broken out into separate apps? Can they be removed or turned off?

If you need some ammunition to prove to your management that your startup times are killing your team’s productivity, use the stopwatch on your phone to count the total minutes per day wasted waiting for the app to start. Bonus points if you calculate how much wasted money that translates to for yourself, your team, and your org (a back-of-the-envelope sketch follows). Double bonus points if you show a chart that defeats the “we spent a lot of money on this app server” sunk cost argument.
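
Here is that back-of-the-envelope calculation – every number in it is hypothetical, so plug in your own measurements:

public class RestartCost {
    public static void main(String[] args) {
        // All numbers below are hypothetical – plug in your own measurements
        double minutesPerRestart = 2.5;
        int restartsPerDevPerDay = 15;
        int teamSize = 10;
        double hourlyRate = 75.0; // fully loaded cost per developer hour

        double hoursPerDay = minutesPerRestart * restartsPerDevPerDay * teamSize / 60.0;
        double costPerYear = hoursPerDay * hourlyRate * 230; // ~230 working days per year

        System.out.printf("Wasted: %.1f hours/day, ~$%.0f/year%n", hoursPerDay, costPerYear);
    }
}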

Manually Managed Dependencies Suck

It sucks if any of your library dependencies aren’t managed by a build tool. Manually copying Jar files into WEB-INF/lib is horribly error-prone. It makes it hard to correlate files to versions. Transitive dependencies surface only as ClassNotFoundException errors. Dependencies are brittle. Knowing the libraries’ licenses is hard. Getting your IDE to pull the sources and JavaDocs for the libraries is tough.

So first… Use a build tool. It doesn’t matter if you choose Ant + Ivy, Maven, Gradle, or sbt. Just pick one and use it to automatically pull your dependencies from Maven Central or your own Artifactory / Nexus server. With WebJars you can even manage your JavaScript and CSS library dependencies. Then get fancy by automatically denying SCM check-ins that include Jar files.

Unversioned & Unpublished Libraries Suck

Enterprises usually have many libraries and services shared across apps and teams. To help make teams more productive and to enable managed dependencies, these libraries should be versioned and published to internal artifact servers like Nexus and Artifactory. SNAPSHOT releases should be avoided since they break the guarantee of a reproducible build. Instead, consider versioning based on your SCM information. For instance, the sbt-git plugin defaults the build version to the git hash, or, if there is a git tag for the current commit, the tag is used instead. This makes published releases immutable, so library consumers know the exact correlation between the version they are using and a point-in-time in the code.

Long Development / Validation Cycles Really Suck

Billions of dollars a year are probably wasted on developers just waiting to see and test their changes. Modern web frameworks like Play Framework and tools like JRebel can significantly reduce the time to see changes. If every change requires a rebuild of a WAR file or a restart of a container, you are wasting ridiculous amounts of money. Likewise, running tests should happen continuously. Testing a code change (via reloading the browser or running a test) should not take more time than an incremental compile. Web frameworks that display helpful compile and runtime errors in the browser after a refresh also do a lot to shorten manual testing cycles.

When I work on Play apps I am continuously rebuilding the source on file save, re-running the tests, and reloading the web page – all automatically. If your dev tools & frameworks can’t support this kind of workflow then it is time to modernize. I’ve used a lot of Java frameworks over the years and Play Framework definitely has the most mature and rapid change cycle support. But if you can’t switch to Play, consider JRebel with a continuous testing plugin for Maven or Gradle.

Monolithic Releases Suck

Unless you work for NASA there is no reason to have release cycles longer than two weeks. It is likely that the reason you have such long release cycles is that a manager somewhere is trying to reduce risk. That manager probably used to do waterfall, then switched to Agile, but never changed the actual delivery model to one that is also more Agile. So you have your short sprints but the code doesn’t reach production for months because it would be too risky to release more often. The truth is that Continuous Delivery (CD) actually lowers the cumulative risk of releases. No matter how often you release, things will sometimes break. But with small and more frequent releases, fixing that breakage is much easier. When a monolithic release goes south, there goes your weekend, week, or sometimes month. Besides… Releasing feels good. Why not do it all the time?

Moving to Continuous Delivery has a lot of parts and can take years to fully embrace (unless, like many startups today, you started with CD). Here are some of the most crucial elements of CD that you can implement one at a time:

  • Friction-less App Provisioning & Deployment: Every developer should be able to instantly provision & deploy a new app.
  • Microservices: Logically group services/apps into independent deployables. This makes it easy for teams to move forward at their own pace.
  • Rollbacks: Make rolling back to a previous version of the app as simple as flipping a switch. There is an obvious deployment side to this but there is also some policy that usually needs to go into place around schema changes.
  • Decoupled Schema & Code Changes: When schema changes and code changes depend on each other rollbacks are really hard. Decoupling the two isolates risk and makes it possible to go back to a previous version of an app without having to also figure out what schema changes need to be made at the same time.
  • Immutable Deployments: Knowing the correlation between what is deployed and an exact point-in-time in your SCM is essential to troubleshooting problems. If you ssh into a server and change something on a deployed system you significantly reduce your ability to reproduce and understand the problem.
  • Zero Intervention Deployments: The environment you are deploying to should own the app’s config. If you have to edit files or perform other manual steps post-deployment then your process is brittle. Deployment should be no more than copying a tested artifact to a server and starting its process.
  • Automate Deployment: Provisioning virtual servers, adding & removing servers behind load balancers, auto-starting server processes, and restarting dead processes should be automated.
  • Disposable Servers: Don’t let the Chaos Monkey cause chaos. Servers die. Prepare for it by having a stateless architecture and ephemeral disks. Put persistent state in external, persistent data stores.
  • Central Logging Service: Don’t use the local disk for logs because it prevents disposability and makes it really hard to search across multiple servers.
  • Monitor & Notify: Setup automated health checks, performance monitoring, and log monitoring. Know before your users when something goes wrong.

There are a ton of details to these that I won’t go into here. If you’d like to see me expand on any of these in a future blog, let me know in the comments.

Sticky Sessions and Server State Suck

Sticky sessions and server state are usually among the best ways to kill your performance and resilience. Session state (in the traditional Servlet sense) makes it really hard to do Continuous Delivery and scale horizontally. If you want a session cache, use a real cache system – something that was designed to deal with multi-node use and failure, e.g. Memcached, Ehcache, etc. (a sketch follows). In-memory caches are fast but hard to invalidate in multi-node environments and are not durable across restarts – they have their place, like calculated / derived properties where invalidation and recalculation are easy.
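
As an illustration, here is a minimal sketch using the spymemcached client (one of several Java Memcached clients – the library choice and the key names are mine):

import net.spy.memcached.MemcachedClient;
import java.net.InetSocketAddress;

public class CacheExample {
    public static void main(String[] args) throws Exception {
        // A real cache system: multi-node aware, survives app restarts and redeploys
        MemcachedClient cache = new MemcachedClient(new InetSocketAddress("localhost", 11211));

        // set(key, expiration in seconds, value)
        cache.set("user:42:displayName", 3600, "James");

        Object displayName = cache.get("user:42:displayName");
        System.out.println(displayName);

        cache.shutdown();
    }
}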

Web apps should move state to the edges. UI-related state should live on the client (e.g. cookies, local storage, and in-memory) and in external data stores (e.g. SQL/NoSQL databases, Memcached stores, and distributed cache clusters). Keep those REST services 100% stateless or the state monster will eat you in your sleep (a cookie-based sketch follows).
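
Here is a hedged sketch of pushing UI state to the client with standard Servlet API cookies (Servlet 3.0+) instead of the server-side session:

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class UiStateHelper {

    // Instead of request.getSession().setAttribute("theme", "dark"),
    // push the UI state to the client so any server can handle the next request
    public static void saveTheme(HttpServletResponse response, String theme) {
        Cookie cookie = new Cookie("theme", theme);
        cookie.setHttpOnly(true);
        cookie.setMaxAge(60 * 60 * 24 * 30); // 30 days
        response.addCookie(cookie);
    }

    public static String readTheme(HttpServletRequest request) {
        if (request.getCookies() != null) {
            for (Cookie cookie : request.getCookies()) {
                if ("theme".equals(cookie.getName())) {
                    return cookie.getValue();
                }
            }
        }
        return "default";
    }
}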

Useless Blocking Sucks

In traditional web apps a request comes in, fetches some data from a database, creates a webpage, and then returns it. In this model it was OK to give that full roundtrip a single thread that remained blocked for the entire duration of the request. In the modern world, requests often stay open beyond the life of a single database call, either because it is a push connection or because it is composing multiple back-end services together. This new world requires a different model for how threads and blocking are managed. The modern model for dealing with this is called async & non-blocking, or Reactive.

Most of the traditional Java networking libraries (Servlets, JDBC, Apache HTTP, etc) are blocking. So even if a connection is idle (like when a database connection is waiting for the query to return), a thread is still allocated. The blocking model limits parallelism, horizontal scalability, and the number of concurrent push connections. The Reactive model only uses threads when they are actively doing something. Ideally your application is Reactive all the way down to the underlying network events. When a request comes in it gets a thread, then if that request needs to get data from another system the thread handling the request can be returned to the pool while waiting for the data. Once the data has arrived a thread can be reallocated to the request so the response can be returned to the requestor.

Java has a great foundation for Reactive with Java NIO. But unfortunately most of the traditional Java web frameworks, database drivers, and HTTP clients do not use it. Luckily a whole new landscape of Reactive libraries and frameworks is emerging that is built on NIO and Netty (a great NIO library). For example, Play Framework is a fully Reactive web framework which many people use with Reactive database libraries like Reactive Mongo.

Being Reactive also requires a construct for asynchronous computation. The traditional way to do this in Java is with anonymous inner classes, like:

public static F.Promise<Result> index() {
    F.Promise<WS.Response> jw = WS.url("http://www.jamesward.com").get();
    return jw.map(new F.Function<WS.Response, Result>() {
        public Result apply(WS.Response response) throws Throwable {
            return ok(response.getBody());
        }
    });
}

Java 8 provides a much more concise syntax for asynchronous operations with Lambdas. The same Reactive request handler above with Java 8 & Lambdas is:

public static F.Promise<Result> index() {
    F.Promise<WS.Response> jw = WS.url("http://www.jamesward.com").get();
    return jw.map(response -> ok(response.getBody()));
}

If your app does things in parallel and/or handles push connections then you really should be going Reactive (a plain-JDK sketch of parallel composition follows). Check out my Building Reactive Apps presentation if you want to dive deeper into this.
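
If you are on plain Java 8 without Play, the JDK’s CompletableFuture offers a similar composition model. Note that supplyAsync still borrows pool threads; with a true NIO-based client the futures would come straight from the network layer:

import java.util.concurrent.CompletableFuture;

public class ParallelFetch {
    public static void main(String[] args) {
        // Two independent back-end calls run in parallel, then the results are combined
        CompletableFuture<String> user = CompletableFuture.supplyAsync(() -> fetch("/user/42"));
        CompletableFuture<String> orders = CompletableFuture.supplyAsync(() -> fetch("/orders?user=42"));

        String page = user.thenCombine(orders, (u, o) -> u + " | " + o).join();
        System.out.println(page);
    }

    static String fetch(String path) {
        // stand-in for a real non-blocking HTTP call
        return "data from " + path;
    }
}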

The Java Language Kinda Sucks

The Java language has a lot of great aspects, but due to its massive adoption and its enterprise users’ desire for very gradual change, the language is showing its age. Luckily there are a ton of other options that run on the JVM. Here is a quick rundown of the most interesting options, with my opinions on some positives and negatives:

  • Scala
    • Great
      • Likely the most widely adopted alternative language on the JVM
      • Fits well with Reactive and Big Data needs
      • Mature ecosystem for libraries, frameworks, support, etc
    • Good
      • Java interoperability is great but often not useful since the Java libraries aren’t built for Reactive and Scala idioms
      • Modern programming concepts with very powerful & flexible language
    • Bad
      • Language flexibility leads to significantly different ways of writing Scala, sacrificing universal readability
      • Huge learning curve due to large number of features
  • Groovy
    • Great
      • Large ecosystem for libraries, frameworks, support, etc
      • Simple language with a few very useful features
    • Good
      • Interoperability with Java works and feels pretty natural
    • Bad
      • I prefer good type inference (like Scala) over Groovy’s dynamic and optional static typing
  • Clojure
    • Great
      • The elegance of a Lisp on the JVM
      • Mature ecosystem for libraries, frameworks, support, etc
    • Good
      • JavaScript target seems good but isn’t core
    • Bad
      • The lack of some OO constructs makes managing a large code base challenging
      • Dynamic typing
  • Kotlin
    • Great
      • Interoperability with Java seems natural
      • JavaScript target is first class
    • Good
      • IDE and build tooling seems decent but immature
      • Modern language features that aren’t overwhelming
    • Bad
      • Uncertain where it will be in 5 years – will it catch on and gain critical mass?

Starting a new / greenfield project can be an easy time to try a new language, but most enterprises don’t do that often. For existing projects there are frameworks and build tools that support mixing existing Java with alternative JVM languages better than others. Play Framework / sbt is the one I’ve used for this, but I’m sure there are others that do this well. At the very least, writing just your new tests in an alternative JVM language can be a great place to start experimenting.

Java 8’s Lambdas are a nice upgrade to the Java language. Lambdas help reduce boilerplate and fit well with the Reactive model. But there are still a lot of areas where the language is lacking. Now that I know Scala there are a few things I couldn’t live without that are still absent from Java: type inference, pattern matching, case classes, string interpolation, and immutability (the sketch below shows the boilerplate a case class would eliminate). It is also very nice to have Option and concurrency constructs baked into the core and the library ecosystem.
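
To make the case class point concrete, here is the immutable value class you hand-write in Java – Scala generates the equivalent of all of this from the single line case class Point(x: Int, y: Int):

// What Scala expresses as: case class Point(x: Int, y: Int)
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y;
    }

    @Override
    public String toString() {
        return "Point(" + x + ", " + y + ")";
    }
}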

Reality Check

If you are in a typical enterprise then maybe you are lucky and already doing most of this. As shocking as it may seem to some of us, that is really rare. Most of you are probably reading this and feeling sad because moving the enterprise monolith toward a lot of this stuff is really hard. As physics tells us, it is much harder to move large things than small things. But don’t lose heart! I’ve seen a number of stodgy enterprises slowly creep out of Java the Sucky Parts. Walmart Canada recently switched to Play Framework! My recommendation is to pick one of these sucky things and make it your goal to fix it over the next year. Often this requires buy-in from management, which can be tough. Here is my suggestion… Spend a couple of evenings or weekends implementing one of these items. Then show your manager what you did in your own time (that will convey how much you care) and let them take the credit for the amazing new thing they thought of. Works every time. And if it doesn’t, there are tons of well-paying startups who are already doing all of this stuff.

One last thing… Go read The Twelve-Factor App – it was the inspiration for a lot of this content.

Dreamforce 2014: Wearables, Engagement Apps, $1M Hackathon

Dreamforce 2014 is quickly approaching and this year is going to be amazing! I’ll be presenting a few sessions and helping at the $1 Million Hackathon. Here are my sessions:

  • Integrating Clouds & Humans with the Salesforce Wear Developer Packs

    As smart watches and other human-integrated devices make their way into the mainstream, developers will need to quickly ramp up to these new paradigms and interaction models. Integrating these new wearable devices with Salesforce connects users to their businesses and customers in new ways. Join us as we use code and examples to dive into the architecture and patterns for developing wearable Salesforce apps with the Salesforce Wear Developer Pack for Android Wear.

  • Architecting Engagement Apps

    Modern systems are composed from all sorts of pieces, like back-office systems, legacy systems, mobile apps, JavaScript web UIs, third-party services, relational data, NoSQL data, and big data. Effective user engagement requires an architecture that brings all of these pieces together instead of the traditional siloed approach. Join us to learn about the Engagement Architecture and how it can be used to create modern composition-oriented systems.

This year at the $1M Hackathon there will be 35 different prizes including prizes for building apps on Heroku and prizes for open source projects!

Hope to see you there!

Jekyll on Heroku

Jekyll is a simple static content compiler popularized by GitHub Pages. If you use Jekyll in a GitHub repo, a static website will automatically be created for you by running Jekyll on your content sources (e.g. Markdown). That works well, but there are cases where it is nice to deploy a Jekyll site on Heroku. After trying (and failing) to follow many of the existing blogs about running Jekyll on Heroku, I cornered my coworker Terence Lee and got some help. It turns out it is pretty simple.

Not interested in the details? Skip right to a diff that makes a Jekyll site deployable on Heroku, or start from scratch:

Deploy on Heroku

Here are the step by step instructions:

  1. Add a Gemfile in the Jekyll project root containing:

    source 'https://rubygems.org'
    ruby '2.1.2'
    gem 'jekyll'
    gem 'kramdown'
    gem 'rack-jekyll'
    gem 'rake'
    gem 'puma'
  2. Run: bundle install
  3. Create a Procfile telling Heroku how to serve the web site with Puma:

    web: bundle exec puma -t 8:32 -w 3 -p $PORT
  4. Create a Rakefile which tells Heroku’s slug compiler to build the Jekyll site as part of the assets:precompile Rake task:

    namespace :assets do
      task :precompile do
        puts `bundle exec jekyll build`
      end
    end
  5. Add the following lines to the _config.yml file:

    gems: ['kramdown']
    exclude: ['config.ru', 'Gemfile', 'Gemfile.lock', 'vendor', 'Procfile', 'Rakefile']
  6. Add a config.ru file containing:

    require 'rack/jekyll'
    require 'yaml'
    run Rack::Jekyll.new

That is it! When you do the usual git push heroku master deployment, the standard Ruby Buildpack will kick off the Jekyll compiler, and when your app runs, Puma will serve the static assets. If you are starting from scratch, just clone my jekyll-heroku repo and you will have everything you need.

To run Jekyll locally using the dependencies in the project, run:

bundle exec jekyll serve --watch

Let me know how it goes.

An Architect’s Guide to the Salesforce1 Platform

Salesforce.com was initially created as a Sales Force Automation (SFA) / Customer Relationship Management (CRM) application in the cloud, but over the years it has evolved into a modern platform for all types of enterprise applications. Now the Salesforce name is a legacy artifact of that past – much like Frigidaire, which is still the name of a company that now produces much more than refrigerators. The Salesforce1 Platform still provides the SFA & CRM applications but is also a foundation for building modern systems.

Pricing & Editions

The Salesforce1 Platform comes in many editions and packages, like the Sales Cloud edition for CRM and the Platform edition for anything. The edition you choose will enable different features and provide a different foundation to start with. Check out the full list of editions to see the pricing and features for each option. The Developer Edition provides a free platform for developers.

Now let’s go through the many components of the Salesforce1 Platform from a 30,000-foot perspective.

Metadata-Driven Data Model

At its core, the Salesforce1 Platform is a cloud database. That database is customized and configured via metadata. The metadata that defines the data model can be modified via XML definitions or via a point-and-click UI. The metadata for a tenant environment, known as an ‘organization’ or ‘org’, is versionable, packageable, and testable. An object (or table) is called an SObject and provides a bunch of out-of-the-box features:

  • Custom Fields
  • Validation Logic
  • Field-level Security
  • Relationships & Pick-Lists
  • Derived Values

All SObjects automatically provide:

  • SOAP & REST APIs
  • Basic CRUD-ish UIs
  • Mobile CRUD-ish UIs via the Salesforce1 mobile app
  • Indexed Search

Fields on SObjects can be any of the following types:

  • Auto Number
  • Formula
  • Roll-Up Summary
  • Lookup Relationship
  • Master-Detail Relationship
  • Checkbox
  • Currency
  • Date & Date/Time
  • Email
  • Geolocation
  • Number & Percent
  • Phone
  • Picklist & Picklist Multi-Select
  • Text, Text Area, Encrypted Text
  • URL

New organizations on the platform come with a number of out-of-the-box SObjects which differ depending on which edition of the platform you are using. For instance, new organizations using the Sales Cloud edition come with SObjects including Contact, Lead, and Opportunity.

Built into the Salesforce data model are essential security features like change auditing and field-level security.

Managed Runtime for Programmatic Customizations & Extensions

A system on the Salesforce1 Platform can be built entirely using the Metadata-driven Data Model. But there are use cases when programmatic logic is needed for things like custom UIs, triggers, and scheduled jobs. The programming languages used to write programmatic logic on the platform are:

  • Visualforce – Server-side templating language for custom UIs that run inside of your Salesforce system
  • Apex – Programmable logic for triggers, Visualforce controllers, & scheduled jobs
  • SOQL – Domain Specific Language (DSL) for database queries

Visualforce uses a JSP-like syntax for creating custom HTML pages that are rendered inside of Salesforce.com and can also be rendered in the Salesforce1 mobile app. Visualforce pages use the traditional server-side MVC architecture, where the Visualforce page is the view, an Apex class is the controller, and SObjects are the model. Visualforce pages can include any JavaScript and can use JavaScript Remoting and/or RESTful JSON services. Here is a simple Visualforce page:

<apex:page>
    hello, world
</apex:page>

Apex has a Java-like syntax and runs on Salesforce in a managed, sandboxed, and secure runtime. There is both an Eclipse plugin and a web-based Developer Console for writing Apex. Apex triggers attach to SObject events like update, create, and delete. Batch jobs and scheduled jobs are also written in Apex. Here is a simple Apex trigger:

trigger Foo on Contact (after insert) {
    for (Contact newItem : trigger.new) {
        System.debug('Contact Created: ' + newItem.Name);
    }
}

SOQL queries can be run in the Developer Console and can also be easily embedded in Apex, for instance:

Contact contact = [SELECT Id FROM Contact LIMIT 1];

Apex includes a JPA/Hibernate-like database access syntax called DML. This makes it easy to create, update, and delete SObjects in Apex. For example:

Contact c = new Contact(LastName='Bar');
insert c;
c.FirstName = 'Foo';
update c;
delete c;

Like SObject metadata, all of the programmatic code in Salesforce is versioned, packageable, and testable. Unit testing and code coverage are built into the Apex runtime and 75% code coverage is required in order to deploy code into a production Salesforce system. This code coverage requirement helps maintain stability across major platform upgrades because Salesforce uses customer tests to detect regressions and breakages.

Instead of writing Apex, many approval processes and business rules can be created declaratively using Workflow. Just like SObject metadata, Workflows can be created with a point-and-click web interface, the Visual Workflow editor. Under the covers all workflows are just metadata, which can be versioned and packaged like all of the other platform extensions and customizations.

The Salesforce.com UIs, Salesforce1 Mobile App, Apex runtime and Workflow systems are for back-office, employee-facing interactions. For customer-facing interfaces that interact with data on Salesforce, the Heroku service (part of the Salesforce1 Platform) enables developers to easily create, deploy, and scale custom web apps, mobile apps, and web / REST services. Heroku apps can be written in any language (Java, Ruby, Node.js, etc) and are deployed on a fully managed system that provides the infrastructure developers would normally have to assemble and manage on their own. For instance, services like load balancing, failover, centralized logging, continuous delivery pipelines, and instant scalability are provided out-of-the-box on Heroku.

Integration and ETL

The Salesforce1 Platform provides a variety of ways to integrate with other systems and perform data migrations & synchronization. The major interfaces for these types of data integrations are:

  • Heroku Connect – A standard, high-performance SQL interface to the data on Salesforce.
  • SOAP APIs – Schema-rich web services.
  • REST APIs – Simple JSON web services (see the Java sketch after this list).
  • Streaming APIs – Event-driven messaging service.
  • Data Import & Export – Numerous tools, wizards, and web services provide easy access to import and export Salesforce data.
  • Email Notifications – Apex and Workflow can be used to send email notifications from Salesforce.
  • Mobile Notifications – Mobile notifications are built into the Salesforce1 mobile app and custom notifications are also supported.
  • OAuth 2.0 – The Salesforce web services use OAuth 2.0 to handle authenticating users so that integrated applications can make API requests on their behalf.
  • SAML – Enterprise Single Sign-On.
  • Mobile SDKs – Native, Hybrid, and HTML5 SDKs for custom mobile apps.
  • Integration Platform Vendors – Many integration platform vendors like Informatica, Boomi, Cast Iron, and MuleSoft have out-of-the-box support for integrating with Salesforce.
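
As a small illustration of the REST APIs bullet, here is a hedged Java sketch that runs a SOQL query over REST. The environment variable names are mine, the instance URL and access token come from a prior OAuth 2.0 flow, and v31.0 is just an API version current as of this writing:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class SoqlQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical env var names; values come from an OAuth 2.0 flow
        String instanceUrl = System.getenv("SFDC_INSTANCE_URL");
        String accessToken = System.getenv("SFDC_ACCESS_TOKEN");

        String soql = URLEncoder.encode("SELECT Name FROM Contact LIMIT 5", "UTF-8");
        URL url = new URL(instanceUrl + "/services/data/v31.0/query?q=" + soql);

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", "Bearer " + accessToken);

        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw JSON from the REST API
            }
        }
    }
}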

Massive Ecosystem

There is a massive galaxy of services, apps, frameworks, and libraries around the Salesforce1 Platform.

The Platform You Can Trust

Because the Salesforce1 Platform is your foundation for business-critical data and apps, the foundation of Salesforce must be Trust. In large enterprise systems there are many aspects to Trust, like transparency of system uptime & responsiveness, multi-tier security, and privacy & certifications.

Get Started

Ready to dive in? The best way to dip your toes in the water and start building something on the Salesforce1 Platform is to go through the Salesforce Developer Workshop. It won’t cost you anything except time and will help you understand many of these components by using them. Let me know how it goes!

Building & Deploying Reactive Service Pipelines — Live in Salt Lake City

This Wednesday (Aug 6, 2014) I will be presenting Building & Deploying Reactive Service Pipelines at the Utah Scala Enthusiasts group in Salt Lake City. Here is the abstract:

Composition of micro-services is a modern integration pattern that couples nicely with Reactive and Continuous Delivery. These paradigms enable small teams to move quickly while integrating cross-silo data stores for modern JavaScript UIs and REST services. This session will use Scala, Play Framework, and Heroku to illustrate how to build and deploy Reactive Service Pipelines.

RSVP Now! Hope to see you there.

Going Reactive at OSCON 2014

This year at OSCON I will be leading a hands-on lab and presenting about Reactive, Play Framework, and Scala. Here are two sessions:

  • Reactive All The Way Down (lab) – 9:00am Monday, July 21

    In this tutorial you will build a Reactive application with Play Framework, Scala, WebSockets, and AngularJS. We will get started with a template app in Typesafe Activator. Then we will add a Reactive RESTful JSON service and a WebSocket in Scala. We will then build the UI with AngularJS.

  • Building Modern Web Apps with Play Framework and Scala – 2:30pm Wednesday, July 23

    Play Framework is the High Velocity Web Framework For Java and Scala. It is lightweight, stateless, RESTful, and developer friendly. This is an introduction to building web applications with Play. You will learn about: routing, Scala controllers & templates, database access, asset compilation for LESS & CoffeeScript, and JSON services.

Hope to see you there!

Scala vs Java 8 at the Scala Summit

Bruce Eckel will be hosting the Scala Summit in Crested Butte again this summer. The Open Spaces conference will be September 15 – 19 which is a perfect time of year up in the Colorado Rockies. The theme of the Scala Summit this year is Scala vs. The New Features in Java 8. So there will definitely be some fascinating discussions. I’m also looking forward to working on some IoT projects during the hackathons. Bruce and I have a few pcDuino devices that will be fun to get Scala working on. Hope to see you there!

Salesforce Gradle Plugin

As part of the Salesforce Wear Developer Pack for Android Wear I created a Gradle plugin that fetches and deploys Salesforce code (Apex). Gradle is the default build tool for Android but it can also be used with many other languages. For instance, here is an example build.gradle file for a project that fetches all of the Apex classes and Visualforce pages:

buildscript {
    repositories {
        mavenLocal()
        mavenCentral()
    }
    dependencies {
        classpath 'com.jamesward:force-gradle-plugin:0.1'
    }
}
 
apply plugin: 'force'
 
repositories {
    mavenLocal()
    mavenCentral()
}
 
force {
    username = forceUsername
    password = forcePassword
    unpackagedComponents = ["ApexPage": "*", "ApexClass": "*"]
}

The unpackagedComponents definition uses the Salesforce Metadata Types and pulls everything specified down into the src/main/salesforce/unpackaged directory when you run the forceMetadataFetch Gradle task. The forceMetadataDeploy Gradle task deploys everything in the src/main/salesforce/unpackaged directory to Salesforce.

Try this out:

  • Install Gradle
  • Create a new project directory containing the build.gradle above
  • In your project directory create a new file named gradle.properties containing your Salesforce username & password:

    forceUsername=foo@bar.com
    forcePassword=password
  • Fetch your Salesforce Metadata:

    gradle forceMetadataFetch
  • Make a change and deploy the Metadata:

    gradle forceMetadataDeploy

For a complete example check out the Visualforce + AngularJS + Bootstrap project.

All of the code for the Salesforce Gradle Plugin is on GitHub: https://github.com/jamesward/force-gradle-plugin

Let me know what you think. Thanks!
