Scaling JavaScript Apps – Part III: Ant Build Process

A multi-part look at how modern JavaScript development techniques and best practices are driving the Rich Internet Applications of tomorrow. Project structure, writing specifications, build processes, automated testing, templating and applying “separation of concerns” to the world of JavaScript are all to be covered. The IDE used will be Eclipse.

The build process extracts your most mundane and repetitive tasks from an iterative development loop and bundles them into one neat little script to be used and abused as often as you see fit. The use of such a step in JavaScript development hasn’t caught on terribly well just yet, but with increasing complexity in architecture and an ever expanding list of helpful utilities, it won’t take long for it to become a staple of all web app production.

There are a number of tools that will run your build script – make, cake, rake, _ake are a few I’ve come across – but for ease of integration with Eclipse, the builder we’ll focus on is Apache Ant. Help yourself to the user manual for details on the tasks at your disposal and how you can shape them to your needs.

It all begins with build.xml.

<?xml version="1.0"?>
<project name="tux" default="build" basedir="../">

	<target name="build">
		<antcall target="lint-src" />
		<antcall target="lint-test" />
		<antcall target="test" />
		<antcall target="build-modules" />
	</target>

	<!-- ... -->

</project>

The structure above, which we’ll flesh out in a bit, gives a high-level look at what our build process will do. The top element, project, defines the name of the build process and the default target to execute if none is given. Think of a target as a single type of task or function, such as concatenation or compression, configurable through attributes and nested elements. Nested elements can even call other Ant targets by name, as shown above. Since I’ve stored all build-related files in a build directory hanging off the root folder, adding basedir="../" to the project tag ensures all future path references are relative to the root.

Lint

The first thing we should do is lint both the source and test spec files. Verifying your code is clean and syntactically correct is essential to running the tasks further on in the process. Whether to use JSLint or JSHint is a matter of personal preference, but keep in mind there may come a time when you need to bend the rules in your favour, and you’ll find JSLint much less friendly in this respect.
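
JSHint also lets you bend those rules on a per-file basis through inline directives. The snippet below is purely illustrative (the exact option names depend on the JSHint version you bundle), but it shows the idea: relax a couple of checks and declare some known globals for one file only.

/*jshint eqeqeq: false, plusplus: false */
/*global console: false, $: false */

// the directives above tolerate == and ++ in this file only,
// and declare console and jQuery's $ as known (read-only) globals
var i, rows = $('#ledger tr');
for (i = 0; i < rows.length; i++) {
    if (rows[i].className == 'selected') {
        console.log(rows[i]);
    }
}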

We’ll use Mozilla’s Rhino JavaScript environment to run the linter. Add the latest versions of rhino.jar, jshint.js and the adaptor jshint-rhino.js to your build directory before updating the XML. Script variables (“properties” in the Ant vocabulary) should be placed at the top of the file, or even in a separate file for easier maintenance. Add the option flags and predefined variables like so…

<project name="tux" default="build" basedir="../">

	<property name="jshint.flags" value="browser=true,maxerr=25,undef=true,curly=true,debug=true,eqeqeq=true,immed=true,newcap=true,onevar=true,plusplus=true,strict=true" />
	<property name="jshint.predef" value="console,$,namespace,noop,tux,Backbone,Store,_,format,parse" />
	<property name="jshint.predef.test" value="${jshint.predef},describe,xdescribe,xit,it,beforeEach,afterEach,expect,sinon,jasmine,loadFixtures,setFixtures,loadTemplate,fillForm" />

	<!-- ... -->

These are the parameters that will be passed to Rhino, which the JSHint-Rhino adaptor will receive and parse before handing them over to JSHint. Confused? Open up the adaptor source code and you’ll find it’s nowhere near as daunting as it sounds (there’s a rough sketch of what it does after the lint targets below). Notice that the jshint.predef property is extended by jshint.predef.test to cover the additional global variables made available by the testing libraries – functions and objects that would not normally be available to the production code. Right, here’s how the linting is carried out:

	<!-- lint source -->
	<target name="lint-src">
		<antcall target="lint">
			<param name="dir" value="src" />
			<param name="predef" value="${jshint.predef}" />
		</antcall>
	</target>

	<!-- lint tests -->
	<target name="lint-tests">
		<antcall target="lint">
			<param name="dir" value="specs" />
			<param name="predef" value="${jshint.predef.test}" />
		</antcall>
	</target>

	<!-- lint -->
	<target name="lint">
		<apply dir="build" executable="java">
			<fileset dir="${dir}" includes="**/*.js" />
			<arg line="-jar rhino.jar jshint-rhino.js" />
			<srcfile />
			<arg value="${jshint.flags}" />
			<arg value="${predef}" />
		</apply>
		<echo>${dir} JSHint Passed</echo>
	</target>

The two lint subjects (src and specs) differ only in the target directory and the predefined variables. The common steps have been abstracted into a lint target, which takes these two parameters before running each script file through JSHint via the Rhino engine. Notice how the target parameters and the properties defined earlier are referenced through the ${variable.name} syntax.
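
For the curious, the adaptor really is only a screenful of code. The following is a simplified sketch (not the actual jshint-rhino.js source, but the same idea): read the file with Rhino’s readFile, unpack the two comma-separated strings, then hand everything to the JSHINT function.

// simplified sketch of a Rhino adaptor for JSHint (illustrative only)
load('jshint.js'); // defines the global JSHINT function; load() is a Rhino shell built-in

(function (args) {
    var file = args[0],              // path supplied by <srcfile />
        flags = args[1].split(','),  // "browser=true,maxerr=25,..."
        predef = args[2].split(','), // "console,$,namespace,..."
        opts = {},
        globals = {},
        i, pair, err;

    for (i = 0; i < flags.length; i++) {
        pair = flags[i].split('=');
        // booleans stay booleans, numbers (e.g. maxerr) become numbers
        opts[pair[0]] = pair[1] === 'true' ? true :
                        pair[1] === 'false' ? false : +pair[1];
    }

    for (i = 0; i < predef.length; i++) {
        globals[predef[i]] = false; // known to JSHint, but not writable
    }

    if (!JSHINT(readFile(file), opts, globals)) {
        for (i = 0; i < JSHINT.errors.length; i++) {
            err = JSHINT.errors[i];
            if (err) {
                print(file + ': line ' + err.line + ', ' + err.reason);
            }
        }
        quit(1); // non-zero exit code so the build can pick up on failures
    }
}(arguments));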

Test

The build process is run many a time during development, so we’ll safely assume that JsTestDriver, covered in the previous part, is already up and running. With that in mind, all you need to do is define the following target to run your test suite in all captured browsers:

	<!-- run unit tests -->
	<target name="test">
		<java failonerror="true" dir="build" jar="build/JsTestDriver-1.3.2.jar" fork="true">
			<arg line="--reset --tests all --basePath ${basedir}" />
		</java>
		<echo>Jasmine Specs Passed</echo>
	</target>

All we’re doing here is running the JsTestDriver jar with its working directory set to /build, where it will find jsTestDriver.conf listing all the files to be loaded and in what order. Note – you can glob all the files inside a directory, but not recursively.

server: http://localhost:9876

load:
  - lib/jasmine.js
  - lib/JasmineAdapter.js
  - lib/underscore.js
  - lib/jquery-1.6.1.js
  - lib/backbone.js
  - lib/*.js

  - src/util.js
  - src/core/*.js
  - src/accounts/*.js
  - src/tags/*.js
  - src/ledger/*.js
  - src/forms/*.js
  - src/schedule/*.js
  - src/reports/*.js

  - specs/specs-helper.js
  - specs/util.spec.js
  - specs/core/*.js
  - specs/accounts/*.js
  - specs/tags/*.js
  - specs/ledger/*.js
  - specs/forms/*.js
  - specs/schedule/*.js
  - specs/reports/*.js

During this run you should see the output from the test suite, with a ‘.’ marking each passed test and an ‘F’ marking each failure. When the suite has completed, a summary of passes and failures will appear. With failonerror="true", any failure will fail and halt the build completely; otherwise, the build process is free to carry on to the next task.

Concatenate & Minify

Now that you’re coding like a boss, you’ll have developed the habit of breaking your source files into tiny, distinct units of functionality. On the flipside, you’ll immediately notice the pain of having to stitch each of these scripts into your page individually. How you structure your project is up to you, but when concatenating these script files you should aim to produce a single file for each top-level module. Here’s something I prepared earlier:

	<!-- build each module -->
	<target name="build-modules">
		<copy file="src/util.js" tofile="scripts/util.js" />
		<subant target="build-module" genericantfile="build/build.xml">
			<dirset dir="src" includes="*" />
		</subant>
		<echo>All modules built</echo>
	</target>
	
	<target name="build-module">
		<basename file="${basedir}" property="module" />
		<property name="modulefile" value="../../scripts/${module}.js" />
		
		<!-- concat src js -->
		<concat destfile="${modulefile}">
			<fileset dir="." includes="*.js" />
		</concat>

		<!-- build compressed version -->
		<java jar="../../build/compiler.jar" fork="true" dir="../../scripts">
			<arg line="--js ${module}.js --js_output_file ${module}.min.js" />
		</java>
		
		<echo>${module} module build successful</echo>
	</target>

Minification above is handed off to Google’s Closure Compiler. After this final task in the build process, the file will be ready to include in your page, à la:

<script type="text/javascript" src="/scripts/module.min.js"></script>

Clean, no? Lose the .min while developing to work with the unminified, debug-friendly code.
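
If swapping the suffix by hand becomes tiresome, a tiny loader can do it for you. This is purely hypothetical (the module name and the ?debug flag are made up for the example), but it illustrates the toggle:

// hypothetical loader: pull in the readable build when ?debug is in the URL,
// otherwise use the minified file produced by the build-module target
(function () {
    var suffix = /[?&]debug\b/.test(window.location.search) ? '' : '.min';
    document.write('<script src="/scripts/ledger' + suffix + '.js"><\/script>');
}());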

Automate

Most Eclipse packages come with support for Ant build files. If not, install the Eclipse Java EE Developer Tools. With your build process defined, you’ll want easy access to each of the targets from within the IDE. Right-click on your build file in the Project Explorer, and select Run As > Ant Build…. This should invoke a new configuration window, allowing you to save the build task. Tick and possibly reorder the targets you’d like to kick off, then hit save. Rinse, repeat, and voilà!

The final piece of Eclipse integration involves dosing the iterative TDD cycle with steroids, by having the test suite run whenever you save a new test case or update the source code. Hit up Project > Properties > Builders and Import… your previously defined “Run Tests” task. Edit the newly created builder and head to the Targets tab. Add the test target to the Auto Build list, if not already present, and save. Now whenever the project is updated, your test suite will automatically be executed for immediate TDD feedback. This can be easily toggled via Project > Build Automatically.

In the next and final part of the series, we’ll be looking at how to clean up script files by providing a dedicated file structure for template markup.

Scaling JavaScript Apps – Part II: Test Driven Development

A multi-part look at how modern JavaScript development techniques and best practices are driving the Rich Internet Applications of tomorrow. Project structure, writing specifications, build processes, automated testing, templating and applying “separation of concerns” to the world of JavaScript are all to be covered. The IDE used will be Eclipse.

Coverage of test-driven development for web applications, and of its practice in JavaScript in particular, had previously been scarce at best. That is, until efforts from Christian Johansen and others brought it out of the dark and into the spotlight. Since then, a wealth of tools and libraries has begun sprouting from every corner of the web.

This post assumes you are familiar with TDD and have been won over by its benefits. You should already have an understanding of the iterative process to follow, as well as the necessity of stubbing and mocking dependencies to deliver true (rather… truthy) unit tests. If you’re still in the dark, I recommend Christian’s series of excellent tutorials. To see how deep the rabbit hole really goes, be sure to pick up a copy of his book Test-Driven JavaScript Development.

Getting right down to business, the following is an example spec for a core unit that accepts an array of apps and loads each in turn, appending each app’s output to the DOM. Don’t fret if it’s not immediately clear what the test is trying to achieve – simply familiarize yourself with the vocabulary of the testing libraries.

it('should load app and append its wrapped view', function() {
    // create test namespace
    namespace('tux.test');

    // create a test app and spy on any calls made to it
    this.testView = $('<div>')[0];
    tux.test.TestApp = Backbone.View.extend({
        el: this.testView
    });
    this.TestApp = sinon.spy(tux.test, 'TestApp');

    // initialize the unit under test, passing a reference to the fake app
    this.app = new App({
        modules: [{
            app: 'test',
            obj: 'TestApp',
            title: 'Test App'
        }]
    });

    // add the result to the test environment DOM
    setFixtures(this.app.el);

    // check that the app was initialized
    expect(this.TestApp).toHaveBeenCalled();

    // check that test app was bundled inside the core app
    expect($(this.app.el)).toContain('div#test');
    expect(this.app.$('#test')).toContain(this.testView);

    // check that the test app was prepended with an h2 header
    var h2 = $(this.testView).prev();
    expect(h2).toBe('h2');
    expect(h2).toHaveText('Test App');
});
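
To make the spec’s intent a little more concrete, here is one way the unit under test might be written. This is a minimal sketch pieced together from the assertions above, not the actual tux source:

// minimal sketch of an App view satisfying the spec above (illustrative only)
var App = Backbone.View.extend({

    initialize: function(options) {
        _.each(options.modules, this.addModule, this);
    },

    // instantiate the configured app, wrap it in a titled container
    // and append the result to this core view's element
    addModule: function(module) {
        var app = new tux[module.app][module.obj](),
            wrapper = $('<div>').attr('id', module.app);

        wrapper.append($('<h2>').text(module.title), app.el);
        $(this.el).append(wrapper);
    }

});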

The Testing Stack

There are quite a few libraries and tools at play here.

  • Jasmine – the base testing framework that defines how your specs are written and how to verify that the tests have passed successfully or failed miserably.
  • Sinon – flexible spy, stub and mock library to intercept dependencies. It also provides fake servers and timers which allow you to avoid asynchronous testing scenarios and keep your test suite running times to a minimum.
  • jasmine-sinon – provides syntactic sugar (in the form of Jasmine “matchers”) for verifying specs that involve Sinon objects. This is also useful when running tests, as the matchers ensure failure messages are contextually accurate about why a given test has failed.
  • jasmine-jquery – most useful for interaction with the DOM. Provides management of remote and local markup dependencies and a comprehensive set of Jasmine matchers for verifying end-result elements, attributes and even event handlers (a short combined example follows this list).
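
As a throwaway illustration of how these pieces combine (none of it taken from the tux suite), a spec can create a Sinon spy, drop some markup into the test DOM with jasmine-jquery and then assert against both with their respective matchers:

describe('saving', function() {
    it('notifies the handler when the save button is clicked', function() {
        var save = sinon.spy();                          // Sinon test double

        setFixtures('<button id="save">Save</button>');  // jasmine-jquery fixture
        $('#save').bind('click', save).trigger('click');

        expect(save).toHaveBeenCalledOnce();             // jasmine-sinon matcher
        expect($('#save')).toHaveText('Save');           // jasmine-jquery matcher
    });
});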

This collection should provide all you need to write your specs. The last two pieces of the puzzle are JsTestDriver and its Jasmine adaptor, which let you execute your Jasmine-based specs against all browsers that have been “captured” by JsTestDriver. Let’s see how these all stack up.

That sure is a lot of moving parts, but you’ll be glad to know the authors have produced some fine source code that’s not beyond debugging should anything go wrong. I’ve singled out the application itself to emphasise that, at the end of the day, it should be the only code to reach the production environment.

Detailed instructions on how to load your application, supporting libraries and finally the app specs into the test runner environment can be found in the JsTestDriver project wiki. Just a few dry runs will make the need to automate test execution painfully obvious, bringing us to the next part of the series – scripting a JavaScript build process to seamlessly clean, verify and compile your source code.