Open And Free Discussion, The Myth

Yesterday marked for me a renewed passion to engage and participate within the web development community. As my career enters a new phase, I find myself wanting to move beyond the technical discussions that can be reduced to an objective list of pros and cons, and into an area that is far less black and white. Conversations about interaction design, the roles we each play, and the social impact of what we create are gradually becoming more commonplace, spurred on in large part by a moving talk given by Wilson Miner, When We Build.

Naturally, the less objective a subject, the more likely there’ll be a difference of opinions. As I discovered when I happened to read this article – The Rise of Product Design, over on Q&A website Quora.

The article tends to be rather dismissive of the great work done by the User Experience (UX) field. As a front-end developer, working on the side of the web that faces the people using it, I feel the ideas put forward by UX teams are inspired and motivating. So I thought I’d put forward my view in the comments section. Read the comment here.

Of course, on first glance you won’t see it. Neither on the original article, nor on the direct link. What you will see is a short piece of text – “Comment downvoted”. You’ll need to click that text before viewing the comment.

Downvoting allows a reader to vote down something they see as trolling, inflammatory, inappropriate, incorrect, or something they generally do not agree with. The voters are typically not required to explain their reasoning. Most sites then use these votes to decide whether or not to display or remove a user comment. In this case the comment has been hidden from view, discarded into a realm where most people won’t see or read it, essentially removing it from the conversation. Whether or not you agree with what I wrote, I’m sure you will agree that it was well structured enough to deserve a tiny spot at the bottom of the page. Unfortunately, the team at Quora did not, and decided to both hide it and move it to the bottom of the list.

For a site that appears to encourage healthy debate, this comes as a huge surprise. I had hoped my thoughts would evoke a reply, one that would perhaps mention some other aspect to the subject and help to educate all involved. Sadly the discussion was neither open nor free and this did not happen. I hope this makes clear the need to improve the experience of user comments on the web, so that the conversation can continue.

Perhaps if the team at Quora had employed the skills of someone in the UX field, my user experience wouldn’t have left such a bad aftertaste.

Using “this” Responsibly

Over the past few weeks, I’ve come across a number of libraries that have decided to manipulate the value of this in a callback function, presumably to provide additional functionality to the callback by way of methods attached to this. To offer a somewhat trivial example:

// lib.js
lib.method = function(callback) {
    var newThis = {};
    newThis.function1 = function() {};
    newThis.function2 = function() {};
    callback.call(newThis);
};

// your-source.js
lib.method(function() {
    this.function1('a');
    this.function2('b');
});

Imagine if you only had access to “your-source.js”, or the source of “lib.js” was obfuscated. For a more real-world instance of this pattern, we can take a look at how Backbone Marionette allows a developer to define a module:

var app = new Backbone.Marionette.Application();

app.module('mymodule', function() {
    this.add = function(x, y) {
        return x + y;
    };
});

It may already be obvious why this might not be such a great idea (pardon the pun), especially when sharing code. Without looking at the source of the library, or having its documentation on hand, there isn’t much of a clue as to where the value of this is being set, or what it actually refers to. How does a reader know that our new module, “mymodule”, now has a method called “add”? Or that this refers to the module and not the actual app? The code within the callback is far too ambiguous. Thankfully, Backbone Marionette also offers a much more readable approach:

app.module('mymodule', function(myModule) {
    myModule.add = function(x, y) {
        return x + y;
    };
});

Here, you can plainly see that the library is passing the module back to the callback function, and a new method is being defined on that module. You can also name the parameter as descriptively as you see fit. With this approach, what the developer is trying to achieve becomes much more obvious. As an added bonus, the name of the variable is also a candidate for minification when optimizing your source code for production.
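From the library author’s side, supporting both styles costs a single call. The sketch below is illustrative only (not Marionette’s actual internals): using callback.call(module, module) sets this to the module and passes it as an argument at the same time, so consumers are free to pick the explicit, readable form.

```javascript
// Illustrative sketch only, not Marionette's actual internals.
var app = { modules: {} };

app.module = function(name, definition) {
    var module = {};
    // One call supports both styles: `this` is set to the module,
    // and the module is also passed as the first argument.
    definition.call(module, module);
    app.modules[name] = module;
    return module;
};

var mymodule = app.module('mymodule', function(myModule) {
    myModule.add = function(x, y) {
        return x + y;
    };
});
// mymodule.add(2, 3) → 5
```

The explicit-argument consumer above keeps working even if the library later changes what this refers to inside the callback.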

This is all not to say that the this keyword should be avoided like the plague. There are instances where it makes complete sense to use, such as when defining prototype methods.

var Person = function(name) {
    this.name = name;
};
Person.prototype.sayHello = function() {
    return 'Hello, ' + this.name;
};

In this context, this refers to an instance of the object you’re defining in the same file. Another example, DOM events:

element.onclick = function(event) {
    // `this` refers to the element that was clicked
};

In both these examples, the value of this is immediately available to the reader. The mention of this and its subsequent uses are grouped together in one cohesive unit. Much more readable, no?
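And when you do need a particular value of this inside a callback, you can always pin it yourself rather than rely on whatever a library happens to set. A small sketch using the ES5 Function.prototype.bind (the counter object is purely illustrative):

```javascript
var counter = {
    count: 0,
    increment: function() {
        this.count += 1;
        return this.count;
    }
};

// Detached from its object, `this` inside increment would be wrong;
// bind() pins it to `counter` regardless of how the function is called.
var increment = counter.increment.bind(counter);
increment();
increment();
// counter.count is now 2
```

As with the explicit-parameter approach, the reader can see exactly where the value of this comes from.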

Serve static content from the current directory with node.js

This micro-tutorial assumes that you have already installed node.js.

Often when writing front-end JavaScript, you’ll find yourself wanting to serve the contents of the current directory without writing a throwaway node server for each project. Thankfully http-server solves this problem with a nifty, node-based command-line tool to do just that.

npm install -g http-server

Once installed, jump into the terminal and place yourself in the directory that holds the static content. From there, it’s just a matter of keystrokes:

http-server [path] -p [port]

Path and port are optional. Should it find a sub-directory named ‘public’, it will be chosen as the HTTP root; if not, the current directory will assume its place. Port defaults to 8080.

More info: https://github.com/nodeapps/http-server

16 Awesome Chrome Dev Tools Features

How did I miss this? Back at Google I/O in May, Paul Irish & Pavel Feldman gave an awe-inspiring 45-minute presentation that set about enlightening the web debugging community. If you’ve got the time, I highly recommend watching the video below. For those with tighter schedules to keep, skip to the summary of the main points beneath.

Styling

  • Cycle through colour formats
    Styles Accordion > Click Color Swatch
  • Create a new selector and rule-set on-the-fly
    Styles Config > New Style Rule
  • Edit style resources in-line
    Resources Tab > Double-click Any Stylesheet > Edit > CTRL /⌘ + S
  • Save updated stylesheets
    Right-click Any Stylesheet > Save As
  • View and revert to previous revisions
    Expand Any Stylesheet > Right-click Timestamp > Revert

The last few points show the Chrome Dev Tools entering the giant world of the IDE; however, Pavel was quick to point out that this is not a path they’re planning to follow too deeply.

Network

  • Easily view the exchange of cookies
    Select Any HTTP Request > Cookies Tab

Pavel also pointed out that a JSON tab should be available soon, to assist navigation through the returned structure.

Timeline

  • Record a session, view the make-up of loading, scripting and rendering events
    Record Icon > Perform action/s > Record Icon

Console & Scripting

This is where the Chrome Dev Tools really shines, and with Paul Irish joining from the Firebug team – expect more great things to come.

  • Beautify obfuscated code, and set breakpoints against the results (that’s gotta be some fancy, behind-the-scenes mapping magic)
  • Access scope variables inside the console, while stopped at a breakpoint
  • Add breakpoints to DOM elements, for instance, when the innerHTML is updated
  • Add breakpoints to user keyboard and mouse actions, DOM mutations, Timers and other events
  • Jump straight to the relevant line of code, from the Timeline tab’s scripting events
  • Inline editing a la stylesheets; remember to hit CTRL /⌘ + S to save
  • Remote debugging. Since the Dev Tools are essentially a single page web-app, they are exposed via HTTP from the local Chrome client. More at http://code.google.com/chrome/devtools/docs/remote-debugging.html

Scaling JavaScript Apps – Part III: Ant Build Process

A multi-part look at how modern JavaScript development techniques and best practices are driving the Rich Internet Applications of tomorrow. Project structure, writing specifications, build processes, automated testing, templating and applying “separation of concerns” to the world of JavaScript are all to be covered. The IDE used will be Eclipse.

The build process extracts your most mundane and repetitive tasks from an iterative development loop and bundles them into one neat little script to be used and abused as often as you see fit. The use of such a step in JavaScript development hasn’t caught on terribly well just yet, but with increasing complexity in architecture and an ever expanding list of helpful utilities, it won’t take long for it to become a staple of all web app production.

There are a number of tools which will run your script – make, cake, rake, _ake are a few I’ve come across – but for ease of integration with Eclipse, the builder we’ll focus on is Apache Ant. Help yourself to the user manual to garner details on the tasks at your disposal, and how you can mould them to your needs.

It all begins with build.xml.

<?xml version="1.0"?>
<project name="tux" default="build" basedir="../">

	<target name="build">
		<antcall target="lint-src" />
		<antcall target="lint-tests" />
		<antcall target="test" />
		<antcall target="build-modules" />
	</target>

	<!-- ... -->

</project>

The structure above, which we’ll flesh out in a bit, gives a high-level look at what our build process will do. The top element, project, defines the name of the build process and the default target to execute (if none is given). Think of a target as a single type of task or function, such as concatenation or compression, configurable through the use of attributes and nested elements. Nested elements can even call other Ant targets referenced by name, as seen above. Given that I’ve stored all build-related files in a build directory hanging from the root folder, adding basedir="../" to the project tag ensures all future path references are relative to the root.

Lint

The first thing we should do is lint both the source and test spec files. Verifying your code is clean and syntactically correct will be essential to running tasks further on in the process. Whether to use JSLint or JSHint is a matter of personal preference, but keep in mind there may come a time when you need to bend the rules in your favour, and you’ll find JSLint much less friendly in this respect.

We’ll use Mozilla’s Rhino JavaScript environment to do this. Add the latest versions of rhino.jar, jshint.js and the adaptor jshint-rhino.js to your build directory before updating the XML. Script variables (“properties” in the Ant vocabulary) should be placed at the top of the file, or even in a separate file for improved maintenance. Add the option flags and predefined variables like so…

<project name="tux" default="build" basedir="../">

	<property name="jshint.flags" value="browser=true,maxerr=25,undef=true,curly=true,debug=true,eqeqeq=true,immed=true,newcap=true,onevar=true,plusplus=true,strict=true" />
	<property name="jshint.predef" value="console,$,namespace,noop,tux,Backbone,Store,_,format,parse" />
	<property name="jshint.predef.test" value="${jshint.predef},describe,xdescribe,xit,it,beforeEach,afterEach,expect,sinon,jasmine,loadFixtures,setFixtures,loadTemplate,fillForm" />

	<!-- ... -->

These are the parameters which will be passed to Rhino; the JSHint-Rhino adaptor will receive and parse them before handing them over to JSHint. Confused? Open up the adaptor source code and you’ll find it’s nowhere near as daunting as it sounds. Notice that the jshint.predef property is extended by jshint.predef.test, due to the additional global variables made available by the testing libraries. These are functions and objects which would not normally be available to the production code. Right, here is how the linting is carried out.

	<!-- lint source -->
	<target name="lint-src">
		<antcall target="lint">
			<param name="dir" value="src" />
			<param name="predef" value="${jshint.predef}" />
		</antcall>
	</target>

	<!-- lint tests -->
	<target name="lint-tests">
		<antcall target="lint">
			<param name="dir" value="specs" />
			<param name="predef" value="${jshint.predef.test}" />
		</antcall>
	</target>

	<!-- lint -->
	<target name="lint">
		<apply dir="build" executable="java">
			<fileset dir="${dir}" includes="**/*.js" />
			<arg line="-jar rhino.jar jshint-rhino.js" />
			<srcfile />
			<arg value="${jshint.flags}" />
			<arg value="${predef}" />
		</apply>
		<echo>${dir} JSHint Passed</echo>
	</target>

The two lint subjects (src and specs) differ only in the subject directory name, and predefined variables. The similarities have been abstracted into a target, which takes these two parameters, before running JSHint and the script file through the Rhino engine. Notice the use of the target parameters and properties defined earlier through the ${variable.name} syntax.

Test

The build process is run many a time during development, so we’ll safely assume that the JS Test Driver server explained previously is already up and running. With that in mind, all you need to do is define the following task to run your test suite in all captured browsers:

	<!-- run unit tests -->
	<target name="test">
		<java failonerror="true" dir="build" jar="build/JsTestDriver-1.3.2.jar" fork="true">
			<arg line="--reset --tests all --basePath ${basedir}" />
		</java>
		<echo>Jasmine Specs Passed</echo>
	</target>

All we’re doing here is running the JS Test Driver Java Archive and setting the execution context to /build, where it will find the jsTestDriver.conf listing all files to be loaded and in what order. Note – you can glob all files inside a directory, but not recursively.

server: http://localhost:9876

load:
  - lib/jasmine.js
  - lib/JasmineAdapter.js
  - lib/underscore.js
  - lib/jquery-1.6.1.js
  - lib/backbone.js
  - lib/*.js

  - src/util.js
  - src/core/*.js
  - src/accounts/*.js
  - src/tags/*.js
  - src/ledger/*.js
  - src/forms/*.js
  - src/schedule/*.js
  - src/reports/*.js

  - specs/specs-helper.js
  - specs/util.spec.js
  - specs/core/*.js
  - specs/accounts/*.js
  - specs/tags/*.js
  - specs/ledger/*.js
  - specs/forms/*.js
  - specs/schedule/*.js
  - specs/reports/*.js

During this process, you should see the output from the test suite with a ‘.’ to mark a passed test, and an ‘F’ for those that failed. Lastly, when the test suite has completed, a summary of pass/failures will appear. You can set the build process to fail and halt completely with failonerror="true". Otherwise, the build process is free to carry on to the next task.

Concatenate & Minify

Now that you’re coding like a boss, you’ll have developed the habit of breaking your source files into tiny, distinct units of functionality. On the flipside, you’ll immediately notice the pain of having to stitch each of these scripts into your page individually. How you structure your project is up to you, but when concatenating these script files you should aim to produce a single file for each top-level module. Here’s something I prepared earlier:

	<!-- build each module -->
	<target name="build-modules">
		<copy file="src/util.js" tofile="scripts/util.js" />
		<subant target="build-module" genericantfile="build/build.xml">
			<dirset dir="src" includes="*" />
		</subant>
		<echo>All modules built</echo>
	</target>
	
	<target name="build-module">
		<basename file="${basedir}" property="module" />
		<property name="modulefile" value="../../scripts/${module}.js" />
		
		<!-- concat src js -->
		<concat destfile="${modulefile}">
			<fileset dir="." includes="*.js" />
		</concat>

		<!-- build compressed version -->
		<java jar="../../build/compiler.jar" fork="true" dir="../../scripts">
			<arg line="--js ${module}.js --js_output_file ${module}.min.js" />
		</java>
		
		<echo>${module} module build successful</echo>
	</target>

Minification above is tasked to Google’s Closure Compiler. After this final task in the build process, the file will be readily available to include in your page, a la:

<script type="text/javascript" src="/scripts/module.min.js"></script>

Clean, no? Lose the .min while developing to use the pre-minification, debug-friendly code.
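That switch is mechanical enough to script if you’d rather not edit markup by hand. A hedged sketch (scriptUrl and the debug flag are my own illustrative names, following the /scripts output convention above):

```javascript
// Build the script URL for a module, choosing the minified build
// unless a debug flag is set.
function scriptUrl(module, debug) {
    return '/scripts/' + module + (debug ? '' : '.min') + '.js';
}

// scriptUrl('ledger', false) → '/scripts/ledger.min.js'
// scriptUrl('ledger', true)  → '/scripts/ledger.js'
```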

Automate

Most Eclipse packages come with support for Ant build files. If not, install the Eclipse Java EE Developer Tools. With your build process defined, you’ll want easy access to each of the targets from within the IDE. Right-click on your build file in the Project Explorer, and select Run As > Ant Build…. This should invoke a new configuration window, allowing you to save the build task. Tick and possibly reorder the targets you’d like to kick off, then hit save. Rinse, repeat, and voilà!

The final piece of Eclipse integration involves dosing the iterative TDD cycle with steroids, by having the test suite run whenever you save a new test case or update the source code. Hit up Project > Properties > Builders and Import… your previously defined “Run Tests” task. Edit the newly created builder and head to the Targets tab. Add the test target to the Auto Build list, if not already present, and save. Now whenever the project is updated, your test suite will automatically be executed for immediate TDD feedback. This can be easily toggled via Project > Build Automatically.

In the next and final part of the series, we’ll be looking at how to clean up script files by providing a dedicated file structure for template markup.

Scaling JavaScript Apps – Part II: Test Driven Development

A multi-part look at how modern JavaScript development techniques and best practices are driving the Rich Internet Applications of tomorrow. Project structure, writing specifications, build processes, automated testing, templating and applying “separation of concerns” to the world of JavaScript are all to be covered. The IDE used will be Eclipse.

Coverage of web application test-driven development and its practice in JavaScript had previously been scarce at best. That is, until efforts from Christian and others brought it out from the dark and into the spotlight. Since then, a wealth of tools and libraries have begun sprouting from every corner of the web.

This post assumes you are familiar with TDD and have been won over by its benefits. You should already have an understanding of the iterative process to follow, as well as the necessity of stubbing and mocking dependencies to deliver true (rather… truthy) unit tests. If you’re still in the dark, I recommend Christian’s series of excellent tutorials. To see how deep the rabbit hole really goes, be sure to pick up a copy of his book Test-Driven JavaScript Development.

Getting right down to business, the following is an example spec for a core unit which accepts an array of apps and loads each in turn – appending each app’s output to the DOM. Don’t fret if it’s not suddenly clear what the test is trying to achieve. Simply familiarize yourself with the vocabulary of the testing libraries.

it('should load app and append its wrapped view', function() {
    // create test namespace
    namespace('tux.test');

    // create a test app and spy on any calls made to it
    this.testView = $('<div>')[0];
    tux.test.TestApp = Backbone.View.extend({
        el: this.testView
    });
    this.TestApp = sinon.spy(tux.test, 'TestApp');

    // initialize the unit under test, passing a reference to the fake app
    this.app = new App({
        modules: [{
            app: 'test',
            obj: 'TestApp',
            title: 'Test App'
        }]
    });

    // add the result to the test environment DOM
    setFixtures(this.app.el);

    // check that the app was initialized
    expect(this.TestApp).toHaveBeenCalled();

    // check that test app was bundled inside the core app
    expect($(this.app.el)).toContain('div#test');
    expect(this.app.$('#test')).toContain(this.testView);

    // check that the test app was prepended with an h2 header
    var h2 = $(this.testView).prev();
    expect(h2).toBe('h2');
    expect(h2).toHaveText('Test App');
});

The Testing Stack

There are quite a few libraries and tools at play here.

  • Jasmine – the base testing framework that defines how your specs are written and how to verify that the tests have passed successfully or failed miserably.
  • Sinon – flexible spy, stub and mock library to intercept dependencies. It also provides fake servers and timers which allow you to avoid asynchronous testing scenarios and keep your test suite running times to a minimum.
  • jasmine-sinon – provides syntactic sugar (in the form of Jasmine “matchers”) for verifying specs that involve sinon objects. This is also useful when running tests, as the matchers ensure the error messages are contextually accurate about why a given test has failed.
  • jasmine-jquery – most useful for interaction with the DOM. Provides management of remote and local markup dependencies and a comprehensive set of Jasmine matchers to verify end-result elements, attributes and even event handlers. Read more here.
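To demystify at least one of these pieces: a test spy is, at heart, a tiny idea. The hand-rolled sketch below shows the core of what Sinon provides (its real API is far richer, layering call matchers, stubs and mocks on top):

```javascript
// A hand-rolled spy: a function that records every call made to it,
// so a test can assert on how it was used.
function spy() {
    function fn() {
        fn.calls.push(Array.prototype.slice.call(arguments));
        fn.callCount += 1;
    }
    fn.calls = [];
    fn.callCount = 0;
    return fn;
}

var listener = spy();
listener('a');
listener('b', 42);
// listener.callCount → 2
// listener.calls[1] → ['b', 42]
```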

This collection should provide all you need to write your specs. The last two pieces of the puzzle are JsTestDriver and the adaptor for Jasmine enabling you to execute your Jasmine-based syntax against all browsers that have been “captured” by JsTestDriver. Let’s see how these all stack up.

That sure is a lot of moving parts, but you’ll be glad to know that the authors have produced some fine source code that’s not beyond debugging should anything go wrong. I’ve highlighted the application itself to emphasise that, at the end of the day, it should be the only code to reach the production environment.

Detailed instructions on how to load your application, supporting libraries and finally app specs into the test runner environment can be found at this Wiki. Just a few dry runs should make painfully obvious the need to automate test execution, bringing us to the next part of the series – scripting a JavaScript build process to seamlessly clean, verify and compile your source code.

Scaling JavaScript Apps – Part I: File Structure

A multi-part look at how modern JavaScript development techniques and best practices are driving the Rich Internet Applications of tomorrow. Project structure, writing specifications, build processes, automated testing, templating and applying “separation of concerns” to the world of JavaScript are all to be covered. The IDE used will be Eclipse.

File structure.

Making folders, nesting directories, arranging files.

It’s certainly not the blockbuster blog opener I had intended. Household chores elicit higher levels of adrenaline. Yet — as a method of collecting thoughts and giving each piece of work meaning — getting it right can have far greater benefits than most realise. A well designed structure can deliver you right to where you need to be in an instant, while those poorly designed will have you spending hours each week in an infernal expand-collapse nightmare of your own making.

Where do we begin?

Folders stemming from the root are usually the first port of call. They provide an immediate overview into how both the app and the development process itself are deconstructed. Obviously this will vary based on the purpose, context and individual needs of the project, but the vanilla-flavoured tree remains essentially the same. With this in mind, let’s dive into the layout of the example JavaScript project we’ll be building upon through the course of this series.

  • /lib – external libraries, such as Underscore, Backbone and jQuery
  • /src – source files and templates – more details further on
  • /specs – specifications for automagically testing your source
  • /build – all tools and scripts required to lint, test and compile your project
  • /scripts – the end game – compiled JavaScript files for production use

Whether other types of resources such as stylesheets and images deserve their own root folder, or whether they are bundled together with JavaScript in the src directory is up to you. After learning more about the build process which brings the entire show together you should have a clear idea of what approach is better tailored for your team.

Once libraries have been selected and the build process has been nailed, all of your sweet time will be spent navigating through the src and specs. This is where you should look at the app as a whole and begin drawing clearly defined lines around each component. The goal here is to conjure up a list of these modules, and sub-modules where necessary, to use when populating these directories. Aim to keep the number at any level of nesting below a comfortable maximum – if you’ve grouped more than ten folders together, try to plant them under another level of grouping. It’s always beneficial to keep in mind how you think the application will grow, so as to avoid excess file renaming and updates of path references later on.

Only leaf folder nodes will contain files, a la namespacing. You should always avoid placing files in a folder that already contains child folders – a folder should be dedicated to either grouping sub-folders, or holding the source files that make up a single building block, not both. The exception to this rule is folders that group resource types, such as JavaScript templates, stylesheets and images.

The specs folder should simply mirror the src – wherever you find a specifications file, you should be able to find the source file it tests sitting in an identical path within the source structure.
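The mirroring rule is mechanical enough to express in one line. A sketch (the .spec.js suffix matches the naming used elsewhere in this series, e.g. specs/util.spec.js testing src/util.js):

```javascript
// Map a source file to the spec that should test it, following the
// mirror-the-src convention described above.
function specFor(srcPath) {
    return srcPath
        .replace(/^src\//, 'specs/')
        .replace(/\.js$/, '.spec.js');
}

// specFor('src/core/app.js') → 'specs/core/app.spec.js'
```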

I’ll take this opportunity to give an example while shamelessly plugging tux, an open-source personal finance app I’m currently building. Beneath the source folder I’ve created a folder for each of the functions served – accounts, ledger, schedule, budget, reports, goals, and so forth. These parts are all underpinned by an additional core module and all split the templates out into a jst resource folder.

One potentially helpful feature in Eclipse when dealing with large file structures is Working Sets. Each defined set can group related resources, and when activated will display only those resources in the Project Explorer panel. This is a fantastic way of de-cluttering the explorer for when you know the work to be undertaken relates to just one or a few parts of the app.

So that’s enough of that. In the next part, we’ll take a look at writing the specifications that define each unit before any production code is written, and we’ll begin to look at how we can use a build process to neatly run these tests without leaving the editor.