After the recent fuss over Yahoo potentially shutting down Delicious, I thought it might be wise to automatically back up my Delicious bookmarks on a regular basis. Here’s what I did:

1. Created a shell script to download delicious bookmark data to my Dropbox folder

cat $HOME/Development/Scripts/backup-delicious.sh
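#!/bin/sh
# Download all bookmarks as XML (note: the Basic-auth credentials are embedded in the URL)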
curl -s https://BillHiggins:MyPassword@api.del.icio.us/v1/posts/all > $HOME/Dropbox/delicious-bookmarks.xml

This sucks down an XML file of current bookmark data and drops it in my Dropbox folder, which is automagically backed up to the Dropbox cloud.

2. Created a cron job to automatically run said script each week

crontab -e
# backup bookmarks every Friday at 2pm
0 14 * * 5 $HOME/Development/Scripts/backup-delicious.sh

7 April 2011 Update: My friend and former colleague Richard Backhouse has written an excellent companion to this blog entry talking about how he actually implemented many of these patterns in Jazz and Zazl.

Warning: The following blog entry will definitely be unintelligible to readers who are not software developers.

It may possibly be unintelligible to readers who are software developers. 🙂

Motivation

Between mid-2005 and the end of 2009, I worked in IBM Rational on the software infrastructure for the Rational Jazz browser-based user interfaces, hence “web UIs”. For various reasons I decided that we should provide an “extreme Ajax” architecture [1], which required us to load a large amount of JavaScript and CSS. Since people don’t like UIs that load slowly, we [2] spent a lot of time exploring patterns and techniques that would cause the JavaScript and CSS code to load as quickly as possible.

Recently an IBM Software Group Architecture Board workgroup asked me to document some of these techniques, and based on the positive response my internal write-up received, I thought I would tweak the write-up and publish it externally.

Preamble: Modular Software Development

In the “Motivation” section I mentioned that the design decision to build “Extreme Ajax” UIs led to a technical problem of needing to load a large amount of JavaScript and CSS code quickly. Users of course don’t care how you load code; they just care that the UI loads quickly. Theoretically you could solve the code loading problem by writing a single large JavaScript file and a single large CSS file, but of course developing this way would eliminate all of the benefits of modular software development, which have been discussed in depth elsewhere [3].

The only reason I mention this point is that the practice of developing modular Ajax software complicates the task of making the code load quickly, as the following sections will show.

Overview of Optimization Techniques

Fundamentally, there are only a small number of things you can do to make Ajax code load faster:

  • Deliver Less Code
  • Concatenate a Large Number of Files into a Smaller Number of Files
  • Compress Code Content
  • Cache Code Content

The following sections address each of these techniques in some detail.

Deliver Less Code

Though it may sound trite, the simplest way to improve code loading performance is to deliver less code. This can be accomplished in two basic ways:

  1. Design simpler web applications that require less code
  2. Do not load code until it is needed by the UI

It’s outside the scope of this topic to discuss the pros and cons of simpler vs. more complicated web UIs, so I won’t say much about the first approach other than to re-observe that speed is a feature and that little bits (or large bits) of functionality tend to add up and make a web page slower to load.

The second topic is a bit more mechanical and thus fits better into this blog entry. In a nutshell, the code required to power a particular UI is often much smaller than the total product code base, especially for feature-rich products like Rational Team Concert [4]. A common optimization technique in any system that has this characteristic [5] is to defer loading code until you know you need it. This technique is usually referred to as “lazy loading”.

One implements lazy loading by first understanding the relationship between a UI part (e.g. a web page, a dashboard widget, an Eclipse view, etc.) and the code required for that part to run. This requires that your programming language or framework have some notion of modules and module dependencies. Although the JavaScript language proper has no such construct, in IBM we use a JavaScript toolkit called Dojo that provides a module construct. Basically each JavaScript file can declare a module name (like “com.ibm.team.MyUIPart”) and can also declare dependencies on a number of other modules (like “com.ibm.team.MyUIPartController”). The set of all modules and their dependencies allows you to build an internal representation of the modules’ dependency graph [6]. Once you have the dependency graph, you only need a mechanism for defining UI parts and their top-level dependencies; then you can quickly and easily walk the dependency graph and calculate the complete set of modules required to execute the UI part.

For a fuller explanation of computing JavaScript and CSS dependency graphs, see Appendix B below.

There is some complexity involved with the dependency calculations to implement lazy loading, so it’s only worth pursuing if you can defer loading a large amount of unnecessary code. This was true in Rational Team Concert where I estimate a given UI (like the bug submission form) probably contains less than 5% of the total code base.

The simplest lazy loading approach is to consider the web page itself the “UI part” and thus load all of the JavaScript code that the page needs as a simple <script> tag when the page loads. However sometimes it’s necessary to load additional code later in a page’s lifecycle. This leads to some more advanced lazy loading techniques.

Consider a “dashboard UI” that aggregates a bunch of little UI widgets in a single web page. Using the simple lazy loading approach described above, we could calculate the total code needed by the dashboard by considering the dashboard framework and the set of all dashboard widgets to be the UI parts, and unioning the JavaScript modules transitively required by each of these.

However a common feature of a dashboard is to allow a user to add new dashboard widgets to the page. If we have a large number of possible widgets we probably don’t want to load these whenever we load the dashboard, especially when you consider that probably 95% of the time the user will not modify the dashboard. So how do we lazily load the code for the new dashboard widget? Well, as you’ve probably guessed, it’s just a matter of performing set subtraction between the currently loaded set of modules and the set required by the new dashboard widget, then loading the difference (i.e. the modules that are needed but not yet loaded).
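
In sketch form, the subtraction is just a filter. Here is a minimal, purely illustrative sketch (this bookkeeping is not the actual Jazz implementation):

var loadedModules = {}; // module name -> true; updated as each module arrives in the page

function modulesToLoad(requiredByWidget) {
    // keep only the modules the new widget needs that aren't already loaded
    return requiredByWidget.filter(function (name) {
        return !loadedModules[name];
    });
}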

Although this deferred lazy loading technique is easy to describe, it’s a bit tricky to implement, so I recommend you stick with the single file loaded as part of the top-level page. But this of course raises the question: “How do you go from a set of fine-grained modules to a single monolithic file?” This is the topic of the next section.

Concatenate a Large Number of Files into a Smaller Number of Files

Each JavaScript file or CSS file that is loaded requires its own HTTP request and load processing by the browser. Therefore each load of a JavaScript or CSS file introduces a non-trivial amount of latency, bandwidth, and local CPU overhead that delays the user from actually using the web application. A common technique for reducing this sort of overhead is to batch – that is, to turn a large set of small resources into a small set of large resources. As with the “lazy loading” technique, the most common approach to concatenating files is to first determine the subset of all files needed, and then to append them one after another after another into a single file. Though concatenation is conceptually simple, there are several design considerations.

The first consideration is “What should be the granularity of the concatenated file?” An extreme answer to this question is “A single file containing only the modules required by the user’s immediate UI”. This is the approach that the Jazz Web UI framework takes. Another approach would be “Several logical layers of file sets, in the hopes that the layers can be reused by multiple UIs and therefore benefit from cache hits”. This is the approach taken by the Dojo build system. It is hard to judge the pros and cons since the efficacy of each depends on other factors like the nature of the UI, the nature of the layers, and the usage of layers across different UIs.

A second consideration is concatenation order. This is important because one module might require the presence of another module to function correctly. This can be easily solved by concatenating (and therefore loading) modules in reverse dependency order. That is, the module that depends on nothing but is depended upon by everything loads first, while the module that depends on everything but is depended upon by nothing loads last. This is also important in a web UI that uses Dojo, since the Dojo module manager will make expensive synchronous XHR requests if it determines that a module requires another module that is not yet loaded. If you load your modules in reverse dependency order, each dojo.require statement becomes a no-op.
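
Mechanically, a depth-first, post-order walk of the dependency graph produces exactly this ordering. A minimal sketch (the adjacency-list graph representation is illustrative):

function concatenationOrder(roots, graph) {
    var order = [];
    var seen = {};
    function visit(name) {
        if (seen[name]) { return; }
        seen[name] = true;
        var deps = graph[name] || [];
        for (var i = 0; i < deps.length; i++) {
            visit(deps[i]); // emit each dependency before the module that needs it
        }
        order.push(name);
    }
    for (var i = 0; i < roots.length; i++) {
        visit(roots[i]);
    }
    return order; // concatenate the files' contents in this order
}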

While reducing the number of HTTP requests via concatenation eliminates much per-request overhead, it does not help with the bandwidth required to load a very large file. However, it is possible to significantly reduce the raw bytes of required code by compressing the code content. I cover this in the next section.

Compress Code Content

Imagine that you’ve taken 50 JavaScript files and 40 CSS files and concatenated them down to a single JavaScript file and a single CSS file. Each of these files may still be huge because of raw code size plus the size of whitespace and comments. There are two ways to make these files smaller:

  1. Gzipping
  2. Minification (JavaScript only)

Enabling gzip when serving JavaScript and CSS probably provides the best return on investment of any of these techniques. In Jazz we often see files get 90% smaller simply by running them through Gzip. When you’re delivering hundreds of kilobytes of code, this can make a large difference. There is some overhead required to zip on the server-side and unzip on the client-side, but this is relatively cheap vs. the bandwidth benefits of the Gzip compression. Finally, Gzipping has the nice characteristic that it is a simple transformation that has no impact on code content, as observed by the ultimate recipient (the browser runtime).
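
Any web server or servlet container can be configured to do this. As a purely illustrative sketch (using Node.js for brevity – this is not how Jazz serves code), conditional gzipping looks roughly like:

var http = require('http');
var fs = require('fs');
var zlib = require('zlib');

http.createServer(function (request, response) {
    var code = fs.readFileSync('bundle.js'); // hypothetical concatenated file
    var acceptEncoding = request.headers['accept-encoding'] || '';
    if (acceptEncoding.indexOf('gzip') !== -1) {
        // the client can handle gzip: compress the body and say so
        response.writeHead(200, {
            'Content-Type': 'application/javascript',
            'Content-Encoding': 'gzip'
        });
        response.end(zlib.gzipSync(code));
    } else {
        // rare fallback for clients that don't accept gzip
        response.writeHead(200, { 'Content-Type': 'application/javascript' });
        response.end(code);
    }
}).listen(8080);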

Minification is the process of stripping unnecessary code content (e.g. whitespace and comments) and renaming variables to shorter names in a self-consistent way. Unlike Gzipping, minification is a unidirectional transformation (you never “unminify”). The following code demonstrates minification.

// A simple adder function
function add(firstNumber, secondNumber) {
    var sum = firstNumber + secondNumber;
    return sum;
}

… might be transformed into …

function add(_a, _b) { var _c = _a + _b; return _c; }

Note that you can only minify tokens that meet the following criteria:

  • Their semantics don’t change when you rename them. By this rule you cannot rename ‘function’ since it is a JavaScript programming language keyword.
  • You can consistently rename all instances of the token across the entire set of loaded code. Because of this it’s dangerous to rename API names like ‘dojo’ or CSS class names since it is hard to find all references to these names. Therefore usually only local variables are renamed, but this can still yield a non-trivial savings.

Although it should be obvious, it is possible to use both minification and gzipping together, though you must always minify before gzipping.

Cache Code Content

A final high-yield technique to improve code loading performance is to cache responses to reduce either bandwidth used or latency. There are two basic caching techniques [7], with example HTTP headers for each sketched after the list:

  1. Validation-based caching – Where you check each time to see if you have the most recent version of some document, and if you already have the most recent version you load it from your cache rather than fetching the document again.
  2. Expiration-based caching – Where the document server tells you that a certain document is good to use for some specified period of time. If you need to load the document and its expiration date is later than the current time, you may load the locally cached version of the document without even asking if you have the most recent version.
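
In HTTP terms, the two styles correspond roughly to the following response headers (the values are illustrative):

Validation-based (the browser revalidates with a conditional GET – If-None-Match / If-Modified-Since – and gets back a terse 304 Not Modified when its copy is still current):

Cache-Control: no-cache
ETag: "a1b2c3"
Last-Modified: Tue, 05 Oct 2010 17:00:00 GMT

Expiration-based (the browser may reuse its copy without asking until the expiration passes):

Cache-Control: max-age=31536000
Expires: Wed, 05 Oct 2011 17:00:00 GMT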

Obviously expiration-based caching is going to be faster than validation-based caching [8] (because you don’t even have to ask) but it is trickier to implement because you have to know when code is going to change. Consider the following scenario:

BigCo updates its web site every six weeks and therefore sets expiration dates on all of its web code to six weeks from the time of last deployment, which means that each user only has to load new code once after each deployment. However, two weeks after a certain deployment, BigCo discovers a nasty security bug in its code that forces an unexpected patch deployment. Any customer who accessed BigCo’s web site since the previous planned deployment does not receive the patch, because their browsers have been told not to load the new code for another four weeks.

The solution we found to this problem for Jazz was to use validation-based caching on web pages, and expiration-based caching with versioned URLs on JavaScript and CSS files referenced within the web page. Here’s an example:

<html>
<head>
    <script type="text/javascript" src="../code.js/en-us/I20101005-1700"></script>
    <link rel="stylesheet" type="text/css" href="../code.css/en-us/I20101005-1700" />
</head>
</html>

In this example, the HTML page includes a script and CSS reference to files with versioned URLs, where the version ID corresponds to the build in which the code was last changed. Each code request responds with an expiration header of plus one year. If the code were updated, the value of the URLs would change, which would prompt the browser to fetch the new file, which again will contain an expires header of plus one year. By driving the state off of the URL rather than a last-modified header, we are able to use expiration-based caching safely. And since in our Jazz applications the size of the JavaScript and CSS is much larger than the size of the HTML pages, the vast majority of our code will be loaded from the user’s disk on subsequent visits to a web UI.

You can also take advantage of caching with CSS references to images. Basically you can rewrite any ‘background’ image URL to use a similar versioned URL and a long expires header. This can be an especially big win since some images get loaded many times.
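
For example, a build step might rewrite image references like this (the versioned-URL scheme is illustrative, mirroring the code URLs above; each versioned image response would then carry a one-year expires header):

/* source CSS, as the developer writes it */
.star-icon { background-image: url(images/star.png); }

/* CSS as served, after build-time rewriting */
.star-icon { background-image: url(../image/star.png/I20101005-1700); }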

Appendix A: Debug-ability

A related issue to optimization is the ability to debug the web application. Several techniques above (concatenation, minification) change the code content vs. how it appears in a developer’s workspace. In fact, raw optimized code is effectively impossible to debug because of its inscrutability. The basic solution to this problem is to enable a “debug mode” where the code is loaded in a less optimized form, so that the code matches what the developer sees in his or her development workspace. In the Jazz Web UIs, we’ve made debug-ability a first-class concern since day one; you can enable it simply by appending ?debug=true to any web page. The page may load significantly slower, but this is tolerable: you only experience it when you explicitly ask for it (a user would never see it), and without it you would not be able to debug the code at all.

Appendix B: Determining Dependency Graphs in JavaScript and CSS

Several of the techniques above (lazy loading and concatenation) depend on understanding the dependency relationships across all sets of code. This section describes the basic mechanics of building the dependency graph for JavaScript and CSS.

The abstract solution for any dependency graph is straightforward: determine what each module depends on via inline dependency statements or external dependency definitions and then compute the union of each module’s dependencies to build the dependency graph.

It is straightforward to compute a JavaScript dependency graph with Dojo. Simply treat each dojo.require statement as defining a unidirectional dependency relationship between the module containing the dojo.require statement and the module referenced by it. Each of these “A depends on B” dependency statements is an arc in the overall dependency graph, so once you’ve analyzed all of the Dojo-based modules, you have your JavaScript dependency graph. Obviously a similar technique could be used with non-Dojo modules, either via comparable inline statements (foo.dependsOn("bar")) or via external dependency definitions (foo.js depends on bar.js).
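
A minimal sketch of harvesting these arcs from a module’s source (a regular-expression approximation, which is roughly what many build tools do):

function findDependencies(moduleSource) {
    var deps = [];
    var requireCall = /dojo\.require\(\s*["']([^"']+)["']\s*\)/g;
    var match;
    while ((match = requireCall.exec(moduleSource)) !== null) {
        deps.push(match[1]); // e.g. "com.ibm.team.MyUIPartController"
    }
    return deps; // the modules this module directly depends on
}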

Once you have the JavaScript dependency graph, you simply need to know the root nodes needed by any particular UI. Consider the following simple dependency graph (the arrow direction indicates the “depends on” relationship).

A -> B
B -> C
B -> D
C -> E

If a UI knows that it has a root dependency on module C (“Jazz Work Item Editor”), then it needs only load C and E. However, if a UI depends on module A (“Team Concert Web UI”), then it needs to load all of the modules. This introduces a secondary dependency relationship: the relationship between some logical notion of “a UI” and “the set of top-level JavaScript modules required to run the UI”. In Jazz we express this relationship via server-side Eclipse extension points (e.g. “the page at /work-items depends on the module com.ibm.team.WorkItem.js and all of the modules depended upon by com.ibm.team.WorkItem.js”); however, the relationship could be specified in any number of ways.
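
Here is a sketch of the walk over the example graph above, with each module’s direct dependencies represented as an adjacency list:

var graph = { A: ["B"], B: ["C", "D"], C: ["E"], D: [], E: [] };

function requiredModules(root) {
    var seen = {};
    function visit(name) {
        if (seen[name]) { return; } // already visited (also guards against cycles)
        seen[name] = true;
        var deps = graph[name] || [];
        for (var i = 0; i < deps.length; i++) {
            visit(deps[i]);
        }
    }
    visit(root);
    return seen;
}

// requiredModules("C") -> { C: true, E: true }
// requiredModules("A") -> { A: true, B: true, C: true, D: true, E: true }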

There is no common solution to build a CSS dependency graph, even with Dojo. Theoretically you could just externally define a bunch of dependencies between CSS files.

A.css -> B.css
B.css -> C.css

… and then use a technique similar to the one described above, where each logical UI declares its dependency on a top-level CSS module (or modules) and you simply walk the dependency graph from the top-level module (or modules) until you have all of the CSS.

In Jazz we took a slightly different approach. Rather than declaring CSS to CSS dependencies, we declare JavaScript to CSS dependencies and then derive the CSS dependency graph from the JavaScript dependency graph:

A.js -> B.js
A.js -> A.css
B.js -> B.css

Using this example, we know that whenever a UI requires A.js, it transitively requires the JavaScript A.js and B.js and the CSS A.css and B.css. We chose this approach because it allowed us to surface only JavaScript as a top-level API; CSS dependencies remain an internal implementation detail and can therefore change without breaking anyone. For instance, in the example above we can imagine that B.js is a shared library delivered by team B and A.js is an application delivered by team A. When A gets loaded, all of the JavaScript and CSS provided by both teams A and B will be loaded, but team A need not care about the CSS delivered by team B.
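
In sketch form (the declaration mechanism shown is illustrative, not the actual Jazz one):

var cssDependencies = { "A.js": ["A.css"], "B.js": ["B.css"] };

function requiredCss(jsModules) {
    var css = [];
    for (var i = 0; i < jsModules.length; i++) {
        var deps = cssDependencies[jsModules[i]] || [];
        for (var j = 0; j < deps.length; j++) {
            if (css.indexOf(deps[j]) === -1) {
                css.push(deps[j]);
            }
        }
    }
    return css; // e.g. requiredCss(["A.js", "B.js"]) -> ["A.css", "B.css"]
}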

A final issue of course is circular dependencies. Basically you have to decide how tolerant your system will be of circular dependencies and how it will try to recover from them. In Jazz I believe we try to tolerate them but loudly complain about them via WARN-level log messages, so people usually eliminate them pretty quickly. In my view, a circular dependency is a symptom of poor design, an implementation bug, or both.

Footnotes

[1] In hindsight I believe we went a bit too far with the “extreme Ajax” approach, but that’s for another blog entry.

[2] When I say “we” in this article I basically mean myself and Richard Backhouse, who collaborated on the design of the Jazz Web UI code loading infrastructure between 2005 and 2009. Though I was quite involved with the design, Richard implemented everything. Randy Hudson took over this code in 2010 and has added quite a few of his own ideas and improvements.

[3] My favorite writing on modular software development is Clemens Szyperski’s “Component Software, 2nd Ed.”

[4] The very first Ajax code we wrote starting in early 2006 evolved into the Rational Team Concert web UI. Later we factored out a subset of this code to be the Jazz Foundation web UI frameworks and common components which were then used by a number of other Rational products. You can actually see Rational’s self-hosting instance of Rational Team Concert at Jazz.net, though this requires you to first register with Jazz.net (frowny face).

[5] E.g. the Eclipse Platform, from which I learned about lazy loading, and many other design patterns.

[6] It is obviously necessary to avoid circular dependencies between modules.

[7] I’ve written up a longer article on validation-based caching vs. expiration-based caching for anyone interested.

[8] It’s a bit trickier than it should be to make expiration-based caching work consistently across all browsers. The short version of the solution is to use every possible directive you can to tell the browser to use expiration-based caching; e.g. expires, cache-control, etc. Mark Nottingham has a blog entry with some more detail on this topic.

I had a fun chat today with Coté on his “make all” podcast. Here are some of the topics I remember discussing:

  • What’s important (and what’s not important) about HTML 5
  • The increasing ubiquity of JavaScript
  • Java-to-JavaScript translation technologies (e.g. GWT and JDojo)
  • Ajax, RPC, and REST
  • The nirvana of mobile devices plus cloud-based data
  • I recommend Nick Carr’s “The Shallows”

Happy listening!

Update: I’ve created a test page for this scenario, but I haven’t had a chance to test it yet on my iPad.

In our Jazz UIs, we tend to use “hover previews” quite a bit. That is, if you hover over a link, after a second or so it will show a small preview of the resource at the other end [1]. Jazz style guide example below (copyright IBM).

This was always broken on the iPhone’s Mobile Safari browser because I couldn’t figure out how to perform a mouseover action on a mouse-less interface.

I just noticed that the iPad seems to support onmouseover. I believe I’ve observed the following behavior:

  • If a link has no “onmouseover” actions (e.g. hovers), a tap follows the link.
  • If a link has an “onmouseover” action, a tap activates the onmouseover action and a double-tap follows the link.

Obviously this implies that you should provide visual indications to your users as to whether links have onmouseover actions. For the Jazz links with hover previews, we immediately decorate the link on mouseover (different color, double-underline, with a small chevron) and if the user remains over the link for a second or two we then show the preview.
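
In sketch form, the two-stage hover looks something like this (the timings, class name, and the showPreview / hidePreview helpers are all illustrative):

var link = document.getElementById('work-item-link'); // hypothetical hover-preview link
var previewTimer;

link.onmouseover = function () {
    this.className += ' preview-armed'; // immediate decoration: color, double-underline, chevron
    var target = this;
    previewTimer = setTimeout(function () {
        showPreview(target); // hypothetical helper that renders the preview popup
    }, 1500);
};

link.onmouseout = function () {
    clearTimeout(previewTimer);
    this.className = this.className.replace(' preview-armed', '');
    hidePreview(); // hypothetical helper that removes the popup if present
};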

[1] I realize that onmouseover actions attached to links represent a UX minefield. A page with too many / too aggressive hover actions can feel like an actual minefield!

I know this is total n00b stuff but I always forget the particulars, so I thought I’d write it down in a blog entry.

To add a custom scripts directory to your path, do the following:

In your home directory, create a file called .bash_profile with something like the following:

if [ -f $HOME/.bashrc ]; then
        . $HOME/.bashrc
fi

Then in .bashrc, put something like the following:

export PATH=$PATH:$HOME/Scripts

Note that these two files are sourced by the shell, so they don’t need to be executable – but the scripts in your scripts directory do:

chmod +x $HOME/Scripts/*

For extra credit, create a softlink from your Dropbox folder to your scripts folder and you can have the same scripts on all of your computers!

ln -s $HOME/Scripts $HOME/Dropbox/Scripts

I hit a very annoying issue using Audible.com on my Mac with iTunes and my iPhone. I thought I’d document it in case others hit it.

Setting the Stage

I got a MacBook Pro in early 2009. I buy audiobooks from Audible.com and store them in iTunes. From iTunes I sync the audiobooks to my iPhone so that I can listen to them while driving, cleaning, etc. The audiobooks are DRM-protected (urgh), so the first time you try to play an audiobook in iTunes it asks you to enter your Audible user ID and password. It then validates your Audible user ID and password and verifies that you have purchased the audiobook file. Once you have authorized your Audible.com account in iTunes, you never have to enter your Audible.com user ID and password again. Also, once you have authorized your Audible.com account in iTunes, you can sync audiobooks to your iPhone.

The Symptoms

Recently IBM got me a new 2010 MacBook Pro for work (thanks IBM) so I gave my 2009 MacBook Pro to my wife. So that I wouldn’t have to set everything up from scratch, I backed up my entire 2009 MacBook Pro to an external drive using Time Machine and then restored this configuration onto the new MacBook Pro. After a couple of hours of data transfer both ways, my new MacBook Pro was a clone of my old MacBook Pro, or so I thought.

Unfortunately I realized quickly that something was wrong with my Audible.com audiobooks. The first symptom was that none of my Audible.com audiobooks were on my iPhone. I tried to play one in iTunes and it asked me to authorize my Audible.com account. This was a surprise because I would have expected the Time Machine backup / restore to have made this unnecessary (since my iTunes configuration was already authorized), so I typed in my Audible.com user ID and password, hit sync, and sure enough the audiobooks synced successfully to my iPhone.

Later that week I noticed that the audiobooks had disappeared from my iPhone again!

The Bug

To make a long story short (too late), after some experimentation and support discussion with Audible.com, I discovered the following steps to reproduce the problem.

  1. Start iTunes
  2. Double-click an Audible.com audiobook
  3. iTunes prompts you to authorize your computer for your Audible account by asking you to enter your Audible.com user ID and password
  4. Enter Audible.com user ID and password and successfully authorize
  5. Sync iPhone
  6. Audiobooks copy successfully
  7. Close and re-open iTunes
  8. Sync iPhone
  9. Books are deleted from iPhone

Basically anytime I closed iTunes, my Audible authorization was lost. This indicated that authorization was succeeding but was not being persisted to disk. I sent the steps to reproduce to Audible customer support.

The Solution

As I suspected, it was a persistence problem. The Audible support person pointed me to the file /Library/Preferences/com.audible.data.plist (a Mac property list file) where (apparently) the Audible / iTunes code persists your Audible authorization information (I peeked in the file and it contained a single ‘data’ property whose value was 4 KB of encrypted something or other). Interestingly, even though I had by this point authorized my Audible account in iTunes many times that week, the file’s date stamp was from last year, indicating the file wasn’t getting updated and thus my authorization went *poof* whenever I closed the iTunes application and its process died.

The Audible customer support rep first suggested checking the file’s permissions to make sure that my user account was authorized to modify the file. No problems there – I had read/write access to the file. Her second suggestion was simply to close iTunes, delete the Audible property file, open iTunes, and reauthorize the computer. I did this, and sure enough when I reauthorized the computer, a new property file was created with the current time as the timestamp and I was able to close and re-open iTunes without having to re-authorize.

Lessons Reiterated

This experience reinforces two software principles I already believed.

  • DRM sucks ass
  • Data persistence is hard

Solution Proposal

I sent the following suggestion to the Audible.com support person:

Thank you <redacted>. It’s working now.

FYI, it was not a permissions issue – the permissions were correct. I had to delete the com.audible.data.plist file and restart iTunes / reauthorize Audible to create a new version of the file.

I suspect this is related to my restore from Time Machine on to a new computer. I would suggest your engineers who work on the Mac iTunes integration test this case:

1. On a Mac, authorize to read Audible content via iTunes
2. Backup a Mac onto Time Machine
3. Restore from Time Machine onto a new Mac
4. Attempt to play Audible content

I’m not sure what the ideal behavior is (e.g. just working or requiring a single authorization of the new machine) but I know what should not happen is what happened to me 🙂

Thanks again for your help.

– Bill

Occasionally I need to produce YouTube HD videos showing me doing something in a web browser. Up till now I’ve had to painfully, manually resize the browser window to 1280×720 so that the video didn’t get unnaturally shrunk or expanded by my screencast rendering software. Tonight I figured out a way to automatically resize my browser window to the exact dimensions using AppleScript.

Here is the magical incantation:

tell application "Safari"
    activate
    set the bounds of the first window to {0, 0, 1280, 720}
end tell

Run this application in AppleScript Editor (/Applications/Utilities/AppleScript Editor) and your browser will be the perfect size for recording movies. You can save the script to a folder for later reuse.

Shortly before winter vacation, an IBM colleague contacted me to learn more about how extensibility – particularly web UI extensibility – works in the IBM Rational Jazz Platform on which I work. After typing up my response I thought it might be interesting to a broader audience so I’ve reposted it here (with some minor tweaks for clarity). Note that some but not all of the links below require Jazz.net registration (sorry).

—–

Hi <redacted>,

You are correct that as of Jazz Foundation 1.0.x (shipped this year), one contributes “viewlets” (a Jazz-specific widget mechanism comparable to OpenSocial Gadgets and iWidgets) to Jazz Dashboards via a server-side Eclipse extension point.

However, the Jazz extensibility story is more complex than OSGi bundles and Eclipse extension points. I will explain this below, with the assumption that you are interested.

Original Extensibility Mechanism Based on Bundles and Extension-points

When we first started the Jazz server work in 2005 we made a design decision that the Jazz server would be based on Java/OSGi and you contribute capabilities to the server by adding additional OSGi bundles. I’m guessing you’re very familiar with this architecture; it’s of course very similar to the Eclipse workbench except instead of contributing actions and views (e.g. to implement a Java IDE) you contributed web services and data definitions (e.g. to implement a bug tracking system). We went so far as to enable this style of programming for Dojo-based Ajax code so that developers could add “web bundles” and have the Jazz Web UI framework discover and surface their web capabilities in the Jazz Web UI. The Dashboard viewlets are one example of an extension-point-based extension. There are other extensions also enabled by extension-points (as in Eclipse, this extensibility model is open-ended).

Shift Towards a RESTful Architecture

Around 2007 we came to the conclusion that the “extend the server” model wouldn’t enable us to achieve our broad goal of “enabling integration of heterogenous tools across the software lifecycle”. There were several blocking problems with this architecture. I will list a few below:

  • We came to the conclusion that the RPC-style service APIs were too fragile over time and would make it difficult to allow for independent upgrades of different tools in an environment containing many tools.
  • We could only support a narrow range of technologies (i.e. OSGi-ified Java and JavaScript code built on top of our server architecture). This limitation is readily apparent when you compare it to something like OpenSocial Gadgets or iWidgets, where you need only specify a gadget/widget URL to achieve integration.

This caused us to take a step back, and we ultimately decided to use the REST architectural style to achieve integration across tools. For instance, this year we enabled several integrations between our Requirements Composer, Team Concert, and Quality Manager products using RESTful protocols and APIs. For an example of these integrations, see this video on Jazz.net that demonstrates how we use REST and Discovery to drive linking and “picker” integration between two tools.

Implications of RESTful Architecture on Extensibility Model

The shift to the RESTful architecture left a gap w/r/t extensibility. I.e. how do we meet extensibility needs if we can’t assume a homogeneous OSGi/Java/JavaScript-based programming model? The solution we came up with was something that we generically call “Discovery” (an overloaded term, I know). Basically each tool in a Jazz environment has a single “root services” document that describes the basic capabilities of the tool and has pointers to other documents that contain more details about a specific service. You configure tools to know about each others’ service documents, and from there they can discover and (if appropriate) consume each others’ capabilities using spec’d protocols (as opposed to physically installing code from Tool A within Tool B’s environment).

For instance, we have a “viewlet provider” service type that announces that the tool in question has viewlets that can be embedded in Dashboards. Here’s an example of how it works:

  • Tool A has Jazz Dashboards capabilities
  • Tool B claims to be a “viewlet provider”
  • Tool A is configured to connect to Tool B (this includes specifying Tool B’s root URL and configuring an OAuth consumer / provider relationship)
  • Tool A is now able to host Tool B’s viewlets in its Dashboards

The interesting thing about this is that Tool A and Tool B could be implemented in wildly different ways… e.g. OSGi/Java vs. PHP.

* As an aside, we are currently working to deprecate Jazz Viewlets and adding the ability to host iWidgets and OpenSocial gadgets in Jazz Dashboards. This is in plan for 2010.

Going Forward

Even though we have moved to a REST-centric architecture, OSGi and Extension Points are still important, but less so than they used to be. We will continue to ship and use the OSGi and Extension Point-based Ajax frameworks and building blocks for the near future, but our major development investments and the Jazz SDK’s focus will only assume fundamental web technologies like HTTP, HTML, JavaScript, and CSS and provide optional higher-level frameworks around emerging web standards like OAuth and OpenSocial.

I hope this was useful for you.

– Bill

Don Knuth: Email is a wonderful thing for people whose role in life is to be on top of things. But not for me; my role is to be on the bottom of things.

I used to read a lot of tech books, like an hour or two a day. Several years, several promotions, two kids, and one deteriorating eye later, I realized I wasn’t reading much anymore. The combination of work commutes / Audible.com / iPod helped me keep up with novels and non-techy non-fiction, but I still had trouble keeping up with tech books.

Luckily, tech publishers are starting to publish more and more ebooks. Here’s some of the good and bad points of reading tech ebooks.

Tech Ebooks: The Good

  • My MacBook Pro’s multi-touch trackpad makes it very easy to zoom and scroll, so I end up with huge font that I can read without glasses from several feet away.
  • I can carry all of my ebooks on both my computer and iPhone.
  • Publishers like O’Reilly and Pragprog have relatively mature ebook stories: DRM-free, mobile integration, availability in both computer-friendly (PDF) and mobile-friendly (EPUB) versions, and notifications of free updates for new printings.
  • I can quickly search across all ebook content using Spotlight and quickly open ebooks using Quicksilver and Preview.
  • They’re significantly less expensive than their dead-tree counterparts and you can acquire them instantly vs. a trip to the bookstore or, more likely, a two-day wait for an Amazon Prime delivery.
  • Using Dropbox you can easily replicate your ebooks across all of your computers while easily staying under Dropbox’s 2GB free account max.

Tech Ebooks: The Bad

  • Can’t (ethically) lend to friends; can’t give away when done with a book.
  • I have a 1st gen Kindle I never use. While I love the selection, price, and integration with Amazon.com, I hate the hardware design, the user experience, the $@#! DRM, and the fact that I can’t (easily/legally) view the books on my computer. Don’t buy a Kindle.
  • The selection of ebooks is relatively small vs. all books I’d want to buy. For instance, no How Buildings Learn ebook (though I’ve sent the publisher an email on this).
  • Many ebook-selling publishers still insist on using DRM. This is stupid. As the argument goes, honest people like me will gladly pay for quality IP and a good user experience. DRM-using publishers: please learn from O’Reilly and Pragprog.
  • You need a different account for each publisher. Luckily password management apps like KeePassX simplify this, but it’s still a pain.

Tech Ebooks I’ve Recently Bought

Now, for your reading pleasure, here are the ebooks I’ve bought in the past few months (updated 25 Oct. 2009).