My friend Coté and I started a podcast called Under Development. It’s been really fun to chat with him about software and cultural topics every other week or so. Please give it a listen and tell me what you think, along with any feedback or topic suggestions.
Below is the URL for the feed if you want to subscribe via a podcast app. I use Pocket Casts on my iPhone.
Note to current and future colleagues: If I ever put the term ‘architect’ in my job title, I encourage you to punch me in the face.
My Twitter posts, which are public, automatically propagate to my Facebook timeline, which is private. I hadn’t considered “How will people react?” when I tweeted, and I was surprised by the range of reaction across Twitter and Facebook. It ran the gamut from “I’m amused” to “I’m surprised” to “I agree” to “I’m offended”. Whenever you have a range of reaction like this, it usually means you’re on a point that people either find interesting, or care about, or both. Given this, I decided to write my thoughts down more fully.
I don’t have a full thesis worked out in my head, so I’ll explain my feelings through a series of stories from my career.
I distinctly remember meeting Simon Johnston. It was approximately 2003, at IBM Research in Hawthorne, New York, for an IBM Academy study on some software engineering topic. At the time I worked in the old Application Management Services division and Simon had recently joined IBM through our acquisition of Rational Software. I distinctly remember meeting Simon for two reasons:
I was blown away by his degree of sophistication and articulation when he spoke about software engineering and architecture
I remember him telling me – in a very polite way – that I was basically a bozo for calling myself an architect
At the time I was about 25 and had been out of college for two years. In that part of IBM, the career progression for software developers was well-defined: you got promoted out of development and into architecture. At the time, my primary work goal was always “get promoted, get promoted” so I had managed to move from development to architecture in just a year and a half. Simon’s point was that I hadn’t done anything interesting enough to call myself an architect. Also, at the time I was the breed of architect who spends all his time talking about business requirements and drawing the occasional UML diagram, while eschewing code. I don’t recall his exact words, but Simon made the point that good architects – i.e. non-bozos – are highly technical, work very closely with development, and still write important code.
I hadn’t thought about it until just now, but Simon’s description of architect in 2003 pretty much describes his role today as CTO of Amazon Fresh.
I had an unusual start with the Rational Jazz project, as I described a few years ago. The Jazz team were the descendants of the Eclipse team and thus OTI, a late-1990s IBM acquisition. OTI had a wonderful development culture, which I learned about once I joined the Jazz team and over time embraced fully. The Jazz team leadership had a small presentation called “OTI Culture” that distilled its essential values and principles. It’s short enough to include here:
“People, not organizations, build software.”
We succeed because of our people.
Our culture attracts top people and empowers them to succeed.
Our culture is the impetus for our success. Without it we could not exist.
If it helps ship products, it’s good. If not, it’s bad.
He who ships gets to speak.
Do the right thing.
Get it done or get out of the way.
Ask: Why are we doing this?
Having fun is survival, not icing.
The team succeeds or fails together.
Everything we do reflects on all of us – a matter of personal pride.
A responsible, caring organization attracts responsible, caring people.
When we ship, we all ship.
You can’t build good software without emotion – you have to care.
What the leaders are, the people will soon become.
You do not learn by agreeing with people.
Convictions are meant to be acted on.
Now think about these principles in the context of architects you might have known. Did their work help ship products? If you were a developer, did the architect empower you to succeed?
On Jazz, there were several people who you could call architects, even though they didn’t call themselves architects. They were John Wiegand, Erich Gamma, and Scott Rich. Each of these guys was essential to helping us ship products, and each of them empowered developers to succeed. I don’t think it was even a conscious thing – it was baked into their DNA.
The trick was that they only hired really strong developers, not blubs. And because of this, they could confidently delegate quite a bit of technical decision-making – even architectural decisions – to these developers. Their role was to establish priorities, provide light guidance, and to spot patterns and connect dots across different components. By delegating technical decision-making, we were able to move faster, developers felt more of a sense of ownership, and decisions were made closer to the code, and thus reality.
For instance, my first job on the Jazz team was to create our web UIs. Because this was 2005 at IBM, I started with … oh god I hate to say it … JSF. Let’s just say that in two months of work, it didn’t go well. One Friday afternoon Scott pulled me aside and said that he, John, and Erich had talked, and they were observing that the web UIs weren’t progressing fast enough or with enough pizzazz. He suggested that the team take two weeks to experiment with alternative technologies that were popular outside IBM and come back with a recommendation. They only gave us two requirements: we had to come up with an extensibility story to enable future products and it had to be cool. We looked at several technologies and ultimately chose to have a go with a single page Ajax/REST architecture that was inspired by Gmail, which was new at the time – in fact no one at IBM was using non-trivial Ajax in a product back then. Scott, John, and Erich supported us to give it a try, and it led to a great result that probably tens of thousands of people still use every day for their work.
To me this is a great example of architects being helpful. Give developers a clear problem statement, provide gentle course correction when something’s not going well, but otherwise let the developers do their thing.
A couple of years ago I was at the O’Reilly campus in Sebastopol for some meetings prior to Foo Camp. Mike Loukides was nice enough to pull me into a demo where some O’Reilly web developers were showing an early version of O’Reilly Atlas to Tim. During the demo, I realized that Peter Norvig from Google was also in the room. I’d never met him before but I’d certainly heard of him, since he was and is Director of Research at Google. A few hours later during a Foo Camp barbecue, I introduced myself and asked him how a super senior guy like him was able to keep it real and stay technical.
His answer was as simple as it was brilliant. Beyond doing his own coding for work and fun, he said that he regularly performs code reviews with his researchers. He said this has bi-directional benefits. For him, it helps him keep current on emerging techniques and technologies since his researchers are always on the cutting edge. And for his researchers, he’s able to provide insights based on his deep and broad experience and also connect dots across projects and researchers.
The reason I wrote that negative tweet is that recently I’ve been running into a bunch of architecture astronauts. If you’re a little younger and not familiar with this term, take a minute to read this classic 2001 article from Joel Spolsky where he coined it. Maybe read it twice – it’s important.
My job these days is essentially the same sort of architect as John, Scott, and Erich were on Jazz. But I don’t feel comfortable calling myself an architect because there are so many architecture astronauts running around and I want to avoid guilt by association. Also, I feel that calling myself an architect would somehow be gauche – like I’m sure Wes Anderson doesn’t refer to himself as an auteur, and I know John Allspaw doesn’t use the phrase DevOps much.
All that being said, every day I aspire to be the sort of architect that Simon, Scott, Erich and John are. I try to avoid the trap of endless meetings and PowerPoints. I try very hard to stay connected to the code and the developers. Finally, where it’s within my sphere of influence, I try to nuke the astronauts and empower the developers. Hopefully, my sphere of influence will continue to expand.
The last two years have zoomed by, mostly because of being extremely busy at work. I might come back to work stuff in a later entry, but this post is focused on my recent experience buying and using the new PlayStation 4 – or PS4 – video game console.
New game consoles seem to arrive every eight years or so. In the previous generation I bought both a Nintendo Wii and an Xbox 360. I don’t remember playing either very much, though like many families with young kids, I’m fairly certain we played a good amount of Wii Sports in the early days. I played the 360 again for a spell after I built a nice home theater, but mostly they both just collected dust.
In the same timeframe, Apple released the iPhone and the iPad and I did the majority of my game playing on them – still a small amount, probably a few hours a month – but more than on the Wii and Xbox, where each time we’d play, we’d have to replace dead batteries.
Because of this general disinterest in consoles, I was only vaguely aware that nextgen consoles were arriving. For instance, I didn’t even realize the Wii U was a nextgen console until maybe six months after its release – I thought it was just some Wii add-on. I heard about the new Xbox One and PS4 via offhand comments in tech podcasts focused on other topics.
As 2013 wore on, the only real awareness I had of either console was that they were in short supply, which is pretty standard for new consoles, especially around holiday time. So I was surprised one Saturday morning in mid-December when I walked into the local Target at 8:10 AM, planning to get something else, and asked the guy in the video game area “So when will it be easy to get an Xbox One or PS4?” He said in a very serious voice “The time is now!” and told me that they had received about a dozen Xbox Ones and half a dozen PS4s that morning. I asked him which one he recommended and he said the Xbox One if you want a general purpose home media device and the PS4 if you were focused on great games.
I already use the Apple TV for home media and have bought far too many movies to ever consider switching, so I said “I’ll take a PS4!”. I got the second-to-last one; 90 seconds later, another 30-something guy showed up and picked up the last one. I bought an extra controller and a game called “Knack”, and I brought the PS4 home and hid it in the garage so I could surprise everyone for Christmas.
A week later my wife told me that Target had suffered a major security breach and that she needed to review transactions. Disappointed, I told her the secret about the PS4 which I had hoped would be a surprise. She was kind of indifferent since she’s not into video games but she was also supportive because she knows I enjoy them.
After that I read some PS4 game reviews. Most of the reviews for Knack were extremely negative, so I decided to return it. I brought it back to Target and asked the video game guys about other games. During this conversation I learned that on the PS4, you don’t actually need to buy discs – you can download the same games, AppStore style. So I returned Knack and didn’t buy any other games.
My last day of work in 2013 was Friday December 20th, so the next day I woke up at 5am and disconnected the Xbox 360 for good. It took me approximately 2 1/2 hours to disconnect the Xbox 360, find the original box, and put all of the original parts back into the box in a somewhat reasonable way. It then took me about 15 minutes to unbox and connect the PS4 to my home theater.
This unboxing and connection process was my first clue that the PS4 was way better than the Xbox 360:
The contents of the box were well-organized and easy to remove
The connectors were much simpler:
A single HDMI cable (PS4) vs. a proprietary component video cable and optical audio cable (Xbox 360)
A simple power cable rather than the massive Xbox 360 power brick
An ethernet cable, for improved connection stability and bandwidth
The system software setup was fast and intuitive, much like setting up a new iPad
Buying games was a bit more challenging. I tried to buy a couple of games from the Sony Entertainment Network site, but my credit card always failed. Luckily I had some time since Christmas was a few days away. After a couple of days of trying and failing, I trawled some Sony forums and it sounded like a common problem – something about Sony not being able to program ecommerce or something, which wasn’t too surprising when I recalled reading about the incompetence that led to their own massive hack a few years ago. So I called the support number, and after providing a bogus quasi-explanation about something possibly being wrong with my credit card, the support person recommended paying with my credit card indirectly by using Paypal. I tried this and – voilà! – it worked. I bought Need for Speed: Rivals and Assassin’s Creed 4.
I thought I would hold out until at least Christmas Eve, but after getting the system all set up and managing to buy some games, I couldn’t wait. So Saturday after my kids finished their Chinese school exam, I invited them to the home theater antechamber, where all the components live. I asked them “What’s different in here today?” Neither one of them noticed the missing Xbox 360 at first, so I asked them “Where’s the Xbox 360?” Neither of them got it at first, but finally my son noticed the new big black component on the middle of the entertainment center. He said “What’s that?” and I said “Read it”. He looked at it and said “PS4?!” By now he’d heard how hard they are to find.
So we played that day and every day since. I’ve bought several more games and I’ve been waking up early to play Assassin’s Creed 4, as I decompress from a tough work year.
A couple of other nice features I’ve discovered since day one:
The PS4 plays Blu-ray discs and has an Amazon Prime Video app; this means I can get rid of my hated, slow Blu-ray player. I generally use Apple TV to watch purchased movies, but for some reason the six Star Wars movies still aren’t available on iTunes, despite many emails to Tim Cook on this topic
The PS4 has an option that allows you to play the audio via headphones connected to the controller. I’ve been using this for my early morning games of Assassin’s Creed, so the sound from the home theater doesn’t wake everyone.
I really like the size and feel of the PS4 controllers much better than the bulkier Xbox 360 controllers. Also, the PS4 controllers use a rechargeable battery, which is much nicer than worrying about changing AA batteries when they die.
The PS4 controllers charge via a micro-USB connection, which really makes me appreciate the design of the Apple Lightning connector even more than I did before.
Now it’s time to take the kids for a walk around the neighborhood.
Like last Spring, I am looking for some college students to work with me on a project at IBM. This time though it’s not an Extreme Blue project, it’s a Summer/Fall co-op.
The problem area is this: How can we make development activities (e.g. design and coding), cloud-based infrastructures, and operational activities (e.g. deployment, monitoring) work together seamlessly to allow us to evolve IT and other types of living systems (e.g. buildings) faster and with higher quality?
The ideal candidate is a junior or rising senior undergrad or graduate student in computer science or a similar major. Must be extremely passionate about technology, a proficient hacker, and able to make progress without micromanagement/babysitting. Interest in DevOps ideas and Cloud technologies a plus. Interest/knowledge in living ecosystems a plus.
The location is Research Triangle Park, North Carolina from May to December 2012.
If interested, send me an email (email@example.com) with the word “co-op” somewhere in the subject line. I will provide more details to candidates.
Like many, I was profoundly saddened by the death of a man I never met, but who has affected my life – Steve Jobs.
A hundred years from now, the world will be very different than if Steve Jobs had never lived. However, it won’t be (directly) because he helped create the Mac, the iPod, the iPhone, and the iPad.
It will be because he fundamentally altered the intellectual and creative DNA of people who create technology.
I don’t just mean Apple; there are a large number of technologists outside of Apple whose views on design and engineering have been shaped by Jobs and Apple circa 1997 to 2011. This influence will result in small and large changes to the technical landscape as these people deliver technology and teach the next generation of technologists.
How will it be different? It’s impossible to capture precisely, so let me instead sketch a few examples (admittedly a caricature):
Don’t worry about speeds and feeds; focus on addressing everyday human needs
Design not as “how it looks”; design as “how it works”
Money not as end unto itself; money as both “fuel” and also the result of helping to improve the human condition
Not settling; striving for excellence
It’s possible for large organizations to do amazing things 
It will be subtle, slow, and sometimes invisible, but this DNA will alter the evolution of technology at a pretty deep level. In fact, it has already begun.
In 2005 I joined the Rational Jazz project. I was relatively young at the time (28) and it was pretty cool when I saw a meeting invite to present to Erich Gamma on my technical area – “web UIs”. I worked hard to create a good presentation that described the basic vision, architectural approach, and issues we expected. Approximately five minutes into the presentation, Erich asked, “This is nice, but where’s the demo?”
I had no demo, so it was a bit awkward.
How do we solve problems? Well, it depends on the type of problem. If the problem is “Dishes need to be put away”, it’s pretty easy because it’s a well-defined problem and there are not that many different ways you can choose to solve it – you will end up with the same result. But of course there are harder classes of problems such as “What business model and technical strategy should we adopt for the next five to ten years?” or “What do I want to accomplish in life?” Of course, these examples are at the other extreme – essentially wicked problems – but I find most of the problems I deal with these days are more wicked than they are trivial. And sometimes the hardest thing is figuring out what problem you should focus on trying to solve…
In the early days of Jazz, my manager was Scott Rich, and I remember when I first met him I was amazed by his technical knowledge and his ability to crank out hundreds of lines of good code. Over the course of several years, he got promoted to become an IBM Distinguished Engineer and I remember (half-) joking with him that he had changed his editor of choice from Eclipse to PowerPoint.
A Steve Jobs presentation is mesmerizing. Don’t believe me? Watch the original iPhone introduction then let me know if you still disagree. But what’s mesmerizing about it? To me the magic of the Steve Jobs presentation is that he shows us how to complete a puzzle that’s been unsolved for several years. After the presentation, the solution seems obvious. To quote Jony Ive, “How could it be any other way?” But of course, the presentation represents an end state. From a problem solving perspective, the interesting part is what happened that you didn’t see that led up to the presentation.
These days when I work on some new technical area, by default my inclination is to get a couple of smart developers together and start prototyping. My assumption is that not only is it impossible to solve a complex problem without getting our hands dirty in code, but we won’t even understand what problem we’re trying to solve until we’ve gotten our hands dirty in code.
But prototyping is not enough to solve the problem. Prototyping helps you understand the problem. Prototyping helps constrain the solution to the adjacent possible, as opposed to the fantastical. But prototyping alone doesn’t solve the problem. Prototyping doesn’t produce the narrative.
I really believe that for anything to succeed – a philosophy, a product, a movie, a technology – it has to tell a good story. I don’t know how to articulate this in conceptual terms – I just really have come to believe that if you can’t tell a compelling story, you’re doomed to failure or at best mediocrity. I think that’s the magic of Steve Jobs’ presentations – he’s a great storyteller and he describes the problems he’s trying to solve in very simple terms with which we can identify. And I think this is the value of presentations – IF – and it’s a big “IF” – you do them right.
What’s the difference between “getting it wrong” and “getting it right” with presentations? Unfortunately there are many more examples of the former than the latter. But for canonical examples of each, check out Edward Tufte’s “The Cognitive Style of PowerPoint” and Jobs’ iPhone introduction (above), respectively. In a nutshell, I think the difference is that when done right, a presentation helps visualize and complement a story that’s mainly told verbally, and when done wrong, well… there are many failure patterns for presentations.
In the past two weeks, I’ve spent approximately 60% of my time working on two variations of a presentation – one focused on customer value and adoption of a new solution and one focused on internal execution of delivering that same solution. My colleague John Ryding chided me with the same “from IDE to PowerPoint” line as I used with Scott several years ago. But now that the shoe’s on the other foot, I believe that there’s real potential value in creating these presentations.
It forces me to take a step back from the code and try to clearly articulate what we’re building and why it’s valuable. This exercise has actually led to new insights on what we should build and how we should build it. From an even higher level, it forces me to think about what we’re building in terms of a story that’s simple, coherent and compelling.
Going back to Erich’s original question: “This is nice, but where’s the demo?” In hindsight the problem was that I was trying to describe a set of concepts and a plan prior to having enough experience and running code to back up my assertions. But this isn’t to say that there’s no use to presentations. My view these days is that you have to work in a very iterative manner to learn at a concrete level, then take a step back and reflect on the problem you’re trying to solve and how, rinse and repeat. If you can’t articulate it in a simple and compelling manner, it’s a good sign you’re not done yet.
I’ll close with a Steve Jobs quote on the design of the iPod:
Look at the design of a lot of consumer products — they’re really complicated surfaces. We tried to make something much more holistic and simple. When you first start off trying to solve a problem, the first solutions you come up with are very complex, and most people stop there. But if you keep going, and live with the problem and peel more layers of the onion off, you can often times arrive at some very elegant and simple solutions.
I remember when Mac OS X 10.6 Snow Leopard came out a few years ago, I updated the day it was released. A few days later, I asked my next door cube neighbor Pat Mueller what he thought of it and he made a face like “are you serious you dipshit?” and then said “I’ll try it in six months or so.” I was sort of stunned.
Fast forward to this year. In July I got a new high-end iMac and upgraded to Mac OS X 10.7 Lion as soon as it was released.
Then the problems began.
Problem 1: The computer video would freeze often if I viewed videos either in Safari or iTunes. Given that one of the prime use cases for the Higgins family iMac is for Higgins children to watch cartoons, this was a big problem. The only “fix” was to power cycle the computer. Not good.
Problem 2: Wi-fi networking would just crap out after several hours. The workaround was to restart the computer whenever networking crapped out.
Problem 3: After a restart, user switch, or wake from sleep, the OS would report “Could not find any of your preferred wifi networks” and then proceed to show a list of available wifi networks… with my preferred network at the top of the list, which begs the question… if you can display the fracking network in the list, why can’t you connect to it?
Some strange data points:
- My two year-old MacBook Pro has exhibited exactly zero problems since upgrading to Lion.
- The new iMac had zero of these problems before I upgraded to Lion.
So the bad combination seems to be new Apple hardware + new Apple OS. My only guess is that the folks developing the new hardware and the folks developing the new software were each testing with the previous generation of the other’s stuff.
Alas, as of a couple of hours ago, all of my problems are fixed. Problems 1 and 2 (video freeze + networking crapping out), were fixed via the first Lion fix pack (10.7.1) – in fact these two problems represent two of the four bullet points in the release notes. I became semi-obsessed with problem 3 and spent approximately twenty hours troubleshooting it by myself and with the help of Apple level 2 support. Finally tonight I decided to throw a hail mary and Google the symptom to see if anyone had discovered a fix since I first encountered the problem. Lo and behold, the first or second Google result had an Apple forum thread where someone explained that if you simply create a bogus new “Location” for your Networking preferences, it fixes the problem. I tried this and for reasons I won’t even attempt to comprehend, it worked.
The lesson I learned from this little fiasco is that Pat was right – best to wait six months or so and let other poor schmucks work out the kinks with Apple.
On the bright side, at least I don’t have to use Windows or Linux on my desktop every day.
Now I can’t wait for iOS 5 and iCloud… I bet they work great…
I thought my family was on the do not call list for telemarketers but I guess not because recently we’ve been getting a lot of calls at the house. At first we got annoyed but one day by chance I found a way to have some fun. You can do this too. All it takes is a phone with a mute button and a young child.
If you receive a call from a number you don’t recognize, have your child answer it. If it’s a telemarketer, typically they talk for the first thirty seconds or so to give you their basic spiel. Turn the phone on mute and take this time to think of funny things to say. The basic game is to tell your child funny things to say while your phone is on mute. When the telemarketer asks you a question, turn the phone off mute and indicate to your child that they should say the funny thing. Repeat until the telemarketer finally gives up.
Here’s a sample conversation:
Telemarketer: “May I tell you about our new timeshares?”
Child: “What’s a timeshare?”
Telemarketer: <long-winded explanation of how a time share works, ending in a repeat of “May I tell you about our new timeshares?”>
Child: “Do you like flowers?”
Telemarketer: (pause, nervous laugh) “Well, yes.” (pause) “May I tell you about our new timeshares?”
Child: “Is it fun to play with?”
Telemarketer: (pause) <somewhat tortured explanation about how it *is* fun to play with a timeshare, ending with “May I transfer you over to a sales specialist?”>
Child: “Did you see the new Winnie the Pooh?”
Telemarketer: (pause) “May I transfer you over to a sales specialist?”
Child: “I saw the new Winnie Pooh with my friend Jill and Jill’s nanny Jaqlyn”
Telemarketer: (longer pause) “May I transfer you over to a sales specialist?”
Child: “What’s your favorite flower?”
Telemarketer: (pause) “Thank you ma’am. Have a good evening.”
Child: “I like purple flowers.”
A few interesting observations:
They go right ahead with their pitch, even when the person answering the phone obviously sounds like a young child.
Last year on the Jazz project, I helped design and implement a simple REST protocol to implement long-running operations, or long-ops. I’ve explained the idea enough times in random conversations that I thought it would make sense to write it down.
I’ll first write about the concrete problem we solved and then talk about the more abstract class of problems that the solution supports.
Example: Jazz Lifecycle Project Creation
Rational sells three particular team products that deal with requirements management, development, and test management, respectively. These products must work individually but also together if more than one is present in a customer environment. Each product has a notion of “project”. In the case where a customer has more than one product installed in their environment, we wanted to be able to let a customer press a button and create a “lifecycle project” that is basically a lightweight aggregation of the concrete projects (e.g. the requirements project, the development project, and the test project).
So we created a rather simple web application called “Lifecycle Project Administration” that logically and physically sits outside the products and gives a customer the ability to press a button and create a lifecycle project, create the underlying projects, and link everything together.
This presented a couple of problems, but I want to focus on the UI problem that pushed us towards the RESTy long-op protocol. Creating a project area can take between 30 seconds and a minute, depending on the complexity of the initialization routine. Since the lifecycle project creation operation aggregated several project creation operations plus some other stuff, it could take several minutes. A crude way to implement this UI would be to just show a “Creating lifecycle project area, please wait” message and perhaps a fakey progress monitor for several minutes until all of the tasks complete. In a desktop UI operating on local resources, you would use a rather fine-grained progress monitor that provides feedback on the set of tasks that need to run, the currently running tasks, and the current percent complete of the total task.
We brainstormed ways to provide something like a progress monitor that could show fine-grained progress while running the set of remote operations required to create a lifecycle project and its subtasks. The solution was the RESTy long-op protocol. First I’ll talk about how one would typically do “normal, simple RESTful creation”.
Simple RESTy Creation
A common creation pattern in RESTful web services is to POST to a collection. It goes something like this:
POST /collection HTTP/1.1
Content-Type: application/json

{ ...representation of the new resource... }

HTTP/1.1 201 Created
Location: /collection/new-resource
The 201 status code of course indicates that the operation resulted in the creation of a resource and the Location header provides the URI for the new resource.
From a UI point of view, this works fine for a creation operation that takes a few seconds, but not so well for a creation operation that takes several minutes, like the lifecycle project administration case. So let’s look at the RESTy long-op protocol.
The RESTy Long-op Protocol
In this example, I’ll use a simplified form of lifecycle project creation:

POST /lifecycle-projects HTTP/1.1
Content-Type: application/json

{
  "name": "Bill's Lifecycle Project",
  "template": "a-lifecycle-project-template-id"
}

Just to explain the request body: the name is simply the display name, and the template is the ID of a template that defines the set of concrete projects that should be created and how they should be linked together.

Rather than responding with a URL for a resource that was created, the server responds with a 202 ‘Accepted’ status and the location of a “Job” resource that basically reports on the status of the long-running task of creating (or updating) the resource:

HTTP/1.1 202 Accepted
Location: /jobs/42
Now the client polls the location of the “job”; the job is a hierarchical resource representing the state and resolution of the top-level job and its sub-jobs (called “steps” below). It also includes a top-level property called resource that will eventually point to the URI of the resource you are trying to create or update (in this case the lifecycle project).
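An in-progress job representation might look along these lines (property names, step names, and values are all illustrative):

```http
GET /jobs/42 HTTP/1.1

HTTP/1.1 200 OK
Content-Type: application/json

{
  "state": "IN_PROGRESS",
  "resolution": null,
  "resource": null,
  "steps": [
    { "name": "Create change management project", "state": "DONE", "resolution": "OK" },
    { "name": "Create quality management project", "state": "IN_PROGRESS", "resolution": null }
  ]
}
```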
At some point the top-level job has a non-null resolution and a non-null resource, at which point the client can GET the resource URI, i.e. the complete URI of the thing you originally tried to create or update (in this case the lifecycle project).
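The finished job might look something like this (again, an illustrative representation):

```http
HTTP/1.1 200 OK
Content-Type: application/json

{
  "state": "DONE",
  "resolution": "OK",
  "resource": "/lifecycle-projects/bills-lifecycle-project",
  "steps": [
    { "name": "Create change management project", "state": "DONE", "resolution": "OK" },
    { "name": "Create quality management project", "state": "DONE", "resolution": "OK" }
  ]
}
```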
GET /lifecycle-projects/bills-lifecycle-project HTTP/1.1
(I’ll omit the structure of the lifecycle project, as it’s not relevant to this discussion.)
Here’s a demo I recorded of an early version of Lifecycle Project Administration last year, that shows this protocol in action:
This protocol supports a set of related patterns:
Long-running operations that shouldn’t block the client
Composite operations made up of multiple sub-tasks
Asynchronous user experiences with fine-grained progress feedback
You can use this protocol to support one or a combination of these patterns. E.g. you could have a single task (i.e. not a composite) that takes a long time, and therefore you still want an asynchronous user experience.
Here are a few good things about this protocol:
Facilitates better feedback, through your UI, to people who invoke long-running and possibly composite operations.
Decouples the monitoring of a long-running composite operation from its execution and implementation; for all you know the composite task could be running in parallel across a server farm or it could be running on a single node.
Supports a flexible user experience; you could implement a number of different progress monitor UIs based on the information above.
Here are a few not-so-nice things about this protocol:
Not based on a standard.
Requires the client to expect that the original create/update request might result in a long-running operation; the only way to know that the response points to a job resource (vs. the actual created or updated resource) is the 202 Accepted status code (which could be ambiguous) and/or content sniffing.
Doesn’t help much with recovering from complete or partial failure, retrying, cancellation, etc., though I’m sure you can see ways of achieving these things with a few additions to the protocol. We just didn’t need/want the additional complexity.
I would like to write a bit about some of the implementation patterns, but I think this entry is long enough, so I’ll just jot down some important points quickly.
Your primary client for polling the jobs should be a simple headless client library that allows higher-level code to register to be notified of updates. In most cases you’ll have more than one observer (e.g. the progress widget itself, which redraws with any step update, and the page, which updates when the ultimate resource becomes available).
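Here’s a minimal sketch of such a headless polling client in Python – the job property names are assumptions matching the examples above, and real code would fetch the job over HTTP rather than via a plain callable:

```python
import time
from typing import Callable, Dict, List


class JobPoller:
    """Headless client that polls a job resource and notifies observers.

    `fetch` stands in for an HTTP GET of the job URI; in real code it
    would issue the request and parse the JSON response body.
    """

    def __init__(self, fetch: Callable[[], Dict], interval: float = 1.0):
        self.fetch = fetch
        self.interval = interval
        self.observers: List[Callable[[Dict], None]] = []

    def register(self, observer: Callable[[Dict], None]) -> None:
        # e.g. a progress widget that redraws on every update, and the
        # page that navigates once the ultimate resource is available
        self.observers.append(observer)

    def poll_until_done(self) -> Dict:
        while True:
            job = self.fetch()
            for observer in self.observers:
                observer(job)
            # Done when the top-level job has a non-null resolution and
            # the `resource` property points at the created/updated thing
            if job.get("resolution") is not None and job.get("resource") is not None:
                return job
            time.sleep(self.interval)
```

Higher-level code registers as many observers as it needs, calls poll_until_done, and then GETs the URI in the returned job’s resource property.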
Your backend should persist the job entries as it creates and updates them. This decouples where the tasks in the composite execute from where the front-end fetches the current status. It also allows you to run analytics over your accumulated job data to better understand what’s happening.
The persistent form of the job should store additional data (e.g. the durations for each task to complete) for additional analytics and perhaps better feedback to the user (e.g. time estimate for the overall job and steps based on historical data).
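A persisted job record might look something like this (field names and values are illustrative):

```json
{
  "jobId": 42,
  "state": "DONE",
  "resolution": "OK",
  "resource": "/lifecycle-projects/bills-lifecycle-project",
  "startTime": "2012-05-01T14:00:05Z",
  "endTime": "2012-05-01T14:03:41Z",
  "steps": [
    { "name": "Create change management project", "durationSeconds": 48, "resolution": "OK" },
    { "name": "Create quality management project", "durationSeconds": 52, "resolution": "OK" }
  ]
}
```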
Of course you’ll want to cache the job resources aggressively, since you poll them and in most cases the status won’t have changed.
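Plain HTTP validation caching fits naturally here – the client revalidates with If-None-Match and the server answers 304 Not Modified when nothing has changed (the ETag values are illustrative):

```http
GET /jobs/42 HTTP/1.1
If-None-Match: "7"

HTTP/1.1 304 Not Modified
ETag: "7"
```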
I don’t think this protocol is perfect, and I’m sure I’m not the first one to come up with such a protocol, but we’ve found it useful and you might too. I’d be interested if anyone has suggestions for improvement and/or pointers to similar protocols. I remember I first learned about some of these basic patterns from a JavaRanch article my IBM colleague Kyle Brown wrote way back in 2004.
Pretty much as soon as I published this, several folks on Twitter cited similar protocols:
I have never been a big fan of making public predictions about what might happen with the industry, a company, or a technology. I strongly agree with Alan Kay’s famous quote: “The best way to predict the future is to invent it.” Of course, inventing the future is hard, especially if you’re spending precious time writing articles stating in unequivocal terms what will happen in the future (e.g. “Why and how the mobile web will win”).
Of course, none of us knows precisely what will happen in the future (tired clichés aside), especially for things as complex and volatile as the economy or the technology industry. Frankly, I am baffled that people continue to make such confident and sometimes smug predictions on top of shaky or non-existent logical foundations. Luckily, the web makes it easy to record these predictions and compare them to what really happened in the fullness of time, so there is some measure of accountability.
Of course, this doesn’t mean that it isn’t worth reasoning about potential futures – as long as you follow some simple guidelines:
State your evidence – What historical examples, current data, trends, and published plans lead you to your conclusions?
State your assumptions – What things have to happen for the potential future to become reality? What makes you think these things will happen?
State your conflicts of interest – Do you have something to gain if your predicted future becomes reality?
State your confidence level – Where are you on the continuum from wild-ass guess to high-probability outcome?
Another question to ask yourself is “Should you prognosticate publicly or privately?” I believe it’s very helpful to prognosticate privately (e.g. within a company) to help drive product strategy and semi-publicly to help customers chart their course (though in this case stating conflicts of interest is very important, for the obvious ethical reason and for the pragmatic goal of building customer trust). What I personally despise is predicting some future that aligns with your financial and/or philosophical interests and not stating the conflict of interest. It’s fine to advocate for some preferred future, but if you do so please be honest about your motivations – don’t dress up advocacy as prognostication.
Finally, if you have made prognostications, you should periodically perform an honest assessment of what you got right, what you got wrong, and why. Your retrospective should be at least as public as your predictions, and you should be brutally honest – for one thing, it’s unethical not to be; for another, people will quickly detect whether you’re being honest or hedging, which will cause them to trust you more or less, respectively.
I originally planned to link to some of the prognosticating articles that put me in this obviously grumpy mood, but I’ve decided not to because A) Why promote trash? and B) I assume other people can think of plenty of examples of this sort of thing. Instead I will point to someone who I think does a great job of doing the reasoned, data-driven prognostication that I find incredibly valuable, Horace Dediu and his web site covering mobile information technology.