humans are terrible and wonderful Big Data systems

Recently I’ve been deep diving on the topics of big data and analytics. For the benefit of non-technical family members who read this blog, let me give two quick layman’s definitions:

Big data simply refers to massive data sets and the techniques used to make sense of them. A good early example of harnessing big data was when Google realized that there was value in being able to crawl and index the entire web’s contents [1].

Analytics is simply the art and science of harnessing data to learn things, identify problems, identify opportunities, and predict things in a way that is hard to do with simple querying. It’s a bit of a fuzzy term. For instance, I wouldn’t consider it analytics that Amazon.com shows me my recent orders, but I would consider it analytics that Amazon.com makes scarily accurate recommendations based on what I’ve ordered and viewed in the past vs. what every other customer has ordered and viewed in the past [2].
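
For the technically inclined, the crudest version of “customers who bought this also bought that” is just co-occurrence counting over order histories. Here is a toy sketch in Python with entirely made-up data – emphatically not Amazon’s actual method, which as footnote [2] notes is surely far more involved:

```python
from collections import defaultdict
from itertools import combinations

# Toy purchase histories -- entirely made-up data, just to illustrate the idea.
orders = {
    "alice": {"tent", "sleeping bag", "headlamp"},
    "bob":   {"tent", "sleeping bag", "camp stove"},
    "carol": {"sleeping bag", "headlamp", "trail mix"},
}

# Count how often each pair of items appears in the same customer's history.
co_counts = defaultdict(lambda: defaultdict(int))
for items in orders.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def recommend(item, already_owned=()):
    """Items most often bought alongside `item`, excluding ones already owned."""
    ranked = sorted(co_counts[item].items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked if name not in already_owned]

print(recommend("tent", already_owned={"tent"}))
# ['sleeping bag', 'headlamp', 'camp stove']
```

(Real systems obviously work with similarity scores, implicit signals like views, and catalogs of millions of items, but the basic “people who bought X also bought Y” intuition is the same.)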

I am actually quite a novice when it comes to both of these topics, but when I was thinking about leaving the Jazz team last fall, my friend and mentor Rod Smith suggested that I work in an area where I’m a total novice as a way to stretch myself. This turned out to be a fantastic bit of advice, as I have learned more in the past four months than I learned in the last several years on Jazz, simply because I had become an expert in my focus areas there.

Anyhow, that’s not the topic of this journal entry, but it’s relevant. Because I am most assuredly not an expert on these topics, I have been trying hard to learn from people who understand them much better than I do. One person I found [3] who has been a great help is Jeff Jonas of IBM’s Information Management division. Jeff has an amazing set of blog entries, mostly on topics related to analytics, but also big data [4].

Today we were chatting on the phone and he told me I should explore a particular topic. I told him I had and offered to forward him an email with more details on the topic. He immediately said “NO, PLEASE DON’T”. My initial assumption was that he was worried about IP contamination, but it turned out that he simply gets too much email and as long as I was aware of the topic, he was happy to leave it at that.

This made me smile because one of my current understandings about big data and analytics is that more data is always a good thing, even if it’s bad. Jeff even makes this point in one of his blog entries. But it also made me smile because I have the same habit. Like everyone, I struggle with information overload and do what I can to limit my consumption to interesting new ideas or new analysis that helps refine or connect known ideas. So from this point of view, humans are poor big data systems because we simply can’t handle large volumes of incoming data.

But this mode of thinking is obviously shortsighted and does a big disservice to our biological and cultural achievements. We are the ones who create things, and we can only do this because of our brains’ wonderful ability to synthesize who-knows-how-much knowledge and sensory input acquired through a lifetime of living.

It’s interesting to think about the ways our brains are better, and the ways the big data systems of the Amazons and Googles of the world are better, at making sense of it all and forming new ideas [5].

Footnotes

[1] Re: Google indexing the entire web, I believe that because Google has become such a ubiquitous part of first-world culture, we rarely think about what a massive technical achievement that was. Recently I’ve been reading Steven Levy’s excellent new book “In the Plex”, which helped remind me of the magnitude of this achievement.

[2] I’m sure Amazon’s recommendation system is actually quite a bit more involved than this.

[3] The story of how I “found” Jeff is actually quite ironic considering the topic of this blog entry. I didn’t find him through any sort of enterprise knowledge management analytics system; I found him because my friend James Governor of RedMonk tweeted about having beers with Jeff in Vegas. Big data and analytics are great, but there is still a place for personal relationships and serendipity.

[4] You may actually also know Jeff from this “Smarter Planet” ad.

[5] This also reminds me of a chat with a friend who went to Google in the mid-2000s. I asked him what he thought and he said “I think we’re building Skynet”. I laughed. He then said, “No I’m serious, I think we’re building Skynet.” Hopefully it concurs with the “Don’t be evil” bit 🙂


this is what a technical paradigm shift looks like in realtime

From Andy Ihnatko’s iPad 2 review:

After a week with the iPad 2, I’ve come to realize that Apple’s true revolutionary change has been conceptual. The first iPad wasn’t just a new product … it was a whole new category of computer. I think in 2010, Apple instinctively understood that with something this different on their hands, they couldn’t go for broke. They could only lay out their cards and imply the iPad’s many strengths and then they’d have to stand back and watch what happened. After all of their efforts, they could only hope that consumers and developers figured out what the iPad was on their own. Only then could Apple make their next move, based on those reactions.

It all could have gone very badly. If Apple had sold the iPad explicitly as an ebook reader, the first complaint would have been “Why does this cost twice as much as a Kindle?” If they had gone the other way and suggested that the iPad was a substitute for your notebook, then any sensible consumer would have pointed out that while the iPad 1 was far more affordable than the cheapest MacBook, $500-$875 could buy any of a number of powerful, name-brand Windows notebooks.

Selling 15,000,000 iPads in nine months must have filled Apple with a certain degree of confidence that the world had truly gotten the point.

The public got it: the iPad was no mere accessory to a desktop and while it certainly earned best-in-class honors as a reader, media player, and document-viewer, there was no need to limit one’s perceptions of the device. The iPad was, and is, truly an entire new class of computer. Many of you were around for the transition from text to graphical user interfaces. Some of you were even around when the world shifted from mainframes to personal computers. Well, congratulations: you’ve lived to see your third revolution in computing.

Oftentimes paradigm shifts aren’t really obvious until years after the fact – sometimes many years. I think Andy is on to something: the iPad represents a paradigm shift in computing, and we are watching the creativity and destruction that come from such a fundamental change in realtime.

I have a feeling that I could make a good cross-reference to Kuhn’s The Structure of Scientific Revolutions if only I’d read the book rather than just the Wikipedia summary.

Ironically I have it here next to me, on my iPad 2.

Good times.


when the tailgate drops, the bullshit stops

A friend pointed out the other day that I hadn’t posted anything to my journal, but I don’t really have anything to say at the moment because I’ve been heavily in output mode (writing code and such) vs. my previous input mode (reading books, articles, having discussions, etc.). Ironically perhaps, when I’m in input mode, I feel like I have more to say in this journal because I have more half-formed thoughts and dots connecting in my mind.

But tonight, for some reason, I thought of a favorite passage from one of my favorite books, A Man in Full by Tom Wolfe, and I thought I’d write it down to share with friends and to make it available to myself on the web 🙂

The context is that the book’s primary protagonist, Charlie Croker, is getting grilled in a passive-aggressive manner by some bankers to whom he owes a considerable amount of money over a bad real estate investment. His primary antagonist in this scene is Harry Zale, who works for the bank as a “workout Artiste”, i.e. he busts the balls of people like Charlie for a living.

Still standing, Harry took a deep breath, which thrust his chest out and flaunted the skull-and-crossbone suspenders even more flagrantly. Then he sat down and raised his big chin and looked down his nose once more and gave Charlie Croker another lingering stare and said:

“Okay, Mr. Croker, we’re all waiting. The floor is now open for concrete proposals for paying back money. As I said, simple we like, no assembly necessary, batteries included.”

It was probably the Artiste’s infatuation with this little metaphor of his that finally did it. Croker had been no assembly-necessary’d, batteries-include’d, why-are-we-here’d, dead-dracaena’d, coffee-burned, lectured at, and trifled with long enough. He leaned forward with his huge forearms on the table and the testosterone flowing. His shoulders and neck seemed to swell up. He thrust his own square jaw forward, and the lawyers and the accountants all hunched forward with him; and so did Peaches.

A small and ominous smile was now on Croker’s face. His voice was low, controlled, and seething: “Well now, friend. I wanna ask you sump’n. You ever been huntin’?”

Harry said nothing. He just put on a smile exactly like Croker’s.

“You ever headed out in a pickup truck early inna moaning and lissened t’all’ose’ol’ boys talking about alla birds ‘ey gon’ shoot? People, they shoot a lotta birds with their mouths onna way out to the fields … with their mouths … But comes a time when you finally got to stop the truck and pick up a gun and do sump’m with it … see … and whirr I grew up, in Baker County, theh’s a saying: ‘When the tailgate drops, the bullshit stops.'”

He eyed Harry even more intently. Harry just stared back without blinking, without altering his little smile so much as an eighth of an inch.

“An’eh’s been a certain amount a bullshit in ‘is room ‘is morning,” Croker continued, “if you don’t mind the introduction of some plain English into these proceedings. Well now the tailgate’s dropped.”

This sort of sums up my approach to getting things done at work: try to minimize talking and maximize making tangible forward progress. Often the best way to make tangible forward progress is via code.


horseless carriages and smartphones

We tend not to be able to comprehend emerging transformative technologies; rather, we can only reason about them in a very limited way, in terms of current, well-understood technologies.

Kevin Kelly talks about this in his recent book What Technology Wants:

We make prediction [of the consequences of a new technology] more difficult because our immediate tendency is to imagine the new thing doing an old job better. That’s why the first cars were called “horseless carriages.” The first movies were simply straightforward documentary films of theatrical plays. It took a while to realize the full dimensions of cinema photography as its own new medium that could achieve new things, reveal new perspectives, do new jobs. We are stuck in the same blindness. We imagine e-books today as being regular books that appear on electronic paper instead of as radically powerful threads of text woven into the one shared universal library.

Paul Graham also spoke about this eloquently in his article “Tablets”:

I was thinking recently how inconvenient it was not to have a general term for iPhones, iPads, and the corresponding things running Android. The closest to a general term seems to be “mobile devices,” but that (a) applies to any mobile phone, and (b) doesn’t really capture what’s distinctive about the iPad.

After a few seconds it struck me that what we’ll end up calling these things is tablets. The only reason we even consider calling them “mobile devices” is that the iPhone preceded the iPad. If the iPad had come first, we wouldn’t think of the iPhone as a phone; we’d think of it as a tablet small enough to hold up to your ear.

Like many engineers I know, I have a sort of “goofiness radar”: my mind intuits that something isn’t quite right, whether it’s a goofy term or a goofy technology or a goofy design decision. When I was younger I assumed I must be missing something and I would try harder to understand. This often led to frustration because the problem was not my understanding, but rather that I was encountering actual goofiness. As I get older and more experienced, I trust myself more and assume that if something doesn’t make intuitive sense, it’s highly likely that I’m not the problem – there is goofiness afoot.

I often get this feeling when I hear terms like “smartphone” or “3D printer”, which, if you think about them for a minute, are pretty terrible at conveying the significance and potential of the technology.

The unfortunate thing is not that the terms are goofy. The unfortunate thing is that these anachronisms actually constrain our imaginations.

From a more positive perspective, if we can spot these sorts of terms in use today, it might enable us to spot transformative technologies that are not yet generally understood to be transformative, which can obviously lead to interesting opportunities.


my biases with regards to software frameworks

I believe that the topic of “how to use software frameworks” is one of the most complex and problematic in the world of software development. Frameworks are seductive because they offer the promise of making rapid progress, but they are dangerous because they tend to introduce a pretty rigid view of the world, which can cause problems if the architecture you want isn’t well aligned with the framework: in the best case you will spend a lot of cycles trying to bend the framework to your will, and in the worst case you will fail to produce the architecture you really want because the framework defeated you.

These are my basic biases towards frameworks:

If I am trying to build an application fast and I don’t care about maintaining the code for a number of years, I will eagerly use a framework that seems like a good fit.

If I am writing new code that will form the basis of a product family that will be used for years, perhaps decades, I will never use a framework (though I will use libraries). It is highly likely that in the course of writing my own code, simple custom frameworks will emerge, but they will not assume that they “own the world”; in other words, they will not be monolithic.

And then of course between these two extremes (quick and dirty application vs. product family foundation) it gets a bit fuzzier.

I believe (based on only anecdotal evidence) that very good developers who are building software that will have to endure for years tend to create their own simple frameworks rather than adopting existing frameworks.
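
To make the library-versus-framework distinction concrete for readers who like code, here is a tiny, contrived Python sketch; MiniFramework is a made-up stand-in for any real framework. With a library, my code owns the control flow and calls in when it needs something; with a framework, I register handlers and the framework owns the control flow – which is exactly where the “owns the world” friction comes from:

```python
import json

# Library style: my code owns the control flow and calls into json when it wants to.
def total_amount(raw):
    records = json.loads(raw)            # I call the library; it returns, I move on
    return sum(r["amount"] for r in records)

# Framework style: I register handlers and the framework owns the control flow.
# MiniFramework is a made-up stand-in for any web/app framework.
class MiniFramework:
    def __init__(self):
        self._routes = {}

    def route(self, name):               # decorator for registering a handler
        def register(handler):
            self._routes[name] = handler
            return handler
        return register

    def dispatch(self, name, *args):     # inversion of control: the framework calls my code
        return self._routes[name](*args)

app = MiniFramework()

@app.route("report")
def report_handler(raw):
    return total_amount(raw)

print(app.dispatch("report", '[{"amount": 3}, {"amount": 4}]'))  # 7
```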

Or, in my friend Pat Mueller’s words: “Every framework is evil. Except mine.”


diffusion of innovations and logical fallacies

I woke up in the middle of the night as I sometimes do and, not being sleepy, I did some reading on the web. Via a friend on Twitter, I found an article by Philippe Kruchten called “The Elephants in the Agile Room”. The article itself was quite good as it addressed some of the big issues facing agile development that leading agile methodologists rarely speak about, as they are uncomfortable topics.

One passage in particular piqued my interest, because it gave me a new idea with regard to the general problem of diffusion of innovations, which I spoke about in my “being right is not enough” journal entry. The context for this passage is that Philippe is enumerating a set of reasons why the agile community tends to exaggerate the benefits of agile approaches while understating cases where agile approaches have problems or don’t work:

[Reasoning fallacies] Such as Hasty generalization (it worked in two cases, therefore it will in all cases), and cognitive biases: anchoring, golden hammer, cargo cult, etc. Other reasoning fallacies include: non sequitur, and Post hoc ergo propter hoc (correlation implies causation, or “guilt by association”), etc. Our community is rife with these, even if they are not all that bad.

Of course, it is not only the agile community who exhibit these sorts of logical fallacies and cognitive biases to promote an idea – it happens all of the time when someone deeply believes in an idea or innovation and wants to spread it. Thinking about this pattern gave me a new idea about diffusion of innovations.

If you are trying to promote an innovation, it seems like it would be extremely helpful to think about whether you or your message demonstrates any of the well-known logical fallacies and cognitive biases.

There are a number of obvious reasons why you should be careful to avoid logical fallacies and cognitive biases when attempting to promote an idea. In the best case people will think you’re naive (or even stupid); in the worst case people will think you’re dishonest. Either way they will have their guard up and be less inclined to consider adopting your innovation. Another major risk, of course, is setting expectations too high and then failing to meet them, which will likely negatively affect people’s perception of your innovation and slow or stop its adoption.

To be clear, very few if any of the potential adopters will think about your message in terms of the actual named logical fallacies and cognitive biases, but people have an intuitive sense for these and can detect their foul presence, even if they don’t name them [1]. However since they are well-known, it seems like a simple, valuable exercise to occasionally review them to consider if you are committing them.

Footnotes:

[1] One I do hear named quite frequently is “Law of the instrument”, which is more easily recognized by the aphorism “When you’ve only got a hammer, everything looks like a nail.”


hall of mirrors

This afternoon, Pratik Gupta (currently responsible for Virtualization Management at IBM) came into my office so I could give him a copy of a VM with some recent interesting IBM software. As I was demoing the VM, I momentarily got confused about whether I was in the host operating system (the Mac OS X instance running on my MacBook Pro) or the guest operating system (an Ubuntu Linux instance running inside VMware Fusion). Pratik then told me about a configuration he was toying around with that involved three or four layers of virtualization and how confusing that could get. He calls it “the hall of mirrors”.
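
For what it’s worth, when I lose track of which layer of the mirror I’m in on a Linux guest, one crude trick is to peek at the DMI strings the hypervisor exposes. The sketch below assumes /sys/class/dmi/id is populated (it usually is on x86 Linux) and it only sees one layer down – a nested setup like Pratik’s would need a check at every layer:

```python
from pathlib import Path

# Vendor strings that common hypervisors tend to expose via DMI.
KNOWN_HYPERVISOR_HINTS = ("vmware", "virtualbox", "qemu", "kvm", "xen", "microsoft")

def dmi(field):
    """Read one DMI field, returning "" if it isn't exposed on this machine."""
    try:
        return (Path("/sys/class/dmi/id") / field).read_text().strip()
    except OSError:
        return ""

vendor = dmi("sys_vendor")
product = dmi("product_name")
banner = f"{vendor} / {product}"

if any(hint in banner.lower() for hint in KNOWN_HYPERVISOR_HINTS):
    print("Looks like a guest:", banner)
else:
    print("Probably bare metal (or an undetected hypervisor):", banner)
```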

It somehow reminded me of the movie Inception from last year that deals with dreams and dreams within dreams and dreams within dreams within dreams, and how you can get confused about what “layer” you are actually existing in at any given time. It’s interesting to think about the similarities between a computer environment – like an operating system, or The Matrix, or The Grid – and a psychological plane of reality like “the real world”, a dream, or a dream within a dream.


being right is not enough

In early 2010, I started a skunkworks project in IBM that ultimately failed. This left me quite disillusioned because I was so certain that the goal of the project was critical to the success of the business, yet the project had failed so utterly. I asked Danny Sabbah (at the time the General Manager of Rational) for a chat and he gave me some advice that I haven’t forgotten. I don’t remember Danny’s exact words so I will paraphrase:

“Being right isn’t enough. There are a lot of smart people who never get things done because they don’t know how to persuade people that their idea is a good one and something that we should actually work on versus the other thousand things we might choose to work on instead. You have to learn how to sell your idea.”

Danny’s statement resonated because I realized I had been working under the implicit assumption that my ideas were good, that as people observed them they would recognize the goodness of the ideas, and that some unspecified good things would then happen that would lead to project success. But even this is overstating the situation: I really hadn’t done anything other than “just” trying to build good technology.

This realization led me to study how ideas spread, and this eventually led me back to a book I had purchased a long time ago but had never actually read: “Diffusion of Innovations (5th Ed.)” by Everett Rogers. Wikipedia has a pretty good summary of the theory. I would like to explore this theory in more detail in my journal later, but for the purposes of this entry I will simply observe that there is a whole science to spreading ideas, and I was stumbling along in the dark not really understanding how to do that.

But the happy perspective on this situation is that Danny’s point resonated, I identified a massive gap in my personal approach to collaborative development, and I’ve been studying and applying the lessons to close the gap. It’s an ongoing learning process, and I’ll write more about it here as I learn. Some other good, more recent books near this topic that I will hit in the weeks and months ahead are Kevin Kelly’s “What Technology Wants” and Steven Johnson’s “Where Good Ideas Come From”.


Dieter Rams’ ten principles for good design

I learned about Dieter Rams via my frequent reading about Apple design methodology. I love his “ten principles for good design”. I first started studying Dieter when I learned about a book called “Less and More: The Design Ethos of Dieter Rams” and ordered it. While I was waiting for it to arrive I did some web reading on Dieter and realized that I had actually seen him before as one of the first interview subjects in Gary Hustwit’s “Objectified” (a 2009 documentary on industrial design).

Here are Dieter’s ten principles.

  • Good design is innovative
  • Good design makes a product useful
  • Good design is aesthetic
  • Good design makes a product understandable
  • Good design is unobtrusive
  • Good design is honest
  • Good design is long-lasting
  • Good design is thorough down to the last detail
  • Good design is environmentally friendly
  • Good design is as little design as possible

I enjoyed these principles so much that I even made them into desktop wallpaper for my computers.

Sometimes when I am feeling creatively stifled I reveal my computer desktop and look at the ten principles for inspiration.


my commonplace book

Recently I’ve been reading, and very much enjoying, Steven Johnson’s book “Where Good Ideas Come From”. A particular passage from this book inspired me to start a new blog – a journal – to record everyday thoughts. Here is the passage from the chapter “The Slow Hunch” that inspired this journal:

Darwin’s notebooks lie at the tail end of a long and fruitful tradition that peaked in Enlightenment-era Europe, particularly in England: the practice of maintaining a “commonplace” book. Scholars, amateur scientists, aspiring men of letters – just about anyone with intellectual ambitions in the seventeenth and eighteenth centuries was likely to keep a commonplace book. The great minds of the period – Milton, Bacon, Locke – were zealous believers in the memory-enhancing powers of the commonplace book. In its most customary form, “commonplacing,” as it was called, involved transcribing interesting or inspirational passages from one’s reading, assembling a personalized encyclopedia of quotations. There is a distinct self-help quality to the early descriptions of commonplacing’s virtues: maintaining the books enabled one to “lay up a fund of knowledge, from which we may at all times select what is useful in the several pursuits of life.”

A few paragraphs later:

Each rereading of the commonplace book becomes a new kind of revelation. You see the evolutionary paths of all your past hunches: the ones that turned out to be red herrings; the ones that turned out to be too obvious to write; even the ones that turned into entire books. But each encounter holds the promise that some long-forgotten hunch will connect in a new way with some emerging obsession. The beauty of Locke’s scheme was that it provided just enough order to find snippets when you were looking for them, but at the same time it allowed the main body of the commonplace book to have its own unruly, unplanned meanderings. Imposing too much order runs the risk of orphaning a promising hunch in a larger project that has died, and it makes it difficult for those ideas to mingle and breed when you revisit them. You need a system for capturing hunches, but not necessarily categorizing them, because categories can build barriers between disparate ideas, restrict them to their own conceptual islands. This is one way in which the human history of innovation deviates from the natural history. New ideas do not thrive on archipelagos.

So the basic idea of this journal is to write down what I think about every day, or at least frequently – much more frequently than I post to my blog. On my blog I have been concerned about publishing only high-quality content, and as my standards of what constitutes “quality” have increased, the frequency of my blog publishing has decreased. The purpose of this journal is to intentionally write down half-formed ideas to help me better remember them, both through the act of writing them down and through revisiting them later.

I am making this journal public in the hope that others may benefit from my half-formed ideas, and that I might benefit from others’ input.
