Marty Cagan

Flying Blind

I know this topic is going to sound far-fetched to many of you, but I'm finding too many product teams out there that either aren't instrumenting their product or site to collect analytics at all, or instrument at such a superficial level that they really don't know what users are doing on their site or how their product is being used.

My own teams and most teams I work with have been doing this for so long now that it’s hard to imagine not having this information.  It’s hard for me to even remember what it was like to have no real idea how the product was used, or what features were really helping the customer versus which ones we thought had to be there just to help close a sale.

Certainly this is easiest to do with cloud-based products and services, and most of us use Web analytics tools like Google Analytics or Omniture SiteCatalyst, but sometimes we use home-grown tools for this.

But good product teams have been doing this for years not just with cloud-based sites but also with installed mobile or desktop applications, on-premise software, hardware and devices that “call home” periodically and send the usage data back to the teams.
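The "call home" mechanism for installed products usually amounts to buffering usage events locally and sending them back in periodic batches. A minimal sketch of that idea (the class name, event names, and transport are mine, purely for illustration):

```python
import json
import time
from typing import Callable

class UsageReporter:
    """Buffers usage events locally and periodically "calls home"
    with a batch, as installed or on-premise products often do."""

    def __init__(self, send: Callable[[str], None], flush_every: int = 100):
        self._send = send          # the transport, e.g. an HTTPS POST
        self._flush_every = flush_every
        self._buffer = []

    def record(self, event: str, **props) -> None:
        self._buffer.append({"event": event, "ts": time.time(), **props})
        if len(self._buffer) >= self._flush_every:
            self.flush()

    def flush(self) -> None:
        if not self._buffer:
            return
        self._send(json.dumps(self._buffer))
        self._buffer = []

# In place of a real network call, collect batches in a list:
batches = []
reporter = UsageReporter(send=batches.append, flush_every=2)
reporter.record("export_clicked", format="pdf")
reporter.record("search", query_len=12)   # second event triggers a flush
```

Batching matters here because installed software may be offline for long stretches; the buffer simply waits until the next opportunity to send.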

A few companies are very conservative and ask permission before sending the data, but mostly this just happens silently.

We should all be anonymizing the data so there’s nothing personally identifiable in there, but occasionally we see in the news that a company gets in a little trouble for sending raw data in the rush to market.
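One common way to anonymize is to replace any identifying field with a one-way salted hash before an event ever leaves the device: you can still correlate events per user, but nothing personally identifiable is transmitted. A sketch (the salt value and event shape are hypothetical):

```python
import hashlib

# Hypothetical salt; in practice, generate one at random per install
# so hashes can't be compared across deployments.
SALT = b"per-install-random-salt"

def anonymize_user_id(user_id: str) -> str:
    """One-way hash so usage events can be correlated per user
    without carrying anything personally identifiable."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()

event = {
    "event": "report_generated",
    # The raw email never leaves the device, only its hash.
    "user": anonymize_user_id("jane.doe@example.com"),
}
```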

Sometimes the press assumes we're tracking this data for nefarious purposes, but we're simply trying to make the products better – more valuable and more usable – and this has long been one of our most important tools for doing so.

The way this process works overall is that we first ask ourselves what we need to know about how our products are used, then we instrument the product to collect this information (the particular techniques depend on the tool you’re using and what you want to collect), then we generate various forms of online reports to view and interpret this data.
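The key is that the question comes first and the instrumentation answers it. As a toy illustration, suppose the question is "which features are actually used?" – the events and feature names below are invented for the example:

```python
from collections import Counter

# Events collected by the instrumentation (hypothetical sample).
events = [
    {"event": "feature_used", "feature": "export"},
    {"event": "feature_used", "feature": "search"},
    {"event": "page_view",    "page": "home"},
    {"event": "feature_used", "feature": "search"},
]

# The "report": aggregate the raw events into an answer.
usage = Counter(e["feature"] for e in events if e["event"] == "feature_used")
for feature, count in usage.most_common():
    print(f"{feature}: {count}")
```

Real analytics tools do this aggregation for you, but the shape is the same: decide the question, emit the events that can answer it, then roll them up into a report.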

For everything we add, we ensure we have the necessary instrumentation in place to know immediately if it is working as we expect, and if there are significant unintended consequences.  Frankly, without that instrumentation I wouldn’t bother to roll out the feature.  How would you know if it was working?
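In practice that means deciding, before the rollout, both the metric the feature is supposed to move and the guardrail metrics that would reveal unintended consequences. A sketch of that morning-after check (all metric names and thresholds here are invented):

```python
# Hypothetical metrics pulled from analytics the morning after a rollout.
feature_metrics = {"exports_per_user": 1.4}        # what the feature targets
guardrails = {"crash_rate": 0.002, "p95_load_ms": 900}

# Thresholds agreed on before shipping.
healthy = (
    feature_metrics["exports_per_user"] >= 1.0     # feature is being used
    and guardrails["crash_rate"] <= 0.005          # no unintended breakage
    and guardrails["p95_load_ms"] <= 1200          # no performance regression
)
print("keep" if healthy else "investigate")
```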

For most of us product owners and designers, the first thing we do in the morning is check our analytics to see what happened overnight.  We're almost always running some form of test, so we're very interested in what's happened.

There are of course some extreme environments where everything lives behind very strict firewalls, but even then the products can generate periodic usage reports to be reviewed and approved by systems administrators before being forwarded (electronically, or even printed, if necessary) back to the teams.

I’m very big on radically simplifying products, but without knowing what is being used, and how it’s being used, this is a very painful process.  We don’t have the data to back up our theories or decisions, so management (rightfully) balks.

My view is that you just need to start from the position that you simply must have this data, and then work backwards from there on the best way to get it.