I’ve quoted Eric Ries before. To the point where I might sound like a broken record. But I don’t really care: he’s that good. I know my friend and colleague @danmil agrees, too.
Eric’s latest blog post is awesome, as usual. It’s called Innovation Inside the Box, and here are the key nuggets, chopped up and re-spliced by yours truly:
I was recently privy to a product prioritization meeting in a relatively large company… Almost the entire meeting was taken up with interpreting data. The problem was that nobody could quite agree what the data meant… Many custom reports had been created for this meeting, and the data warehouse team was in the meeting, too…
Listening in, I assumed this would be the end of the meeting. With no agreed-upon facts to help make the decision, I assumed nobody would have any basis for making the case for any particular action. Boy was I wrong. The meeting was just getting started. Each team simply took whatever interpretation of the data supported their position best, and started advocating. Other teams would chime in with alternate interpretations that supported their positions, and so on.
In the end, decisions were made – but not based on any actual data. Instead, the executive running the meeting was forced to make decisions based on the best arguments.
Here was my prescription for this situation. I asked the team to consider creating what I call a sandbox for experimentation. The sandbox is an area of the product where the following rules are strictly enforced:
- Any team can create a true split-test experiment that affects only the sandboxed parts of the product; however:
- One team must see the whole experiment through end-to-end.
- No experiment can run longer than a specified amount of time (usually a few weeks).
- No experiment can affect more than a specified number of customers (usually expressed as a % of total).
- Every experiment has to be evaluated based on a single standard report of 5-10 (no more) key metrics.
- Any team that creates an experiment must monitor the metrics and customer reactions (support calls, forum threads, etc.) while the experiment is in progress, and abort if something catastrophic happens…
I’ll stop here; otherwise I’d end up quoting the whole post.
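To make those rules concrete, here’s a rough Python sketch of how a sandbox might enforce them. To be clear, this is my illustration, not anything from Eric’s post: every name and threshold below (`MAX_DURATION`, `MAX_TRAFFIC_FRACTION`, the metric list) is made up, since he specifies the rules, not an implementation.

```python
import hashlib
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical guardrails: Eric says "a few weeks" and "a % of total"
# without giving numbers, so these values are placeholders.
MAX_DURATION = timedelta(weeks=3)
MAX_TRAFFIC_FRACTION = 0.05  # at most 5% of customers per experiment

# One fixed report of 5-10 key metrics; no custom reports per team.
# These metric names are invented for illustration.
STANDARD_METRICS = (
    "signup_rate",
    "activation_rate",
    "7_day_retention",
    "support_tickets_per_1k_users",
    "revenue_per_user",
)

@dataclass
class SandboxExperiment:
    name: str
    owner_team: str      # one team sees the experiment through end-to-end
    duration: timedelta
    traffic_fraction: float
    aborted: bool = False

    def validate(self) -> None:
        """Refuse to launch any experiment that breaks the sandbox rules."""
        if self.duration > MAX_DURATION:
            raise ValueError(f"{self.name}: runs longer than {MAX_DURATION}")
        if self.traffic_fraction > MAX_TRAFFIC_FRACTION:
            raise ValueError(f"{self.name}: affects too many customers")

    def in_variant(self, customer_id: str) -> bool:
        """Deterministic bucketing: a customer always sees the same side."""
        digest = hashlib.sha1(f"{self.name}:{customer_id}".encode()).hexdigest()
        return int(digest, 16) % 10_000 < self.traffic_fraction * 10_000

    def abort(self, reason: str) -> None:
        """Kill switch for catastrophic metric or support-volume swings."""
        self.aborted = True
        print(f"Aborting {self.name}: {reason}")

# A team would launch an experiment roughly like this:
exp = SandboxExperiment("new-signup-flow", "growth", timedelta(weeks=2), 0.02)
exp.validate()                  # raises if any sandbox rule is broken
if exp.in_variant("customer-42"):
    pass                        # render the sandboxed variant here
```

The point of forcing every launch through one `validate()` gate and one `STANDARD_METRICS` report is that the rules get enforced by the sandbox itself, not re-litigated in a meeting.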
Eric tends to strike a really good balance between the ideal and the practical. Everyone wants to be more data-driven, but most managers and management teams are afraid to actually do it, because the data might disprove their hypotheses.
At HubSpot, one of our strengths (I think; chime in if you disagree) is that all of us are relatively data-driven (not to Eric’s level yet, but we can improve), and none of us has an ego so big that it gets in the way of the data. We also have enough of a collaborative, open, transparent culture that if anyone thought a decision was ego-based, they would speak up.
Those are two of my favorite things about working here. But I’d still like us to be better at A/B-testing the Eric Ries way.
By the way, Eric Ries is coming to give a free talk at MIT next month, and we’re co-sponsoring it. I hope you can join us.