A couple of years ago, when designer Doug Bowman left the Google team, he revealed that data was driving the company’s design process, down to the granularity of choosing among 41 shades of blue. From Doug’s blog post:

Yes, it’s true that a team at Google couldn’t decide between two blues, so they’re testing 41 shades between each blue to see which one performs better. I had a recent debate over whether a border should be 3, 4 or 5 pixels wide, and was asked to prove my case. I can’t operate in an environment like that. I’ve grown tired of debating such minuscule design decisions. There are more exciting design problems in this world to tackle.

Pragmatism doesn’t have to conflict with innovation. The process doesn’t have to pit designers and engineers against one another.

Human Convention

A seasoned designer can tell you why certain design patterns are the way they are, like why all the fields in a credit card form are aligned. It’s a known pattern that works, just as a flight of stairs works.

A designer presents an intention, a user recognizes parts of it, and it leads them to an objective. (Of course, the objective is not necessarily the user’s, as we see with dark patterns.)

Most people don’t practice interaction design and so wouldn’t be able to tell you why things are arranged the way they are, or why this or that decision was made. They’re not used to describing these types of problems, and so don’t feel comfortable arguing it either way. But you can ask them how it makes them feel, and they’ll likely have an opinion, because everyone has instincts.

People are innately drawn to different things because of their emotions. Studying current data won’t always tell the whole story.

Saving Us From Ourselves

As humans, we latch onto observed patterns easily and obsess over them until we’re swimming in our own confirmation bias. Often, data will pose more questions than it answers. As tools like Google Analytics become more and more powerful, it will only get easier to obsess over data until the cows come home.

Poring over every metric can be a waste of time. Only a handful need to be tracked. You just need to find specific indicators of whether something is performing well, such as time on site, or how many repeat visits it takes for someone to convert.
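As an illustration, a metric like “repeat visits to convert” falls straight out of raw visit logs. This is a hypothetical sketch, assuming a chronological log of (user, converted?) visit events; the data and names are invented for the example:

```python
from collections import defaultdict

def visits_before_conversion(events):
    """Count how many visits each user made up to their first conversion.

    `events` is a chronological list of (user_id, did_convert) tuples,
    one per visit. Returns {user_id: visit_count_at_first_conversion};
    users who never convert are omitted.
    """
    visit_counts = defaultdict(int)
    converted_at = {}
    for user_id, did_convert in events:
        if user_id in converted_at:
            continue  # only measure up to the first conversion
        visit_counts[user_id] += 1
        if did_convert:
            converted_at[user_id] = visit_counts[user_id]
    return converted_at

# Hypothetical log: user "a" converts on their third visit, "b" never does.
log = [("a", False), ("b", False), ("a", False), ("a", True), ("b", False)]
print(visits_before_conversion(log))  # {'a': 3}
```

The point is that one small, purpose-built number answers the question; you don’t need every dashboard Google Analytics offers.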

The most important thing to measure is the impact of a team’s design decision. This is what builds morale within a team, and helps gauge the cost of a certain decision. There will be many tests that have bewildering results, and having a concise, provable (or disprovable) hypothesis will make for better takeaways come reporting time.

A perfect test will have lessons that can be applied in other areas because it clearly shows why something happened. If nothing is learned, you end up with many small, unjustifiable changes. This is where the paralysis starts.

It’s Human Nature

Without context, what actually happened can be argued in multiple ways; the data just makes each argument sound truthier.

Data should be incorporated into an already strong design process to give guidance on future, broader iterations of the product—not to make little changes one-by-one on a whim. Blindly following data can yield a bunch of band-aid solutions and an incoherent design.

In order to make something extraordinary, we need to be thinking about the humans who are using the product, and not just what the data said. In the end, we design things for our fellow users, not for the numbers.