Fooled by data

Data can often mislead designers and product owners alike, paving the way to detrimental strategic decisions. The worst part is that you remain convinced you made all the right calls and that the “data was solid.” This is a story of one such decision and its fallout.

One of my clients asked me to run some UX checks on their online shop, as they were worried about the shop’s conversion rate dropping in recent months.

“What’s changed since then?” I asked, hoping for a lead. “Not much in terms of design,” the owner replied after a few moments. “We’ve switched to a new checkout system that is supposed to be faster and easier to use, but it looks the same.”

On the surface, the new experience was faster and easier to use than the former checkout system, yet instead of generating higher conversions it was somehow bringing the numbers down.

After checking data consistency between two date ranges, I started cross-referencing traffic between mobile and desktop devices and noticed something strange…

“Your desktop traffic is higher at the checkout, and it has generated a slightly higher conversion rate than before as well,” I said, pointing at my screen to show the data I was talking about. “But the mobile conversion rate has taken a hit, and it drops off significantly at the checkout stage.”

I could tell my client was considering rolling everything back to how it was before. But there was no point in doing that until we figured out what was causing the problem. I spent some time digging through the analytics, comparing results and trying to see which areas were affected most significantly.

What I found was surprising even to me.

“It’s just a theory,” I prefaced. “But it seems that a huge chunk of your mobile audience is abandoning the checkout because its interface is not optimised for smaller screens.” I laid out the evidence in screenshots, showing him how broken the forms were on smaller devices, and demonstrated on his phone how people had to wrestle with form accessibility issues, such as incorrect input types for the email and telephone fields.

“Why does it not affect the desktop traffic?” he asked. The answer lay in the platform differences between the two computing paradigms: desktop and mobile. “It’s because the on-screen keyboard on your phone relies on cues from the input field,” I replied. “Each field tells your phone which keyboard to display, and here they are all messed up, so people can’t complete the checkout with minimal effort. As you can see, typing on a tiny screen is difficult, and people can’t even use autocomplete here.”
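To make that mechanism concrete, here is a minimal sketch of the kind of field we were dealing with, written as TypeScript against the DOM. The field and its attribute values are illustrative assumptions, not the client’s actual markup.

```typescript
// Illustrative sketch of how an input's attributes cue the on-screen keyboard.
// (Hypothetical field; not the client's actual markup.)

// What we found, roughly: an email field that declared nothing about itself,
// so phones showed a generic keyboard with autocorrect mangling addresses.
const brokenEmail = document.createElement("input");
brokenEmail.type = "text";

// What the field should declare, so the phone can show an email-optimised
// keyboard and the browser can offer a saved address via autofill.
const fixedEmail = document.createElement("input");
fixedEmail.type = "email";          // email keyboard layout ("@", ".")
fixedEmail.inputMode = "email";     // explicit keyboard hint
fixedEmail.autocomplete = "email";  // lets autofill suggest the saved address
fixedEmail.autocapitalize = "none"; // no unwanted leading capital
```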

It was a theory, but a plausible one. So we ran an experiment. Together with the engineers, we went through all the checkout fields one by one and made sure they were sized correctly, labelled correctly, displayed correctly and accessible to touchscreen input. We improved the autofill experience by adding the relevant input attributes, and the typing experience too, so that the email field, for instance, would no longer bring up a generic keyboard with autocorrect enabled.
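For illustration, that attribute pass might look something like the sketch below. The selectors, the list of fields and their values are assumptions made for the example; the principle is simply to give every field the keyboard hint and the autofill hint that match its purpose.

```typescript
// Sketch of an attribute pass over checkout fields (selectors and field names
// are illustrative). Each entry gives a field the keyboard hint (type /
// inputmode) and the autofill hint (autocomplete) that match its purpose.
const checkoutFieldFixes: Record<string, Record<string, string>> = {
  "#email":       { type: "email", inputmode: "email",   autocomplete: "email" },
  "#phone":       { type: "tel",   inputmode: "tel",     autocomplete: "tel" },
  "#postcode":    { type: "text",  inputmode: "text",    autocomplete: "postal-code" },
  "#card-number": { type: "text",  inputmode: "numeric", autocomplete: "cc-number" },
};

for (const [selector, attributes] of Object.entries(checkoutFieldFixes)) {
  const field = document.querySelector<HTMLInputElement>(selector);
  if (!field) continue; // field not present on this checkout step
  for (const [name, value] of Object.entries(attributes)) {
    field.setAttribute(name, value);
  }
}
```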

After the work was done, we deployed the changes, making sure we measured everything the same way as before. “What now?” My client was anxious to hear the verdict. “Now, we wait. I’ll be in touch.”


A few weeks later, I went through the same cross-referenced data views I had looked at last time, but this time I compared them to the period before the recent changes were made.

“What’s the score?” asked the client when he picked up my call later that day. I could sense the anxiety in his voice.

“It worked,” I said. “The numbers are up.”

The last four weeks of data revealed a triple-digit improvement in the mobile checkout conversion rate. What’s more, the desktop conversion rate improved as well, while traffic stayed at the same level.

The issue was that the client’s team had assumed the new system would work better than the old one out of the box. They rushed the switch, not taking the time to properly vet the checkout implementation from the user’s point of view. While the desktop experience was fine, the same wasn’t true of the mobile experience.

More importantly, the analytics didn’t show them exactly what the problem was, because the mobile drop-off was likely overshadowed by product desirability: the majority of sales simply moved from mobile to desktop, transferring the numbers across. The problem was partially obscured and the circle was closed. The data was correct, but without context it was preventing the team from making optimal decisions.


There’s a reason why UX and product design consultants like to couple deep dives into data with hypothesis-driven experimentation. It often reveals the full picture of a product or service, and allows them to make better decisions about improving the experience.

There’s no magic code or design eureka in any of this. There are hard facts and half-truths that we often need to uncover and navigate around to get to the bottom of a problem.