I was talking to my friend Steffen Kehr today, who’s spent many years as a user researcher at Airbnb & PayPal in the Valley. I asked him to share some of the more surprising lessons he’s learned there.

“One thing that surprised me is this perception of usability testing. It seems to be the ugly child of user research. Most people think it’s for junior researchers, and don’t really want to do it.”

Why is that? I ask.

“Well”, he says, “people think usability testing is simple, easy, repetitive and not as exciting as working on exploratory research. It sometimes feels like the whole evaluative side of research is on a decline overall.”

Okay. Before we dig deeper, here’s a quick map of what Steffen is talking about: user research roughly splits into exploratory, foundational work on one side, and evaluative work — such as usability testing — on the other.

So all of the evaluative stuff is less interesting these days? I ask him.

“Well, that’s been the general sentiment anyway. You ship off the usability testing to junior researchers or your designers, and then off you go and fly to Brazil to do your field research.”

So what changed?

“Well, one of the reasons for it is, I think, the rise of product analytics and behavioral data”, Steffen explains. “We now have this influx of ‘hard data’ on user behaviors from Data Scientists. So once a product is launched, you’ll know all about how a product is being used by looking at these engagement metrics.”

“Product managers don’t need the research team to understand ‘What’s going on?’ anymore”, he continues. “Instead, you come to research and ask ‘Why?’”

Interesting. Do you prefer it that way?

“Well, it’s good, but it’s a little hard because you’re back again at having to explain to people why your research only included a sample size of 5 or 7 users and why you can draw meaningful conclusions from it. People go ‘What do you mean sample size?’ They’re now used to looking at what every user does as the default sample size, and there’s this illusion that you get the full picture by doing that. So you have to explain how qualitative research works and why it’s relevant.”

Got it. So that means the role is changing, then?

“A bit, in an interesting way. Because we can now take all of the behavioral data from the data science team to go ask: okay, we see these patterns, let’s figure out why they occur and how to change them. Because behavioral metrics really stop at a certain level of insight, and that’s where user research comes in.”

Okay. So it sounds like it’s still necessary to do the evaluative research, including usability testing?

“Oh, totally, it absolutely is. Like I said, the general sentiment is that usability testing isn’t as attractive, but I guess my real lesson when I first moved to PayPal was how important the balance between foundational and evaluative research is. The amount of rigor in PayPal’s usability testing was extremely high — with eye-tracking, 10-15 users as a default sample size, ‘voice of God’ style moderation, and so on — and it produced such deep and strong insights. It was a great lesson, even after having been in the industry for quite a while.”

Thanks Steffen!

So do people agree with him?

I talked to my neighbor, Ashley Reese, who’s been a user researcher at Google for four years, to get her take on some of the things Steffen said.

“Oh, I agree with his conclusion. Usability testing is sooo valuable. I remember this story of how Yahoo’s research team decided they didn’t want to do usability studies, just foundational work, and how as a result they completely lost all of their impact as a team. Usability testing is a great way to have a quick win, it’s always actionable and relevant.”

I totally agree — especially now that it’s easier to do it remotely. But why, then, do some researchers prefer to ‘outsource’ usability testing to designers, or not do it at all?

“You know, I’ve heard of this approach too, and I’m a little skeptical. I’m sure designers could do it well, but you’d still want to be involved as a coach to help out, rather than just get it off your plate.”

At Google, how has the rise of data science impacted your role?

“Well, I don’t think data science can ever replace usability studies. Once a product launches, yes, you can look at product metrics to some extent, but before that, how will you know if your solution works? Also, my experience is that data science generally provides only the most basic engagement data, which gives just a tiny piece of the story. So I’ve never felt threatened by data science or anything like that; instead, I think partnering with them brings the greatest understanding of the users to the team.”

Let’s summarize

  • To know whether a solution works, we use evaluative research (which includes usability testing). It’s got a bad rap for being boring, but it’s really valuable.
  • Designers can learn how to do usability testing, and if there’s a research team around, researchers shouldn’t be out of the picture — instead, they should stay engaged as coaches.
  • Data Science and User Research teams can complement one another to answer both the “what” and the “why” of user behavior.

And that’s it for today — happy researching!