Building products customers love: A conversation with Dan Olsen

unitQ CEO and co-founder Christian Wiklund recently connected with Dan Olsen, bestselling author of The Lean Product Playbook and advisor to product teams at Amazon, Google, Meta, and Uber. Dan has spent two decades helping companies figure out what customers actually want: not what they say they want, or what executives think they want, but what they truly need.
The conversation covered everything from why behavioral data can mislead you to how AI is changing a Product Manager's (PM) skill set. But the thread that ran through it all? Most product teams aren't short on feedback. They're short on action.
Here are the highlights.
On gut feel versus real-time insight
Dan, in your experience, where do manual, biased, and late customer insights show up most often in a product org? And how do you assign a cost to them?
I love the way you describe it. Your users, your customers are the early detection system. When I started in PM, QA (quality assurance) was a thing. Everybody had QA teams. These days you're lucky if you have some QA resources. There are still many companies that value that. But a lot of times, products are not being regularly tested.
Even if you have automated testing, the test coverage isn't that great. So to your point, your customer base is testing every release. They're hammering on every use case, path, and action they can take. So they become your extended QA team. They're going to be the first to detect stuff.
And before we even get into how you can have higher-quality insights, the biggest error mode I see is that there's just a lack of customer insight in the first place, or not enough of it.
It's like what people said in our poll: “We just go with the senior executive's gut feel.” Or some large client is banging on the table saying, “We need you to build feature X for us.” Then that becomes the top priority. It's all hands on deck, and these are usually the large-scope items. Then you launch that and you're on to the next thing.
I call it shiny object syndrome—there's always a next shiny object.
Then you never get feedback post-launch. You launch something and you just assume, "Oh, of course it's going to hit the OKRs we set, of course it's going to be successful." Of course that's not going to happen. That's a very naive view.
As I like to point out, even the top tech companies in the world don't crush their V1s. The way they succeed is that they launch V1, get it out there, and then listen, get feedback, iterate, tweak, and improve it, addressing the issues they couldn't have discovered until it launched.
So that's the idea: first, you should have a post-launch feedback mechanism. And then there are all these different categories of data that we're talking about here.
If it's a manual process every time after every launch, if it takes a lot of effort to do that, of course people are not going to do it as often because it's a pain. You could look at app store reviews every week. You could put a little reminder on your calendar. But then you have to go look at the data manually.
You don't want to have to do all that manually if it can be automated. You want to be able to ingest data automatically and, ideally, in real time.
📌 Our take → This is exactly the problem unitQ was built to solve. Most teams know feedback exists—it's in app stores, support tickets, social media, surveys. But by the time someone manually compiles it into a quarterly report, the issues have already compounded. Real-time visibility changes the game. When you can see emerging issues the moment they start trending, you can fix them before they become churn.
On what behavioral data misses
Dan, I know you talk a lot about behavioral data, what users do, but that doesn't necessarily cover the why. So can you talk about the value of voice-of-customer data, and which underappreciated sources people should be listening to?
Definitely. It's funny, in the initial poll that we did, analytics was one choice and qualitative data was another. Those are two fundamentally different types of data: behavioral data is what people do; attitudinal data is what people tell you.
A friend of mine, Christian Rohrer, is a UX leader. He has a great framework, a 2x2 with qualitative versus quantitative on one axis and behavioral versus attitudinal on the other. The bottom line is, these are all different ways to learn and hopefully gain insights. And sometimes organizations prefer one modality over another.

[Figure: Christian Rohrer's 2x2 landscape of UX research methods]
Source: When to Use Which User-Experience Research Methods (Nielsen Norman Group, 2022)
I like to personify these to make them fun and interesting.
Organizations that are all about analytics, I say they're like Spock from Star Trek: all about logic. And companies that value attitudinal, qualitative data are more like Oprah Winfrey, who does these great interviews and gets to know people really, really well. A company or a person might have a preference, but the reality is the two complement each other.
Let's talk about behavioral data for a moment. Say our analytics show that, at a certain point in our flow, only 30% of users converted, and that's lower than we would expect. If we have thousands of data points, that can be very statistically significant: we know it's 30%, plus or minus a point or so.
But the interesting thing is that we won't know why 30% got through and 70% didn’t. We won't know any of the whys behind the conversion.
So sometimes you start with quantitative behavioral data, and then you have to dig in to get the whys behind it. You might find that it's a UI issue or a bug under certain browsers—who knows what. Quantitative might give you a clue. You can see there's smoke, but you don't know the source of the fire.
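To put a number on Dan's precision point, here is a minimal sketch, in plain Python with hypothetical figures (10,000 users at a funnel step, 3,000 converting), of the standard normal-approximation confidence interval for a conversion rate. The sample pins down the what to within about a percentage point; nothing in the math touches the why.

```python
import math

def conversion_ci(conversions: int, visitors: int, z: float = 1.96):
    """95% confidence interval for a conversion rate (normal approximation)."""
    p = conversions / visitors
    margin = z * math.sqrt(p * (1 - p) / visitors)
    return p, margin

# Hypothetical funnel step: 3,000 of 10,000 users converted.
p, margin = conversion_ci(3_000, 10_000)
print(f"conversion: {p:.1%} +/- {margin:.1%}")  # conversion: 30.0% +/- 0.9%
```

And because the margin shrinks with the square root of the sample, cutting it tenfold takes roughly a hundred times the traffic. More data sharpens the what; only qualitative digging surfaces the cause.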
📌 Our take → This disconnect shows up constantly. Your analytics dashboard can tell you exactly where users drop off, how long they spend on a page, and which features they click. But it can't tell you what they were thinking when they left, or why they never came back. That's why the best teams layer qualitative feedback on top of their behavioral data—it's the only way to understand the why behind the what.
On the feedback-action gap
So companies are not short on feedback; they're short on action. They're getting feedback, but then they don't act on it. Where do you see that gap show up most often? And by the way, it doesn't have to relate just to product. The customer experience is end-to-end, right?
Yeah, it's super interesting. I just got an email from a product that I use and love. This is something I started using last summer and I got some kind of free promo code to try it out.
All of a sudden, out of the blue, I get this email saying, "Your trial has ended, we're going to move you to this plan." I'm like, "Wait a minute." I had to double check. I'm like, "I thought I had a year of this plan." And they're ending it after six months.
So out of nowhere, they're annoying someone who's recommended their product to everyone, turning a promoter into a detractor. Now I'm going, "What are you guys talking about?" My first reaction was to give support all the details: here's the date I signed up, it was a 12-month promo code, I'm not sure what's going on. And their first reply was, "Oh, we see you switched plans on this day, like two days ago."
I replied, "No, you guys switched the plan. Didn't you see? Your own email said..." They didn't read my email. Anyway, long story short, it got resolved, but it's what I call a self-inflicted wound. Everything was going great, your users were loving you, and then you did something that, to your point, had nothing to do with the actual product.
There are all these things I call “policy decisions,” whether it's pricing or something else, that can really, like you said, impact the whole customer experience.
📌 Our take → Dan's point about self-inflicted wounds is spot-on. The best teams monitor feedback across the entire customer experience—not just the product. Pricing changes, support interactions, push notifications—they all generate signal. And if you're not listening, you won't know you've just annoyed your biggest advocates.
On quality and the feature mill
What I love about this is how it reframes quality. The feature mill says, "We're going to build a lot of features," but if the foundation is shaky, features stacked on top of it won't thrive the way they would on a rock-solid foundation.
Yes, if we let our customers guide us on what to fix and where to go, a bug, for example, then people don't have a reason to reach out to your support team, so you save money there. You don't get that one- or two-star review, and your reputation gets better.
What about word-of-mouth, organic growth? The more death-by-a-thousand-cuts you have, the worse it gets. But that's the beauty here: quality really supercharges the entire growth function of a company. These companies exist because of the product, because of the experience. And if you can make that core shine brighter, it supercharges everything.
Your marketing, as a result, is going to spend into a funnel that's more optimized because there's no quality tax.
📌 Our take → This distinction matters more than most teams realize. You can have the best product strategy in the world, but if your app crashes, your UI confuses people, or your onboarding flow frustrates them, none of it matters. Quality isn't a feature—it's the foundation everything else is built on.
On AI and the future of product work
PM skills in an AI-first world—what are you seeing? In the PM world, are people stoked because they can prototype faster? What are the best PMs doing right now with AI?
If a company told me their average PM is matched up with 12 or more engineers, I'd be like, "Yeah, of course they're too busy spending time in Jira and running scrum ceremonies and don't have time to do customer discovery or insights." That's a bad ratio. Once you get down to six to eight, that's typically the healthy range where PMs can execute well.
Interestingly, Andrew Ng, AI pioneer and thought leader, was speaking at a Y Combinator event several months ago. He said that as AI tools make engineers more productive, the bottleneck is shifting from engineering to product management. Historically, engineering was the scarce resource. Now many teams are lowering their engineer-to-PM ratio: four or even three engineers per PM.
He even mentioned that one company wanted to flip the ratio entirely: two PMs for every engineer. The point is, as coding becomes more efficient, we need more resources upstream—discovering, prioritizing, and deciding what to build.
One thing we didn't talk about is prioritization. Obviously prioritization is key. You're going to use all this customer insight data to make the right prioritization calls: not just how to build what you've decided to build, but which things should even be prioritized in the first place. So that's really critical.
I think we're at this inflection point. You can either stick your head in the sand and say, "Oh, this too shall pass. I'm not going to learn about this AI stuff." I don't think that's a wise move. You don't have to go get a PhD in AI. It's pretty easy to devote some time to learning about it.
The PMs investing in AI—and specifically vibe coding—are seeing the biggest gains. Here's why that matters: before, if you wanted to test a prototype before launch, you needed a designer to create a clickable Figma prototype. Many teams don't have that resource, so they skip prototyping entirely—just take the PRD (Product Requirements Document) straight to engineering, ship it, and launch.
Now with vibe coding, anyone can create a robust prototype quickly. There's less excuse not to test ideas with customers before committing engineering resources.
📌 Our take → The teams that thrive in the next decade will be the ones who use AI to accelerate their judgment, not replace it. AI can cluster feedback, spot patterns, and surface emerging issues faster than any human could. But it still takes a product leader to decide what to prioritize, how to communicate changes, and whether the fix aligns with the broader strategy. That's not going away.
What it means for your team
The teams winning right now bring signals together from everywhere—app stores, support tickets, social media, surveys. They act systematically, not ad hoc. They separate "loud" from "important."
Most importantly, they don't wait for quarterly reports to tell them what's broken. They already know, because they're listening in real time.
Want to hear the full conversation with Dan?