🌟 Data vs. Findings vs. Insights in UX
In many companies, data, findings and insights are used interchangeably. Slack conversations circle around convincing data points, statistically significant findings, reliable insights and emerging trends. Unsurprisingly, these conversations often mistake sporadic observations for consistent patterns.
But how much weight does each of them actually carry? And how do we turn raw data into meaningful insights to make better decisions? Well, let's find out.
If you'd like to support my work, take a look at a friendly UX bundle: practical techniques on UX and design patterns in Smart Interface Design Patterns 🍣 and How To Measure UX 🔮 — hands-on video courses on UX. Use a friendly code LINKEDIN to save 15%.
Why It All Matters
At first, it might feel like the differences are very nuanced and merely technical. But when we review inputs and communicate outcomes of our UX work, we need to be careful not to conflate terminology — to avoid wrong assumptions, wrong conclusions and early dismissals.
When strong recommendations and bold statements emerge in a big meeting, inevitably there will be people questioning the decision-making process. More often than not, they will be the loudest voices in the room, often with their own agenda and priorities that they are trying to protect.
As UX designers, we need to be prepared for it. The last thing we want is to have a weak line of thinking, easily dismantled under the premise of “weak research”, “unreliable findings”, “poor choice of users” — and hence dismissed straight away.
Data ≠ Findings ≠ Insights
People with different roles — analysts, data scientists, researchers, strategists — often rely on fine distinctions to make their decisions. The general difference is easy to put together:
Data is raw, unprocessed observation: numbers, quotes, clips, individual behaviors. Findings are the patterns that emerge across data points. Insights explain why a pattern occurs and what to do about it. Hindsight is what we learn after acting on an insight. Foresight projects where user expectations are heading.
Here’s what this looks like in real life:
Data
6 users were looking for “Money transfer” in “Payments”, and 4 users discovered the feature in their personal dashboard.
Finding
60% of users struggled to find the “Money transfer” feature on a dashboard, often confusing it with the “Payments” section.
Insight
Navigation doesn’t match users’ mental models for money transfers, causing confusion and delays. We recommend renaming sections or reorganizing the dashboard to prioritize “Transfer Money”. It could make task completion more intuitive and efficient.
Hindsight
After renaming the section to “Transfer Money” and moving it to the main dashboard, task success increased by 12%. User confusion dropped in follow-up tests. It proved to be an effective solution.
Foresight
As our financial products become more complex, users will expect simpler, task-oriented navigation (e.g., “Send Money”, “Pay Bills”) instead of categories like “Payments”. We should evolve the dashboard towards an action-driven IA to meet user expectations.
Only insights create understanding and drive strategy. Foresight shapes strategy, too, but it is always built on bets and assumptions. Unsurprisingly, stakeholders are interested in insights, not findings. They rarely need to dive into raw data points, but they often do want to make sure that findings are reliable.
That’s when eventually the big question about statistical significance comes along. And that’s when ideas and recommendations often get dismissed without a chance to be explored or explained.
“But Is It Statistically Significant?”
Now, for UX designers, that’s an incredibly difficult question to answer. As Nikki Anderson pointed out, statistical significance was never designed for qualitative research. And with UX work, we’re not trying to publish academic research or prove universal truths.
As Nikki continues, what we are trying to do is reach theoretical saturation, the point where additional research doesn’t give us new insights. Research isn’t about proving something is true. It’s about preventing costly mistakes before they happen.
When a question about statistical significance emerges, I try to find the reasons and motivations behind that question first. Now, I'm not a UX researcher by any means, but I do need to know about the expected margin of error and confidence levels. The real question is how many users we can dismiss as "irrelevant" or "not representative" until we realize that there is a problem that's worth solving.
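When the margin-of-error question does come up, a quick back-of-the-envelope interval helps frame the conversation. Here is a minimal Python sketch (the function name and the 6-of-10 numbers are illustrative, echoing the example above) using the Wilson score interval, which behaves better than the naive normal approximation for the small samples typical of moderated usability tests:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Wilson score confidence interval for a proportion.

    More stable than the naive normal approximation when n is small,
    which is the norm in moderated usability testing.
    """
    if n <= 0:
        raise ValueError("sample size must be positive")
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# 6 of 10 users looked for "Money transfer" in the wrong place.
low, high = wilson_interval(6, 10)
print(f"observed 60%, 95% CI roughly {low:.0%}-{high:.0%}")
```

With 6 of 10 users affected, the 95% interval spans roughly 31% to 83%. That is wide, but even its lower bound suggests a problem worth investigating rather than dismissing as "not representative".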
Nikki suggests preparing a handful of talking points to handle the question.
How To Turn Findings Into Insights
Once we notice patterns emerging, we need to turn them into actionable recommendations. This isn’t always easy: we need to avoid convenient guesses and assumptions as much as possible, as they invite wrong conclusions. To do that, we can rely on a simple but effective framework: What Happened + Why + So What.
To better assess the “so what” part, we should pay close attention to the impact of what we have noticed on desired business outcomes. It can be anything from high-impact blockers and confusion to hesitation and inaction.
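The three-part framework maps neatly onto a tiny data structure. A hypothetical Python sketch (the class name and example text are illustrative, not from the original):

```python
from dataclasses import dataclass

@dataclass
class Insight:
    """One insight built with the What Happened + Why + So What framework."""
    what_happened: str  # the observed pattern (the finding)
    why: str            # the likely cause, e.g. a mental-model mismatch
    so_what: str        # recommendation tied to a business outcome

    def summary(self) -> str:
        # A single readable statement for a slide or report.
        return f"{self.what_happened} {self.why} {self.so_what}"

note = Insight(
    what_happened="60% of users could not find 'Money transfer' on the dashboard.",
    why="The navigation labels do not match users' mental model for transfers.",
    so_what="Rename the section to 'Transfer Money' to reduce task drop-off.",
)
print(note.summary())
```

If any of the three fields is hard to fill in, that is usually a sign you are still holding a finding, not an insight.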
To learn more, I can wholeheartedly recommend exploring the Findings → Insights Cheatsheet in Nikki Anderson’s wonderful slide deck, which includes examples and prompts for turning findings into insights — explained much better than I ever could.
Stop Sharing Findings; Deliver Insights
When presenting the outcomes of your UX work, focus on actionable recommendations and business opportunities, rather than patterns and anecdotes that emerged during testing. Admittedly, this requires not just an analysis of what happened, but a study of the future state we want to move toward.
But also, it’s all about telling a damn good story. Memorable, impactful, feasible, convincing. Paint the picture of what the future could look like, and the difference it would produce. And as you do, I would highly recommend recording timestamps when interesting patterns emerge in your testing sessions. Nothing is more impactful than showing video clips of real customers struggling to complete real daily tasks with a real product.
Once you highlight them in a presentation, one after another, even the biggest critics might at least want to study the problem and explore how to address those issues. And ultimately, that's exactly the goal of UX research.
Happy Birds: How To Measure UX (Video + Live UX Training)
I've been spending quite a bit of time reviewing and drafting new sections for the video courses on UX.
Thank you so much for your support, everyone — and happy designing! 🎉🥳