We're Building Better AI Even as Our Systems Get Dumber (/RANT)

You're building on OpenAI. So is everyone else. Microsoft has tied its cloud business to it. Google competes but depends on a similarly concentrated infrastructure. Meta, Amazon, Nvidia, Oracle, SoftBank: all of them are betting billions of dollars on a handful of providers staying stable, staying accessible, staying aligned with your interests. What's your plan when that changes? You don't have one. That's not a strategy. That's hope dressed as innovation. We are growing extremely powerful systems that we do not fully understand.

Image Source: FT
The problem isn't whether AI is intelligent enough. The problem is whether you're building systems smart enough to govern AI. Right now, you're not. And the gap between your technological capability and your organisational wisdom grows wider every day.        

The Netherlands Broke 26,000 Families With an Algorithm

Consider what happened when the Dutch government automated fraud detection for childcare benefits. The system worked exactly as designed. It processed applications, flagged anomalies, calculated risk scores, and issued debt notices with impressive efficiency. The technology was sophisticated. The implementation was catastrophic. 

The algorithm treated minor administrative errors, such as a missing signature or a small documentation gap, as evidence of fraud. Families were automatically classified as fraudsters. There was no meaningful human oversight, no effective appeals process, no pathway for collective wisdom to catch what the algorithm missed. The system also flagged families with dual citizenship at substantially higher rates than others. The bias wasn't malicious. It was mathematical, which made it harder to see and easier to ignore.
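
The exact scoring rules were never made public, but the failure mode is easy to illustrate. Below is a minimal, hypothetical sketch of how a naive rule-based risk score can treat paperwork gaps as fraud signals and quietly encode disparate impact; the attributes, weights, and threshold are invented for illustration, not taken from the Dutch system.

```python
# Hypothetical sketch only: the attributes, weights, and threshold below are
# invented to illustrate the failure mode, not taken from the Dutch system.

from dataclasses import dataclass


@dataclass
class Application:
    missing_signature: bool      # an administrative slip, not evidence of intent
    incomplete_documents: bool   # another paperwork gap
    dual_citizenship: bool       # a proxy attribute that should carry no weight


def risk_score(app: Application) -> float:
    """Naive additive score: every gap, and one proxy attribute, raises 'risk'."""
    score = 0.0
    if app.missing_signature:
        score += 0.4
    if app.incomplete_documents:
        score += 0.4
    if app.dual_citizenship:
        score += 0.3             # bias hiding as an innocuous-looking weight
    return score


def flag_as_fraud(app: Application, threshold: float = 0.6) -> bool:
    """Above the threshold, the decision is automatic: debt notice, no appeal."""
    return risk_score(app) >= threshold
```

With these invented weights, a family with a single missing signature stays below the threshold unless the proxy attribute is present, at which point the same paperwork gap becomes a fraud flag. No individual rule looks malicious; the disparity only shows up in aggregate outcomes, which is exactly why it is easy to ignore.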

Twenty-six thousand families were wrongly accused. Many were forced to repay tens of thousands of euros immediately. Some lost their homes. Some took their lives. The entire Dutch cabinet resigned over the scandal. The government now pays €30,000 in compensation per family, but no amount of money can return what was taken. The technology scaled perfectly. The wisdom didn't scale at all.

You're about to do the same thing. Not with childcare benefits. With something else. Hiring decisions, perhaps. Credit approvals. Medical diagnoses. Welfare determinations. The domain doesn't matter. The pattern does. And you're replicating it on a much larger scale.

The Gap Isn't Closing. It's Accelerating

Technology moves exponentially. Your organisational capacity to govern it moves linearly at best. This creates an asymmetry that compounds daily. You're deploying AI systems that make consequential decisions faster than you can build the wisdom infrastructure to guide them. You call this "moving fast and breaking things." The Dutch called it that, too. Now they call it a national tragedy.

The fundamental problem is infrastructural. You've built extraordinary systems for moving and processing data. You've invested billions in compute, storage, and bandwidth. You've created a technical infrastructure that operates at a scale previously unimaginable. But you haven't built equivalent infrastructure for making sense of what that data means. You can process millions of transactions. You cannot process millions of stories. The Dutch system moved data perfectly: fast, accurate, efficient. What it couldn't do was hear 26,000 families explaining that something was deeply wrong.

Your systems can't hear them either. Not because you're callous, but because you haven't built the infrastructure for organisational listening at scale. You have dashboards, metrics, and compliance reports. What you don't have are feedback loops that actually change behaviour before disasters unfold. The Dutch had all the dashboards, too. They also had multiple whistleblowers raising alarms for years. The system was structurally incapable of responding because information flow and wisdom flow are different things, requiring different infrastructure.

AI Circular Economy (?)

Concentration Risk Masquerading as Strategy

Your AI strategy likely depends on a small number of foundation model providers. This dependency isn't incidental. It's structural. The economics of AI development favour massive scale and concentrated investment. The result is an ecosystem where strategic partnerships, infrastructure deals, and cloud integration all flow toward a handful of chokepoints. When you deploy AI systems across your operations, you're not just adopting technology. You're accepting systemic fragility.

What happens when your primary provider changes its pricing? Its terms of service? Its strategic direction? What happens when it gets acquired by a competitor? When it faces regulatory action that limits its capabilities? When geopolitical tensions restrict access across borders? You're building critical systems without redundancy, without alternatives, without genuine exit strategies. This isn't resilience. It's the illusion of stability created by everyone depending on the same foundation. If we can be wrong about animals, wrong about other people, even wrong about our own babies, then we can be very wrong about AIs.
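
What a genuine exit strategy looks like in practice is mostly unglamorous engineering. Here is a minimal sketch, assuming nothing about any real vendor's SDK: wrap every provider behind one interface and fail over down an ordered list, so a pricing change, outage, or terms change at one vendor degrades service instead of halting it. The `ProviderError` type and the callable signature are hypothetical.

```python
# Hypothetical sketch of a provider abstraction with ordered fallback.
# The ProviderError type and the callable signature are illustrative,
# not any real vendor's SDK.

from typing import Callable, List


class ProviderError(Exception):
    """Raised when a provider is unavailable, rate-limited, or has changed terms."""


def complete_with_fallback(prompt: str,
                           providers: List[Callable[[str], str]]) -> str:
    """Try each provider in order; fail over instead of failing outright."""
    failures = []
    for provider in providers:
        try:
            return provider(prompt)
        except ProviderError as exc:
            failures.append(exc)   # record the failure, move to the next vendor
    raise RuntimeError(f"All providers failed: {failures}")
```

The point isn't the dozen lines of code. It's that switching or re-ordering vendors becomes a configuration change rather than a rewrite, which is the difference between having an exit strategy and hoping you never need one.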

The parallel to financial infrastructure before 2008 should be obvious. Concentrated risk. Systemic dependencies. Sophisticated technology obscuring fundamental fragility. Everyone is building on the same assumptions, which means when those assumptions fail, they fail everywhere simultaneously. You're creating the conditions for cascade failures while calling it innovation.

"Miso’s kitchen robots already fried 4m+ baskets of food for brands like White Castle. With 144% labor turnover and $20/hour minimum wages, they’re not alone. Partnered with NVIDIA and Uber’s AI team"

Image Source: MISO

Compliance Theatre Versus Actual Wisdom

You probably have AI governance frameworks. Ethics boards. Review processes. Compliance checkboxes. The Dutch had those too. They had audits, approvals, and governance structures that looked impressive on paper. None of it prevented disaster because compliance and wisdom are fundamentally different things.

Compliance asks: "Did we follow the process?" Wisdom asks: "Is the process producing outcomes aligned with our actual values?" Compliance is backward-looking, checking whether you satisfied requirements. Wisdom is forward-looking, sensing whether you're heading toward catastrophe before you arrive. The Dutch system was compliant right up until it destroyed 26,000 families. Your systems will be too.

The distinction matters because most organisational responses to AI risk focus on compliance. You're creating review boards, ethical guidelines, and audit processes. These aren't useless. They're insufficient. They're designed to catch known problems using established frameworks. They're not intended to sense emergent issues that don't fit existing categories. They're not built for collective sensemaking at the speed and scale at which your AI systems operate.

What Collective Sensemaking Actually Means

Collective sensemaking isn't another committee. It's infrastructure. Think about how you built systems to move data: you created protocols, invested in bandwidth, established standards, and deployed technology at every node. Now consider how you make sense of what that data means: you hold meetings, write reports, and escalate through hierarchies. The asymmetry is absurd.

Sketch by David Hall

Researchers like Cynthia Kurtz have demonstrated that participatory narrative inquiry allows communities to understand complex situations through collective story work. Dave Snowden's SenseMaker platform shows how micro-narratives, interpreted by the people who lived them, reveal patterns invisible to both individual perception and top-down analysis. Nora Bateson describes this as "aphanipoesis", the unseen creativity through which living systems learn and evolve. These aren't academic theories. They're proven methodologies for creating the organisational capability you currently lack.

Imagine if the Dutch system had included infrastructure for collecting and interpreting stories from affected families. Not complaint forms that get filed and forgotten, but actual narrative infrastructure where patterns emerge from lived experience. The discrimination against dual-citizenship families would have been obvious months or years earlier. The mounting distress would have created signals the organisation couldn't ignore. The system would have adapted because it was structurally capable of learning from the people it affected.
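
To make that concrete, here is a minimal sketch of the kind of signal such infrastructure could surface: comparing, across groups, how often collected accounts report being flagged as fraud. The field names and the alert threshold are hypothetical, and this is not how any particular platform such as SenseMaker works internally; it only shows that the disparity is detectable the moment someone builds a loop to look for it.

```python
# Hypothetical sketch: surface a disparity signal from collected narratives.
# Field names and the alert threshold are illustrative.

from collections import defaultdict
from typing import Dict, List


def flag_rate_by_group(records: List[dict], group_key: str) -> Dict[str, float]:
    """Share of narratives in each group that report being flagged as fraud."""
    totals: Dict[str, int] = defaultdict(int)
    flagged: Dict[str, int] = defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        flagged[group] += int(record["reported_fraud_flag"])
    return {group: flagged[group] / totals[group] for group in totals}


def disparity_alert(rates: Dict[str, float], ratio_threshold: float = 2.0) -> bool:
    """Signal when one group's flag rate is far above the lowest group's."""
    if len(rates) < 2:
        return False
    lowest, highest = min(rates.values()), max(rates.values())
    if lowest == 0:
        return highest > 0      # any flags at all against a zero baseline
    return highest / lowest >= ratio_threshold
```

Fed continuously with stories from affected families, a loop like this would have turned the over-flagging of dual-citizenship households into a persistent, unignorable signal rather than a pattern uncovered years later by journalists and whistleblowers.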

Your organisation needs this infrastructure now. Not for ethical optics. For survival. Because the AI systems you're deploying will make mistakes; complex systems always do. The question isn't whether failures happen. It's whether you can sense them fast enough to respond before they cascade.

The Work That Actually Matters

You need to build a wisdom infrastructure that matches your technical infrastructure. This means creating systems where stories from affected stakeholders reveal patterns before they become disasters. Where frontline workers can raise concerns that actually change organisational behaviour. Where expert knowledge and lived experience connect meaningfully rather than existing in separate domains. Where feedback loops operate fast enough to matter.

This isn't soft skills development. It's hard infrastructure work. You need platforms, protocols, resources, and organisational commitment equivalent to what you've invested in your technical stack. You need to treat collective sensemaking as a core capability, not a peripheral activity. You need to measure your capacity for organisational learning with the same rigour you measure compute efficiency.

Dave Snowden's Cynefin Framework

The alternative is repeating the Dutch disaster in whatever domain you operate. The pattern is clear. Deploy sophisticated technology without equivalent wisdom infrastructure. Optimise for efficiency without building capacity for collective learning. Trust compliance processes to catch what they're structurally incapable of sensing. Watch failures accumulate until they become catastrophic. Then express surprise, pay compensation, and move on.

The Clock Is Running

Every day you deploy AI more widely without building a wisdom infrastructure, the gap widens. Every system you launch without genuine feedback loops adds fragility. Every dependency you accept on concentrated providers increases systemic risk. The Dutch had years of warning signs. Multiple whistleblowers. Clear patterns of harm. They ignored them all until the system collapsed spectacularly.

How long until your system breaks? Which stakeholders pay the price? And what's your actual plan for preventing it?        

Not your AI strategy. Not your governance framework. Your wisdom strategy. Your infrastructure for collective sensemaking at scale. Your systems for organisational learning that operate as fast as your AI systems make decisions.

Because without that infrastructure, you're not deploying AI responsibly. You're just building faster, more sophisticated ways to fail. The Netherlands proved what happens when technology outpaces wisdom. You're next unless you build differently. The work starts now.


How does your organisation create space for collective sensemaking? What systems are you building to match the technological pace with collective wisdom?
Rebecca Plantz

🔷 Leader who drives strategy, execution, and change 🔷 Transformational Delivery and Operations Leader 💫 Information Security & Technology Executive 💫 Risk Management Expert 💫


Great article!!

ED JONES

Incident Management Guru


Great thoughts! My biggest concern is who is responsible/liable when failures happen? The Dutch example that you cited is another example where assumptions are made and likely not fully systemically thought out and clearly not properly validated. Another example that I am faced with daily is self driving cars. The problem is that AI is a TOOL that operates in defined constraints, which we all know don't always cover all conditions. If we don't provide the definition, then the AI makes mistakes.

Ryder Stevenson

AI Consultant & Founder @ Ryder AI | Machine Learning Expert | Transforming Brisbane Businesses with Custom AI Solutions | I help companies achieve 40%+ efficiency gains through intelligent automation


The Dutch needed to have human in the loop before the accusatory stage!

Carla Liuzzo PhD

Lecturer, Graduate School of Business


Couldn't agree more Vibhor Pandey, great post! I also wonder whether sensemaking should extend to questioning to what end and who is benefiting from this 'distraction' from what's actually happening!

Tirrania Suhood

context sensemaker & networkweaver for the commons/common good


Thanks Vibhor Pandey. Great reference to the Dutch disaster, which has similarities with Australia's #robodebt - Centrelink's automated debt recovery system that was erroneous and had a lack of human oversight, with almost 500,000 average Australians hounded and at least two suicides. I highly recommend Special Broadcasting Service (SBS) Australia's recently released docu-drama on this "The People vs Robodebt". Lyndsey Jackson Kiki Fong Lim https://www.sbs.com.au/ondemand/tv-series/the-people-vs-robodebt



