Leading Strategically for the Next 3-5 Years (Navigating the AI Revolution)
The Transitional AI Revolution and the Leader's "Razor's Edge"
We are living through a transitional revolution driven by advances in AI, a period of upheaval on par with the advent of the printing press, the industrial age, or the internet. Each of those past leaps forced humans to redefine their roles and skills. Gutenberg's 15th-century printing press mass-produced knowledge "wider and faster than ever before," doubling literacy rates each century as books became affordable to the masses (History.com). The Industrial Revolution transformed agrarian handicraft economies into mechanized industries, introducing "novel ways of working and living" that fundamentally transformed society (Britannica.com). The internet rewired how we communicate and work: "the internet has revolutionized the computer and communications world like nothing before," enabling worldwide information dissemination and collaboration (InternetSociety.org). Now AI is "another profound change that rivals the printing press's seismic shift" (JohnRMiles.com), and it demands revolutionary leadership to navigate its impact on business and society.
Leading in the middle of this huge paradigm shift is like walking a tightrope. Harvard leadership scholars Ronald Heifetz and Donald Laurie describe it well in The Work of Leadership:
“Because a leader must strike a delicate balance between having people feel the need to change and having them feel overwhelmed by change, leadership is a razor’s edge.”
In other words, leaders have to create enough urgency to spur adaptation without causing despair or burnout. This balance is especially critical with AI, where the pace of change is dizzying and the stakes feel high. It's no wonder many of today's executives feel both excited and unsettled - AI brings game-changing capabilities, but also fears of disruption, job displacement, and even dystopian outcomes.
Strategic leadership in the next 3-5 years will require adaptive leadership in its truest sense:
Adaptive Leadership: Holding Environments and Guiding Change
Heifetz and Laurie outline three fundamental tasks for leading people through adaptive change.
First, create a “holding environment” – a space (cultural or even physical) where people can confront tough challenges without imploding. In practice, this means pacing the introduction of AI-driven changes and sequencing initiatives so that teams have time to absorb new ways of working. For example, if you’re rolling out an AI automation platform, you might start with one department as a pilot (releasing some steam) while communicating a broader 3-year vision. A good holding environment provides psychological safety and open dialogue: people can voice the anxieties or conflicts sparked by the AI transition, and the leader uses those conversations to help the group learn and adjust. Setting the direction provides more than a north star; it creates opportunities to see and discuss the paths to get there.
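To make the pacing idea concrete, here is a minimal sketch of a gated, phased rollout. The department names, phase scopes, and readiness threshold are all hypothetical illustrations, not a prescribed framework; the point is that expansion waits for an explicit signal that the team can absorb more change.

```python
# A minimal sketch of pacing an AI rollout as explicit, gated phases.
# All names, scopes, and thresholds below are hypothetical.
from dataclasses import dataclass

@dataclass
class Phase:
    name: str            # which part of the org adopts next
    scope: str           # what the AI platform does in this phase
    min_readiness: float # gate: team readiness score (0-1) required to proceed

ROLLOUT = [
    Phase("claims-ops pilot", "draft responses, human approves all", 0.0),
    Phase("claims-ops full", "auto-handle routine cases, human spot-checks", 0.6),
    Phase("adjacent departments", "same pattern, new workflows", 0.7),
]

def next_phase(current: int, readiness_score: float) -> int:
    """Advance only when the team signals it can absorb more change."""
    if current + 1 < len(ROLLOUT) and readiness_score >= ROLLOUT[current + 1].min_readiness:
        return current + 1
    return current  # hold: a holding environment means not forcing the pace

# A 0.55 readiness survey keeps the pilot in place; 0.65 unlocks expansion.
print(ROLLOUT[next_phase(0, 0.55)].name)  # -> claims-ops pilot
print(ROLLOUT[next_phase(0, 0.65)].name)  # -> claims-ops full
```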
Second, an adaptive leader takes responsibility for direction, protection, orientation, conflict management, and norm-shaping. In a stable environment these might sound like routine management duties, but during a tech upheaval they take on a new flavor. Providing direction now means identifying the adaptive challenge – for instance, “How do we integrate AI into our workflow without losing functionality, core values, or the human touch?” – and framing the key questions for the organization. Protection means managing the rate of change, as mentioned, so people aren’t crushed by a dozen AI projects at once. Orientation involves clarifying new roles and realities: if AI handles task X, how do employees’ responsibilities shift? Leaders must continuously communicate the “why” behind changes and retrain people for new skills. Managing conflict is vital: AI adoption will create tensions (between those eager to automate and those fearful for their jobs, for example). Rather than smoothing over these conflicts, skilled leaders surface them constructively, knowing that “conflict [is] the engine of creativity and learning.” Finally, shaping norms becomes an active exercise – leaders should reinforce the cultural values that must endure (e.g., ethical standards, customer focus) while challenging old norms that no longer serve (e.g., the expectation of human sign-off on every decision may change with trustworthy AI in the loop). These tasks are the organizational analog of the brain’s executive functions, which allow us to plan, organize, focus, and adapt so we can lead ourselves and others toward goals.
Third, and perhaps most difficult, the leader must have presence and poise to regulate distress. In times of great change, people naturally feel distress – and they look to their leaders as emotional thermostats. “Just as molecules bang hard against the walls of a pressure cooker, people bang up against leaders who are trying to sustain the pressures of tough, conflict-filled work,” notes Heifetz. Effective leaders cannot eliminate everyone’s anxiety (indeed, some anxiety is necessary to spur growth), but they steady the ship. This requires emotional resilience – the ability to “hold steady” in the chaos, showing confidence that the team can tackle the challenges. For example, when an AI project fails or an algorithm produces bias, the leader with poise doesn’t panic or blame; they acknowledge the issue, keep communication open, and guide the team in learning and iterating. Regulating distress also means modeling self-care and calm: if leaders are frantic and signaling burnout, their teams will mirror that. By contrast, a leader who is candid about uncertainties but optimistic and focused on solutions can profoundly influence morale. In short, leading strategically in the AI revolution will require the adaptive leader’s mindset: lead through pressure, provide clear guidance through ambiguity and uncertainty, and have the personal fortitude to navigate fears. These qualities create the opportunity for organizations to actually benefit from AI instead of buckling under its weight.
Ethics and Solidarity
The first responsibility of strategic leaders is direction with values. AI offers immense efficiency, but it also amplifies risks (e.g., bias, privacy breaches, social divides). Solidarity should be a guiding principle: share AI's prosperity, mitigate its burdens, and ensure no group is left behind. In practice, this means bias audits, re-skilling, benefit-sharing mechanisms (EvolveSTEAM-like royalties or UBI), transparency, and setting clear red lines for AI use. Trust will be a competitive advantage; companies that lead with ethics will stand out.
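As a sketch of what a bias audit can look like in practice, the snippet below checks approval-rate parity across groups using the common four-fifths heuristic. The data, group labels, and threshold are illustrative assumptions, not a complete fairness methodology.

```python
# A minimal bias-audit sketch: compare an AI system's approval rates across
# groups and flag disparate impact under the four-fifths rule heuristic.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flags_disparate_impact(rates, threshold=0.8):
    """Flag any group whose rate falls below `threshold` x the highest rate."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Illustrative audit data: group B is approved far less often than group A.
audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
rates = selection_rates(audit)        # {'A': 0.67, 'B': 0.33}
print(flags_disparate_impact(rates))  # {'A': False, 'B': True} -> escalate to review
```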
Human-Centric Leadership Skills
As AI absorbs repetitive work, leaders must elevate what only humans can do: empathy, adaptability, critical judgment, and vision. Think of leadership less as technical expertise and more as orchestration—aligning humans and machines like instruments in a symphony. Emotional intelligence, collaboration, and narrative-setting will determine whether teams thrive amid disruption.
Avoiding Tech Overreach
McKinsey warns of the trap of “deceptive simplicity”—outsourcing judgment to AI because its answers arrive instantly. Leaders must resist black-and-white thinking. AI should inform, not dictate. Cultures that ask “What do the data not tell us?” and maintain human-in-the-loop review will avoid blind spots and preserve nuance.
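A human-in-the-loop review culture can also be encoded directly into workflow logic. The sketch below routes low-confidence or high-stakes model outputs to a person; the confidence threshold and the stakes flag are hypothetical placeholders for whatever criteria an organization actually sets.

```python
# A minimal human-in-the-loop sketch: the model informs, it does not dictate.
# The 0.9 threshold and the is_high_stakes rule are illustrative assumptions.
def route_decision(prediction: str, confidence: float, is_high_stakes: bool,
                   threshold: float = 0.9):
    """Return (decision_source, decision). AI acts alone only on routine,
    high-confidence calls; everything else gets human review."""
    if is_high_stakes or confidence < threshold:
        return ("human_review", f"queued: model suggested '{prediction}' "
                                f"at confidence {confidence:.2f}")
    return ("ai", prediction)

print(route_decision("approve", 0.97, is_high_stakes=False))  # AI handles it
print(route_decision("approve", 0.97, is_high_stakes=True))   # human reviews anyway
print(route_decision("deny", 0.62, is_high_stakes=False))     # too uncertain -> human
```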
The Helen Keller Lens
Helen Keller - who gained language and empathy without sight or hearing - offers a powerful analogy for designing and managing AI with empathy and ethical intelligence. Her life illustrates how empathy depends on grounding, interface, and genuine feeling. For AI, this means sensors mapped to human meaning, interfaces that support rich interaction, and clarity about the difference between reading emotion and truly caring. Leaders must design and deploy AI with this humility, pairing human empathy with machine capability.
Helen Keller Principle #1: Grounding. Meaningful understanding (and by extension, empathy) arises when raw sensations are mapped to shared symbols and concepts. Neurosymbolic AI, physical AI, and sensory data are all useful, but developing understanding and empathy requires meaningful representations that actually work in the real world.
Helen Keller Principle #2: Interface matters. Helen Keller's breakthrough came when Anne Sullivan spelled "W-A-T-E-R" into one of Keller's palms while the other hand felt water gushing from a pump. This ingenious interface eventually unlocked the world of language, not unlike the tokenized embeddings and attention that unlocked generative AI's language learning capabilities. The interface that moves AI beyond LLMs and VLMs is still waiting to be discovered.
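For readers who want to see the mechanism that analogy points to, here is a toy version of scaled dot-product attention, the operation at the heart of transformer language models. The shapes and data are illustrative; this is the textbook formula, not any particular production system.

```python
# Toy scaled dot-product attention: each token weighs every other token and
# blends in context, softmax(Q K^T / sqrt(d)) V.
import numpy as np

def attention(Q, K, V):
    """Core transformer attention over a small sequence of embeddings."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # blend values by relevance

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))         # 4 token embeddings, dimension 8
out = attention(tokens, tokens, tokens)  # self-attention over the sequence
print(out.shape)                         # (4, 8): each token now carries context
```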
Helen Keller Principle #3: Feeling vs. faking. Perhaps the most philosophical principle, but the most relevant for AI: reading an emotion is not the same as having an emotion. Keller could detect others' emotions via touch and smell, but perceiving emotion isn't identical to experiencing it firsthand. Lack of emotion is a glaring weakness for AI; as one analysis put it, humans "maintain the edge in decisions requiring ethical considerations and emotional intelligence." A machine may detect that an employee is crying, but it can't care the way a person does.
Why does this matter? Because as AI systems take on roles in healthcare, coaching, customer service, and even leadership, people may form attachments or trust them in ways that assume human-like intent. There’s a risk of over-anthropomorphizing AI, which could lead to misplaced trust or ethical dilemmas (e.g., a patient preferring advice from a “compassionate” AI therapist that actually has no morals or accountability).
Shaping the Next Normal
The coming years will bring new interfaces—voice, video, multimodal AI. Leaders must prepare their teams for new norms of collaboration with AI “colleagues.” Most importantly, they must anchor strategy in purpose. Technology does not create meaning—leaders do.
A 3-Point Playbook
1. Lead with ethics and solidarity: share AI's prosperity, audit for bias, and set clear red lines.
2. Elevate human-centric skills: empathy, adaptability, critical judgment, and vision.
3. Avoid tech overreach: let AI inform rather than dictate, and keep humans in the loop.
The next revolution won’t be won by machines alone. It will be led by humans who unite technology with values. Leadership, at its core, is still about people—even when some teammates run on silicon.
More about the author:
Nick Baguley is an AI visionary and transformative leader known for pioneering breakthroughs at the intersection of Artificial Intelligence, Big Data, and financial innovation. From shaping national Big Data initiatives at the White House OSTP and NSF to spearheading the development of Experian Boost, driving nearly $1B Mastercard acquisitions, and now transforming finance with Knowledge Process Automation as CTO at DeepSee.ai, Nick’s career has consistently been about enabling positive disruption. With deep strategic clarity, unmatched technical acumen, and a human-centric vision, he empowers organizations to embrace change, redefine possibility, and achieve scalable impact.