
When AI Gets Faster, Humans Must Get Closer: Creating Value with People in an Age of Accelerating AI


There’s a growing whisper in many organizations: If AI can do more and more of our work, what’s left for us? Underneath sits a real fear — not just of job loss, but of becoming irrelevant.


Let’s take that fear seriously. AI is rapidly absorbing tasks we long equated with human value: research, coding, summarizing, analysis, even ideation. Scientists and leaders in AI - like Yoshua Bengio - are increasingly warning of the risk that AI could get out of control and even turn against humanity. That shifts the ground under our feet. Machines operate at speeds no human can match, so where do people fit in?



People are still the source of real value creation.


The technology arrives with the force of a wave, and we stand at the shoreline wondering whether to swim or retreat. But the question isn't whether AI will reshape work - it already has. The question is what becomes more valuable, not less, when machines can do so much.


The answer isn't nostalgia for a slower age. It's recognition that AI currently brings extraordinary capability without compass or conscience.


As AI researcher Stuart Russell warns in "Human Compatible," we've become remarkably good at building systems that optimize for objectives, but remarkably poor at ensuring those objectives align with human values. 

The algorithms optimize, predict, and generate at speeds that make human effort look quaint. Yet they cannot tell you why this optimization matters, who benefits from this prediction, or whether generating this output serves any purpose beyond its own execution. People supply that. People wrestle with trade-offs that have no optimal solution. People build trust across the fault lines of uncertainty. People knit meaning across shifting contexts where the rules haven't been written yet. And people carry accountability when the stakes are high and the outcomes affect lives.


Not departments. Not processes. Not dashboards. People.



The illusion of frictionless efficiency.


We've spent decades building organizations that prize efficiency above almost everything else. Lean operations. Streamlined workflows. Metrics that reduce complexity to single numbers glowing green or red. AI slots beautifully into this paradigm. It automates the repetitive, accelerates the analytical, and scales the previously unscalable.


But here's what the efficiency gospel misses: the most valuable work in organizations has always involved friction. The friction of two people with different expertise trying to solve a problem neither fully understands. The friction of a team debating whether to pivot or persist when the data points both ways. The friction of a leader sitting with an employee whose personal crisis is affecting their performance, navigating the messy intersection of compassion and accountability.


This friction isn't waste. It's where meaning gets made.


Margaret Heffernan's "superchicken" research illustrates this beautifully. In her TED talk and book "Beyond Measure," she describes how evolutionary biologist William Muir bred two groups of chickens: one selecting only the most productive "superchickens," another letting average flocks work together naturally. The superchicken group descended into violence and declined in productivity—only three survived. The collaborative group thrived, increasing productivity by 160%.


Heffernan's insight: "What matters is the mortar, not just the bricks"—the quality of connections between people, not just individual talent.

Consider a hospital that implements an AI triage system to prioritize emergency room patients. The algorithm is remarkably accurate at predicting medical urgency from vital signs and symptoms. Efficiency improves. Wait times for critical cases drop. Then the complaints start - not from patients who waited longer, but from nurses who feel something essential has been lost. The triage process had been a moment of human contact, a chance to see the frightened teenager, the elderly man confused about his medications, the parent holding a feverish child. The algorithm sees patterns in data. The nurses see people in distress. Both matter. The hospital eventually redesigns the system so the AI handles medical prioritization while nurses retain the human assessment. Not because the technology fails, but because they recognize that healthcare isn't only about medical efficiency - it's about care, which requires presence.
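To make that redesign concrete, here is a minimal sketch of a hybrid triage queue, assuming a precomputed model score and an optional nurse override. All names and fields (TriageCase, algorithmic_urgency, nurse_override) are hypothetical illustrations, not the hospital's actual system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TriageCase:
    patient_id: str
    algorithmic_urgency: float              # model's score in [0, 1], assumed precomputed
    nurse_override: Optional[float] = None  # set during the human assessment, if at all
    nurse_notes: str = ""

def final_priority(case: TriageCase) -> float:
    """The algorithm proposes; the nurse disposes.
    An explicit override always wins, keeping human judgment in the loop."""
    if case.nurse_override is not None:
        return case.nurse_override
    return case.algorithmic_urgency

def triage_queue(cases: list[TriageCase]) -> list[TriageCase]:
    # Sort by the combined judgment, most urgent first - not by the model alone.
    return sorted(cases, key=final_priority, reverse=True)
```

The design choice worth noticing is that the nurse's override is a first-class field rather than an exception path: the system assumes from the start that the human assessment belongs in the loop.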


When AI handles the frictionless tasks, what remains is the work that requires us to be fully human: present, discerning, responsive to context that can't be quantified.






Companies don't have ideas, people do.


Walk into any organization and ask where ideas come from. You'll hear about innovation labs, R&D departments, strategy offsites. These structures matter. But they don't generate ideas. People do. Often in the spaces between the structures.


The product manager who notices a customer using the software in an unexpected way. The warehouse worker who sees a pattern in damaged shipments that the logistics dashboard misses. The consultant who connects a client's offhand comment to a solution from an entirely different industry. These moments of insight don't emerge from data analysis alone. They emerge from human attention shaped by experience, curiosity, and the ability to recognize significance in the seemingly trivial.


As Linda Hill documents in "Collective Genius," innovation isn't about lone geniuses having eureka moments.


It's about creating the conditions where diverse people can collaborate through "creative abrasion" and "creative agility."

Her research at Pixar, Google, and other innovative organizations reveals that breakthrough ideas emerge from the collision of different perspectives, not from optimizing individual brilliance.


AI can surface patterns at scale. It can identify correlations humans would never spot. But it cannot yet do what the human mind does when it makes a creative leap: hold two unrelated concepts in tension until a third possibility emerges. This isn't mystical. It's cognitive. Our brains evolved to make meaning from incomplete information, to see connections across domains, to imagine what doesn't yet exist.


A design firm in Amsterdam recently experimented with using generative AI to create initial concepts for client projects. The AI produced dozens of options in minutes: competent, on-brief, visually coherent. The designers found them useful as starting points but noticed something troubling: the concepts felt generic, optimized for acceptability rather than insight. When they analyzed why, they realized the AI was essentially producing sophisticated averages of existing work. It could recombine elements brilliantly, but it couldn't ask the question that leads to breakthrough work: "What if we're solving the wrong problem?"


That question requires doubt, which requires consciousness of one's own assumptions. It requires the capacity to step back from the brief and wonder whether the client's stated need masks a deeper one. One designer described it as "the productive discomfort of not knowing"—a state machines don't experience.


In a world where AI can generate a thousand options, the human work becomes choosing which option serves a purpose beyond its own cleverness. And more fundamentally, deciding what purpose we're serving in the first place.



Trust cannot be automated.


Trust is the invisible infrastructure of every organization. Without it, collaboration becomes transactional, communication becomes guarded, and innovation becomes impossible. We know this intuitively. Yet we often treat trust-building as a soft skill, secondary to the hard work of execution.


Amy Edmondson's research in "The Fearless Organization" demonstrates that psychological safety - the belief that you won't be punished for speaking up with ideas, questions, or mistakes - is the foundation of high-performing teams.


Her studies show that teams with psychological safety make better decisions, innovate more effectively, and catch errors before they become disasters.

This safety cannot be coded into an algorithm; it's built through consistent human interaction where leaders model vulnerability and respond constructively to bad news.


AI's arrival makes trust more critical, not less. When algorithms make consequential decisions - who gets hired, who receives a loan, which neighborhoods receive resources - people need to trust not only the technology but the humans who deployed it and who remain accountable for its impacts.


This trust isn't built through transparency reports or algorithmic audits alone, though those matter. It's built through relationships. Through the manager who explains not just what the AI recommended but why the team is following or overriding that recommendation. Through the consultant who sits with a client's anxiety about workforce changes and doesn't rush to reassurance. Through the colleague who admits uncertainty rather than pretending the dashboard has all the answers.


A financial services company implemented an AI system to detect potential fraud. The system was highly accurate, but it flagged a significant number of false positives: legitimate transactions that looked suspicious. Initially, customer service representatives simply told flagged customers that "the system" had blocked their transaction. Frustration spiked. Complaints escalated. The company eventually retrained representatives to say something different: "I see why this looked unusual, and I'm going to personally review it with you." The outcome was the same - the transaction got reviewed - but the experience shifted from being judged by an opaque machine to being helped by a person who took responsibility.


That shift matters enormously. Trust erodes when people feel they're interacting with systems that don't see them. It grows when someone says "I see you, and I'm accountable to you."


Coaches and consultants working with organizations face a particular challenge here. Leaders often want AI to solve trust problems: to make performance reviews more objective, to eliminate bias in hiring, to create fairer resource allocation. These are worthy goals. But trust isn't a problem to be solved through better algorithms. It's a relationship to be built through consistent, human interaction. The coach's work isn't to help leaders implement AI better. It's to help them remain present and accountable as they do so.



Making wise choices in uncertainty.


Much of the work that matters involves decisions where the right answer isn't clear. The data is incomplete. The stakes are high. The consequences won't be fully visible for months or years. This is the terrain where human judgment becomes indispensable.


Yoshua Bengio, one of the "godfathers of AI" and Turing Award winner, has become increasingly vocal about AI risks, particularly around autonomous systems making consequential decisions. In his 2023 testimony and public statements, he emphasizes that AI systems lack the contextual understanding and value alignment necessary for high-stakes decisions.


"We're building systems that can optimize," he warns, "but optimization without wisdom is dangerous." 

AI excels at optimization within defined parameters. Give it a clear objective function and constraints, and it will find solutions humans would miss. But most important decisions don't come with clear objective functions. They involve competing values, uncertain futures, and trade-offs that can't be reduced to a single metric.
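To see the difference concretely, here is a toy sketch in pure Python, with every figure invented for illustration. An optimizer reliably finds the minimum of whatever objective it is given; deciding whether community impact belongs in that objective, and at what weight, is a value judgment no solver supplies:

```python
# Toy illustration: a one-variable "objective function" an optimizer can minimize.
# All numbers are invented for the sketch.

def annual_cost(automation_level: float) -> float:
    """Cost in arbitrary units at a given automation level (0.0 to 1.0)."""
    labor = 100 * (1 - automation_level)  # wages fall as machines take over
    machines = 40 * automation_level      # capital and maintenance costs rise
    return labor + machines

def community_cost(automation_level: float) -> float:
    """The part the objective above ignores: jobs lost in a one-employer town.
    There is no agreed unit for this - which is exactly the point."""
    return 100 * automation_level  # placeholder; no one actually knows this curve

# Naive grid search: what "AI excels at" once the objective is defined.
levels = [i / 100 for i in range(101)]
best = min(levels, key=annual_cost)
print(f"Cost-optimal automation level: {best:.2f}")  # -> 1.00, full automation

# Add the unquantifiable term with some weight w, and the "optimum" moves.
w = 0.8
best_with_values = min(levels, key=lambda a: annual_cost(a) + w * community_cost(a))
print(f"With community weight {w}: {best_with_values:.2f}")  # -> 0.00
```

Notice that the "optimal" answer flips entirely with the weight w. The mathematics is trivial; choosing w is the leadership.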


Should we expand into a new market or deepen our presence in existing ones? Should we prioritize speed to market or additional testing? Should we maintain this struggling division because it serves a community, even though it drags down overall profitability?


These questions require judgment. The capacity to weigh incommensurable factors, to consider stakeholders with conflicting interests, to make a choice and accept responsibility for it. Judgment isn't about having more information. It's about wisdom in the face of irreducible uncertainty.


As essayist and risk scholar Nassim Nicholas Taleb argues in "Antifragile," real-world decisions involve "skin in the game" - personal accountability for outcomes.

Algorithms don't have skin in the game. They can't bear the weight of consequences or learn from existential mistakes. Only humans can.


Imagine a manufacturing company facing a decision about automating a production line. The AI-driven analysis is clear: automation would reduce costs by 30% and improve quality consistency. But the plant is in a small town where it is the primary employer. The leadership team spends weeks wrestling with the decision. They consult economists, ethicists, and community leaders. They run scenarios. In the end, they choose a hybrid approach: automating some functions while retraining workers for new roles and committing to maintain employment levels for five years.


Is it the "right" decision? There's no objective answer. It is a choice that balances economic viability with community responsibility, made by people willing to be accountable for both the benefits and the costs. No algorithm could make that choice because it involves values, not just variables.


For consultants and coaches, this is where the work deepens. Helping leaders develop judgment isn't about teaching frameworks or decision matrices. It's about creating space for reflection. Asking the questions that surface hidden assumptions. Holding the tension when every option feels inadequate. Reminding people that choosing is itself an act of leadership. Not because you have certainty, but because someone must decide and someone must be accountable.



The work of creating meaning together.


Organizations are meaning-making machines. We come together not just to execute tasks but to pursue purposes that matter. To create products that improve lives. To serve customers in ways that build loyalty. To work alongside colleagues in ways that make the effort worthwhile.


Edgar Schein, whose work on organizational culture spans decades, emphasizes in "Humble Inquiry" that meaning emerges through genuine dialogue.

What he calls humble inquiry means asking questions to which we don't already know the answer, creating space for authentic understanding rather than the performance of efficiency.


AI can execute tasks with stunning efficiency. But it cannot create meaning. That requires shared understanding, built through conversation, negotiation, and the slow work of aligning around what matters.


This work happens in meetings that feel inefficient. The ones where people talk past each other until someone finds the phrase that suddenly makes the goal clear to everyone. It happens in the hallway conversations where trust gets built. It happens when a team hits a setback and has to collectively decide whether to pivot or persist, drawing on shared values rather than data alone.


A software company went through a difficult period when a product launch failed badly. The post-mortem analysis, aided by AI tools, identified multiple technical and process failures. But the CEO noticed something else: the team was demoralized not just by the failure but by the sense that they'd lost sight of why they were building the product in the first place. The roadmap had become a list of features to ship, disconnected from any larger purpose.


The CEO called a two-day offsite with no agenda except to talk about why their work mattered. No AI tools. No dashboards. Just people talking about the customers they wanted to help and the change they wanted to create. It felt indulgent, two days away from execution when they were already behind schedule. But what emerged was a renewed sense of shared purpose that carried the team through the next year of rebuilding.


That meaning couldn't be generated by an algorithm because meaning isn't information. It's the felt sense that this work connects to something beyond the task itself. It's what makes people willing to stay late, to care about quality, to help a colleague, to speak up when something feels wrong.



The irreplaceable work of presence.


There's a moment in coaching when the client stops mid-sentence, looks away, and you can see them reaching for something they haven't yet articulated. As a coach, you wait. You don't fill the silence with suggestions or reassurances. You stay present to whatever is emerging.


Nancy Kline's work in "Time to Think" demonstrates that the quality of attention we give one another directly affects the quality of thinking that emerges.


She writes: "The quality of your attention determines the quality of other people's thinking."

This generative attention - patient, interested, non-judgmental - cannot be replicated by AI, no matter how sophisticated the natural language processing.

This presence is perhaps the most fundamentally human thing we do. It's what allows someone to feel seen. It's what creates the safety for vulnerability. It's what makes collaboration more than coordination.


AI can simulate conversation remarkably well. Chatbots can answer questions, provide encouragement, even offer coaching-style prompts. But they cannot be present in the way another human can. They don't feel the weight of your struggle. They don't adjust their response based on the subtle shift in your tone. They don't care about you, because caring requires consciousness.


This matters in every domain of work. The manager who notices that a usually engaged team member has gone quiet. The consultant who senses that the client's stated problem isn't the real one. The colleague who sees that you're overwhelmed and offers help before you ask.


These acts of presence create the relational fabric that makes organizations human places, not just economic machines.


And in an age of accelerating AI, they become more valuable, not less.



What this means for the work ahead.


If you're a leader, a coach, a consultant, or anyone worried about AI's impact on your work, the path forward isn't to resist the technology or to embrace it uncritically. It's to get clearer about the distinctly human work that creates value and to organize your time, attention, and development around that work.


As MIT's Daron Acemoglu and Simon Johnson argue in "Power and Progress," technological change doesn't automatically benefit workers or society—it depends entirely on how we choose to deploy it and who has power in those decisions.

The question isn't what AI can do, but what we choose to do with AI.



This means several shifts as we define our human role in creating value together with AI:


From efficiency to effectiveness. Stop asking "How can we do this faster?" and start asking "What outcome actually matters here, and what's the best way to achieve it?" Sometimes AI provides the answer. Sometimes human judgment, creativity, or relationship-building does.


From information to insight. AI will increasingly handle information gathering and analysis. Your value lies in interpretation: connecting dots across domains, recognizing patterns that matter, asking questions that reframe the problem.


From coordination to collaboration. Project management tools and AI assistants can coordinate tasks brilliantly. Real collaboration - the kind that generates new possibilities - requires human interaction. Protect time for the conversations that build shared understanding.


From transactions to relationships. Every interaction presents a choice: you can approach it as a transaction to be streamlined and optimized, or as a relationship to be cultivated and cherished. While the transactional route is becoming increasingly automated, the path of nurturing relationships remains uniquely and irreplaceably human.


From certainty to wisdom. AI delivers predictions and recommendations with remarkable confidence. However, confidence doesn't equate to wisdom. It's essential to cultivate the ability to sit with uncertainty, carefully consider conflicting values, and make decisions you can stand by, even when the results remain unpredictable.


For coaches and consultants specifically, your work becomes helping others navigate this terrain. Not by providing answers in the sense of "Here's how to implement AI", but by creating space for the questions that matter: What are we trying to create? Who are we serving? What values guide us when the path isn't clear? How do we remain accountable to the people affected by our decisions?



The heart of value creation.


We're living through a transformation at least as significant as the Industrial Revolution. It's disorienting. The old certainties about what work is and who does it are dissolving. The temptation is either to panic or to place blind faith in technology to solve everything.


Both responses miss what's most important: in a world where machines can do more, the human work that matters most becomes both clearer and more valuable.


Creating meaning together. Building trust. Making wise choices in uncertainty. Turning ideas into relationships, products, and services that genuinely help. Staying present to the people in front of you. Carrying accountability when the stakes are high.


Real value emerges where people can notice what's changing, tell the truth, metabolize hard trade-offs, and coordinate action with care. That requires trust. It requires systems that push decisions to where the context lives. It requires leaders who garden, not grandstand. And it requires using AI as a co-pilot that amplifies human judgment rather than a black box that replaces it.


These aren't soft skills or nice-to-haves. In the complex terrain of modern organizations, these distinctly human acts are the heart of value creation. They're what transform economic activity into something worth doing. They're what make organizations places where people want to work, where customers want to engage, where innovation happens not because it's mandated but because people care enough to try.


AI will continue to accelerate. It will automate more, analyze faster, generate better. This is both opportunity and challenge. But it doesn't diminish human value. It clarifies it.


When AI gets faster, humans must get closer: to the work that matters, to each other, to the purposes that make the effort worthwhile.


That's not nostalgia. That's the work ahead.



Putting it into practice.


AI might pose a risk to human jobs and relevance, but the future is in our hands. We have the power to create cultures that value human ingenuity, with AI serving as a powerful catalyst. For leaders, this means shifting their approach: focus less on control and more on providing context; prioritize building supportive environments over personal heroics; trade the illusion of certainty for honest truth-telling.


Strengthen the social fabric of your organization—it’s the foundation of performance and innovation.


Lead by example with a mindset that’s curious, clear, kind, and open to change. Embrace adaptability and show the courage to rethink your perspectives when needed.



These practices offer concrete ways to foster an AI-sensitive culture in your organization: embracing AI while drawing on the unique human capabilities of value creation.




A closing thought.


AI will make many parts of work faster and cheaper. That shifts the premium to the profoundly human: making meaning together, caring enough to coordinate, and shouldering responsibility where it counts. If we design our organizations to unlock that — and use AI to amplify it — we don’t become irrelevant. We become irreplaceable.


Even as algorithms and automation reshape every industry, people remain the true wellspring of value creation. It is human judgment that spots emerging opportunities in shifting markets, human creativity that weaves disparate ideas into breakthrough solutions, and human empathy that builds the trust and relationships no machine can replicate. Our capacity to weigh competing priorities, interpret nuance, take ethical responsibility, and learn collectively ensures that technology serves our shared purpose—making people, not code, the source and steward of real, lasting value.








 
 