Understanding AI: Augmentation Vs. Automation
Explore top LinkedIn content from expert professionals.
Summary
Understanding the difference between AI augmentation and automation is key to using artificial intelligence responsibly. While AI augmentation enhances human capabilities and decision-making, automation seeks to replace manual tasks with machine-driven processes entirely.
- Focus on collaboration: Design AI systems with the goal of working alongside humans, amplifying their strengths and ensuring tools adapt to team workflows.
- Balance autonomy with oversight: Encourage a balance where AI performs repetitive tasks while humans handle situations requiring judgment or creativity.
- Measure team performance: Evaluate how AI improves overall human-AI collaboration rather than focusing solely on the system's isolated capabilities.
-

A new landmark study proposes a rigorous, worker-centered approach to assessing where and how AI should augment, not just automate, human labor. Thanks to Yijia Shao and Humishka Zope (cc Erik Brynjolfsson) for leading this large-scale effort.

Rather than focusing narrowly on sectors like software or support, the paper draws from the full U.S. labor market, using O*NET's task-level classifications with responses from over 1,500 workers across 100+ occupations. Critically, the authors introduce a new Human Agency Scale (H1–H5) to quantify how much human judgment is still required in each task -- even when AI is in the loop.

1) It's not just about whether AI can perform a task, but whether workers want it to (and why).
2) It breaks from a binary view of "automation vs. not" and explores the middle ground of augmentation.
3) It cross-references worker perspectives with assessments from 50+ AI researchers, offering a more grounded map of actual versus perceived capabilities.

The authors also create WORKBank -- a public dataset linking AI readiness with task-level data and human agency preferences. It's a blueprint for how organizations, policymakers, and researchers can evaluate AI integration not just for efficiency, but for alignment with human values.

This is one of the most grounded efforts yet to build a shared language between labor and technology -- and to ensure we don't sacrifice agency for automation.

#FutureOfWork #GenerativeAI #HumanCenteredAI #LaborEconomics #ResponsibleAI #WORKBank
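To make the desire-versus-capability framing concrete, here is a minimal sketch of how a WORKBank-style task record might be bucketed. The field names, 0.5 thresholds, and zone labels are illustrative assumptions, not the paper's actual schema or methodology:

```python
from dataclasses import dataclass
from enum import IntEnum

class AgencyLevel(IntEnum):
    """Human Agency Scale (H1-H5); labels paraphrased, not quoted."""
    H1 = 1  # AI can handle the task essentially on its own
    H2 = 2
    H3 = 3  # equal human-AI partnership
    H4 = 4
    H5 = 5  # human involvement remains essential

@dataclass
class Task:
    occupation: str
    description: str
    worker_desire: float      # 0-1: share of workers who want AI to take this on
    expert_capability: float  # 0-1: AI researchers' assessment of feasibility today
    preferred_agency: AgencyLevel

def zone(task: Task, cut: float = 0.5) -> str:
    """Bucket a task by what workers want vs. what experts think AI can do."""
    wants = task.worker_desire >= cut
    feasible = task.expert_capability >= cut
    if wants and feasible:
        return "green light: automate with worker support"
    if wants and not feasible:
        return "R&D opportunity: desired but not yet feasible"
    if feasible:
        return "red light: feasible, but workers want to keep agency"
    return "low priority"

print(zone(Task("Paralegal", "Summarize filings", 0.7, 0.8, AgencyLevel.H3)))
```
-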
Should we be creating AI that augments human workers or replaces them? One of the factors that will determine our path is the choice of which benchmarks we use to evaluate our AI models. Benchmarks matter because they give AI companies a target to aim for and influence societal opinions about which models are the best. At present, though, we might actually be measuring the wrong things.

In the paper linked here, researchers Andreas Haupt and Erik Brynjolfsson argue we should stop testing AI systems in isolation and start evaluating how well *humans and AI work together* -- what they call "centaur evaluations." The paper proposes concrete methods, like competitive "bring-your-own-human" formats and controlled trials measuring human-AI team performance. The goal? Benchmarks that reflect how AI is actually used and that incentivize building technology that amplifies rather than replaces human potential.

Key points:
- Current AI benchmarks are like multiple-choice tests that miss real-world complexity
- Most AI applications already involve human collaboration, yet we can't test crucial capabilities like personalization or explainability without actual humans
- Our Turing Test obsession with human imitation steers development toward replacement rather than augmentation
- Centaur evaluations could redirect AI development toward making humans more capable, not obsolete

The evaluation methods we choose today will determine whether we get AI that makes us superhuman or simply makes us unnecessary. https://lnkd.in/gUM5ay3G
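For a sense of what a controlled centaur trial might look like mechanically, here is a toy harness: score the model alone, then score human-AI teams on the same task pool, and report the gap. The `ai_solve` and `human_ai_solve` callables are hypothetical stand-ins for a real model call and a real human-in-the-loop session -- a sketch in the spirit of the proposal, not the paper's protocol:

```python
import random
from statistics import mean

def centaur_trial(tasks, ai_solve, human_ai_solve, n_teams=30, tasks_per_team=5):
    """Compare AI-alone performance with human-AI team performance
    on the same task pool (a controlled-trial sketch)."""
    ai_scores = [ai_solve(t) for t in tasks]
    team_scores = [
        human_ai_solve(team_id, t)
        for team_id in range(n_teams)
        for t in random.sample(tasks, k=min(tasks_per_team, len(tasks)))
    ]
    return {
        "ai_alone": mean(ai_scores),
        "human_ai_team": mean(team_scores),
        "collaboration_lift": mean(team_scores) - mean(ai_scores),
    }
```

A benchmark built this way rewards models for being good teammates (legible, steerable, well-calibrated), not only for solo scores.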
-
(Artificial) Intelligence is a parasite. It can't survive without a host.

Watch a brilliant doctor work alone in the wilderness with no tools, no references, no colleagues. Their diagnostic genius diminishes to educated guesswork. Intelligence isn't something we possess -- it's something we access.

We discovered this the hard way deploying AI systems. Our most sophisticated systems failed spectacularly when we tried to make them completely autonomous. Customer satisfaction plummeted. Support tickets multiplied. But when we rebuilt the same technology as part of the support team's workflow -- letting it access context, escalate intelligently, and learn from human decisions -- something magical happened. Resolution rates improved 40%. Not because the AI got smarter. Because it got more connected.

This pattern repeated across every deployment. Isolated AI systems underperformed. Integrated ones exceeded expectations. The math is simple but counterintuitive:
→ An AI system operating at 70% accuracy in isolation creates chaos
→ The same system at 70% accuracy, knowing when to involve humans, creates excellence
→ Add contextual awareness of organizational goals, and it becomes transformative

Consider how your best employees operate. They don't work in isolation. They tap into institutional knowledge, collaborate with colleagues, and understand unwritten rules. Their value comes from how well they navigate and contribute to collective intelligence.

The most valuable AI systems make everyone around them smarter. They surface relevant information at the right moment. They connect disparate knowledge across departments. They remember what others forget. They amplify human judgment rather than trying to replace it.

This changes everything about AI strategy. Stop asking "How can we automate this role?" Start asking "How can we amplify this team's intelligence?" AI adoption is fundamentally about enhancing collective intelligence, not creating autonomous agents for end-to-end workflows. The companies winning with AI understand this. They're not building robot employees. They're building intelligence amplifiers.
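The "knowing when to involve humans" step is, in practice, confidence-based escalation. A minimal sketch, assuming a hypothetical `model.predict` interface that returns an answer plus a confidence score (not any particular library's API):

```python
def answer_or_escalate(model, ticket, human_queue, confidence_floor=0.8):
    """Resolve autonomously only when confident; otherwise hand off
    to a person along with the model's context, not a dead end."""
    answer, confidence = model.predict(ticket)
    if confidence >= confidence_floor:
        return answer  # autonomous resolution
    human_queue.put({
        "ticket": ticket,
        "draft_answer": answer,        # the human starts from the draft
        "model_confidence": confidence,
    })
    return None  # a human will close this one out
```

The same 70%-accurate model stops creating chaos once its errors are concentrated in the low-confidence cases it hands off -- the human sees exactly the tickets where judgment matters.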
-
The more we study human/AI collaboration, the more we realize how difficult it is to speak in absolutes. We are easily sucked into the idea that #AIautomation will solve all of our problems -- until it doesn't.

Thanks to my good friend Bas van de Haterd (He/His/Him) for sharing this excellent study, "Falling Asleep at the Wheel: Human/AI Collaboration in a Field Experiment on HR Recruiters," by Fabrizio Dell'Acqua of Harvard Business School. The study explores the dynamics of human effort and AI quality in recruitment processes and reveals yet another paradox of AI: higher-performing AI can sometimes lead to worse overall outcomes by reducing human engagement and effort.

When it comes to hiring, this finding is pretty significant -- especially when one layers in the presence of bias that (hopefully) can be mitigated by the efforts of recruiters to be objective (we can dream, can't we?). Here is a quick summary of the study's findings and implications.

Key findings:
💪 Human effort vs. AI quality: As AI quality increases, humans tend to rely more on the AI, leading to less effort and engagement. This can decrease overall performance in decision-making tasks.
🙀 Lower-quality AI enhances human effort: Recruiters provided with lower-performing AI exerted more effort and time, leading to better performance in evaluating job applications than those using higher-performing AI.
🎩 Experience matters: More experienced recruiters were better at compensating for lower AI quality, improving their performance by remaining actively engaged and using their expertise to supplement the AI's recommendations.

Implications for talent acquisition leaders:
⚖ Balanced AI integration: While it may be tempting to implement the most advanced AI systems, it's crucial to ensure that these systems do not lead to complacency among human recruiters. Talent acquisition leaders should focus on integrating AI tools that enhance rather than replace human judgment.
💍 Training and engagement: Investing in training programs that encourage recruiters to critically assess AI recommendations can help maintain high levels of human engagement and performance.
🛠 Custom AI solutions: Consider developing AI systems tailored to the specific needs and skills of your recruitment team. Custom solutions that require human input and oversight can prevent "falling asleep at the wheel" and ensure optimal performance.
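One way to see the paradox is a toy model in which reviewer effort falls steeply as AI quality rises, and final decision quality blends engaged human judgment with the AI's raw output. Every number below is an illustrative assumption, not an estimate from the study:

```python
def team_quality(ai_accuracy: float, human_skill: float = 0.85) -> float:
    """Toy 'falling asleep at the wheel' model: effort drops as
    perceived AI quality rises; the decision is an effort-weighted
    blend of human judgment and the AI's raw recommendation."""
    effort = max(0.0, min(1.0, 2.0 - 2.0 * ai_accuracy))  # steep complacency curve
    return effort * human_skill + (1.0 - effort) * ai_accuracy

for q in (0.60, 0.70, 0.80):
    print(f"AI alone {q:.0%} -> human-AI team {team_quality(q):.0%}")
# 60% -> 80%, 70% -> 79%, 80% -> 82% with these made-up parameters:
# the better 70% AI yields a *worse* team outcome than the 60% one,
# because engagement falls faster than accuracy rises.
```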