Consequences of Over-Reliance on AI in HR


  • Davidson Oturu

    Rainmaker | Nubia Capital | Venture Capital | Attorney | Social Impact | Best Selling Author

    32,613 followers

    I find it interesting that a firm like Deloitte is turning to AI to solve “hard problems.” This year has presented it with a challenging paradox: while it hired several graduates to meet growing demands, it also downsized existing staff. Despite engaging 130,000 new employees this year, Deloitte has notified its US and UK employees about the potential redundancy of their roles. An HR nightmare.

    Consequently, Deloitte has chosen to leverage AI to evaluate the skills of its employees and reassign them to more promising areas within the company. The primary objective is to move people from less active areas into roles that are currently in high demand. The initiative goes beyond merely preventing large-scale layoffs; it also involves recalibrating future hiring plans. In an era where automation threatens certain jobs and skills, Deloitte sees a silver lining: AI can be a valuable tool for transitioning workers into more sought-after roles. By leveraging generative AI, popularized by ChatGPT, it hopes to manage its large workforce more effectively.

    However, while Deloitte's use of AI appears innovative, there are potential challenges and risks that could lead to unintended consequences:

    1. 𝐁𝐢𝐚𝐬 𝐢𝐧 𝐀𝐈 𝐀𝐬𝐬𝐞𝐬𝐬𝐦𝐞𝐧𝐭𝐬: If the algorithms used for skills assessments are poorly calibrated or encode biases, there is a risk of unfair evaluations, with employees assigned to roles that don't align with their skills (a hypothetical adverse-impact check is sketched after this post).

    2. 𝐋𝐚𝐜𝐤 𝐨𝐟 𝐇𝐮𝐦𝐚𝐧 𝐓𝐨𝐮𝐜𝐡: Over-reliance on AI for workforce decisions may come across as lacking human empathy. Employees may feel their unique skills, experiences, and aspirations are not adequately considered in the reshuffling process.

    3. 𝐏𝐫𝐢𝐯𝐚𝐜𝐲 𝐂𝐨𝐧𝐜𝐞𝐫𝐧𝐬: AI assessments involve collecting and analysing large amounts of employee data. Handled carelessly, this raises privacy concerns and can lead to trust issues and legal challenges.

    4. 𝐎𝐯𝐞𝐫𝐞𝐦𝐩𝐡𝐚𝐬𝐢𝐬 𝐨𝐧 𝐀𝐮𝐭𝐨𝐦𝐚𝐭𝐢𝐨𝐧: Overemphasising automation can sideline human intuition, creativity, and emotional intelligence in certain roles, affecting the overall dynamics and effectiveness of the workforce.

    5. 𝐌𝐢𝐬𝐦𝐚𝐭𝐜𝐡 𝐢𝐧 𝐩𝐫𝐞𝐝𝐢𝐜𝐭𝐞𝐝 𝐚𝐧𝐝 𝐚𝐜𝐭𝐮𝐚𝐥 𝐬𝐤𝐢𝐥𝐥 𝐝𝐞𝐯𝐞𝐥𝐨𝐩𝐦𝐞𝐧𝐭: If the AI's predictions prove inaccurate, or the business landscape shifts unexpectedly, the skills the company develops may not match the skills it actually needs, creating inefficiencies.

    Clearly, relegating intricate problems to AI without a nuanced understanding is fraught with risk. In workforce restructuring, a thoughtful, strategic integration of AI is essential to successful outcomes. Nevertheless, Deloitte's decision to tackle these challenges with AI reflects a forward-thinking approach to workforce management.
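
    On point 1, bias does not have to be taken on faith: HR teams can screen an AI tool's recommendations before acting on them. Below is a minimal, hypothetical sketch of an adverse-impact check using the four-fifths rule; the data, group labels, and function names are invented for illustration and are not part of any system Deloitte has described.

    ```python
    # Illustrative adverse-impact check (the "four-fifths rule") on the output
    # of an AI reassignment tool. All data, group labels, and field names are
    # hypothetical; this is a screening heuristic, not legal advice.
    from collections import defaultdict

    def selection_rates(records):
        """Share of each group recommended for in-demand roles."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, recommended in records:
            totals[group] += 1
            selected[group] += int(recommended)
        return {g: selected[g] / totals[g] for g in totals}

    def four_fifths_check(rates, threshold=0.8):
        """Flag groups whose rate falls below 80% of the highest group's rate."""
        best = max(rates.values())
        return {g: rate / best >= threshold for g, rate in rates.items()}

    # Hypothetical tool output: (demographic group, recommended for new role?)
    records = [("A", True), ("A", True), ("A", False),
               ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(records)
    print(four_fifths_check(rates))  # {'A': True, 'B': False} -> group B needs review
    ```

    A failed check would not prove discrimination, but it is a cheap signal that the tool's recommendations deserve human review before anyone is reassigned.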

  • Brian Spisak, PhD

    C-Suite Healthcare Executive | Harvard AI & Leadership Program Director | Best-Selling Author

    8,400 followers

    𝐇𝐨𝐰 𝐍𝐎𝐓 𝐭𝐨 𝐥𝐞𝐚𝐝 𝐝𝐢𝐠𝐢𝐭𝐚𝐥 𝐭𝐫𝐚𝐧𝐬𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧: A job paying $400k recently appeared on my LinkedIn feed. The ad asked prospects if they're excited about AI's ability to “𝘳𝘦𝘱𝘭𝘢𝘤𝘦 human decision-making” and whether they're enthusiastic about a world where AI 𝘧𝘶𝘭𝘭𝘺 manages services. I have nothing against the company; they're great. However, such thinking promotes 𝐮𝐧𝐜𝐡𝐞𝐜𝐤𝐞𝐝 𝐀𝐈.

    𝐓𝐡𝐢𝐬 𝐢𝐬 𝐛𝐚𝐝 𝐛𝐞𝐜𝐚𝐮𝐬𝐞 𝐢𝐭 𝐥𝐞𝐚𝐝𝐬 𝐭𝐨:
    👉 Reduced Accountability – Human oversight ensures responsibility.
    👉 Increased Bias – Unchecked AI can perpetuate biases.
    👉 Ethical Concerns – AI decisions may conflict with human values.
    👉 Operational Complexity – Overreliance on AI can complicate processes.
    👉 Stifled Innovation – Excluding humans limits creative solutions.

    𝐈𝐧𝐬𝐭𝐞𝐚𝐝, 𝐥𝐞𝐚𝐝𝐞𝐫𝐬 𝐦𝐮𝐬𝐭:
    👉 Foster Collaboration – Engage AI as a partner, not a replacement.
    👉 Ensure Transparency – Make AI decisions understandable.
    👉 Empower Oversight – Humans guide and validate AI choices (a rough sketch of this pattern follows this post).
    👉 Prioritize Learning – Continuous improvement through human-AI synergy.
    👉 Embrace Diversity – Include diverse perspectives in AI development.

    𝐈𝐧 𝐚 𝐧𝐮𝐭𝐬𝐡𝐞𝐥𝐥: The future is hybrid, where diverse stakeholders and machines collectively revolutionize how we live and work.
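
    One way to read the "Empower Oversight" point is a human-in-the-loop gate: the AI only proposes, and nothing executes until a named person records an explicit decision. The sketch below is an illustrative pattern with invented class and field names, not any vendor's API.

    ```python
    # Rough human-in-the-loop sketch: the AI proposes, a named human decides.
    # All class and field names here are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        subject: str
        action: str
        confidence: float
        rationale: str          # keep the model's reasoning visible (transparency)

    @dataclass
    class ReviewedDecision:
        recommendation: Recommendation
        approved: bool
        reviewer: str
        note: str = ""

    def review(rec: Recommendation, reviewer: str, approve: bool, note: str = "") -> ReviewedDecision:
        """Nothing downstream runs until a human has recorded an explicit decision."""
        return ReviewedDecision(rec, approve, reviewer, note)

    rec = Recommendation("employee-123", "reassign to cloud practice", 0.71,
                         "skills profile overlaps strongly with target role")
    decision = review(rec, reviewer="hr.manager@example.com", approve=False,
                      note="Employee asked for client-facing work; discuss first.")
    print(decision.approved, "-", decision.note)
    ```

    Keeping the rationale and the reviewer's note together also creates the audit trail that accountability and transparency depend on.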

  • Dr. Cari Miller

    AI Procurement Global Thought Leader | AI Governance & Policy SME | Workplace AI Enablement Advisor

    7,890 followers

    Thank you, Merve Hickok, for the brilliant new article, “A policy primer and roadmap on AI worker surveillance and productivity scoring tools,” written with her equally brilliant friend Nestor Maslej, in this issue of AI & Ethics from Springer.

    I found this quote particularly insightful: “The products collect a plethora of data points and compare them against subjective rules to provide a score for a worker or infer certain behavioral characteristics. These scores can then be used by human managers to make determinations about the workers’ efficiency, productivity, risk to company’s assets and reputation. The scores also drive decisions about wages, benefits, promotions, disciplinary action or even terminations.” (A toy sketch of this kind of scoring follows this post.)

    Hickok and Maslej write fervently about this tech's encroachment on #HUMANRIGHTS and #HUMANDIGNITY. They are not wrong. These systems are an undeniable infringement on human rights and human dignity in countless ways, and the article elaborates on this extensively.

    I like to use another lens familiar to HR practitioners to connect the adverse outcomes of this type of tech DIRECTLY to #WORKPLACE #DEIB initiatives and goals shared by every organization: AI-based worker surveillance has the same consequences as an overbearing, abusive, biased, micro-managing boss. When AI surveillance is present, employees experience a whole host of psychological “events” that damage the corporate culture and an employee’s sense of belonging, outcomes organizations generally DON'T want workers to experience, such as:
    🔸 Shame, embarrassment, and fear of job loss
    🔸 Stress that turns into workplace hostility, burnout, and/or physical conditions
    🔸 Mental distractions that lead to reduced productivity and suppressed innovation
    🔸 High dissatisfaction towards leadership and low approval of corporate culture
    🔸 Increased absences, turnover intent, and unplanned attrition

    And research shows that quitting is contagious. When one person goes, the unfulfilled responsibilities of the open position fall to their coworker(s), and more quitters soon follow.

    Question: What do organizations really gain from excessive surveillance that psychologically burdens workers into burnout? (Answers: Negative/toxic corporate culture? Reputational harm? Difficulty hiring? Lawsuits? Slower service? Uninspired solutions? Lower profit? Downward stock prices? #Alloftheabove.)

    That said, there are ways to harness the good and mitigate the bad in these systems. Good governance practices are essential…but you’ll have to read my upcoming book or DM me for those tips. 😊

    Read the article and see for yourself: https://lnkd.in/eXuRxnxM

    #ResponsibleAI #AIethics #HRtech #HRLeadership #CAIDP #diversityequityinclusion
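
    To make the quoted scoring mechanism concrete, here is a deliberately toy sketch of a rules-based productivity score. Every metric, weight, and number below is invented; the point is how many subjective choices hide inside the single figure that ends up judging a worker.

    ```python
    # Toy sketch of a rules-based "productivity score". Every metric and weight
    # below is invented; each one is a subjective choice dressed up as data.
    WEIGHTS = {
        "keystrokes_per_hour": 0.4,   # why 0.4? someone simply decided
        "minutes_idle": -0.3,         # "idle" time may actually be thinking time
        "emails_sent": 0.3,           # message volume is not the same as value
    }

    def productivity_score(metrics: dict) -> float:
        """Collapse a day of surveilled behaviour into one weighted number."""
        return sum(WEIGHTS[name] * value for name, value in metrics.items() if name in WEIGHTS)

    worker_day = {"keystrokes_per_hour": 1200, "minutes_idle": 90, "emails_sent": 25}
    print(productivity_score(worker_day))  # 460.5 -- one number standing in for a whole workday
    ```

    When a figure like 460.5 feeds decisions about wages, promotions, or terminations, the governance question becomes who chose those weights and who can contest the result.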

  • CheeTung (CT) Leong

    AI & SaaS | Podcast Host | 2X Founder | ex-Navy | TEDx

    15,524 followers

    I invite you to disagree with me. HR leaders shouldn't rush into deploying #generativeAI. Here's why.

    In the rush to embrace the latest technology trends, it's crucial for HR professionals to pause and consider the implications. Generative AI, while promising, poses unique challenges in the realm of HR.

    1. Ethical Considerations: AI's decision-making processes can be opaque, raising ethical concerns around bias and fairness in talent acquisition and management.

    2. Human Touch: AI cannot replicate the nuanced understanding and empathy of human HR professionals. The risk is that over-reliance on AI erodes the human-centric approach vital to HR.

    3. Data Privacy: Using AI in HR means handling sensitive employee data. Ensuring privacy and security here is not just a technical issue but a matter of trust and credibility (a minimal pseudonymization sketch follows this post).

    4. Skills Gap: There is a growing skills gap in understanding and managing AI technologies effectively. HR teams need time to develop these competencies.

    5. Cultural Readiness: Adopting AI in HR processes requires a cultural shift within organizations. It's not just about the technology, but about preparing people and processes for the change.

    AI in HR isn't a one-size-fits-all solution. It demands careful consideration of many factors beyond the allure of the technology. AI strategy came in 9th out of 10 priorities among the HR leaders we interviewed for our HR Outlook 2024 report. Why? Be the first to get your hands on the download at the link in the comments.

    PS: The image below is generated by AI - amazing in some ways, less so in others. Go figure.

    #HRLeadership #AIinHR #HRStrategy #FutureOfWork
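
    On the data-privacy point, one common mitigation is to pseudonymize employee text before it ever reaches an external generative-AI service. The regex redactor below is only a minimal illustration of the shape of that safeguard (the employee-ID format and placeholder tags are made up); real deployments need far more robust PII detection.

    ```python
    # Minimal illustration of pseudonymizing HR text before sending it to an
    # external generative-AI service. A regex redactor is NOT robust PII
    # detection; the employee-ID format below is hypothetical.
    import re

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    EMP_ID = re.compile(r"\bEMP-\d{4,}\b")   # assumed internal ID format

    def pseudonymize(text: str) -> str:
        """Replace obvious identifiers with neutral placeholders."""
        text = EMAIL.sub("[EMAIL]", text)
        return EMP_ID.sub("[EMPLOYEE_ID]", text)

    note = "EMP-10234 (jane.doe@example.com) flagged burnout risk in the Q3 survey."
    print(pseudonymize(note))
    # -> [EMPLOYEE_ID] ([EMAIL]) flagged burnout risk in the Q3 survey.
    ```

    Even a safeguard like this only addresses one slice of the problem; the trust and credibility questions above still require policy, consent, and governance, not just redaction.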
