A recent research article published in the APA's Consulting Psychology Journal, "Some Ethical and Legal Issues in Using Artificial Intelligence in Personnel Selection" by Dr. Olaf Ringelband and Dr. Christian Warneke, looks at the use of AI in the hiring process. The article can be found on psycnet.apa.org.
If you have experienced AI in the sandbox that is LinkedIn, you might already hold a predisposed, negative view of AI in the hiring process. But more and more companies are using AI not only to match and screen but also to interview candidates for important roles. For my part, thanks to LinkedIn's matching system, I now get daily notifications about exciting opportunities that require 10 years of experience in technologies that were invented 3 years ago, which really makes me feel like I'm nailing this whole career thing.
The Problem with Algorithmic Hiring
Ringelband and Warneke remind us that artificial intelligence is rapidly reshaping the landscape of personnel selection. What began as a promising tool for streamlining recruitment has evolved into a complex ethical and legal minefield. AI systems, trained on historical data, now influence hiring and promotion decisions across industries. Yet the opacity of these systems—particularly those based on deep learning—raises troubling questions. Their "black box" nature makes it difficult to discern how decisions are made, and whether they are fair. The risk is that AI may not only replicate existing biases but institutionalize them, all under the guise of efficiency.
The authors of this article set up the discussion in the following way:
In the present article, we aim to translate research into practice by highlighting the points of contact between assessment and AI and by focusing on the data available for assessment AI tools. As the American Psychological Association's (APA) Guidelines for Psychological Assessment and Evaluation (2020) states, "Inasmuch as computer technology, test instrument usage, and new instrument design are constantly evolving, the responsibilities and challenges to the psychologist practitioner using these modalities are likewise substantial." (p. 22)

It's All Too Seductive
The allure of AI in hiring is undeniable. Algorithms score assessments, generate interview questions, and produce written feedback at a rate far exceeding most HR departments. They offer speed, cost savings, and the promise of objectivity. But the same systems can also discriminate in ways more insidious than human bias. A model trained on data from successful executives might favor middle-aged men with degrees in engineering, thereby excluding women and minorities. Worse, AI can exploit behavioral data—mouse clicks, dwell time, social media activity—of debatable relevance to the job requirements or as a predictor of success, collected without explicit consent, raising serious privacy concerns. As the authors note, 'participants' (willing or not) in these systems deserve to be better informed about how and when these tools are being used. Of course, transparency may not be in a corporation's best interest.
Fixing the Machine
Candidates should know when AI is involved, what data are being used, and how decisions are made. Human oversight is essential; AI should assist, not replace, recruiters. Bias audits and retraining of models can help mitigate discrimination, though these require technical expertise and organizational will. The authors advise that consulting psychologists, with their grounding in ethics and statistical methods, are well positioned to lead this charge. But they must be able to speak the language of algorithms, and to challenge both the models and their backers when required.
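To make the bias-audit idea above concrete, here is a minimal sketch of one common check used in selection auditing: the "four-fifths rule," under which the selection rate for any group should be at least 80% of the rate for the most-selected group. This is not the authors' method; the function names and the group counts are illustrative assumptions.

```python
# Hypothetical adverse-impact audit using the four-fifths rule.
# All group labels and counts below are made up for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants that a model (or recruiter) selected."""
    return selected / applicants

def adverse_impact_ratios(outcomes: dict) -> dict:
    """Map {group: (selected, applicants)} to each group's selection
    rate divided by the highest group's rate."""
    rates = {g: selection_rate(s, a) for g, (s, a) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

if __name__ == "__main__":
    # Illustrative numbers only: 40/100 selected vs 24/100 selected.
    outcomes = {"group_a": (40, 100), "group_b": (24, 100)}
    for group, ratio in adverse_impact_ratios(outcomes).items():
        flag = "FLAG (<0.80)" if ratio < 0.80 else "ok"
        print(f"{group}: ratio={ratio:.2f} {flag}")
```

A ratio below 0.80 does not prove discrimination, but it is the kind of simple, repeatable signal a regular audit could surface before a model's decisions compound over thousands of candidates.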
Prudent Regulation
While the authors may be slightly naive in advising a cautious embrace of AI, since the train has long since left the station, they are right that corrective measures grounded in established psychological theory and ethical standards must be adopted. Regulatory frameworks must also evolve to keep pace with technological disruption. And while no one seems to be arguing that AI in hiring should run without any human interaction, decisions affecting human lives should not be left to machines alone. That might seem obvious, but it is probably a goal written on a ppt deck somewhere in Palo Alto. Governance comes up a lot when AI is mentioned, possibly because of the lack of it; as the technology matures, so too must the governance surrounding it. The future of hiring depends not just on smarter algorithms, but on wiser humans.
(Note: I wrote this with help from Copilot, like a class assignment. Claude came up with the post title, and ChatGPT generated the image.)