The most recent evidence on artificial intelligence's workplace impact points not to significant job destruction but to reconfigured roles.
Indeed's AI at Work Report, released Thursday, finds that around 26% of jobs listed on the company's site over the last year could be transformed by generative AI (as opposed to discriminative AI), while only a tiny fraction of the skills those jobs require are ripe for complete automation. The signal is loud and clear: for most people, AI will be a powerful copilot rather than a pink slip.
Indeed’s analysis steers the discussion from “jobs replaced” to “skills rebalanced.” The report’s focus on what AI can really do inside a job is a way of capturing the everyday reality many workers are already encountering — more AI in the workflow, but with humans in the loop.
Inside Indeed’s new findings on AI’s impact across skills
The report assesses nearly 3,000 individual work skills and rates how well advanced models — OpenAI's GPT-4.1 and Anthropic's Claude Sonnet 4 — can perform them. Indeed packages this into its GenAI Skill Transformation Index (GSTI), a measure of how likely any specific skill is to be enhanced, accelerated, or repurposed by AI.
Intentionally, this is a task-level perspective. Instead of wondering whether it is safe to be a “marketing manager” or “financial analyst,” the analysis asks which parts of their work — drafting a brief for a campaign, summarizing earnings, cleaning a dataset — can realistically be handled by an algorithm and which must still be done with human judgment, physical proximity, or subtle interaction.
Only 19 skills in the complete set (0.7 percent) were labeled very likely to be fully automatable using generative AI. That is an increase over past editions, reflecting model improvement, but it remains small in absolute terms. Most skills lie on a spectrum where AI might speed up production or alter how a task gets done rather than simply supplant the worker.
Where AI’s impact on text-heavy and data tasks is strong
Skills with heavy cognitive, text-based components are the most exposed. Software teams are already using models to automatically produce:
- unit tests
- boilerplate code
- documentation
Marketing teams lean on them for first drafts, audience-segment descriptions, and performance summaries. Data work — cleaning logs, writing SQL snippets, summarizing results — also sits high on the transformation index.
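The routine data-cleaning work the index flags can be as mundane as normalizing log lines, exactly the kind of task models now draft on request. A minimal sketch, assuming a hypothetical timestamped log format (the line layout and level names are illustrative, not from the report):

```python
import re

# Hypothetical raw log lines of the form "YYYY-MM-DD HH:MM:SS LEVEL message"
RAW_LOGS = [
    "2024-01-05 12:03:44 ERROR  payment gateway timeout ",
    "2024-01-05 12:03:45 info   retrying request",
    "malformed line without a timestamp",
    "2024-01-05 12:03:46 WARN   retry succeeded after 1 attempt",
]

LOG_PATTERN = re.compile(
    r"^(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\d{2}:\d{2}:\d{2}) "
    r"(?P<level>\w+)\s+(?P<message>.*)$"
)

def clean_logs(lines):
    """Parse well-formed lines into dicts; collect the rest for human review."""
    parsed, rejected = [], []
    for line in lines:
        match = LOG_PATTERN.match(line.strip())
        if match:
            record = match.groupdict()
            record["level"] = record["level"].upper()    # normalize casing
            record["message"] = record["message"].strip()
            parsed.append(record)
        else:
            rejected.append(line)  # keep humans in the loop on oddities
    return parsed, rejected

parsed, rejected = clean_logs(RAW_LOGS)
```

The design choice worth noting is the `rejected` bucket: even for trivial automation, routing unparseable input to a person mirrors the human-in-the-loop pattern the report describes.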
That tracks with similar findings from Microsoft Research and other academic groups: large language models excel at routine information-processing tasks such as translation, customer messaging, and classification. Roles that rely heavily on physical dexterity, on-site coordination, or interpersonal nuance — nursing, construction, or cooking — will remain less automatable, but will still absorb AI for documentation, scheduling, safety checks, and inventory management.
Why transformation often means task rebalancing at work
The day-to-day of many jobs will feel different before job titles change, as time shifts from drafting to editing, from research to verification, and from production to oversight. A financial analyst may use a model to write a variance narrative and spend the time saved stress-testing assumptions; a recruiter could run first-pass resume screens through AI, then put more energy into candidate conversations; a teacher could co-create lesson outlines with models and focus on one-on-one coaching.
That rebalancing raises the bar for quality control. Teams that rebuild workflows around AI — clarifying inputs, inserting review checkpoints, and defining metrics — generally see sustained gains. Those that simply bolt a chatbot onto legacy processes inherit new error modes without realizing reliable lift. Work from MIT and guidance from Gartner support this: value comes from process change, not just access to the tool itself.
What employers can do now to pilot AI and govern usage
Start with a skills audit. Map high-volume, repeatable tasks where accuracy is measurable and risk is low, then pilot AI in those lanes. Indeed's report is a reminder that model selection makes a difference, so experiment with options on your own workflows — some may write or summarize better, others may reason or follow instructions more reliably.
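One way to operationalize that audit is to score each task on the three criteria above and rank the results. A sketch with hypothetical task names, scores, and threshold — not Indeed's methodology:

```python
# Hypothetical skills-audit records: each task scored 0-1 on volume,
# measurability of accuracy, and risk (higher risk = worse pilot lane).
TASKS = [
    {"task": "summarize support tickets", "volume": 0.9, "measurable": 0.8, "risk": 0.2},
    {"task": "draft legal contracts",     "volume": 0.3, "measurable": 0.4, "risk": 0.9},
    {"task": "write unit-test stubs",     "volume": 0.7, "measurable": 0.9, "risk": 0.3},
]

def pilot_score(task):
    """High volume and measurable accuracy raise the score; risk lowers it."""
    return task["volume"] * task["measurable"] * (1.0 - task["risk"])

def pick_pilot_lanes(tasks, threshold=0.3):
    """Return task names worth piloting first, best candidates up front."""
    ranked = sorted(tasks, key=pilot_score, reverse=True)
    return [t["task"] for t in ranked if pilot_score(t) >= threshold]
```

Under these made-up scores, high-volume, low-risk work like ticket summarization ranks first, while high-risk drafting falls below the pilot threshold, which is the point of auditing before deploying.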
Build governance early. Set standards for human-in-the-loop review, data-handling procedures, and red-teaming schedules. Establish ownership — AI product leads, prompt libraries, and clear escalation paths — and track ROI with simple metrics such as cycle time, error rates, and customer satisfaction. Training should target both tool proficiency and critical thinking, as employees' roles shift toward validating and curating AI output.
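Tracking that ROI can start as a plain before/after comparison. A minimal sketch with invented numbers (the measurements and field names are illustrative assumptions):

```python
# Hypothetical before/after measurements for one piloted workflow.
BASELINE = {"cycle_time_hours": 6.0, "errors": 12, "items": 200}
WITH_AI  = {"cycle_time_hours": 3.5, "errors": 9,  "items": 200}

def error_rate(metrics):
    """Errors per item processed."""
    return metrics["errors"] / metrics["items"]

def pilot_report(before, after):
    """Simple deltas of the kind worth tracking: cycle time and error rate."""
    return {
        "cycle_time_reduction_pct": round(
            100 * (before["cycle_time_hours"] - after["cycle_time_hours"])
            / before["cycle_time_hours"], 1),
        "error_rate_before": error_rate(before),
        "error_rate_after": error_rate(after),
    }

report = pilot_report(BASELINE, WITH_AI)
```

Even a report this simple catches the failure mode the article warns about: if cycle time drops but the error rate climbs, the pilot is shifting work to downstream reviewers rather than creating lift.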
What it means for workers building AI-ready skill sets
Three skills transfer well across functions:
- Data literacy (knowing what a model's output does and does not tell you)
- Prompt strategy (giving instructions with the right context)
- Domain expertise (troubleshooting when the answers do not make sense)
In practice, the highest-performing employees combine fluency with deep expertise in their own field — they not only get answers faster; they ask better questions.
Market signals suggest those skills pay off. Labor economists and data firms such as LinkedIn and Lightcast have noted wage premiums and faster hiring for roles that combine domain work with AI proficiency, whether in marketing or finance, operations or design. The premium follows a simple calculus: when AI amplifies throughput, human judgment and taste gain value rather than lose it.
The practical, optimistic bottom line from Indeed's analysis? Generative AI is expanding how much many jobs can do and how fast, but wholesale replacement remains the exception, not the rule. The near-term story isn't automation — it's redesign: rewiring tasks and upskilling teams so machines augment human work.