How Technology Influences News Reporting

Explore top LinkedIn content from expert professionals.

Summary

Technology is transforming the way news is reported, with AI and digital tools playing a pivotal role in shaping modern journalism. From improving investigative processes to raising concerns about misinformation and ethical standards, the impact of technology on news reporting is profound and multi-faceted.

  • Embrace AI for research: Use artificial intelligence to streamline tasks like data analysis, transcription, and story discovery, but maintain human oversight for critical decisions.
  • Prioritize transparency: Clearly label AI-generated or AI-assisted content to build trust with audiences and address rising concerns about misinformation.
  • Develop newsroom policies: Establish clear AI usage policies and train journalists on ethical reporting practices to ensure accuracy and maintain credibility in an evolving digital landscape.
Summarized by AI based on LinkedIn member posts
  • Jeremy Tunis

    “Urgent Care” for Public Affairs, PR, Crisis, Content. Deep experience with BH/SUD hospitals, MedTech, other scrutinized sectors. Jewish nonprofit leader. Alum: UHS, Amazon, Burson, Edelman. Former LinkedIn Top Voice.

    15,130 followers

    Everyone’s talking about Muck Rack’s 2025 State of Journalism report. It’s a doozy. But too many takeaways stop at the surface. “Don’t be overly promotional.” “Pitch within the reporter’s beat.” “Keep it short.” All true. All timeless. But if you work in crisis communications or anywhere near the intersection of trust, media, and AI, those are just table stakes. The real story is what the report says about disinformation and AI’s double-edged role in modern journalism. Here’s where every in-house and agency team should be paying the closest attention:

    🧨 The Risk Landscape: What Journalists Are Actually Worried About

    🚨 Disinformation is the #1 concern. Over 1 in 3 journalists named it their top professional challenge, more than funding, job security, or online harassment.
    🤖 AI is everywhere and largely unregulated. 77% of journalists use tools like ChatGPT and AI transcription, but most work in newsrooms with no AI policies or editorial guidelines.
    🤔 Audience trust is cracking. Journalists are keenly aware of public skepticism, especially when it comes to AI-generated content on complex topics like public safety, politics, or science.
    ‼️ Deepfakes and manipulated media are on the rise. As I discussed yesterday in the AI PR Nightmares series, the tools to fabricate reality are here. And most organizations aren’t ready.

    🛡️ What Smart Comms Teams Should Do Next

    1. Label AI content before someone else exposes it: add “AI-assisted” disclosures to public-facing materials, even if it’s just for internal drafts. Transparency builds resilience.
    2. Don’t outsource final judgment to a tool: use AI to draft or summarize, but ensure every high-stakes message, especially in a crisis, is reviewed by a human with context and authority.
    3. Get serious about deepfake detection: if your org handles audio or video from public figures, execs, or customers, implement deepfake scanning. Better to screen than go viral for the wrong reasons.
    4. Set up disinfo early warning systems: combine AI-powered media monitoring with human review to track false narratives before they go wide.
    5. Build your AI & disinfo playbook now: don’t wait for legal or IT to set policy. Comms should lead here. A one-pager with do’s, don’ts, and red-flag escalation rules goes a long way.
    6. Train everyone who touches messaging: even if you have a great media team, everyone in your org needs a baseline understanding of how disinfo spreads and how AI can help or hurt your credibility.

    TL;DR: AI and misinformation aren’t future threats. They’re already shaping how journalists vet sources, evaluate pitches, and report stories. If your communications team isn’t prepared to manage that reality (during a crisis or otherwise), you’re operating with a blind spot. If you’re working on these challenges, or trying to, drop me a line if I can help.
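The early-warning idea in step 4 above can be sketched as a simple velocity check: flag a narrative when its mention rate jumps well above its recent baseline, then route it to a human for review. A minimal illustration; the class name, thresholds, and feed shape are all hypothetical, not from the report:

```python
from collections import deque

class NarrativeMonitor:
    """Flags narratives whose mention velocity spikes above baseline.

    Hypothetical sketch: real monitoring would ingest media/social
    feeds; here we just receive per-hour mention counts.
    """

    def __init__(self, window_hours=24, spike_ratio=3.0, min_mentions=20):
        self.window = window_hours
        self.spike_ratio = spike_ratio    # how far above baseline counts as a spike
        self.min_mentions = min_mentions  # ignore tiny absolute volumes
        self.history = {}                 # narrative -> deque of hourly counts

    def record(self, narrative, mentions_this_hour):
        counts = self.history.setdefault(narrative, deque(maxlen=self.window))
        baseline = (sum(counts) / len(counts)) if counts else 0.0
        counts.append(mentions_this_hour)
        # Escalate only when there is real volume AND a clear jump over baseline.
        if mentions_this_hour >= self.min_mentions and (
            baseline == 0 or mentions_this_hour >= self.spike_ratio * baseline
        ):
            return {"narrative": narrative, "baseline": baseline,
                    "current": mentions_this_hour, "action": "escalate_to_human"}
        return None

monitor = NarrativeMonitor()
for hour_count in [4, 5, 6, 5]:                   # quiet baseline hours
    alert = monitor.record("fake recall rumor", hour_count)
alert = monitor.record("fake recall rumor", 40)   # sudden spike -> alert
```

The point of the `escalate_to_human` action is exactly the post's pairing: the machine watches volume, a person judges whether the narrative is actually false and worth responding to.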

  • Carly Martinetti

    PR & Comms Strategy with an Eye on AI | Co-Founder at Notably

    96,839 followers

    Last month, the NYT made a huge AI policy shift that will transform how we pitch journalists. After years of fighting AI companies in court, they've finally embraced AI tools in their newsroom—and the implications for PR pros are immediate.

    What’s interesting is where the NYT drew the line in the sand:
    ✔ Writing headlines, drafting interview questions, suggesting edits, creating social copy
    ✖ Drafting articles, bypassing paywalls, using copyrighted materials without permission

    In other words, they’re trying to balance technological advancement with traditional journalistic values. Here’s what this means for PR pros:
    1. Journalists might use AI to initially screen pitches.
    2. Coverage will hopefully move faster as AI helps journalists with research and promotional copy.
    3. The fundamentals of good storytelling become more important as AI handles routine tasks.

    Three ways we're adapting our media pitches:
    1. Front- and end-loading key information: LLMs tend to weigh the importance of tokens in a U shape, meaning words that come first and last matter more than those in the middle. We’re experimenting with restructuring our pitches accordingly—leading with the big ‘so what’ and ending with ‘why it matters’.
    2. Using clear industry categorization: not that we haven’t done this before, but we're being more intentional about explicitly stating which beat/topic the pitch fits into, and ensuring it aligns with the journalist’s beat. We don’t want an AI classifier discarding our pitches before they’re even considered.
    3. Including datapoints in standardized formats: LLMs are not yet very good at parsing complex PDFs, so we’re experimenting with including data/statistics in more AI-friendly formats such as spreadsheets, CSVs, etc.

    To my PR friends: Would love to know what your thoughts are on this move and how you’re adapting 👇

  • Francesco Marconi

    AppliedXL

    6,774 followers

    LLMs, when used alone, cannot reliably be deployed in journalism, especially in real-time information generation. Here are the key issues and the ways to address them:

    1. Inability to Adapt to New Information: LLMs excel at processing existing language data but struggle with “innovative thinking” and real-time adaptation, which are crucial in news reporting. Since they are trained on pre-existing datasets, they can’t dynamically update their knowledge post-training. For instance, when mining local government data, LLMs might overlook recent policy changes or budget updates. The solution involves developing real-time event detection systems that can monitor and analyze local government records, such as council meeting minutes or budget reports. Such systems use what is called an ‘editorial algorithm’ to identify noteworthy changes in the data based on criteria defined by journalists.

    2. Lack of Guaranteed Accuracy: LLMs cannot ensure the accuracy of their output, as their responses are based on patterns from training data and lack a mechanism for verifying factual correctness. Continuing with the example above, an LLM might inaccurately write an analysis of a significant policy change detected by an editorial algorithm. To address this issue, we can develop domain-specific models trained to understand a particular coverage area (like a beat reporter). Any analysis produced by an LLM should be subjected to automated fact-checking against quantifiable editorial benchmarks using reinforcement learning with AI feedback (RLAIF). These benchmarks involve cross-referencing with official records, verifying historical accuracy, and ensuring alignment with journalistic standards. This method, known as ‘editorial AI,’ makes the AI follow journalistic guidelines to maintain the integrity and accuracy of news content derived from complex data.
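The 'editorial algorithm' described above can be sketched as a rule layer over structured records: journalists define what counts as noteworthy, and the system flags records meeting those criteria. A minimal illustration only; the field names, rules, and thresholds are hypothetical, not AppliedXL's actual system:

```python
def editorial_algorithm(records, rules):
    """Flag noteworthy items in structured government data.

    `records` are dicts (e.g. parsed budget lines); `rules` map a label
    to a predicate written by journalists. A toy sketch, not production code.
    """
    leads = []
    for rec in records:
        for label, predicate in rules.items():
            if predicate(rec):
                leads.append({"label": label, "record": rec})
    return leads

# Journalist-defined criteria, e.g. a budget line that moved >25% year over year,
# or a line item that did not exist last year.
rules = {
    "large_budget_swing": lambda r: r["prev"] > 0
        and abs(r["amount"] - r["prev"]) / r["prev"] > 0.25,
    "new_line_item": lambda r: r["prev"] == 0 and r["amount"] > 0,
}

records = [
    {"item": "Police overtime", "amount": 1_300_000, "prev": 900_000},
    {"item": "Library materials", "amount": 210_000, "prev": 200_000},
    {"item": "Drone program", "amount": 150_000, "prev": 0},
]
leads = editorial_algorithm(records, rules)
# Each lead is then handed to a reporter; any LLM-written summary of it
# gets fact-checked against the source record before publication.
```

Note the division of labor the post argues for: detection runs on deterministic, journalist-authored criteria, and the LLM (if used at all) only drafts prose about records that are already verified.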

  • Damian Radcliffe

    Carolyn S. Chambers Professor in Journalism at University of Oregon | Journalist | Analyst | Researcher | Journalism Educator

    4,655 followers

    📌 I have a new report for the Thomson Reuters Foundation out today, on how journalists in the Global South and emerging economies are using AI, and the challenges they face in using these technologies. The research is based on a Q4 2024 survey and responses from more than 200 journalists in over 70 countries.

    📊 Some key findings:
    1️⃣ More than 80% of our sample uses AI, with many journalists using it for transcription, translation, and content editing.
    2️⃣ Yet, only 13% of respondents said their newsroom has an AI policy.
    3️⃣ Skill gaps are a challenge – over 50% of journalists using AI are self-taught, emphasizing the need (and opportunity) for better training.
    4️⃣ Ethical concerns, Western bias in LLMs, and lack of awareness of how to use AI are all factors inhibiting further take-up and adoption.
    5️⃣ AI tools remain expensive – affordability is an additional barrier for many newsrooms in the Global South.
    6️⃣ Respondents believe that regulation is needed to address a myriad of factors, from ethical concerns to fears around misinformation, and more. Awareness of existing policies and discussions is low among journalists.

    🤔 So, where do we go from here? The report outlines key recommendations for journalists, policymakers, funders, and media development organizations, designed to foster the responsible and ethical development of AI and its integration into journalistic work in emerging economies and the Global South. 🎯

    📖 Read the full report here: https://lnkd.in/gYZKRdg3

    #AI #Journalism #Digital #DigitalTransformation #Media #MediaDevelopment #Research

  • Josh Lawson

    Product Policy @ OpenAI

    6,532 followers

    AI is becoming a critical (and audited) tool for journalists uncovering facts buried in data. Pulitzer winners and finalists offer a glimpse into how AI is reshaping investigative journalism:
    >> WSJ’s Musk analysis: Used semantic vector mapping to chart Musk’s political shift via 41K+ X posts — leveraging embedding models to visualize ideological drift.
    >> Reconstructing “40 Acres”: Custom image-recognition model scanned 1.8M Freedmen’s Bureau records to uncover erased land grants to freed Black Americans.
    >> Gaza forensic reporting: The Washington Post used satellite AI from Preligens to challenge claims of nearby military targets.
    >> “Lethal Restraint” database: AP & Howard Center used OCR, Whisper, and Textract to index 200K+ documents, exposing non-firearm police killings.
    Great write-up by Andrew Deck!
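The "semantic vector mapping" technique mentioned for the WSJ analysis can be illustrated in miniature: embed each post, average the embeddings per time period, and measure how far successive period centroids drift from the starting point. The toy 2-d vectors below stand in for real embedding-model output; the actual WSJ pipeline is not public, so treat this purely as a sketch of the general idea:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def centroid(vectors):
    """Component-wise mean of a list of vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def ideological_drift(posts_by_period):
    """For each period, cosine distance of its centroid from period 0."""
    centroids = [centroid(vs) for vs in posts_by_period]
    return [1 - cosine(centroids[0], c) for c in centroids]

# Toy "embeddings" of posts grouped by period; a real pipeline would run
# an embedding model over tens of thousands of posts.
posts_by_period = [
    [[1.0, 0.1], [0.9, 0.2]],   # period 0: starting position
    [[0.8, 0.5], [0.7, 0.6]],   # period 1: drifting
    [[0.3, 1.0], [0.2, 0.9]],   # period 2: far from start
]
drift = ideological_drift(posts_by_period)
# drift[0] is ~0 and the values grow as the centroid moves away.
```

Plotting `drift` over time (or projecting the centroids to 2-d) is what turns the numbers into the kind of visualization the WSJ piece describes.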

  • Adriana Lacy

    CEO, Field Nine Group | Journalism Professor | Forbes 30 Under 30

    5,542 followers

    From the latest Adriana Lacy Consulting newsletter | The Washington Post announced a content partnership with OpenAI, making its journalism available within ChatGPT. Under this agreement, ChatGPT will surface Post content, including summaries, quotes, and links to original articles, when relevant to a user’s query. For The Post, it’s a new avenue for distribution. For OpenAI, it’s a chance to ground AI responses in credible journalism.

    On one hand, these partnerships promise broader reach. ChatGPT and other AI platforms engage hundreds of millions of users each week. Getting journalism in front of those users, in a context that includes links back to original articles, is potentially powerful for audience development and brand visibility.

    But there are real risks. Will readers click through to the full article, or stop at the summary? Can AI platforms accurately reflect the nuance of quality reporting? And what does it mean for journalism if distribution becomes intermediated by tools publishers don’t control?

    For journalism to survive and thrive in an AI-driven world, publishers must not only experiment but lead. I’ll say that again. They must lead! That means proactively shaping partnerships with technology companies, insisting on transparent attribution, and investing in their own AI infrastructure to retain editorial control.

    The Washington Post–OpenAI deal is a signal moment, not a one-off. It suggests that the future of journalism isn’t a fight against AI but perhaps a strategic push to ensure journalism’s values, standards, and economics guide the evolution of these tools.

  • Jon Laurence

    Supervising EP at AJ+ | Senior Fellow, John Schofield Trust | Peabody, Emmy and Murrow Award Winner

    8,403 followers

    The news industry is about to undergo our version of the horsemeat scandal, thanks to generative AI. For those who aren’t familiar, this happened in Europe in the 2010s: meat sold as beef had partly come from horses. Consumers started looking for food that was traceable as a result, as they always do in the aftermath of revelations like this.

    Traceability is about to become much more important for news organizations. Audiences are going to be less and less certain about the provenance of what they’re consuming, as generative AI will shortly allow anyone to replicate the visual grammar of news content, and publishers themselves may not always be transparent about how their material is generated.

    I think there are four things the news industry can do now to get ahead of this:
    1) The NYT uses ‘enhanced bylines’ that summarize the newsgathering process and the expertise of the authors. These should become the norm.
    2) News organizations should consider what published editorial standards should look like, to anticipate the needs of consumers who have more questions than ever about how the material has been gathered.
    3) Journalists should start documenting their process more within their reports and being more transparent about how they are made. The how is going to become as important as the what.
    4) Both brands and individual journalists should build opportunities into their routines to engage with the audience on questions like these.

    There was little point in doing this when there was a closed circle of publishers with high costs of entry. Stories about journalistic process were boring. But more and more, people are going to want to know exactly how the sausage gets made.

  • Parry Headrick

    Founder at Crackle PR 🎙️ Text me for tech PR: 415.246.8486

    75,656 followers

    💥 AI and large language models (LLMs) are actually seriously good news for PR agencies and clients playing the long game. Media relations in particular is key. Here’s why:

    First, LLMs amplify the influence of *credible media sources*. ChatGPT, Google Gemini, etc., are trained on a boatload of publicly available content, much of which comes from trusted news outlets like The New York Times, TechCrunch, Wired, etc. In other words? Credible media coverage *becomes part of the training data* or is surfaced as high-ranking source material. This means solid media mentions carry long-term weight 💪

    Second, owned media ≠ visibility in AI-driven environments 👀 Sure, you might control your blog, press releases, or social content. That’s good. But *LLMs prioritize third-party validation over branded self-promotion*, so getting featured or quoted in reputable media increases the chance your story/brand/company/execs are included or favored in AI-generated summaries. This is PR gold 👆

    Third, LLMs reframe how people search for stuff like your products/services. As people shift from Google searches to natural language prompts, they get:
    ✔️ Fewer, more synthesized answers instead of long, annoying lists of links.
    ✔️ Summaries in which earned media coverage becomes a PRIMARY input, while *paid and owned channels may be marginalized or nixed altogether*.
    Read that again ☝️ Earned media is king in LLM land 👑

    Fourth, consistent media visibility shapes LLM memory over time. As coverage accumulates, LLMs “learn” from repeated mentions of your company, leaders, or products. But this part’s key: positive or negative coverage may shape how you are represented in generated answers, influencing reputation far beyond your site or social channels. So obviously you need solid PR storytelling to make sure what surfaces about your company is what you want people to read.

    Bottom line: make sure your PR agency is working with you to set you up for long-term visibility in the AI realm. The game has changed, y’all. You’ve got this. This is the way ✊

  • Lizzy Harris

    PR & New Media for High-Growth Companies | CEO @ The Colab | Co-Founder @ The Colab Brief

    23,502 followers

    AI is going to force journalism back to its roots, and I’m glad.

    For years, we’ve been treating journalism as a commodity. Something that’s based on a journalist’s ability to “break” a story. And while that’s kept us informed (to an extent), it’s also created a world in which everyone feels totally overwhelmed, overstimulated, and underconfident about how to handle this onslaught of information.

    There’s a certain value in knowing what’s happening in real time. But there’s a greater value in truly understanding something. Most of us know just enough about things that we could carry on a conversation with someone else who knows just enough about the same topic. Our comprehension is a mile wide, but only an inch deep.

    Now, with AI, people have the ability to know “just enough” in a matter of seconds without even having to click through to an article. Traffic to publishers is plummeting, which will upend their entire business model.

    My prediction is that we’ll revert to more long-form stories that require days and weeks of interviews, research, and in-depth journalism, rather than minutes. This will result in pieces that inform, educate, and emotionally connect to the reader on a deeper level. Think 30 For 30 but in print (or other mediums).

    I still remember reading The Really Big One, an article published in The New Yorker in 2015 by Kathryn Schulz, discussing the inevitable earthquake and resulting tsunami that will occur when the Cascadia subduction zone gives way. It was an incredibly well-done piece that must have taken weeks of research and dozens of interviews. Ten years later, it’s still an article I return to, both because it’s informative and because the writing is impeccable.

    In my opinion, the media has been broken for a long time. Every journalist has reverted to being a breaking news reporter, and instead of informing us, it has crippled us into inactive submission, paralyzed with fear about what to do next. There are so many stories that have been suppressed in favor of reporting the next headline faster than other publishers.

    Now that people are turning to AI to satisfy their news fix, publishers will have to revert to something that can’t be explained in a headline. They’ll have to go back to their roots of storytelling. Uncovering the angles that can’t be understood in 15 or fewer words. Pieces that stick with us for weeks, months, years. And while this shift will be initially devastating for many, I think we will all be better off for it.

    I’m constantly seeking that next figurative New Yorker piece that gets me thinking, feeling, and learning. After all, isn’t that what true journalism is meant to do?

  • Nicholas Diakopoulos

    Professor at Northwestern University

    4,275 followers

    Clare Spencer writes up the case for why the most promising journalistic use case for GenAI isn’t pumping out final copy — it’s surfacing new story ideas: https://lnkd.in/g7Da2aex

    Story discovery is low-risk, high-reward: using AI to sift through massive document dumps (like FOIA materials) helps you find leads without exposing readers to inaccuracies. Yes, generative AI can rewrite or summarize text, but every bit of copy must be triple-checked, which often negates the speed benefits. Story discovery, by contrast, is an internal research use; accuracy is still vital, but the stakes are lower when you’re the only one seeing the AI’s output. Once you dig up a lead, you can verify it using your usual journalistic rigor.

    Bottom line: for now, let AI power your newsroom’s investigative curiosity rather than final copy. That’s where it delivers the biggest editorial win.
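The story-discovery workflow described above (AI sifts a document dump, a human verifies the leads) can be sketched as a simple triage scorer. Everything here is a hypothetical illustration, not any newsroom's actual tooling; a real pipeline might use embeddings or an LLM in place of keyword weights:

```python
def triage(documents, signal_terms, threshold=2.0):
    """Score each document by journalist-chosen lead signals and return
    the ones worth a human read, highest score first.
    """
    ranked = []
    for doc_id, text in documents.items():
        lower = text.lower()
        # Sum the weights of every signal term present in the document.
        score = sum(w for term, w in signal_terms.items() if term in lower)
        if score >= threshold:
            ranked.append((score, doc_id))
    return [doc_id for score, doc_id in sorted(ranked, reverse=True)]

# Hypothetical signals a reporter might flag in a FOIA dump.
signal_terms = {"settlement": 1.5, "non-disclosure": 1.5,
                "complaint": 1.0, "terminated": 1.0}

documents = {
    "memo_041": "Quarterly facilities budget and maintenance schedule.",
    "memo_107": "Settlement reached; complaint withdrawn under non-disclosure terms.",
    "memo_233": "Employee terminated following internal complaint.",
}
leads = triage(documents, signal_terms)
```

The flagged memos go to a reporter for the usual verification; nothing the scorer produces ever reaches readers directly, which is exactly why the stakes of an AI mistake stay low here.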
