
Journalism Dean Works to Help Newsrooms Spot AI-Generated Fakes

Practical guide addresses growing concern about AI-generated media's impact on accurate reporting


OXFORD, Miss. – As artificial intelligence-generated images, audio and video reshape the information landscape, newsrooms and classrooms must add "fake-checking" to their daily toolkit, a University of Mississippi journalism educator advises.

Andrea Hickerson, dean of the university's journalism school, working with co-authors Christopher Schwartz and Matthew Wright of the Rochester Institute of Technology, hopes to help with "Fake-Checking" (Routledge 2025). The book offers journalists, educators and students a practical framework for identifying and verifying AI-generated media without treating the technology as an insurmountable threat.

"The technology itself isn't inherently harmful," Hickerson said. "The danger comes when people don't have the knowledge or context to question what they're seeing.

Headshot of a woman wearing a red jacket over a black blouse.
Andrea Hickerson

"Our responsibility as educators is to prepare both students and communities to navigate this environment thoughtfully, ethically and confidently. When communities lose confidence in what they see and hear, misinformation fills that void."

Research underscores how urgently such tools are needed. Studies have consistently found that people struggle to distinguish real media from manipulated content.

The technology's speed and sophistication are growing rapidly. Researchers estimate that millions of manipulated media files are circulating online, contributing to a surge in digital misinformation, identity fraud and public distrust.

As generative artificial intelligence becomes more accessible, the ability to fabricate realistic content is no longer limited to specialized labs.

The three authors are part of a team that has been researching deepfake detection since 2019 through a $2 million grant-funded project. Wright, chair of cybersecurity at RIT's Golisano College of Computing and Information Sciences, brought technical expertise to the collaboration.

"Even trained experts have difficulty picking out some fakes," Wright said. "The quality of these fakes is getting higher and higher over time, making them harder to distinguish from real images."

"Fake-Checking" approaches the problem from philosophical, historical, technical and methodological angles. It frames deepfakes as the latest chapter in a long history of image and video manipulation, tracing examples as far back as Abraham Lincoln's presidency.

Headshot of a man wearing glasses and a striped dress shirt.
Matthew Wright

Rather than relying solely on detection software, the book equips journalists to combine editorial judgment, source verification and technical tools in a way that mirrors the best practices already used by reporters.

The collaboration between the University of Mississippi and RIT reflects a deliberate effort to bridge two disciplines. Schwartz and Wright brought the technical depth of cybersecurity research, while Hickerson provided the journalistic framework for what reporters need most.

Slowing down to observe the context of shocking media images is an important step, Wright said.

"It can help you understand the potential for deception," he said. "Unfortunately, the more newsworthy the information is, the more likely it is to be fake. Secondly, get more context from other sources.

"Most videos were captured from multiple viewpoints. Most audio these days comes with video. In many cases, you can find confirmation from the depicted person themselves.

"As a journalist, you should ask them for it together with other questions you may have. Only after that would I recommend trying to analyze the media itself."

The book features advice and practical steps to support journalists explaining deepfakes to their audiences, something easily overlooked in the rush toward technological solutions. One chapter addresses how reporters can communicate the complexities of synthetic media to audiences that are increasingly worried and increasingly skeptical.

"In a world where audio and video can be convincingly fabricated, our traditional signals of credibility are being challenged," Hickerson said. "Journalists and communities must understand not only how these tools work, but how to verify information in ways that rebuild confidence."

Top: Artificial intelligence facial recognition software maps and analyzes a human face for signs of digital manipulation in an example of deepfake detection technology. As synthetic media becomes increasingly difficult to distinguish from authentic content, journalists face mounting pressure to verify what they see, hear and read. Graphic by Cole Russell/University Marketing and Communications

By Marvis Herring

Published April 13, 2026