Artificial Intelligence in the Newsroom: The Double-Edged Sword

By: Alyssa N. Salcedo

Artificial Intelligence (AI), while scary to some, can come in handy in many ways. From making grocery shopping lists to checking our grammar, we’ve learned how to automate several daily practices. AI use is now creeping into newsrooms, leaving many fearful for the future of the industry.

Newsrooms like the Associated Press, Bloomberg and The New York Times have incorporated some form of AI into their workflows. From a management perspective, AI can help automate tasks that once took several people to complete.

Felix M. Simon, a communication researcher and doctoral candidate at the University of Oxford, conducted a study on the use of AI in newsrooms. He interviewed news workers at 35 news organizations around the world about how AI has affected their work.

“AI is now applied across an ever greater range of tasks in the production and distribution of news. Contrary to some assertions, many of the most beneficial applications of AI in news are relatively mundane, and AI has often not proved to be a silver bullet in many cases,” Simon wrote in his study.

AI is being used to increase efficiency by writing headlines, transcribing audio, managing paywalls and completing many other mundane, time-consuming tasks. However, Simon argues that while AI has played and will continue to play a transformative role in newsrooms, there are still several constraints on its use, stemming from many factors, including resistance from news workers and audiences.

“Currently, AI aids news workers rather than replaces them, but there are no guarantees this will remain the case. AI is sufficiently mature to enable the replacement of at least some journalism jobs, either directly or because fewer workers are needed,” Simon wrote.

However, while AI can be beneficial, relying on it completely is rarely the best practice.

According to an investigative report published by Futurism, Sports Illustrated made the mistake of publishing AI-generated articles attributed to fake authors. Each of the fake authors came with an AI-generated headshot and biography linked under the articles they were said to have written.

This scandal seriously damaged the publication’s credibility. According to The Guardian, The Arena Group, publisher of Sports Illustrated, has since fired its CEO. The company claims that this decision was unrelated to the AI scandal and that the articles in question were sourced from the advertising company AdVon Commerce. However, the timing left readers suspicious.

If we are to incorporate AI use in the newsroom, we need to ensure that there are ethical guidelines in place to avoid cases like these.

The Society of Professional Journalists (SPJ) released a statement following another Futurism investigation, this one into CNET’s failure to disclose its use of AI to write articles that contained errors.

“While there is no need for a ban on artificial intelligence in journalism, its use is best limited and considered on a case-by-case basis,” said Claire Regan, SPJ National President. “AI, for example, can be an efficient, cost-effective way to convert huge volumes of numbers-based corporate data into short, routine stories on business reports. But so much of journalism is more personal…Humans are best at connecting intimately with humans to tell their stories.”

In the statement, SPJ encouraged news workers to keep their code of ethics in mind when using AI, to take responsibility for their work and to explain their choices to their audiences to “encourage a civil dialogue” about journalistic practices.

Whether we like it or not, AI will become an essential tool in news production. We as reporters are responsible for ensuring that we use this powerful tool ethically and transparently.

Just for fun, let’s see how AI would perform as a student in the Advanced Reporting class at the Center for Journalism Integrity and Excellence.

Each student in Advanced Reporting is asked to dig up 12 truly unique facts about our guest speakers. I asked OpenAI’s ChatGPT program to generate 12 facts about our professors Carol Marin and Lisa Parker Weisman. The facts the program gave me were relatively well known, and some were even completely incorrect!

“A significant portion of her reporting has been dedicated to consumer protection, helping viewers solve problems related to fraudulent practices and poor service from businesses,” the program generated for Parker Weisman.

While factually correct, this fact is not unique. Anyone can learn this information by simply Googling Parker Weisman and reading the first few links that pop up. Therefore, this fact wouldn’t do too well in the Advanced Reporting class.

Another AI-generated fact would have gotten the program into some trouble in class.

“Marin has served as a visiting faculty member at the University of Chicago, where she contributed to the development of future journalists and shared her expertise in investigative reporting,” the program generated for Marin.

While Marin is a professor and contributes to the development of future journalists such as myself, she never taught at the University of Chicago! Incorrect facts never fly at the Center, so the program wouldn’t have performed very well in this activity.

I searched Google for any other Carol Marins on the University of Chicago’s staff and found one woman named Carol Marin-Sanabria, a systems administrator for the university’s Joseph Regenstein Library, hence the program’s confusion.

This proves that while AI can help with quick research, we must always fact-check that research ourselves for accuracy. As reporters, we must also disclose when we use AI in our reporting process to ensure transparency with our readers and viewers.

# # #
