Google is reportedly developing an AI tool, known internally as Genesis, that could automate news writing. The tool is still in the early stages of development, but it has the potential to change how news gets written.
Genesis reportedly works by ingesting large amounts of text, including news articles and social media posts, and then using that material to generate new text that is similar in style and content to the original sources.
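Genesis itself is not public, and Google has disclosed nothing about its model or interface. But the mechanism the report describes, conditioning a model on ingested text and then sampling new text in a similar style, is the standard text-generation loop. Here is a minimal sketch using the open-source Hugging Face transformers library and GPT-2 purely as stand-ins; the prompt, source text, and generation settings are all invented for illustration:

```python
# Hypothetical illustration only: Genesis's internals are unknown, so this
# uses an off-the-shelf model to show the general technique the report
# describes (condition on source text, generate stylistically similar text).
from transformers import pipeline

# Any general-purpose text-generation model works for the illustration.
generator = pipeline("text-generation", model="gpt2")

# Stand-in for the "ingested" source material (news copy, social posts, etc.).
source_text = (
    "City officials confirmed on Tuesday that the downtown bridge will "
    "close for repairs next month, rerouting an estimated 40,000 daily commuters."
)

# The prompt conditions the model on the source; the model continues it
# with new text in a matching register.
prompt = f"Rewrite the following as a one-paragraph news brief:\n{source_text}\n\nBrief:"

result = generator(prompt, max_new_tokens=80, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```

A base model like GPT-2 will not follow the instruction reliably; a production tool would presumably use a much larger, instruction-tuned model, but the generation loop has the same shape.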
Google believes it can help journalists by introducing AI to the mix. But AI isn't what journalists need, and the search giant's foray into newsrooms should concern readers, too. Google said it was in the early stages of exploring the tool, which could assist journalists with options for headlines or different writing styles, and it stressed that the technology was not intended to replace them.

The report comes as news organizations around the world grapple with whether to use artificial intelligence tools in their newsrooms. Many, including the Times, NPR and Insider, have notified employees that they intend to explore potential uses of AI to see how it might be responsibly applied to the high-stakes realm of news, where seconds count and accuracy is paramount. Some, including The Associated Press, have long used AI to generate stories about things like corporate earnings, but those automated pieces make up a small fraction of their output, the rest of which is written by journalists.
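It is worth noting that the AP-style earnings stories mentioned above predate generative models: that kind of automation is typically template-driven, slotting structured financial data into fixed sentence patterns rather than generating free text. A hypothetical sketch, with the field names and template invented for illustration:

```python
# Hypothetical sketch of template-based story generation, the pre-LLM
# technique long used for earnings coverage. Real systems use richer
# templates and validated financial data feeds.
TEMPLATE = (
    "{company} reported {quarter} earnings of ${eps:.2f} per share, "
    "{beat_miss} analysts' estimate of ${estimate:.2f}. Revenue "
    "{rev_direction} to ${revenue:,.0f} million."
)

def earnings_brief(company, quarter, eps, estimate, revenue, prior_revenue):
    # Derive the editorial phrasing from the numbers, then fill the template.
    return TEMPLATE.format(
        company=company,
        quarter=quarter,
        eps=eps,
        beat_miss="beating" if eps > estimate else "missing",
        estimate=estimate,
        rev_direction="rose" if revenue > prior_revenue else "fell",
        revenue=revenue,
    )

print(earnings_brief("Acme Corp", "second-quarter", 1.42, 1.31, 812, 774))
```

Because every sentence comes from a vetted template and structured data, this older approach cannot hallucinate facts the way a generative model can, which is part of why it has been trusted for routine coverage.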

Last December, for the first time, I also put those documents to another reporter: ChatGPT, which had been released to the public a few days before. It made its way through all those journalistic debates in just a few seconds. The story that resulted was purple-prosed in places (it called the shooting “tragic” three times) and a little racist (it gratuitously mentioned “several black men” who were allegedly standing nearby). It felt obliged to use something from each person the dayside reporter had interviewed, even when their quotes were repetitive or ethically questionable, and it called both victim and suspect gang members without hesitation. What ChatGPT produced would have been a pretty bad crime story, but to be honest, I’ve read worse ones written by humans.