AI is consuming journalism for many of the same reasons humans do: to develop a concrete understanding of the world; to think critically; to differentiate what is true from what is not; to become a better writer; and to distill history and context into something accessible. But what happens to AI when our journalistic institutions fall apart? What will it use to answer our questions? Write our emails? Do our jobs? Because while alarm bells have been ringing for journalism for decades, the death of search feels like it could be the final blow. What does that mean for AI, and for us, as we try to make sense of an increasingly confusing world?
In our haste to integrate generative AI into every corner of our lives, we have ignored a fundamental truth: AI cannot function without a baseline of verified facts. And, right now, that baseline is built and maintained by so-called “traditional” journalism (the kind with fact-checkers and editors). As AI threatens to upend search, media monetization, and news consumption habits, it is also undermining the very industry that feeds it the facts on which it depends. A society cannot function without objective journalism, and neither can AI.
Loss of accuracy
Apple’s recent research says that it doesn’t take much to push generative AI into “complete accuracy collapse.” It goes on to demonstrate that generative AI models lack strong logical reasoning and fail to function beyond a certain level of complexity. I immediately thought of a recent piece in The New Yorker, in which Andrew Marantz weaves together several examples of autocracy, drawn from thousands of years of history, to (try to) make sense of what is happening in the United States right now. I imagined an AI attempting that same essential synthesis and short-circuiting before it could form the salient points that make the piece so striking. When asked to think too hard, AI breaks.
An even more damning BBC report found that AI cannot accurately summarize the news. It asked ChatGPT, Copilot, Gemini, and Perplexity to summarize 100 news stories, then had expert journalists rate each answer. “Besides containing factual inaccuracies, the chatbots struggled to differentiate between opinion and fact, editorialized, and often failed to include essential context,” the report says. Almost a fifth of the summaries introduced false facts or distorted attributions: 19%!
There’s more, of course. This MIT Sloan study shows that AI tools have a history of fabricating citations and reinforcing gender and racial bias, while this Fast Company article argues that the “good enough” standards of AI-driven journalism are being accepted because of the revenue those tools generate.
And that, of course, is the least human reason AI is swallowing journalism: money. None of that money is funding the journalistic institutions that fuel this whole experiment. What happens to our society when the central pillar of a true and free press collapses under the weight of the very thing that has so carelessly consumed it? Our AI overlords must place real value on the rights of reporting now to guarantee its continued existence.
Josh Rosenberg is CEO and cofounder of Day One Agency.