A computer science student is behind a new AI tool designed to track Redditors who show signs of radicalization and deploy bots to "de-radicalize" them through conversation.
First reported by 404 Media, PrismX was built by Sairaj Balaji, a computer science student at SRMIST in Chennai, India. The tool works by analyzing posts for specific keywords and patterns associated with extremist views, assigning users a "radical score." The highest scorers are then targeted by the program's AI bots, which attempt to "de-radicalize" them by engaging the users in conversation.
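The article does not publish PrismX's actual scoring method, but the keyword-and-pattern approach it describes can be illustrated with a minimal, hypothetical sketch; the keyword list, weights, and function names below are illustrative assumptions, not PrismX's code.

```python
# Hypothetical sketch of a keyword-weighted "radical score" for a user's
# post history. The phrases and weights are invented for illustration;
# a real system would use far more sophisticated pattern analysis.

FLAGGED_PHRASES = {
    "overthrow": 3,      # illustrative phrase/weight pairs only
    "take up arms": 5,
    "traitors": 2,
}

def radical_score(posts):
    """Sum matched phrase weights across posts, averaged per post."""
    if not posts:
        return 0.0
    total = 0
    for post in posts:
        text = post.lower()
        for phrase, weight in FLAGGED_PHRASES.items():
            if phrase in text:
                total += weight
    return total / len(posts)

score = radical_score(["We should overthrow the council",
                       "nice weather today"])
```

Under a scheme like this, users whose averaged score crosses some threshold would be flagged for the conversational bots described above.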
According to the federal government, the top terror threat to the US right now is people radicalized toward violence online via social media. At the same time, there are fears about surveillance technology and AI infiltrating online communities, not to mention concerns about the ethical minefield of deploying such a tool.
Responding to those concerns, Balaji clarified in a LinkedIn post that the conversation component of the tool has not been tested on real Reddit users without consent. Instead, the scoring and conversation elements were used in simulated environments for research purposes only.
"The tool was designed to provoke discussion, not controversy," he explained in the post. "We are at a point in history where dishonest actors and nation states are already deploying weaponized AI. If a college student can build something like PrismX, it raises urgent questions: who is watching the watchers?"
Although Balaji does not claim to be a deradicalization expert, as an engineer he is interested in the ethical implications of surveillance technology. "The discomfort sparks debate. Debate leads to oversight. And oversight is how we prevent the misuse of emerging technologies," he said.
This is not the first time Redditors have been used as AI test subjects recently. Last month, researchers at the University of Zurich faced intense backlash after experimenting on an unsuspecting subreddit.
The research involved deploying AI bots in the r/changemyview subreddit, which bills itself as "a place to post an opinion you accept may be flawed," in an experiment to see whether AI could be used to change people's minds. When the Redditors discovered they had been experimented on without their knowledge, they were not impressed. Neither was the platform itself.
Ben Lee, Reddit's chief legal officer, wrote in a post that neither Reddit nor the r/changemyview mods knew about the experiment in advance. "What this University of Zurich team did is deeply wrong on both a moral and legal level," Lee wrote. "It violates academic research and human rights norms, and is prohibited by Reddit's user agreement and rules, in addition to the subreddit rules."
While PrismX has not yet been tested on real, unsuspecting users, it adds to the growing debate over the role of artificial intelligence in human spaces.