Experiment using AI-generated posts on Reddit draws fire for ethics concerns

An experiment deploying AI-generated messages on a Reddit subforum has drawn criticism for, among other concerns, its lack of informed consent from unwitting participants in the community.

The university overseeing the research is standing by its approval of the study, but says the principal investigator has received a warning for the project.

The subreddit, r/ChangeMyView (CMV), invites people to post a viewpoint or opinion and hear arguments from different perspectives. Its extensive rules are intended to keep discussions civil.

Early Saturday morning, CMV moderators posted a long message about the experiment, designed to study whether large language models, or LLMs, could be used to change views. It began:

The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users. This experiment deployed AI-generated comments to study how AI could be used to change views.  

The researchers, who requested anonymity from the subreddit moderators, provided this description of the research:

Over the past few months, we used multiple accounts to post on CMV. Our experiment assessed LLM’s persuasiveness in an ethical scenario, where people ask for arguments against views they hold. In commenting, we did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible. While we did not write any comments ourselves, we manually reviewed each comment posted to ensure they were not harmful.

The researchers’ note continued:

We recognize that our experiment broke the community rules against AI-generated comments and apologize. We believe, however, that given the high societal importance of this topic, it was crucial to conduct a study of this kind, even if it meant disobeying the rules.

User accounts created to post AI-generated content posed as a victim of rape, a trauma counselor specializing in abuse, and a Black man opposed to Black Lives Matter, among other personas, according to the moderators’ post. All user accounts linked to the experiment and listed in the post have been suspended. The Zurich group has shared a preliminary writeup of its findings.

“This is one of the worst violations of research ethics I’ve ever seen,” Casey Fiesler, an information scientist at the University of Colorado, wrote on Bluesky. “Manipulating people in online communities using deception, without consent, is not ‘low risk’ and, as evidenced by the discourse in this Reddit post, resulted in harm.” 

Sara Gilbert, research director of the Citizens and Technology Lab at Cornell University, said the study has harmed CMV itself. The subreddit has been “an important public sphere for people to engage in debate, learn new things, have their assumptions challenged, and maybe even their minds changed,” she wrote on Bluesky. “Are people going to trust that they aren’t engaging with bots? And if they don’t, can the community serve its mission?”

Trust is a theme in some of the 1,500-odd comments on the original r/ChangeMyView post. Other outlets have weighed in on the project as well. 

In response to follow-up questions, CMV moderator u/DuhChappers said the experiment violated Reddit’s rule against impersonating an individual or entity in a misleading manner. “I think it would be a stretch to say that these accounts did not impersonate individuals in a deceptive manner. The bots literally said things like ‘I am a black man’ and ‘I am a sexual assault survivor’ when those are manifestly untrue,” the moderator wrote in a message to us.

OpenAI, maker of ChatGPT, has an agreement with Reddit to use its content to train its models. Earlier this year, OpenAI used content from r/ChangeMyView to test the persuasiveness of its AI models, TechCrunch reported. This research used “a downloaded copy of r/changemyview data on AI persuasiveness without experimenting on non-consenting human subjects,” the CMV moderators noted. 

A message sent to the Zurich researchers’ anonymous email account referred us to the University of Zurich media relations office, which did not immediately respond. 

The moderators said they have filed a complaint with the University of Zurich’s institutional review board. A response from the university’s Faculty of Arts and Sciences Ethics Commission indicated the matter had been investigated and the principal investigator had been issued a formal warning, according to the moderators.

The moderators had also asked the University of Zurich to block the research from being published. The university’s response, quoted in the post, noted that doing so is outside its purview and stated:

“This project yields important insights, and the risks (e.g. trauma etc.) are minimal. This means that suppressing publication is not proportionate to the importance of the insights the study yields.”

In a follow-up message, u/DuhChappers pointed to a list of studies the subreddit has participated in. “We are very happy to provide data and aid to researchers, especially when they approach us beforehand and let us know what they are planning,” the moderator wrote. “The difference in this study is both that the researchers did not ask us before, and that they were actively manipulating members of the subreddit rather than simply observing data. That is a line we cannot accept being crossed.”



