
The university ethics committee that reviewed a controversial study deploying AI-generated posts on a Reddit forum made recommendations the researchers did not heed, Retraction Watch has learned.
The principal investigator on the study has received a formal warning, and the university’s ethics committees will implement a more rigorous review process for future studies, a university official said.
As we reported yesterday, researchers at the University of Zurich tested whether a large language model, or LLM, can persuade people to change their minds by posting messages on the Reddit subforum r/ChangeMyView (CMV). The moderators of the forum notified the subreddit about the study and their interactions with the researchers in a post published April 26.
The identity of the researchers and the department they work in have not been made public. We reached out via their project’s email address, and they referred us to the University of Zurich media relations office.
Rita Ziegler, head of media relations for AI at the University of Zurich, told us by email today that the Ethics Committee of the Faculty of Arts and Social Sciences reviewed the research study in April 2024. It was part of a larger project, and one of four studies, on “investigating the potential of artificial intelligence to reduce polarization in value-based political discourse.”
Ziegler continued:
In its opinion on the project, the Ethics Committee of the Faculty of Arts and Social Sciences advised the researchers that the study in question was considered to be exceptionally challenging and therefore a) the chosen approach should be better justified, b) the participants should be informed as much as possible, and c) the rules of the platform should be fully complied with.
Recommendations from the ethics committees are not legally binding, Ziegler said. “The researchers themselves are responsible for carrying out the project and publishing the results.”
Whether the researchers changed their approach based on that opinion or other factors is unclear, but they informed neither the CMV moderators nor the CMV commenters about the study until after they had finished their data collection, as we noted in yesterday’s story.
CMV has a rule against undisclosed use of AI, and Reddit itself has a rule that states, “don’t impersonate an individual or an entity in a misleading or deceptive manner.” The study violated both of those policies, CMV moderator u/DuhChappers told us.
“The relevant authorities at the University of Zurich are aware of the incidents and will now investigate them in detail and critically review the relevant assessment processes,” Ziegler said. The principal investigator of the study has been issued a formal warning, Nathalie Huber, a media relations officer, said.
Reddit has issued a response to the study as well. Reddit’s chief legal officer Ben Lee posted on the CMV thread:
What this University of Zurich team did is deeply wrong on both a moral and legal level. It violates academic research and human rights norms, and is prohibited by Reddit’s user agreement and rules, in addition to the subreddit rules. We have banned all accounts associated with the University of Zurich research effort.
Reddit is “in the process of reaching out to the University of Zurich and this particular research team with formal legal demands,” Lee said in the post.
As a result of this study, the University of Zurich’s ethics committee of the Faculty of Arts and Social Sciences “intends to adopt a stricter review process in the future and, in particular, to coordinate with the communities on the platforms prior to experimental studies,” Ziegler said.
The researchers made available a preliminary article on the work, but Ziegler told us the researchers have decided not to publish the findings.