The findings were, to say the least, shocking: A researcher in New Zealand claimed that Google searches about violence against women soared during the early months of the Covid-19 pandemic — raising the prospect that quarantines were leading to a surge in intimate partner violence and similar crimes.
Shocking, yes, but now retracted because the methodology of the study was “catastrophically wrong,” in the words of some critics.
The paper, “COVID-19, suicide, and femicide: Rapid Research using Google search phrases,” was written by Katerina Standish of the University of Otago’s National Centre for Peace and Conflict Studies, and appeared online in January 2021 in the Journal of General Psychology.
As the author’s institution claimed in its headline for a press release about the article:
Pandemic response creates perfect storm for self-harm and domestic violence
For her study, Standish attempted to use Google to track the frequency of certain search terms associated with feelings of helplessness and despondency, as well as violence against women — strings like “lost my job”, “I don’t have anywhere to go”, “I want to die”, “no one will help us”, “how to hit a woman so no one knows”, and “he will kill me”.
Per Otago:
Results showed an “overwhelming upsurge” from all six categories from 31 per cent to 106 per cent.
The release quoted Standish as saying:
“The focus on tackling the pandemic means, globally, the first objective is to control its ability to replicate and mutate. We know this is necessary, but the lockdowns and social distancing have other effects that are only beginning to reveal themselves and they are deadly too,” she says.
Not surprisingly, Standish’s paper garnered significant attention in the media — social and otherwise — with more than a dozen news stories about the article (those data are no longer available because the paper has been retracted). In an April 26 column on MSNBC.com, Liz Plank used the study in her argument that:
Men are becoming more violent against women around the world. Google shows how.
But critics who read Standish’s paper after seeing the MSNBC piece pointed out that her analysis was, in the words of one group, “catastrophically wrong.”
Writing on Medium, a trio of social scientists — Alison Gerber and Ov Cristian Norocel of Lund University in Sweden and Francesca Bolla Tripodi of the University of North Carolina at Chapel Hill — noted that Standish had analyzed not data on how often people searched the phrases, but the results returned by her own Google queries — a critical distinction that invalidated her conclusions.
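The distinction the critics drew can be sketched in code. The following is a minimal, hypothetical illustration — the function names are mine, not anything from the paper or from Google’s products, and every number is an invented placeholder:

```python
# Two very different quantities that "Google-based" research can conflate.
# All values below are invented placeholders for illustration only.

def result_count_estimate(query: str) -> int:
    """What a Google results page displays ("About 1,230,000 results"):
    a rough estimate of how many *web pages* match the query.
    It says nothing about how often anyone has searched the phrase,
    and it can vary with the searcher's location, history, and timing."""
    return 1_230_000  # placeholder for the on-page estimate


def relative_search_interest(query: str) -> list[int]:
    """What a tool like Google Trends reports: *relative search volume*
    over time, normalized so the peak period equals 100.
    This is the kind of quantity a study of search behavior needs."""
    return [31, 48, 72, 100]  # placeholder weekly series


# A change in result_count_estimate() between two dates reflects changes
# in indexed web content (and in Google's estimation method), not a surge
# in people typing the phrase -- which is why comparing one's own search
# results over time cannot support conclusions about search frequency.
```

In other words, the numbers on a results page measure the supply of matching web pages, while a claim about a pandemic-era surge in searching requires a measure of search demand.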
MSNBC quickly corrected Plank’s column and removed references to Standish’s paper, but the post is largely premised on the notion that Google can and should be doing more when it comes to violence against women.
The inevitable retraction notice reads:
Since publication, the author has identified inaccuracies in the methodology. Specifically, the article is based on problematical use and interpretation of search entries and their results to draw conclusions about the presence of psychological stress in online searches. As this directly impacts the validity of the reported results and conclusions, the Author notified the Editor and Publisher and all have agreed to retract the article to ensure the integrity of the scholarly record.
Standish did not immediately respond to a request for comment.
For Gerber and her colleagues, the problematic paper raises serious issues about the practice of science:
How did this article get through peer review? Like the author, the journal’s reviewers and editors seemed to have been glamoured by the shine, tech fetishism, and naive empiricism of even the most poorly executed digital methods — without the methodological humility to work together with colleagues from information science, or at least check in with someone familiar with the basic workings of tools like Google. If we want to catch scientific missteps like these, we must recognize that good science takes time. And this mishap shows how desperately we need more robust digital literacy education at all stages of life — because if PhDs don’t understand the basics of what Google returns are and what they are telling us, what hope do the rest of us have?
“[T]he journal’s reviewers and editors seemed to have been glamoured by the shine, tech fetishism, and naive empiricism of even the most poorly executed digital methods — without the methodological humility to work together with colleagues from information science, or at least check in with someone familiar with the basic workings of tools like Google.”
You can blame the editors for not bringing in a technology expert, but how are the reviewers to blame here? They don’t know what they don’t know. Not to mention that this sounds like it was written by a college freshman trying to impress someone with fancy vocab and phrasing. Yeesh.
> “You can blame the editors for not bringing in a technology expert, but how are the reviewers to blame here? They don’t know what they don’t know.”
I’d say that a reviewer who does not have the expertise to review a paper or parts of it should either clearly identify the aspects which they cannot really comment on, or reject the review request altogether. (Maybe they did and the editor nevertheless published the paper).
> “Standish did not immediately respond to a request for comment.”
For the record, Dr. Standish already commented on the paper’s flaws on Twitter when it received attention/criticism:
https://twitter.com/Katerinasyoga/status/1387082486643970050
“An article came out yesterday that has generated a lot of scrutiny about the web research paper I wrote last November looking for online indicators of psychological stress related to unemployment, suicide and indicative and intentional male violence. Firstly, I want to thank Liz Plank for amplifying VAW during the pandemic. I also want to acknowledge and honour the heartfelt and intelligent discourse that has surrounded inquiry into the study online. And inquiry into VAW. I now recognise that the results of my study are flawed. I have reached out to the Journal to discuss inaccuracy and its shortcomings. New methods need sunlight…that’s how we create knowledge. How we improve our research. I want to thank those of you who have taken the time to reach out to me to personally to signal its shortcomings. I also want to thank those that have contacted me to thank me for bringing to light a reality of VAW during the pandemic. Web researchers have cogently detailed that despite best intentions, the method I utilised in this paper, and therefore my results are inaccurate. Thank you for your detailed and empathetic communication!! Some of you were less polite…there is risk in inquiry but… I had no intention to put others at risk and my seeking to research psychological stress…has now potentially harmed others. The first time I googled these terms my heart skipped… I was shocked. I deeply regret that my work is injurious and unsettling and although my goal was-To look for signs of existential stress during the first 6 months of the pandemic, to look for signs of despondency, insecurity, helplessness, hopelessness and VAW my technique was indeed flawed. I sincerely apologise and will make amends. The topics I looked at deserve better methods. I am very grateful that the citizen scholars and methodological experts who have suggested other ways up this mountain have reached out to me. To activists, advocates and victims of VAW I’m sorry I failed you. 
For researchers seeking to use my method…don’t. It is flawed. To the Journal of General Psychology that saw merit in my work, I am grateful, but let’s take a closer look please.”