Can linguistic patterns identify data cheats?

Cunning science fraudsters may not give many tells in their data, but the text of their papers may be a tipoff to bad behavior.

That’s according to a new paper in the Journal of Language and Social Psychology by a pair of linguists at Stanford University who say that the writing style of data cheats is distinct from that of honest authors. Indeed, the text of science papers known to contain fudged data tends to be more opaque, less readable and more crammed with jargon than untainted articles.

The authors, David Markowitz and Jeffrey Hancock, also found that papers with faked data appear to be larded up with references – possibly in an attempt to make the work more cumbersome for readers to wade through, or to tart up the manuscript to make it look more impressive and substantial, Markowitz told us.