Springer Nature’s *Scientific Reports* has retracted four papers by a researcher in Saudi Arabia who claims “irrelevant reviewers” just couldn’t understand “a new area of statistics.”

Here’s the notice for one of the articles, “Neutrosophic statistical test for counts in climatology,” which appeared in September 2021 and has been cited once, according to Clarivate’s Web of Science:

The Editors have retracted this Article. Following publication, concerns were raised about the rationale for the approach presented and the underlying reasoning. A post-publication review of the authors’ mathematical arguments revealed a lack of clarity in the terms presented and inferences that are not adequately justified. The Editors therefore no longer have confidence in the conclusions presented.

The three other papers are:

- “Monitoring of production of blood components by attribute control chart under indeterminacy,” published in January 2021 (three citations);
- “Monitoring the temperature through moving average control under uncertainty environment,” from July 2020 (three citations);
- and, from November 2020, “Forecasting of the wind speed under uncertainty” (one citation).

Muhammad Aslam, of the Department of Statistics at King Abdulaziz University in Jeddah, implied that the journal simply couldn’t understand the sophistication of his research:

Neutrosophic statistics is a new area of statistics. The theorems and equations of neutrosophic statistics are derived along the same lines as in classical statistics, but with some modifications.

We provided clear and comprehensive derivations (which had been evaluated by neutrosophic experts before submission) at the post-publication review stage. In our opinion, the work was reviewed by irrelevant reviewers who have no basic knowledge of neutrosophic statistics. The reviewers rejected our proofs without indicating any mistakes. We tried to convince the Editor but he didn’t accept our explanation and decided to retract the papers. Therefore, we disagreed with these retractions.

We asked if he felt the work had been adequately peer reviewed and edited prior to publication:

Our four papers were reviewed by technical staff, an academic Editor and more than 10 reviewers, and published after several revisions. One of the Editors raised concerns about the four papers and retracted them. We are surprised by this action. The journal Editor accepted the papers after multiple revisions, and then an Editor retracted them again. It is very surprising.

(The both-sidesism about the armada of “irrelevant” reviewers sounds a bit like the joke about the food at the resort: bad, and such small portions!)

We reached out, via the Springer Nature press office, to *Scientific Reports* about the retractions, with specific questions about how the journal learned of the problems with the articles and whether it had any regrets about the thoroughness – or lack thereof, it seems – of the peer review and editing processes.

The response was this statement from Rafal Marszalek, the chief editor of the journal:

We were alerted to potential issues with these four papers in December 2021 and began a careful investigation, in line with COPE Guidelines and Springer Nature policies. We conducted post-publication review, seeking advice from carefully considered experts with the right expertise to appreciate and understand the subject matter of the papers, as well as from our Editorial Board. As stated in the Retraction Notes, this post-publication review revealed a lack of clarity in the terms presented in the mathematical arguments, and inferences that were not adequately justified. In light of this, we decided to retract the four papers.

And what about the notion that the issues with the articles might have been identified prior to publication?

Sorry, you’ll just have to trust us, a spokesperson told us:

I’m afraid the details of the investigation are confidential, and for that reason we cannot comment on how the journal was alerted to the issue or the peer review process.

*Like Retraction Watch? You can make a **tax-deductible contribution to support our work**, follow us **on Twitter**, like us **on Facebook**, add us to your **RSS reader**, or subscribe to our **daily digest**. If you find a retraction that’s **not in our database**, you can **let us know here**. For comments or feedback, email us at team@retractionwatch.com*.

A bland and unsatisfactory statement from the journal. One hopes that, behind the scenes, they do make improvements to their review process….

“Neutrosophic” isn’t a term that inspires confidence.

Neutrosophistry.

Quite funny to see them retract a paper like this, while largely ignoring a much worse paper that I reported. All I got was a brief correspondence with the authors, whose response came down to “read the WHOLE paper” (I merely found more issues when I dug deeper into the parts that were not directly within my area of expertise). Even the Associate Editor stated that I probably would not be satisfied with the response – or rather, non-response – but never took it further.

Neutrosophic statistics is real, according to sources that turn up when googled. Just because something is unfamiliar doesn’t mean it is invalid. Sounds to me like someone was offended by the term.

There was a time when editors would dismiss any Bayesian-based analysis automatically. While neutrosophic statistics is not industry standard, if the authors’ analysis aligns with neutrosophic methods then the article should be accepted. Let the marketplace of ideas, to paraphrase George Box, decide whether these wrong models are useful.

Why not just call it fuzzy statistics & be done with it?

Any new stats method should stand up to scrutiny – even from those who are not experts in that area. If it did not pass the smell test this time, then the authors should think about how they can more clearly present their assumptions & methods, so that the work becomes more understandable & can get a proper peer review. And the important point is that review should also come from other statisticians who might not have expertise in that particular method.

Complaining about ‘irrelevant’ reviewers is not going to convince anyone that this new stats method has gravitas, I am afraid…

Could it be relevant that there were changes in SciRep’s editorial staff? For example, Christian Matheou is no longer a Deputy Editor…

But overall, SciRep seems to be heavily infiltrated by dishonest editors. I won’t put names here, only some fields where I recognize the actors:

– Catalysis

– Mechanical Engineering

– Mathematical Physics

– Chemical Engineering

– Materials: Nanotechnology

Ok, I somewhat lied. One name is safe to write: Mohsen Sheikholeslami. Look no further than the RW database: http://retractiondatabase.org/RetractionSearch.aspx#?auth%3dSheikholeslami%252c%2bMohsen

Wow, I am just scratching the surface. I always understood that journals have been compromised for quite some time, but this is the first time I have dug in…. Terrifying.

And yes, one of the SciRep editors (Tahir Mahmood) is a fan of “neutrosophic method” – to the surprise of… no one!

😃

By the way, Tahir Mahmood at PubPeer:

https://pubpeer.com/search?q=tahir+mahmood

https://pubpeer.com/publications/79055B3F14DAAC436E40F0914FA0FA

Neutrosophic statistics is a generalization of classical statistics, fuzzy statistics and interval statistics; the reader may refer to recent publications. Neutrosophic statistics works like classical statistics but with an additional parameter for uncertainty. The proofs can be understood by anyone easily. But the scientific evaluation can only be done by those who are working in this area and have up-to-date knowledge. From the post-publication review it was clear that the proofs were rejected by the Editor/reviewers without providing any reports or indicating any mistakes.

I am afraid the reviews at this journal are rather political…

It damages the journal’s reputation.

It’s strange to see the retraction of a few papers in the area of statistical quality control using neutrosophic statistics, which is one of the hottest areas to work in nowadays. I am very much concerned about the journal’s peer review process and editorial board. It’s unfair to dump all the garbage on the authors rather than to trust or improve the journal’s peer review process. In my opinion, the editorial boards of these journals did not play their role fairly.

Neutrosophic statistics is an area with wide application in medicine, mathematics, financial phenomena and other related fields, as is evident from publications in this Nature journal. In this particular issue, all the retractions were made by a single Deputy Editor. The journal’s post-publication review policy looks biased and influenced by a particular editor.

It seems that these are unfair retractions.

Neutrosophic statistics is a hot area of research. I disagree with the unfair retractions of four papers in this area.

Astonishing. It shows the inadequacy of the journal’s review process.

The inability of the editor to understand new findings should not be weighted so much…. It sounds as if what is needed is more scrutiny at the journal’s end, rather than the hindering of new thoughts, new perspectives, new knowledge….

It damages the journal’s reputation.

Neutrosophic statistics is a rapidly growing new area of research compared with classical statistics. It’s natural: every new area has to face tough criticism. If I dig up the introductory phase of, say, Bayesian statistics, even today that important area faces strong denigration in statistical inference. A lot of literature on neutrosophic statistics has been published in internationally reputed journals in the recent past. The retraction of these papers by the Editor looks like an unethical decision, based on irrelevant reviewers.

“irrelevant”

“You keep using that word. I do not think it means what you think it means.”

~ Inigo Montoya

Getting the vibe from this comment section that the same person or group of people is/are writing most of the recent comments.

It reminds me of the Edit War that happened at Wikipedia years ago, when the Neutrosophy enthusiasts tried to create an advertisement there. All lost in time, like tears in rain, leaving only this:

https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Paradoxism-Neutrosophy

In reading the beginning of “Neutrosophic statistical test for counts in climatology” I am startled that it was published at all. On the editing side, it is so riddled with obvious errors that the journal should be deeply ashamed to have published it in this state. But much, much worse, on the science side, the authors have misinterpreted their basic data. They use a table from the Global Historical Climatology Network. This table gives the number of weather stations that broke a historical record per day and per month for various categories (temperature, precipitation, etc). It lists a “High Min” and a “High Max” for upper temperature records. The authors interpret this as a range: a single value being estimated with uncertainty, falling between the limits of High Min and High Max. It should have been a strong hint to them that Min is usually greater than Max, but apparently not.

What these numbers actually are: High Min is the number of times the *minimum* temperature for that day was higher than the minimum-temperature record, while High Max is the number of times the *maximum* temperature for that day was higher than the maximum-temperature record.

All of the data analysis is therefore nonsense, because it is based on a false reading of the data. Review should have caught this. It does not take any statistical training: it only takes asking yourself, if this is meant to be a range, how come Min is higher than Max? And how did we get a range out of a count of stations–why doesn’t the web page explain what this range is? The answer is, there is no range here, only two quite different sets of observations: how hot was it at the hottest time of day, and how hot was it at the coldest time of day?

The original reviewers, if there were any, should also be ashamed of themselves. This took about 10 minutes of work to determine.
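Kuhner’s sanity check takes only a few lines to run. Here is a minimal sketch on hypothetical counts laid out the way the GHCN table is described above (the column names and values are invented for illustration):

```python
# Hypothetical rows in the layout described above:
# "high_min" = stations whose daily MINIMUM temperature broke its record,
# "high_max" = stations whose daily MAXIMUM temperature broke its record.
rows = [
    {"day": 1, "high_min": 42, "high_max": 17},
    {"day": 2, "high_min": 30, "high_max": 35},
    {"day": 3, "high_min": 58, "high_max": 21},
]

# If [high_min, high_max] really were an interval [low, high],
# the lower bound could never exceed the upper bound.
violations = [r for r in rows if r["high_min"] > r["high_max"]]
print(f"{len(violations)} of {len(rows)} rows have 'Min' > 'Max'")
# Frequent violations mean these are two different counts,
# not the ends of one interval.
```
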

Thanks for your nice explanation. I think there are phenomena in medicine where, for example, you are measuring a dog’s pain response after surgery or treatment. You decide to take three readings of its pain, based on observation of different parameters, and the scores come out as 7, 8 and 7. One way is to take the average response; another is to express it as an interval, and say there is indeterminacy in measuring the response. Another example is stock exchange data, where a share changes its behaviour many times in a day. If we consider the whole day it would be as you explain, but to model it we could take its behaviour every hour, and it does not stay at one point in the interval, it floats. So either we take the average or the interval; I think it makes sense. In an interval it shows indeterminate behaviour.

You are correct that you think it makes sense.

Otherwise, perhaps you should quit hurting those dogs.

There is no interval in these data.

The number you have interpreted as the top of the interval is the number of times the daily *high* temperature broke the record. The number you have interpreted as the bottom of the interval is the number of times the daily *low* temperature broke the record.

This is not a statistics problem. This is a very basic data interpretation problem. It makes all of your analysis of these data nonsense. If the paper were not already retracted your best course of action would be to retract it yourself immediately.

Dear Prof. Mary Kuhner,

Many thanks for your comments on the paper.

Thanks for this interesting question. Let “a” be the lower value and “b” be the upper value in the interval. In neutrosophic theory, we can consider that [a, b] = [b, a], i.e. it does not matter if a > b. In addition, in neutrosophic form, the first (determinate) part presents classical statistics, and the second part presents the indeterminate part. The neutrosophic form can be written as, say, XN = XL +/- XU*IN; IN belongs to [0, 1]. Therefore, the tables given in the paper are OK according to the neutrosophic theory.

During the pre-publication review and the post-publication review, no issue was found in the data, as is clear from the retraction note.

It does not matter what your statistics are. If you have a measure of blood pressure and you are interpreting it as a measure of body mass, your results will be meaningless. You are making a fundamental data analysis error of exactly that type. The results are therefore meaningless and should not have been published.

“Therefore, the tables given in the paper are OK according to the neutrosophic theory.”

Declaring that meaningless nonsense is suitable for neutrosophic analysis does not exactly inspire faith in neutrosophic theory.

Dear Prof. Smut Clyde,

Thanks for your comments.

“Declaring that meaningless nonsense is suitable for neutrosophic analysis does not exactly inspire faith in neutrosophic theory.”

It is not a matter of like or dislike. The scientific contribution should be appreciated and the researchers should be encouraged. Non-scientific words should be avoided in scientific discussion, and one should try to convince others with evidence and arguments. Anyway, neutrosophic theory was found to be more efficient than classical statistics in terms of adequacy, information, and flexibility; see the following papers.

References

Chen, J., Ye, J., & Du, S. (2017). Scale effect and anisotropy analyzed for neutrosophic numbers of rock joint roughness coefficient based on neutrosophic statistics. Symmetry, 9(10), 208.

Chen, J., Ye, J., Du, S., & Yong, R. (2017). Expressions of rock joint roughness coefficient using neutrosophic interval statistical numbers. Symmetry, 9(7), 123.

“The original reviewers, if there were any, should also be ashamed of themselves.”

At least they were relevant!

Dear Prof. Mary Kuhner,

Thanks again for your comments. Actually, the minimum value and the maximum value are expressed as an interval for the same time period. To apply the proposed test, the data is rearranged in the [a, b] form. We have discussed/answered this type of question many times in our several works.

I recommend you contact the maintainers of the data web site and ASK THEM what the two columns in the data mean. I do not believe your interpretation, but the originators of the data will have the final word.

Again, this is not a question of statistics. Either those two columns represent the minimum and maximum of a single value, or they do not. If they do not, your entire argument is nonsense.

This has been, in some ways, an enlightening exchange.

https://www.psychologytoday.com/us/blog/management-rewired/201005/changing-the-mind-the-mule

Dear Prof. Mary Kuhner,

Thanks for your interest.

“Again, this is not a question of statistics. Either those two columns represent the minimum and maximum of a single value, or they do not. If they do not, your entire argument is nonsense”.

(reply): The data is for the same (single) time period and selected very carefully.

I do not dispute that the data is for the same single time period.

I dispute that Min and Max in this table are to be interpreted as bounds on a single number. I have given my reasons repeatedly; you do not seem able to address them. CHECK WITH THE DATA PROVIDER please. The responsible practice of science demands self-criticism and correction of errors. If you are right, the data provider will confirm this and all will be well. If I am right and you are wrong, you will be prevented from making this error in the future, and that is also good.

By responding in the way that you have been, you create the impression that you do not care if your work is actually correct but only if it is perceived as correct. This is not good science, and does serious harm to your reputation.

Yeah, I’m reading “Forecasting of the wind speed under uncertainty” since this is somewhat adjacent to the kinds of questions my close colleagues are working on, and, I’m sorry, but the article doesn’t even provide enough clarity for the reader to evaluate whether there is anything useful in this fuzzy statistics methodology:

“The application of the proposed semi-average method under indeterminacy is given using real wind speed data. The data of the year 2020 is collected from the Pakistan Meteorology department. The last available data of the first three months have been used to explain the proposed method. The wind speed (mph) data having the minimum value and the maximum value is taken. The energy expert is interested to forecast the wind speed on the basis of the given data. As the wind speed (mph) data is recorded in the interval, therefore, the use of the existing semi-average method under CS is not suitable. The proposed semi-average method under indeterminacy is suitable to apply for such wind speed data. The necessary calculations for the wind speed data of month January, February, and March 2020 using the proposed method are explained.”

There is *one* plot using real wind speed data, and while one can guess by looking at the x-axis that the data is daily, it is completely unclear whether this is data from a particular weather station, or averages over an area, or … what precisely. The data is plotted for each month separately (why? wind speeds don’t care about calendar intervals). Obviously short-term linear trends have little predictive skill for wind speeds, and by squinting at the graph I can guess that the methodology somewhat captures, or aims to capture, the degree to which linear trends break down. There is no discussion of how far out it is reasonable to extrapolate wind speeds with this kind of method. There are references to high and low values, but the text is too fuzzy to even find out what these refer to. The study is done entirely retrospectively, with no attempt to forecast anything in particular.

The dire need for language editing shouldn’t distract the reader from the absence of even an attempt to state the scientific question clearly and then treat it.

I’ll offer an English translation. Your post suggests you arrived at one sufficient for your purposes, but you piqued my curiosity as to what that would look like.

The passage you quote says roughly this:

We used three months of minimum and maximum wind speed data (Jan-March 2020) from the Pakistan Department of Meteorology to illustrate our method. The problem is to determine a linear trend for wind speeds based on the data. Since a range of values is given for each entry, the usual semi-averaging method for time series is not applicable. Our method for working with data given as interval-ranges is applicable. We will explain the calculations.

The data used are minimum and maximum wind speeds for various days. Unsurprisingly the minima tend to be zero for long periods (see the tables).

While there is not much content in the passage cited, it is more coherent than what follows. The first two equations (or “steps”) appear to be both incorrectly labeled and incorrectly written, if you compare their verbal descriptions, or the tables – where L and U stand for lower and upper, and one can guess what N stands for under the circumstances.

My sympathies.
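For readers unfamiliar with it, the classical semi-average method the quoted passage refers to is simple enough to sketch: split the series into two halves, average each half, and pass a line through the two half-means. A minimal version, with invented daily wind-speed values:

```python
def semi_average_trend(y):
    """Classical semi-average trend fit: split the series into two
    halves, average each half, and return the slope and intercept of
    the line through the two half-means."""
    n = len(y)
    half = n // 2
    first, second = y[:half], y[n - half:]  # middle point dropped if n is odd
    m1 = sum(first) / half
    m2 = sum(second) / half
    # Each half-mean is centred at the midpoint of its half.
    x1 = (half - 1) / 2
    x2 = (n - half) + (half - 1) / 2
    slope = (m2 - m1) / (x2 - x1)
    intercept = m1 - slope * x1
    return slope, intercept

# Invented daily wind speeds (mph), purely for illustration:
speeds = [4, 5, 6, 5, 7, 8, 9, 8]
slope, intercept = semi_average_trend(speeds)
print(f"trend: {slope:.2f} mph/day, intercept {intercept:.2f} mph")
```

This only fits a straight line through two block averages, which is exactly why short-term wind-speed trends extrapolated this way have little predictive skill.
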

Thanks to all. I am working to update these papers for next submissions. I will try my level best to improve them in the light of your kind suggestions as much as possible.

Pointless.

Dear Prof Mary,

“I dispute that Min and Max in this table are to be interpreted as bounds on a single number”

Can you please explain what is meant by bounds on a single number? I am updating this paper and will add details about it in light of your comment.

You are interpreting the data like this: Max and Min are the bounds of a single number (number of stations reporting a record-breaking high temperature in that time period) that was estimated with error.

This is incorrect. Max refers to the number of stations at which the *highest* temperature (generally a daytime temperature) broke the highest-temperature record. Min refers to the number of stations at which the *lowest* temperature (generally a nighttime temperature) broke the lowest-temperature record. (These will of course be two different records.) It is meaningless to treat these as two ends of an interval. Note that Min is often higher than Max, which would make no sense if Min was supposed to be the low end of an interval.

You could make an analogy with blood pressure. When blood pressure is reported as 119/79, this does not mean that there is a range [79,119] within which the true blood pressure lies. It means that the blood pressure when the heart relaxes is 79, and the blood pressure when the heart contracts is 119. It is meaningless to treat such data as an interval expressing measurement error or uncertainty.

I would not recommend revising this paper. You can get it published, because some journals will publish anything. But it will not do you any credit.

If you insist, here are two more serious problems to start with: (1) You describe using a Poisson but many of the examples you give are of continuous data, for which the Poisson is NEVER appropriate. (2) Because each time a record is broken it sets a new, more extreme record, the rate at which records are broken over time is not expected to be constant, and therefore violates the Poisson constant-rate requirement.

That’s as far as I got in reading it; three very serious errors very early. Something went horribly wrong with the process that led you to write this paper. I don’t think you should revise it or write any more until you get things straightened out.
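Kuhner’s second objection, that records cannot arrive at a constant rate, is easy to demonstrate by simulation. The sketch below uses synthetic i.i.d. data; for such data the expected number of records among the first n observations is the harmonic number H_n, so the record rate decays roughly as 1/n and can never satisfy a constant-rate Poisson assumption:

```python
import random

random.seed(1)

def count_records(series):
    """Count observations that set a new running maximum."""
    best = float("-inf")
    records = 0
    for x in series:
        if x > best:
            best = x
            records += 1
    return records

trials = 2000
# Mean number of records among the first 10 i.i.d. observations
# (theory: H_10, about 2.93).
early = sum(count_records([random.random() for _ in range(10)])
            for _ in range(trials)) / trials
# Mean number of NEW records in observations 11-100 of a longer
# series (theory: H_100 - H_10, about 2.26, despite nine times as
# many observations -- the record rate is falling, not constant).
late = 0.0
for _ in range(trials):
    s = [random.random() for _ in range(100)]
    late += count_records(s) - count_records(s[:10])
late /= trials
print(f"records per observation, first 10: {early / 10:.3f}")
print(f"records per observation, next 90:  {late / 90:.3f}")
```
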

Florentin Smarandache explained that neutrosophism derived from paradoxism, an arts protest against the communist regime in Romania. https://digitalrepository.unm.edu/math_fsp/163/

It is unclear whether neutrosophism is likewise performance art. Here are some of his deep philosophical musings:

“As in the Primitive Society, the modern society is making for MATRIARCHATE –

the woman leads in the industrialized societies.

From an authoritarian PATRIARCHATE in the Slavery and Feudalism towards a more democratic MATRIARCHATE at present.

The sexuality plays an immense role in the manipulation of men by women, because the women “have monopoly of the sex” as was justifying to me an American friend kept by his wife henpecked!

A cyclic social development.

The woman becomes the center of the society’s cell, the family.

The sexual pleasure influences different life circles, from the low class people to the leading spheres. Freud was right…

One uses women in espionage, in influencing politicians’ decisions, in attracting businessmen – by their feminine charms, which obtain faster results than their male proponents.

The women have more rights than men in western societies (in divorce trials). “

http://eusa-riddled.blogspot.com/2014/06/full-metal-zizek-we-read-self-promoting.html

http://1.bp.blogspot.com/-PcHxm0WxJN0/U6YL7Db_NBI/AAAAAAAAPKU/qDhlBQ0ooH4/s1600/neutro.JPG

A sexy neutrosophic conclusion to be sure.

Certainly, I have lived through stories like this… in one case, the publisher refused to publish my paper for two years despite two positive reports…

Even renowned publishers publish articles on the strength of an author’s “high” h-index built on self-citations (~50% of references), despite the author being only 10 years into scientific research…

Also, the reviewers are sometimes incompetent, but for the editor it is enough to be able to write two pages in English.

The colours, the writing in literary English despite the scientific content…

We see disasters sometimes.

We provided a critique of some neutrosophic statistical methods. Please see

https://ieeexplore.ieee.org/document/9893805

In my view much of the neutrosophic approach is nonsense.

I am confident that “parts of it are excellent”.

“Citation needed.” So –

https://en.wikipedia.org/wiki/Curate%27s_egg

Somehow this reached me in the form “parts of it are very nice” which is apparently somewhat adrift from the main line, but holds my affection.