Guest Post - When Editors Fail to Scientifically Review Papers: A Case Study from Scientific Reports (Springer Nature)
Published on: 5 August, 2025
The article under criticism today was published on 1 July 2025 in Scientific Reports (Clarivate’s SCIE Impact Factor 2024: 3.9; indexed in Scopus (Q1), DOAJ, PubMed, etc.), a journal published by Springer Nature. The title of the article is "Factors influencing the intention of textile and garment SMEs to adopt digital technologies and its impact on performance"; it was authored by scholars from Indonesia and South Korea. I shall mention only a few of the significant issues that reflect the complete editorial failure:
First, the article is riddled with unacceptable English-language errors. Here are just a few instances:
1. "Hypothesis 3 (H3): Social Influence (SI) have [sic] a positive effect on Behavioural Intention (BI) to use digital technology in textile and garment SMEs"; and "Hypothesis 4 (H4): Facilitating Condition (FC) have [sic] a positive effect on Actual Behaviour to Use (AU) digital technology in textile and garment SMEs". The remaining hypotheses, H5 through H10, show the same lack of subject-verb agreement (pp. 7-8).
2. Look at Table 5 and Figure 4 (pp. 16-17). Do you know what "betha" means? The authors use this word repeatedly, so it is not a one-off typo; rather, they apparently do not know that the statistical term for the slope coefficient in regression analysis is "beta".
3. In Table 5, look at the result for H9: it reads "Rrejected." What an editorial failure!
Second, on page 9, under the heading “Research method” and the subheading “Step of conducting the research” (singular — apparently it’s single-step research. LOL!), the authors write:
"This study employed a mixed-methods approach, integrating both qualitative and quantitative methodologies. Initially, this research used the qualitative approach to develop a thorough understanding of the research topic, as outlined. It was instrumental in identifying the research problem, constructing the research model, formulating hypotheses, and creating measurement items. This research used the PRISMA framework, a robust and widely accepted tool, to find a good research model based on the literature review."
The authors further endorse the use of a qualitative method in Figure 3 (p. 10). I wonder why the authors think PRISMA is a qualitative method. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) is a reporting guideline: a standardized set of recommendations and a checklist for how to report the methods and results of a systematic review. PRISMA can be used when synthesizing either quantitative or qualitative studies; it is not a qualitative method at all. Yet in this paper, the authors claim to have conducted the literature review and developed the hypotheses, conceptual model, and measurement items using a “qualitative method,” i.e., PRISMA.
Third, turn your attention to the hypothesis-testing results in Table 5 (p. 16), specifically H2, H8, and H9. For H2, the authors report a t-statistic of 1.698 (p = .053); for H8, t = 1.389 (p = .082); and for H9, t = 1.389 (p = .082). Based on these reported results, they accepted H2 and H8 but rejected H9. Readers can see that the t-statistics and p-values for H8 and H9 are identical, yet the authors accepted one and rejected the other. Stranger still, on page 14, in the section "Result of Hypothesis Testing," the running text accepts all three: H2, H8, and H9.
I recalculated the p-values for the reported t-statistics using a two-tailed test at the 0.05 significance level and found that for H2, t = 1.698 gives p = .092, and for both H8 and H9, t = 1.389 gives p = .167. Readers may observe that the published p-values appear to have been adjusted to make the results look significant. The authors also forgot that PLS-PM judges significance by the t-statistic itself, which must exceed 1.96 (the two-tailed 5% critical value); neither 1.698 nor 1.389 clears that bar. Misreporting p-values to manufacture significance is a form of the practice commonly known as p-hacking.
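Readers can verify this recalculation themselves. The sketch below uses only the Python standard library and the standard-normal approximation that underlies the familiar 1.96 critical value (the paper does not state the degrees of freedom, so exact values under a finite-df t distribution will differ slightly):

```python
from math import erf, sqrt

def two_tailed_p(t: float) -> float:
    """Two-tailed p-value for a test statistic |t| under the
    standard-normal approximation (the basis of the 1.96 cutoff)."""
    phi = 0.5 * (1.0 + erf(t / sqrt(2.0)))  # standard-normal CDF at |t|
    return 2.0 * (1.0 - phi)

# t-statistics as reported in the paper's Table 5
for label, t in [("H2", 1.698), ("H8", 1.389), ("H9", 1.389)]:
    p = two_tailed_p(t)
    verdict = "significant" if t > 1.96 else "NOT significant"
    print(f"{label}: t = {t:.3f}, p = {p:.3f} -> {verdict} at the 5% level")
```

Under this approximation, t = 1.698 yields p of about .090 and t = 1.389 about .165; with a finite-df t distribution the values rise slightly toward the .092 and .167 reported above. Either way, all three p-values sit well above .05, and none of the t-statistics exceeds 1.96.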
In this article, I have provided clear evidence of a complete editorial failure by the editors of Scientific Reports and its publisher, Springer Nature. The editors and reviewers failed to notice the mislabelled methodology, the incorrectly reported analysis results, and the apparent p-hacking, and failed to send the manuscript for basic copyediting. Would our readers still want to publish their work in “Scientific Reports”? I doubt any serious researcher would agree after reading this piece.
Don’t forget to leave a comment and share this post.
About the Author: This investigation was conducted by Dr. Ch. Mahmood Anwar (independent critic), who is known for editing, statistical analysis, criticism of published business research, research methods, theory development, new psychological construct development and validation, etc. The contact details of the author are given "here".
"Scholarly Criticism" was launched to serve as a watchdog over business research published in so-called high-quality Clarivate/Scopus-indexed business journals. At present, this domain is unattended: no one is working to keep the authors and publishers who conduct and publish erroneous business research on the right track. To fill this gap, our organization serves as a key stakeholder in business-research publishing.
For invited lectures, trainings, interviews, and seminars, "Scholarly Criticism" can be contacted at Attention-Required@proton.me
Disclaimer: The content published on this website is for educational and informational purposes only. We are not against authors or journals; we only strive to highlight unethical and unscientific research-reporting and publishing practices. We hope our efforts will significantly contribute to improving the quality control applied by business journals.