Heliyon by Elsevier/CellPress Publishes Anything You Submit: A Golden Opportunity to Publish in a Clarivate/Scopus-Indexed Journal for PhD, Promotion, and Tenure
Based on the abstract and core findings, we present in detail the theoretical, methodological, and analytical issues with this study:
Abstract
In the abstract, the authors claim that Structural Equation Modeling (SEM) in AMOS was used to analyze the study relationships (p. 1). This is simply wrong. The authors used path analysis, which models only observed (manifest) variables; full SEM, in contrast, also estimates latent variables through a measurement model. The model reported on pages 5 and 7 is clearly not an SEM model, so the claim of using SEM is flawed on its face.
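To make the distinction concrete, here is a minimal sketch (simulated data, hypothetical variable names drawn loosely from the study's constructs). In path analysis, every coefficient is a regression among observed scores; a full SEM would additionally specify latent factors measured by multiple indicators:

```python
# Path analysis vs. SEM: a minimal sketch with simulated data.
# Variable names (support, crafting, engagement) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
support = rng.normal(size=n)                      # observed IV
crafting = 0.4 * support + rng.normal(size=n)     # observed mediator
engagement = 0.5 * crafting + rng.normal(size=n)  # observed DV

def ols_slope(x, y):
    """Slope of y regressed on x via ordinary least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Each path coefficient comes straight from observed scores -- no latent
# variables, no measurement model. That is path analysis, not full SEM.
a = ols_slope(support, crafting)
b = ols_slope(crafting, engagement)
print(round(a, 2), round(b, 2))

# A full SEM would instead specify a measurement model, e.g. (lavaan-style):
#   Support  =~ s1 + s2 + s3     # latent factor with observed indicators
#   Crafting =~ c1 + c2 + c3
#   Crafting ~ Support           # structural path among latent variables
```

Because the reported model contains no measurement model, the "SEM in AMOS" claim cannot be correct.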
Introduction
The paper does not present a problem statement, research questions, or a theoretical gap at all.
Literature Review
The paper reports no theoretical underpinnings. Readers cannot tell how many variables were reviewed and included in the model. Everything is messy: the theoretical linkages among the independent variables (IVs), mediating variables (MVs), and dependent variable (DV) are never established. Although the authors state in the abstract (p. 1) that "the hypothesis has been proven correct," and write "Organizational support was hypothesized to be…" (p. 7) and "A hypothesis on the relationship between boredom and mental health was made…" (p. 8), strangely, no hypotheses are actually formulated in the literature review. The study model/framework is also not presented at the end of the literature review.
Methods
The Methods section opens with the statement "The purpose of this study chooses to experimentally assess the model…" (poor English pervades the paper). The authors, editors, and reviewers appear unaware that the study used a survey design, which is not an experimental method.
The study model/framework is placed in the Methods section under the strange label "Model framework" (Fig. 2; p. 5); we do not know what "model framework" is supposed to mean. Only from the Methods section do we learn that the model includes 7 variables.
The authors report "A total of 556 lecturers and staff from a public sector university became the research subjects" (p. 4). We wonder how results from a single-university sample could be generalized; the study has limited generalizability and external validity.
The study provides no information on non-response bias (NRB), response bias (RB), or common method bias (CMB). It mentions "N = 556 was the total number of questionnaires used in this study…" (p. 5), but remains entirely silent on how many subjects were contacted, how many responded at first contact, and how many responded after follow-ups. Because this was cross-sectional survey research, CMB cannot be ignored; strangely, we find no mention of it anywhere.
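Even a rudimentary CMB screen would have been easy to run and report. A common (if coarse) first check is Harman's single-factor test: if one factor explains most of the item covariance, method bias is a plausible concern. The sketch below uses simulated, uncorrelated data and an illustrative 50% rule of thumb; the study reports nothing of the kind:

```python
# Hedged sketch: Harman's single-factor test as a coarse CMB screen.
# Data and 20-item layout are simulated stand-ins; the study reports no such check.
import numpy as np

rng = np.random.default_rng(1)
items = rng.normal(size=(556, 20))           # stand-in: 556 responses x 20 scale items
corr = np.corrcoef(items, rowvar=False)      # item correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]     # eigenvalues, largest first
first_factor_share = eigvals[0] / eigvals.sum()

# Rule of thumb: if a single factor accounts for most (e.g. > 50%) of the
# total variance, common method bias is a plausible concern.
print(f"first factor explains {first_factor_share:.1%} of total variance")
```

Reporting this one number, alongside response and follow-up counts, is a minimal expectation for cross-sectional survey research.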
Let us turn to the instruments. The study completely ignores the sources of its scales: we do not know whether they were adopted, adapted, or developed. Although the study claims to have conducted SEM, no factor loadings are reported. We searched the web for the sample items of the various scales given on page 6 and found nothing. Without establishing the credibility and validity of the psychometric instruments, how can the study's results be trusted?
Results
Strangely, the hypothesis-testing results are reported in this way: "The influence of organizational support on job crafting is β 0.14, t-values > 2.57, and p-values 0.001"; "organizational support on boredom β − 0.27 t-values > 1.96, p-values < 0.005" (p. 7). All results are reported in this fashion, which is strange, ignorant, and wrong. The authors, editors, and reviewers do not know that AMOS does not calculate t-statistics; the study should have reported path coefficients, critical ratios (C.R.s), standard errors (S.E.s), and significance values. In addition, the authors reported t-values > 2.57 for some hypotheses and t-values > 1.96 for others. This is also wrong, because researchers select one significance level (alpha = 0.05 / C.R. > 1.96) and apply it consistently to all paths in the model. A similar blunder appears in Table 2 (p. 8): the authors label it "Path results of direct impact and indirect impact," yet indirect effects (mediation analysis) were never conducted or reported. Finally, in the "Model Description of framework" in Annex-1 (p. 12), the code for Work Engagement, i.e., SE_AVG, does not appear in the study framework.
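The missing mediation analysis is not exotic: a percentile-bootstrap confidence interval for the indirect effect a*b is a standard approach. The sketch below uses simulated data and hypothetical variable names seeded loosely from the paper's reported coefficients; it shows what the table labeled "indirect impact" should actually have contained:

```python
# Hedged sketch of a percentile-bootstrap test for an indirect effect (a*b).
# Simulated data; variable names and effect sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n = 556
support = rng.normal(size=n)
boredom = -0.27 * support + rng.normal(size=n)        # a path (IV -> mediator)
mental_health = -0.30 * boredom + rng.normal(size=n)  # b path (mediator -> DV)

def slope(x, y):
    """OLS slope of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                       # resample cases with replacement
    a = slope(support[idx], boredom[idx])
    b = slope(boredom[idx], mental_health[idx])
    boot.append(a * b)

lo, hi = np.percentile(boot, [2.5, 97.5])
# The indirect effect is supported only if the 95% CI excludes zero.
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

Reporting a bootstrap CI for each indirect path, rather than relabeling direct paths as "indirect impact," is the minimum a reviewer should have demanded.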
The major issues highlighted above should never have survived peer review; they demonstrate a complete editorial and peer-review failure. We provide this as undeniable evidence for our stance that Heliyon publishes articles without conducting well-informed peer review. We believe this article deserves to be retracted.
Don't forget to leave your comments if you find this article interesting.
"Scholarly Criticism" is launched to serve as a watchdog on Business Research published in so-called Clarivate/Scopus indexed high quality Business Journals. It has been observed that, currently, this domain is empty and no one is serving to keep authors and publishers of journals on the right track who are conducting and publishing erroneous Business Research. To fill this gap, our organization serves as a key stakeholder of Business Research Publishing activities.
For invited lectures, trainings, interviews, and seminars, "Scholarly Criticism" can be contacted at Attention-Required@proton.me
Disclaimer: The content published on this website is for educational and informational purposes only. We are not against authors or journals but we only strive to highlight unethical and unscientific research reporting and publishing practices. We hope our efforts will significantly contribute to improving the quality control applied by Business Journals.