Elite Journal Presents a Flawed Research Model: An Instance of Editorial Failure
Published on: 1 Jan, 2025
Did you know that the research model is the backbone of any research paper, because it drives the research methodology and the analysis technique? An exact depiction of the research model is essential: it should clearly show the hypothesized relationships proposed in the study. Today we shall analyze the article titled "Loaded with knowledge, yet green with envy: leader–member exchange comparison and coworkers-directed knowledge hiding behavior", published in the Journal of Knowledge Management (Emerald, UK). The journal is indexed in, inter alia, the Social Sciences Citation Index (SSCI) [IF: 6.60 for 2023], Scopus (Q1), Cabell's Management Directory, Current Contents/Social and Behavioral Sciences (CC/S&BS), and PsycINFO. The article was published in 2020 and has so far received 102 citations according to Google Scholar.
In this article the authors proposed the following model (p. 1658):
On page 1661, the authors state that they used PROCESS macro Model 9 to test H3, H4 and H5. On the surface, the depicted model seems right, but only to a novice; experienced researchers will immediately detect that the model is flawed and does not correctly represent either the proposed hypotheses of the study or PROCESS macro Model 9. Below we present the original PROCESS macro Model 9 as presented by Hayes (2013):
As shown above, PROCESS Model 9 is a moderated mediation model in which two moderators act directly on the X-M path. At this point our readers may wonder: did the authors depict the wrong model, or did they wrongly use PROCESS macro Model 9 to test H3, H4 and H5?
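Before answering, it helps to have the statistical form of Model 9 in front of us. In Hayes's standard notation (X the independent variable, M the mediator, Y the outcome, and W and Z the two moderators), Model 9 estimates:

$$M = i_M + a_1 X + a_2 W + a_3 Z + a_4 XW + a_5 XZ + e_M$$
$$Y = i_Y + c'X + bM + e_Y$$

so the indirect effect of X on Y through M is conditional on both W and Z, namely $(a_1 + a_4 W + a_5 Z)\,b$.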
The answer is interesting. Before giving it, we would like to present PROCESS macro Model 11 below:
Our readers will be amazed to learn that the authors did, in fact, use PROCESS macro Model 9 correctly to test H3, H4 and H5, but wrongly depicted PROCESS macro Model 11 as their proposed study model. We think the authors were confused because they were unable to spatially visualize the modelled relationships, and therefore depicted PROCESS macro Model 11 in lieu of PROCESS macro Model 9. However, any professional who understands the different kinds of moderated mediation models can easily detect the difference between PROCESS macro Model 9 and PROCESS macro Model 11. In Model 9, two moderators connect directly to the IV-MV path, whereas in Model 11, one moderator connects directly to the IV-MV path while the other moderator moderates the effect of the first moderator. The Model 9 equations are estimated with 7 regression parameters (slope coefficients), whereas the Model 11 equations require 9. The two are therefore distinct moderated mediation models.
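In equation form (same notation as above), Model 11's mediator equation adds the WZ and XWZ terms:

$$M = i_M + a_1 X + a_2 W + a_3 Z + a_4 XW + a_5 XZ + a_6 WZ + a_7 XWZ + e_M$$
$$Y = i_Y + c'X + bM + e_Y$$

Counting the slope coefficients makes the difference plain: Model 9 estimates $a_1$ through $a_5$ plus $c'$ and $b$ (7), while Model 11 estimates $a_1$ through $a_7$ plus $c'$ and $b$ (9).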
Looking at the proposed hypotheses, we conclude that the authors wrongly drew PROCESS macro Model 11 when, in fact, they should have drawn PROCESS macro Model 9 as their study model. We are left to wonder whether, along with the authors, the editors and reviewers were also confused, since they failed to identify and rectify this serious mistake.
Although Weng et al. (2020) contains many other errors, we will not go into all of them here. We would, however, like to mention the authors' dishonest reporting of psychometrics. For instance, the authors state on page 1659:
“We followed the work of Wu et al. (2015) and used the goal interdependence scale developed by Chen and Tjosvold (2006) to measure cooperative and competitive goal interdependence…”
The problem lies in the above statement. Chen and Tjosvold (2006) originally developed the scales to measure cooperative goals and competitive goals in the employee-superordinate relationship; Weng et al. (2020), however, tapped employee-coworker goals by changing the word "manager" to "co-workers". A scale treated this way falls in the category of an "adapted scale", not an "adopted" or directly "used" scale, yet Weng et al. (2020) dishonestly presented it as adopted and directly used. Once the wording of an established scale is changed, the authors should conduct an exploratory factor analysis (EFA) and fulfil its complete reporting formalities; nevertheless, Weng et al. (2020) neither conducted nor provided EFA results. We find similar treatment of the other reported scales.
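To illustrate the kind of reporting we would expect for an adapted scale, here is a minimal sketch of such an EFA, assuming the item-level responses were available in a data frame; the file name, column labels and two-factor choice are hypothetical and only show the standard KMO/Bartlett/loadings reporting, not the authors' actual data:

```python
# Minimal EFA sketch for adapted scale items (hypothetical data and columns).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

# Hypothetical file holding the reworded cooperative/competitive goal items.
items = pd.read_csv("goal_interdependence_items.csv")

# Sampling adequacy and sphericity, which should be reported before the EFA.
chi_square, p_value = calculate_bartlett_sphericity(items)
_, kmo_overall = calculate_kmo(items)
print(f"Bartlett chi-square = {chi_square:.2f}, p = {p_value:.4f}")
print(f"KMO (overall) = {kmo_overall:.2f}")

# Two expected factors (cooperative vs. competitive goals), oblique rotation.
efa = FactorAnalyzer(n_factors=2, rotation="promax")
efa.fit(items)

# Pattern loadings and explained variance, the core of a complete EFA report.
loadings = pd.DataFrame(efa.loadings_, index=items.columns,
                        columns=["Cooperative", "Competitive"])
print(loadings.round(2))
print("Proportion of variance explained:", efa.get_factor_variance()[1].round(2))
```

Reporting these figures (plus reliabilities) is what distinguishes a properly validated adapted scale from one that is merely asserted to work after its wording has been changed.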
Based on the above presentation and findings, we conclude that Weng et al. (2020) presented a completely wrong and misleading research model. In addition, the article is an instance of dishonest psychometric reporting. We wonder why the editors and reviewers of the Journal of Knowledge Management failed to identify and rectify these crucial mistakes.
If you like our criticism, please comment and share your thoughts.
"Scholarly Criticism" is launched to serve as a watchdog on Business Research published in so-called Clarivate/Scopus indexed high quality Business Journals. It has been observed that, currently, this domain is empty and no one is serving to keep authors and publishers of journals on the right track who are conducting and publishing erroneous Business Research. To fill this gap, our organization serves as a key stakeholder of Business Research Publishing activities.
For invited lectures, trainings, interviews, and seminars, "Scholarly Criticism" can be contacted at Attention-Required@proton.me
Disclaimer: The content published on this website is for educational and informational purposes only. We are not against authors or journals; we strive only to highlight unethical and unscientific research reporting and publishing practices. We hope our efforts will significantly contribute to improving the quality control applied by business journals.