This article is full of blunders; we are going to identify only a few of them here. Let's start dissecting it!
Let's dive straight into the "Methods" section (p. 754). The non-response rate (NRR) for this study was 84.1%, which means that only about 16% of those approached actually responded. A response rate this low requires proper justification; merely stating that previous studies obtained similar response rates is not a scientifically valid defense. Surprisingly, the study also provides no information on common method bias (CMB). CMB is a subtle phenomenon that researchers often fail to anticipate: when all data come from a single source and method (for example, every respondent answering the same self-report survey in one sitting), the shared measurement context can inflate the observed relationships among variables. Did the answers about job satisfaction really relate to the answers about productivity, or did respondents simply answer everything the same way out of fatigue or a desire to look good? Unless CMB is assessed and reported, the results remain open to this alternative explanation, and the study's conclusions look shaky.
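For readers unfamiliar with what such a check looks like in practice, here is a minimal sketch of Harman's single-factor test, one common (if blunt) CMB diagnostic. The DataFrame `items`, the item names, and the 50% threshold are illustrative assumptions on our part, not material from the paper.

```python
# Harman's single-factor test: if one unrotated factor explains most of the
# variance across all survey items, common method bias is a plausible concern.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical item-level survey responses (rows = respondents, columns = items).
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.normal(size=(250, 20)),
                     columns=[f"item_{i}" for i in range(1, 21)])

# Standardize the items and extract unrotated components from all of them at once.
standardized = (items - items.mean()) / items.std()
pca = PCA().fit(standardized)

first_factor_share = pca.explained_variance_ratio_[0]
print(f"Variance explained by the first factor: {first_factor_share:.1%}")

# Common rule of thumb: if a single factor accounts for more than ~50% of the
# total variance, common method bias deserves serious attention.
if first_factor_share > 0.50:
    print("Warning: possible common method bias.")
```

This test is only a first screen; a fuller treatment would also report a marker-variable or common-latent-factor check, but even this minimal diagnostic is absent from the paper.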
This study used four variables in its framework, i.e., internal ethical orientation toward employee-centeredness (EC), external ethical orientation toward people-centeredness (PC), internal ethical orientation toward organizational inclusiveness (OI), and external ethical concern for territorial growth and development (TD). Although the authors did not state which category of scale was used (adopted, adapted, or developed), from the text we conclude that all four scales tapping these variables were "developed" scales. We therefore wonder why the authors did not follow the standard scientific procedure for creating new scales; the paper cunningly presents them as if they had been adopted. The study does not report standard psychometric requirements such as translation validity, factor analysis results (factor loadings, dimensionality, the Kaiser-Meyer-Olkin (KMO) test statistic, Bartlett's test of sphericity), discriminant validity, convergent validity, test-retest reliability, and so on. In the absence of this essential psychometric reporting, the study's scales and results cannot be considered reliable or valid and are, therefore, unscientific.
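To make concrete what this "standard psychometric reporting" involves for a newly developed scale, here is a minimal sketch using Python's factor_analyzer package. The DataFrame `items`, the number of factors, and the cut-offs are illustrative assumptions, not figures from the study.

```python
# Sampling adequacy, sphericity, and factor loadings for a developed scale.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                              calculate_kmo)

# Hypothetical responses to 16 items intended to tap four constructs.
rng = np.random.default_rng(1)
items = pd.DataFrame(rng.normal(size=(300, 16)),
                     columns=[f"q{i}" for i in range(1, 17)])

# Bartlett's test of sphericity: the correlation matrix should differ from identity.
chi_square, p_value = calculate_bartlett_sphericity(items)
print(f"Bartlett's test: chi2 = {chi_square:.2f}, p = {p_value:.4f}")

# Kaiser-Meyer-Olkin measure of sampling adequacy (values above ~0.6 are acceptable).
kmo_per_item, kmo_overall = calculate_kmo(items)
print(f"Overall KMO: {kmo_overall:.3f}")

# Exploratory factor analysis: loadings and dimensionality for the four scales.
fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(items)
print(pd.DataFrame(fa.loadings_, index=items.columns).round(2))
```

None of these routine statistics, which take only a few lines of code to produce, appear anywhere in the paper.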
Although not stated explicitly in the paper, the authors tested their conceptual model by running Andrew F. Hayes' PROCESS model 5. Algebraically, this model combines a mediator equation, "M = a0 + a1X", with an outcome equation, "Y = b0 + b1M + c1'X + c2'W + c3'XW"; substituting the first into the second gives "Y = b0 + b1(a0 + a1X) + c1'X + c2'W + c3'XW". In the conceptual model presented by the authors (p. 755), X (the independent variable) is internal ethical orientation toward employee-centeredness (EC), whereas W (the moderating variable) is external ethical concern for territorial growth and development (TD).
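For clarity, here is a minimal sketch of what estimating model 5 looks like as two ordinary least squares regressions in Python's statsmodels. The column names follow the study's abbreviations (EC as X, OI as M, TD as W); the outcome is labelled Y (by elimination presumably PC, although the paper's tables should confirm this), and the data are simulated for illustration only.

```python
# PROCESS model 5 as two regressions: a mediator equation plus an outcome
# equation in which the direct effect of X on Y is moderated by W.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: X = EC, M = OI, W = TD, Y = outcome variable.
rng = np.random.default_rng(2)
n = 300
EC = rng.normal(size=n)
TD = rng.normal(size=n)
OI = 0.5 * EC + rng.normal(size=n)                       # mediator driven by X
Y = 0.4 * OI + 0.3 * EC + 0.2 * TD + 0.25 * EC * TD + rng.normal(size=n)
df = pd.DataFrame({"EC": EC, "OI": OI, "TD": TD, "Y": Y})

# Mediator equation: OI = a0 + a1*EC
mediator_model = smf.ols("OI ~ EC", data=df).fit()

# Outcome equation: Y = b0 + b1*OI + c1'*EC + c2'*TD + c3'*EC*TD
# Note: the interaction that model 5 requires is EC*TD, not EC*OI.
outcome_model = smf.ols("Y ~ OI + EC + TD + EC:TD", data=df).fit()

print(mediator_model.params)
print(outcome_model.params)
```

The key point, which matters for the next observation, is that the only product term in model 5 is between the independent variable and the moderator.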
Now look at the hypothesis-testing results presented in Table 2 and Table 3 (pp. 759–760). The authors copy-pasted the results produced by their statistical software. In both tables, one can observe that the authors calculated the interaction between EC and OI and entered the interaction term EC*OI into the regression model. However, OI is not the moderator but the mediator; the proposed moderator in the study model is TD. Therefore, the interaction between EC and TD (EC*TD) should have been calculated and entered into the regression model, exactly as the sketch above makes explicit. And this was no one-off slip: the interaction term EC*OI appears repeatedly in both tables.
Surprisingly, we do not know who filled out the study questionnaires. Were they employees, owners, or key informants? The study entirely overlooks this important aspect.
Another very important observation is the absence of a correlation matrix. Reporting regression results without a correlation matrix is generally considered unprofessional and incomplete in academic and professional settings. Reporting correlations not only helps readers understand the bivariate relationships but also flags potential multicollinearity among the study variables.
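Producing this information is trivial with modern software, which makes its absence all the more striking. Here is a minimal sketch, again on simulated data with the study's variable abbreviations, of the correlation matrix and variance inflation factors (VIFs) a reader would expect to see; the thresholds mentioned in the comments are conventional rules of thumb, not values from the paper.

```python
# Correlation matrix and variance inflation factors for the study variables.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# Simulated stand-in data for EC, PC, OI, and TD.
rng = np.random.default_rng(3)
n = 300
EC = rng.normal(size=n)
TD = 0.3 * EC + rng.normal(size=n)
OI = 0.5 * EC + rng.normal(size=n)
PC = 0.4 * OI + 0.2 * TD + rng.normal(size=n)
df = pd.DataFrame({"EC": EC, "PC": PC, "OI": OI, "TD": TD})

# Bivariate correlations among all study variables.
print(df.corr().round(2))

# VIFs for the predictors; values above ~5 (or 10) signal multicollinearity.
predictors = add_constant(df[["EC", "OI", "TD"]])
vifs = {col: variance_inflation_factor(predictors.values, i)
        for i, col in enumerate(predictors.columns) if col != "const"}
print(vifs)
```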
Our analysis clearly reveals that the editors, reviewers, and publisher of the Journal of Business Ethics (JBE) completely failed to exercise standard quality control. Publishing an article with such serious problems will only damage JBE's reputation. We have crystal-clear grounds to include JBE in our list of journals with questionable or predatory practices.
If you are flabbergasted to see such bloopers in an FT 50-listed journal, leave a comment and share your thoughts.
"Scholarly Criticism" is launched to serve as a watchdog on Business Research published in so-called Clarivate/Scopus indexed high quality Business Journals. It has been observed that, currently, this domain is empty and no one is serving to keep authors and publishers of journals on the right track who are conducting and publishing erroneous Business Research. To fill this gap, our organization serves as a key stakeholder of Business Research Publishing activities.
For invited lectures, trainings, interviews, and seminars, "Scholarly Criticism" can be contacted at Attention-Required@proton.me
Disclaimer: The content published on this website is for educational and informational purposes only. We are not against authors or journals; we only strive to highlight unethical and unscientific research reporting and publishing practices. We hope our efforts will significantly contribute to improving the quality control applied by business journals.