SAGE Open Publishes False Hypothesis-Testing Results: An Easy-to-Publish Outlet
Published on: 21 July, 2024
SAGE Open, a mega-journal that publishes anything within the domain of social science, is indexed in Clarivate’s Social Science Citation Index (SSCI), Scopus (Q1), and the Directory of Open Access Journals (DOAJ). The article processing charge (APC) for this journal is $1,600, a hefty amount of money. The article under investigation here was published in SAGE Open in the April-June 2024 issue, under the title “Perceived Behavioral Factors and Individual Investor Stock Market Investment Decision: Multigroup Analysis and Major Stock Markets Perspectives”.
Although this article contains many flaws, we shall focus on a major one: the reporting of erroneous hypothesis-testing results. In this study the authors used Partial Least Squares – Structural Equation Modeling (PLS-SEM) [which is in fact a Path Modeling (PM) technique]. We noticed that the authors of this study are not familiar with the basic concept of PLS-PM, because they state on page 8:
For data analysis, two basic techniques, partial least squares and structural equation modelling, were used using SmartPLS (Shiau et al., 2019).
The statement shows that the authors thought they used two (2) different techniques to analyze the data, i.e., 1) partial least squares (PLS) and 2) structural equation modeling (SEM). The authors do not realize that PLS-SEM (in fact, PLS-PM) is one (1) single technique.
In this study, five (5) hypotheses were proposed, conjecturing five relationships between five independent variables (IVs) and one dependent variable (DV), as shown in the diagram below:
Let us look at Table 6, reported on page 9, which presents the hypothesis-testing results. We will dissect it in detail to show our readers how this table creates a serious problem that makes the whole study questionable.
Table 6 reflects that all five (5) study hypotheses were supported. However, this is not the case. We will show that in fact three (3) hypotheses were wrongly supported, because the reported statistics do not support these conclusions.
Leaving the test significance level at its default of 0.05 (two-tailed), a t-statistic must be equal to or greater than 1.96 for a hypothesis to be supported. Three of the reported t-statistics fall below this threshold:

The reported t-statistic for the Overconfidence-Investment relationship (OC-ID), hypothesis one, is 0.047. Since 0.047 is below 1.96, this hypothesis cannot be supported.

The reported t-statistic for the Anchoring-Investment relationship (AN-ID), hypothesis four, is 0.061. Since 0.061 is below 1.96, this hypothesis cannot be supported.

The reported t-statistic for the Herding-Investment relationship (HD-ID), hypothesis five, is 1.710. Since 1.710 is below 1.96, this hypothesis cannot be supported.

A minimal check of these values against the threshold is sketched below.
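To make the decision rule concrete, here is a minimal sketch in Python that checks the reported t-statistics against the critical value. The t-values come from Table 6 of the article; using the normal approximation for the two-tailed critical value is our assumption.

```python
from scipy.stats import norm

# Two-tailed critical value at the default alpha = 0.05, using the
# normal approximation commonly applied to large bootstrap samples.
alpha = 0.05
critical = norm.ppf(1 - alpha / 2)  # approximately 1.96

# t-statistics as reported in Table 6 of the article.
reported_t = {"H1 (OC-ID)": 0.047, "H4 (AN-ID)": 0.061, "H5 (HD-ID)": 1.710}

for hypothesis, t in reported_t.items():
    verdict = "supported" if abs(t) >= critical else "not supported"
    print(f"{hypothesis}: t = {t:.3f} -> {verdict}")
```

All three checks print “not supported”.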
Next, we show that the reported t-statistic figures are themselves wrong. In PLS-SEM bootstrapping, the t-statistic is simply the path coefficient divided by its bootstrap standard error. For the first hypothesis, the t-statistic should be 0.114; for the fourth hypothesis, 0.672; and for the fifth hypothesis, 1.661. A sketch of this calculation follows below.
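For readers who wish to retrace this step, the sketch below shows the arithmetic. The path coefficients are taken from the article, but the standard errors are hypothetical placeholders, back-calculated purely to illustrate how the t-statistics arise; this is an illustration of the calculation, not an independent verification.

```python
# In PLS-SEM bootstrapping, t = path coefficient / bootstrap standard error.
# Coefficients are from the article; the standard errors below are
# hypothetical placeholders chosen only to illustrate the arithmetic.
paths = {
    "H1 (OC-ID)": (0.008, 0.070),  # (coefficient, assumed SE)
    "H4 (AN-ID)": (0.041, 0.061),
    "H5 (HD-ID)": (0.098, 0.059),
}

for hypothesis, (beta, se) in paths.items():
    print(f"{hypothesis}: t = {beta / se:.3f}")
```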
Now we calculate the significance (p) values for the three (3) affected hypotheses. For hypothesis one, the significance value should be 0.962; for hypothesis four, 0.545; and for the last hypothesis, 0.088. We also want to mention that the significance value for hypothesis two is wrongly reported as well; it should be 0.024. These significance values are consistent with the very weak path coefficients, i.e., 0.008 (for H1), 0.041 (for H4), and 0.098 (for H5).
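A two-tailed significance value follows directly from a t-statistic. The sketch below uses the normal approximation; the exact figure depends on the reference distribution used by the software, so small differences are expected.

```python
from scipy.stats import norm

def two_tailed_p(t: float) -> float:
    """Two-tailed p-value for a t-statistic under the normal approximation."""
    return 2 * (1 - norm.cdf(abs(t)))

# Illustrative values: the 1.96 threshold maps to p = 0.05, and a
# t-statistic of 1.710 maps to a p-value of roughly 0.087.
for t in (1.96, 1.710):
    print(f"t = {t:.3f} -> p = {two_tailed_p(t):.3f}")
```

For example, a t-statistic of 1.710 corresponds to a p-value of roughly 0.087 to 0.088, comfortably above the 0.05 cutoff.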
We wonder why the editors failed to identify such glaring mistakes at the desk stage, and why the reviewers were unable to catch them. We conclude that this article either was not peer reviewed at all or received only a cursory and uninformed review. Whatever the case may be, it is very hard to believe and follow the science published in these journals, because many academicians and researchers are not statisticians and blindly believe whatever is published in them. Researchers perceive journals indexed in Clarivate’s SSCI or Scopus to be credible, but we have shown the contrary: these journals publish false and erroneous science. We recommend that this article be retracted immediately.
If you want to share your thoughts, please do comment below!
Update: On 20 July, 2024, Sara Parker (Managing Editor, SAGE Open) contacted us and assured us that an investigation and corrective action would be initiated. We look forward to an update and to the corrective action.
"Scholarly Criticism" is launched to serve as a watchdog on Business Research published in so-called Clarivate/Scopus indexed high quality Business Journals. It has been observed that, currently, this domain is empty and no one is serving to keep authors and publishers of journals on the right track who are conducting and publishing erroneous Business Research. To fill this gap, our organization serves as a key stakeholder of Business Research Publishing activities.
For invited lectures, training sessions, interviews, and seminars, "Scholarly Criticism" can be contacted at Attention-Required@proton.me
Disclaimer: The content published on this website is for educational and informational purposes only. We are not against authors or journals; we strive only to highlight unethical and unscientific research reporting and publishing practices. We hope our efforts will contribute significantly to improving the quality control applied by Business Journals.