The annual SSCI impact factors for journals were published early in the fourth week of June, and the chattering has already begun among those who follow these things.
As readers know, I have a love-hate relationship with these sorts of influence rankings. The big stories of the year are, of course, the sharp fall in AOS's two-year impact factor and the arrival of Management Accounting Research (MAR) as the fourth most impactful journal in accounting by that measure.
There is so much at work here that I am going to attempt to put it into perspective as part of the big picture. First, let us look at the entire list of accounting-related journals on the SSCI; this will be very important to understanding what is going on. Second, over the following days I will discuss what we can learn from the full list and what it portends for the future. In the meantime, enjoy!!!!
Abbreviated Journal Title | Total Cites | 2-Year Impact Factor | 5-Year Impact Factor
J ACCOUNT ECON 4681 2.724 4.679
J ACCOUNT RES 4302 2.384 3.387
ACCOUNT REV 4924 2.267 3.028
MANAGE ACCOUNT RES 935 2.125 2.29
ACCOUNT ORG SOC 3272 1.672 3.588
REV ACCOUNT STUD 944 1.379 2.007
CONTEMP ACCOUNT RES 1668 1.263 2.18
INT J ACCOUNT INF SY 268 1.219 n/a
ACCOUNT AUDIT ACCOUN 1361 1.188 n/a
ACCOUNT BUS RES 229 0.957 1.283
J BUS FINAN ACCOUNT 1212 0.914 1.246
ACCOUNT HORIZ 853 0.881 1.377
AUDITING-J PRACT TH 951 0.861 1.453
EUR ACCOUNT REV 653 0.84 1.555
ACCOUNT FINANC 526 0.746 0.983
J ACCOUNT PUBLIC POL 733 0.547 1.205
ABACUS 278 0.4 0.699
AUST ACCOUNT REV 164 0.381 0.55
J INT FIN MANAG ACC 140 0.278 0.622
COMPTAB CONTROL AUDI 41 0.258 n/a
SPAN J FINANC ACCOUN 84 0.22 0.297
ASIA-PAC J ACCOUNT E 46 0.152 0.093
Plus Critical Perspectives on Accounting will appear in 2015
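For readers unfamiliar with how the second column of numbers above is produced: the two-year impact factor is simply citations in a given year to a journal's articles from the previous two years, divided by the number of citable items published in those two years. A minimal sketch, with invented numbers (these are not the actual SSCI counts for any journal):

```python
# Sketch of the two-year impact factor calculation.
# The inputs below are hypothetical, not real SSCI data.

def impact_factor(cites_to_prev_two_years, items_prev_two_years):
    """Citations in year Y to articles published in years Y-1 and Y-2,
    divided by the number of citable items published in those years."""
    return cites_to_prev_two_years / items_prev_two_years

# Hypothetical journal: 170 citations in 2013 to the 80 articles
# it published across 2011 and 2012.
print(round(impact_factor(170, 80), 3))  # 2.125
```

The five-year figure is the same ratio computed over a five-year publication window, which is why it is typically higher for slower-citing fields like accounting.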
Well, it is official: almost all of Jim Hunton's academic record is based on fraud! The AAA today retracted almost every article he published in an AAA journal, except for four articles where the co-author had collected the data, normally as part of their dissertation!
I anticipate CAR, JAR and others will soon be joining in! By the time it is all over, Hunton will make the list of the top ten most-retracted authors according to the numbers at Retraction Watch!
What is so amazing about all this is how long it has taken. By April 2013, when I left the CAR editorship, I was on the verge of issuing notes of concern on all of Hunton's CAR articles! The evidence collection did not take long and the fact pattern was clear! Yet two of those articles still have not been retracted, or even flagged, at CAR! I still wonder whether it all would have been swept under the rug, given that I am the only person on record who made an official complaint to Bentley University about this matter, in December 2012!
I learned by way of a retracted email that Critical Perspectives on Accounting (one of the top three critical journals, albeit they would not describe anything that denotes hierarchy as appropriate) has been admitted to the SSCI (Social Sciences Citation Index, sometimes called the Web of Science). The editors had a sense of humour about it that is often lacking in accounting research: they wrote an editorial critiquing their own decision to submit themselves to this form of "disciplinary power" created by a multinational corporation, Thomson Reuters. Further, they skewer some of the pretensions of those of the "critical variety" who condemn activities leading to global warming while jet-setting around the world to academic conferences to condemn it. Hopefully they all buy carbon offsets, though no doubt that too can be subjected to critical analysis.
So folks welcome to the world of what you have called “performance.” Use your new power well!!!
Thanks for at least having the sense of self to acknowledge the inherent contradiction in CPA entering an establishment index. Heck, maybe this will help the careers of some critical scholars or would that be too instrumental?
Hopefully the editors at CPA (Marcia, Christine and Yves – two of whom I consider friends and one a friendly acquaintance) take this welcoming note in the spirit intended and do not think I am “attacking.” After all, I might some day submit a paper to CPA!
So what stimulated this little four-post odyssey on RCC? As my regular readers know, I scan the literature broadly, and in the May 2015 issue of the European Accounting Review I found a "note of concern." Notes of concern are to be used when there is suspicion of a serious breach of the academic integrity of an article and, for a variety of reasons, the editors cannot conclude that the paper needs to be retracted. In other words, the reader is put on notice to be concerned about the integrity of the underlying science and to be wary about citing the results.
After reading EAR's note of concern, all I knew for certain was that the Editor of EAR was upset with the authors' lack of disclosure of a related study. The note contained speculation about what might have happened if the study had been disclosed during the editorial process (a clear NO NO!!! under the COPE guidelines). It makes no comment about attempts to reach the authors, whether the authors responded, etc. All of these are clear departures from COPE's best practices.
So off I went to read the papers in depth. Luckily, it is a subject matter in which I can claim expertise (the BSC in the tradition of Lipe and Salterio 2000, 2002 and Libby, Salterio and Webb 2004 (TAR, AOS and TAR respectively), and about which I had done a literature review in 2012). As the previous blog analyses show, at the substantive level these are different papers, but when you look at the surface features there is a lot of self-plagiarism at work. However, self-plagiarism at this level does not change the fact that the two papers make substantively different contributions to the literature. One cannot deduce from one study alone the effects of the other's main independent variables: role differences, accountability/dilution effects, timeline presence and absence, etc. The science is not compromised, but the editorial process might have been.
So what should have been done? A correction should have been issued stating that the two papers are related and should have referred to each other, and the cross-references should have been provided. Indeed, even the issue of extensive self-plagiarism could have been noted, as long as it was made clear that there are substantive differences between the papers. But as far as I can tell, the published evidence does not suggest any concern about the academic integrity of the scientific record, nor that one should be wary of citing either article. If that is not the case, then the EAR Editor and the EAA Publications Committee have a duty to disclose substantially more detail in their note of concern. If I, an expert reader in this area, cannot tell from the published articles and the "note of concern" why either article should not be cited, what is the non-expert reader to do? Retractions and notes of concern are a signal to be wary of the science! That is what they are all about: ensuring the integrity of the academic record, not bolstering the editorial processes of journals!
As an academic community we have to get better at this. So far we have been like a group of clowns: the original TAR retraction with no referral from the AAA to Bentley University to ensure the matter was investigated, the delayed CAR retractions, the long-running 'star chamber' process at the AAA with respect to all the Hunton publications, JAR's protracted processes on the same issue, and now, added to the mix, EAR's potential misuse of a note of concern.
I do not have all the answers, but boy do I know we are capable of doing better than this!!!!!!
In order to appreciate this blog entry you need to read at least the previous day’s blog entry. Sorry.
When you write up two related experiments, as described yesterday, it is natural to use many of the same words to describe them in the separate papers. The motivation for what you are doing is likely to be related, given you are using the same experimental instrument (e.g., examining systems of multiple performance measures). The description of the prior research that employs that instrument is not going to change to any great extent (Lipe and Salterio and the twenty-some papers that followed from it). The description of the experimental instrument will differ only in the manipulations (remember, the experiment was identical except for the theoretical differences, e.g., accountability and the dilution effect combined with role in one experiment versus strategy map content and strategic appropriateness judgment in the other). And the laying out of the basic research design will be similar.
Unfortunately for the authors, they took absolutely no care with their use of words, and paragraph after paragraph is highly similar between the two papers. Furthermore, the title of one paper is very similar to the other's, to the point of being misleading: that title refers to the contextual variable ("strategy timeline") that is NOT manipulated in the second paper (B') BUT is a "sexy" idea (or as sexy as we get in accounting). Furthermore, the theory section describing the common variable (A) between the two studies is exceedingly similar.
So, in analogical-reasoning terms, there are a lot of surface similarities between the two papers. It is what COPE would call a case of "self-plagiarism" in almost all sections, except where the theory differed and in the experimental results (which were different, with a different set of participants and a different analysis based on the theory). Further, the authors did not refer to the other paper in the manuscript submitted to at least one of the two journals, and the papers were not cross-referenced in either published version.
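Surface similarity of this kind is easy to quantify, which is presumably how such overlap gets flagged in the first place. A minimal sketch using word-trigram Jaccard overlap; the two sample sentences are invented for illustration, not drawn from the papers discussed:

```python
# Sketch of quantifying "surface similarity" between two texts via
# word-trigram Jaccard overlap. Sample strings are invented.

def trigrams(text):
    """Return the set of consecutive three-word sequences in text."""
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def jaccard(a, b):
    """Share of trigrams common to both texts (0 = disjoint, 1 = identical)."""
    sa, sb = trigrams(a), trigrams(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

paper_a = "we examine how managers evaluate multiple performance measures"
paper_b = "we examine how managers evaluate strategic performance reports"
print(round(jaccard(paper_a, paper_b), 2))  # 0.33
```

Real plagiarism-detection tools are far more sophisticated, but even this crude measure makes the point: papers can score high on surface overlap while still differing in theory, participants and results.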
Bad boys (or girls) to be sure. Needing a clear rebuke from the editor – sure. Maybe a letter to the Deans of the various schools from the editor. Maybe banning the authors from submitting to the journal for a period of time.
BUT where does this fit on COPE's scale of retraction, concern and correction? This is new ground in accounting, and it seems to me that we do not yet understand the purpose of these tools. Their sole goal is to ensure that the scientific record is correct; in other words, that results are not fabricated and are not being repeated across papers. That is the only goal of retractions, statements of concern and corrections. The goal is not to punish the authors; that needs to be done by different means within the institution or professional society of the offending authors.
Next day, what was done in this case and why I question it!!!!!
Let us consider the following experimental scenario (while the context is an experiment the application is to any empirical study).
Authors run an experiment in a topic area. They manipulate two variables (A and B) and measure a third (C). Theoretically all three variables make sense. They use a sample of participants. They find interesting results and get them published in journal %.
Under the pressure to publish, the authors run another study in the same topic area. They manipulate three variables (A, D and E), with one (A) being the same as in the previous experiment, and measure the same third variable (C). The combination of the previous variable with the two new variables makes sense theoretically. Further, the two new variables are not conceptually related to the variable not used in this study (B). They use the same experimental materials as in the first study, modified for the differences in two of the three manipulated variables. They also hold constant, as a contextual feature of the new experiment, one of the two levels (B') of the variable from the previous experiment that is NOT manipulated in the current study. They use a new sample of participants with characteristics similar to the previous study's, but the descriptive statistics make clear it is a different sample. They submit this study to another journal $ and the study is published.
Neither study refers to the other.
When you set it up abstractly like this it seems obvious to me that these are two studies that share some features.
1. The same basic experimental instrument
2. Similar subject populations, though not the same participants
3. One manipulated variable (A) in common across the two studies
4. A dependent variable (C) measured in both experiments
5. A context factor (B') in the second experiment that is one of the two levels of a variable manipulated in the first experiment
What is different?
1. Two variables (D and E) manipulated in the second experiment
2. Interactions between variable A and (D, E)
3. Interactions between D and E.
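The shared and differing features just listed amount to simple set comparisons over each study's variables. A minimal sketch of that bookkeeping, using the abstract variable names from the scenario (not any real papers):

```python
# The two studies' manipulated variables, per the abstract scenario.
study1_manipulated = {"A", "B"}
study2_manipulated = {"A", "D", "E"}
measured_in_both = {"C"}  # the common dependent variable

# Shared: variables manipulated in both studies.
shared = study1_manipulated & study2_manipulated
# New in study 2: variables not manipulated in study 1.
new_in_study2 = study2_manipulated - study1_manipulated

print(sorted(shared))         # ['A']
print(sorted(new_in_study2))  # ['D', 'E']
```

Laid out this way, the overlap (A, C, the instrument) and the genuinely new content (D, E and their interactions) are both visible at a glance, which is the crux of the question of whether the two papers are "the same" study.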
There appear to be all sorts of advantages to this research design: partial replication of results across experiments; partial evidence that the first convenience sample was not atypical; and a common experimental instrument, so that if replication did not succeed, it would be due to a genuine failure to replicate rather than to instrument changes.
The next blog entry considers what the issues are when you move from the abstract to the concrete.
Accounting journal editors and publications committees continue to be somewhat befuddled by the new world order, in which authors have such strong incentives to publish that they are pushing, and at times exceeding, the bounds of what is acceptable academically and ethically.
COPE, the Committee on Publication Ethics, does a wonderful job (my words, not theirs) of helping editors and publications committees make the often difficult judgments about the various "shades of grey" in this realm. It has published very clear guidance on what constitutes grounds for retraction, for issuance of a note of concern, and for corrections. It also has a new memo on what to do about the use of the same sets of words in related papers (i.e., "self-plagiarism"). COPE makes it very clear that a "self-plagiarism" offense gets more serious the further one moves from the methods section and the closer one gets to reporting the same results from the same empirical work in the results section, i.e., publication of the same results twice. In other words, in the methods section it is silly to get too upset about using the same words to describe the same method; indeed, as COPE observes, using different words to describe the same method may lead to more inaccuracy rather than less. At the other extreme, it is very clear that publishing the same results in different papers is a violation of publication ethics that cannot be tolerated by the community and should result in severe sanctions.
Over the next several posts I am going to examine a current controversy from the view of an outsider who is an in-depth expert in the research substance but knows nothing of the behind-the-scenes drama. What I will eventually get to is an assessment of what the "retraction, concern and correction" (RCC) process is for, and what it is not.