I recently had a case at BRIA where my decision letter stated, “The key for successful resolution of the issues identified by the reviewers is proof that the results are not dependent on the statistical method of analysis.” The decision letter went on to suggest a variety of tests that could be done to rule out the issue of method dependency. How did the authors reply? A two-page ramble in the text of the paper about why their method was most appropriate.
What inference can I draw as an editor from such a reply? That the results are so substantially weaker when other valid methods are employed that the authors did not want to disclose them. If the results are that method-dependent, are they really results we want to report in our literature? As Bob Libby used to say about experiments, if you cannot see the difference you are reporting by examining cell means, maybe what you found is a dust mite, not a mountain (okay, not exactly what Bob said, but close enough to get the point).
Kris Hardies (a heck of a good sort and one I am normally simpatico with) recently published a blog entry at the EAA ARC ( https://arc.eaa-online.org/blog/we-simply-cant-all-be-publishing-top-journals ). Really? “No sh*t Sherlock,” as a somewhat blue North American expression goes! (For non-North American English readers, the expression means “that is not a surprise,” with an allusion to the famous fictional British “consulting detective” Sherlock Holmes.)
Kris’s argument is that there are at least 3,000 publishing accounting academics in the world, and over a five-year period there is not room for three thousand authors in the American Three! But who has agency here?
Is it not our fellow academy members in our own universities who decide we will only count three (or 4 or 5 or 6 or 10) journals? When I see commentaries like this I just cringe, as they assume that bright PhDs have no agency in the matter! You can find a different school, you can fight for change in your school, you can publish where you want and challenge any negative performance evaluation as arbitrary and unreasonable!
The one thing you should not do is blindly conform and not publish your research at all; instead, publish in the best journals you can. After all, countless prize-winning articles have been published outside the American Three. Yet not publishing at all is the reaction I have frequently seen to such edicts by schools and rants like Kris’s (sorry Kris . . . .).
I thought I would be back blogging a lot sooner than I was! The end-of-summer rush at BRIA caught me by surprise (although I should know better). A bit more disconcerting was the passing of founding BRIA Editor Ken Euske over the summer. In any event, I am back.
First my condolences to Professor Euske’s family. We at BRIA will not leave his death unmarked and there are several folks helping to get together an appropriate memorial article about his contributions to the academy.
Second, we continue to be fortunate with the set of papers submitted to BRIA. But folks need to realize I am not, despite my rep in some circles, the journal. Sometimes I will suggest authors submit papers to the journal and both reviewers say “NO”! That is why I have solicited only one paper during my editorship, and that paper is a 30-year retrospective on the academic accounting research world from a social and behavioural perspective. All other papers have outcome uncertainty associated with submission. Indeed, the only time I have ever gone against both reviewers saying “NO” is when I documented ex ante why I expected them to say “no” — and that is what they said, with the expected reasons!
Wow, it finally happened! The first interpretive field study has been accepted and is forthcoming at TAR, and it happened in management accounting! Pfister and Lukka did a lot of work to make the study accessible to all readers, so some may not even realize the case study uses interpretive research methods until they reach the section in methods entitled “abductive analysis”. Yes, a pure interpretivist researcher could say that their use of these methods to claim to study causal relations is “selling out”, but in my mind good case study research should inform all types of researchers, and the rhetoric around contribution is just framing. We learn what we learn from the case research methods! I have been learning from reading interpretive research for 30 years, and I did not turn out too badly in the positivist world order. Congrats to the authors, editors, and reviewers. It has been a long time in coming!
The link to the early views version of the paper is