Gotta love it! As I predicted last February, the standard two-year invite to the American Accounting Association’s New Faculty Consortium would not be extended to me (again, I might add)!! One year and you’re gone, despite outstanding participant reviews both times I have been invited (2011 and 2018). As some of you know, the same thing happened when I was CAR editor back in 2010-11 – indeed, they cancelled the journal editors’ panel the next year so as to not have to invite me back. See the careditorsteve blogs from early 2012 that document the fun back then!
Why? I tell it like it is, or at least, as I see it. I say nothing that I have not written in these columns, but these columns are seen as a threat by many in the establishment of the AAA and its related Big X funders. I have never figured out exactly what I say that is so threatening. Is it:
1. Advocating a big tent for accounting research, using all that the social sciences have to offer, not just the narrow perspectives of financial economics?
2. Recognizing that all PhD grads, hence new faculty, do not have equal chances of success and thus need to think about how they can be successful without buying the elite “kool-aid” definition of the American 3?
3. Reminding new faculty that our role as educators requires us to develop independent evidence and examine current institutional structures with professional skepticism! Be it the Big X, the PCAOB, the SEC, the IAASB, etc. Society is not served well by a bought-and-paid-for academy.
4. Rolling back the lid on the black box of how the journal editing process works.
My guess is that number 3 is the big one. Indeed, last year I warned the NFC Chair that he might want to reconsider my invite! In the end he said, to his everlasting credit, that he was “old enough” that it did not matter if inviting me caused him some problems. Brave lad, but not often found in the US academy, where observation 3 might well be more true than we want to admit.
The following video is based on a simulation study our research team carried out to test the conjectures we made about knowledge transfer from audit research to standard setters (Hoang, Salterio and Sylph, Accounting Perspectives, 2018, with an early draft on SSRN), and on our analysis of prior attempts at knowledge transfer from academics to standard setters (Salterio, Hoang and Luo, 2018, available at SSRN and featured at the Illinois Audit Symposium on September 28-30, 2018). The video summarizes part of what we learned during our travels through evidence-based policy making; the simulation will be one focus of our next working paper, which provides proof of concept that evidence-based policy making can inform audit standard setting.
In the meantime, enjoy our brave new world of video research presentation – knowledge transfer for a new century!
Video link: https://www.youtube.com/watch?v=PLePove761E&t=7s
I have been mulling over quantum mechanics and its direct offshoot quantum physics for years. The quantum world is pretty mind-bending, and what’s more, it has some interesting implications for social science research if one is willing to make an analogy.
Positivist research can best be explained and understood through classical models like those of Newton in physics: highly deterministic, time-invariant general laws where, once you discover the correct coefficients for the model parameters and know the initial state, you can derive very precise predictions. Furthermore, for large non-atomic particles it works well, exceedingly well. The past predicts the future (with measurement error) and the present can be seen to be based on past history (again allowing for measurement error).
However, interpretive research can be seen as analogous to the quantum world. In the quantum world all predictions are probabilistic; any past history could have occurred or not (or indeed all histories could have occurred in different multiverses); and observation of the current state of affairs changes the probabilistic outcomes, so one can never know the current state and the outcome together, as the measuring (or not) of that state affects the outcomes. Of course, these phenomena are best seen at the subatomic particle level, in the world of quarks, muons et al. And like classical physics, it works well, exceedingly well, for its phenomena of interest.
The problem that faces physics, as it faces social science researchers, is that to date both appear to be the best theories for approaching their respective phenomena of interest. And while lots of work goes on to try to reconcile, integrate, combine and metatheorize about them, it is not clear yet if any of it is on the “right” track, if indeed there is such a track.
Something to think about . . . .
October’s stats on readership and page reads are in, and it looks like a lot of first-timers visited this month! How do you tell? The ratio of pages read to number of visitors is off the chart this month. New visitors normally read a lot of the backfile, and this month we have had more pages read than in any month of the past 18 months!
New registrations from those who get these musings via email alert are also up!
So, thank you for dropping by. Musings is fun to write and I do it because of you!
There is a very fine line between taking advantage of opportunities to carry out research and being opportunistic! The former is reacting to circumstances that make the timing or viability of a project, or the set of potential co-authors, available sooner than you might expect. The latter is doing a project because others appear to be successful in publishing in that area and you once had a vague idea you might like to do some work there.
Now you think you may be able to bluff your way through, but I guarantee that those of us who have been around a little longer will see through the bluff! And we will remember the attempt to bluff even if you get lucky and publish the paper in a good home.
In the end you only have one reputation. Having a viable research career sometimes means not running after apparent low-hanging fruit! Especially if you cannot really tell why it is a fruit and not a vegetable.
If ever an experimental reference has been misused, it has to be Libby, Bloomfield and Nelson (2002, AOS). LBN summarize and integrate the then relatively new modern line of experimental research in financial accounting. While, unfortunately, the key example in the first part of the paper is a retracted article, my problem lies with the misuse of the participant-matching part of the paper.
It is often used to excuse the use of undergraduate student participants in financial accounting and tax experimental papers (and more rarely in management accounting, and more rarely still in audit experiments). Now the principle is rather straightforward – acquire participants that are appropriate to the research question being asked. So when asking about the work financial analysts do, use financial analysts or analysts in training. If looking at controllers’ accounting decisions, use controllers; and when using investors and taxpayers, specify the type.
The last one is where the problems begin. Undergraduate students in business are at best novice investors, but more often naïve investors. Ditto with many of the participant-providing services (e.g. MTurk) if careful prequalification is not done. The problem is that rarely does an experimental issue call for naïve investors or taxpayers. Maybe novices, but almost never naïve. Yet often young researchers will blindly quote LBN to justify the use of such participants. A strong suggestion – reread LBN through the eyes of a reviewer and then tell me that naïve participants are good surrogates for investors and taxpayers!
Yes, there is a place for such participants when ensuring that basic cognitive processes apply to the task environment. But the moment you go beyond basic cognitive processes (ensuring that transfer and application work), naïve participants are not warranted – at least not without some pretty substantial corroboration of key results with more appropriate participants! See Panel musings for some ideas on this topic.
I laugh a lot when I see mixed prior results used as the motivation for a study UNLESS it is accompanied by an analysis of the likely “root causes” of those mixed results. Yet study after study is allowed to be published on the dubious premise that yet another study, standing in a void by itself, will tell us something about prior research’s mixed results.
So I propose a simple criterion for my fellow editors – make every author who claims mixed results as a motivator for their work provide a detailed root cause analysis and show how the current study will address those root causes! If the study does not, then do not let the mixed results argument stand as a motivation for the paper, only as a factoid. For without an attempt at a “root cause” analysis, all the authors of the current paper have done is add another confusing data point to an already confused picture.
In a world that allows “mixed results” to stand as a motivation for just another data point, no wonder some (most, or indeed nearly all) social constructivist (also known as interpretivist) researchers in accounting suggest that positivist research has “failed” in its mission to produce a large number of empirically supported, theoretically based regularities.
My belief is that with a wee bit more work we could list many more empirically supported, theoretically based conclusions in our research in financial accounting and auditing (and likely other areas of accounting research as well). Why do I highlight financial accounting and auditing? Because they are the areas closest to where we as accounting academics have the potential to inform society about the evidence basis for rules and standards. Allowing such mixed results to stand and be added to, without any attention to the root causes of the differences, detracts from what the evidence could really show.