Musings on Accounting Research by Steve


Yearly Archives: 2013

Barriers to accounting knowledge creation

I have been pondering my assignment for the 25th JMAR anniversary panel that I will be taking part in next Friday.

It has made me think about the wider question of how we “know what we know” about accounting.

As far as I can see we have made it very difficult to learn much about this subject by our own actions and inactions.

To start, we teach our new initiates to the subject a mechanical exercise in bookkeeping rather than the rendering of economic accounts and the accountability of persons.

We teach what passes as the theory of accounting (normally financial) as a fourth-year or master’s-level course.

Our early-year textbooks are barren of research references, unlike those of any other social science discipline except perhaps law.

Then within our research community we take pride in our ignorance! Paradigmatic concerns trump substantive knowledge about accounting at every turn. Never has there been a discipline where willful ignorance has been glorified like it has been in accounting these past 25 years.

Doctoral programs have little or no real breadth elements. Generations of doctoral students can barely read research in adjacent areas of financial accounting, let alone across research traditions.

We acknowledge journals like JAE and JAR as allegedly leading founts of knowledge in accounting – journals that can count on one finger the number of management accounting articles they publish in a year!!! Yet management accounting represents a huge portion of both the professoriate and the practice of accounting.

Finally, we take no responsibility for this ourselves. We do as one former TAR Editor did on a panel where he proclaimed that if you did not have six working papers and a major paper forthcoming when you obtained your PhD, you should start looking for your second job immediately. There would be no way in XXYY that you would get tenure at the school you would soon be joining. But this was not his fault or his preference – he blamed “them” for it!!!

I hope 2014 might be the year that we may finally get our act together and begin to act as a mature social science discipline. The excuse that we are a young discipline is wearing a bit thin given we are now at least 50 years into the modern era of accounting research and teaching!!!!

That is my hope and my prayer for the NEW Year. Blessings upon all of my readers and peace and joy of the season, however you celebrate it, be upon you!!

CAAA president as CAR EIC??

Reading the front page of the Canadian Academic Accounting Association’s newsletter, one would think the CAAA President was CAR editor in chief!!! He talks of the great location of the CAR Conference in Kingston (note to Jim – that’s the location you told me was not my job to get, but I digress) and up-to-date statistics on CAR submissions. Indeed his entire front-page column is about editorial matters – things he is supposed to have NOTHING to do with.

The good news is that the editorial spat at CAR did not affect the quantity of submissions. It is great to know that I left CAR on an upward trajectory!!! But either the CAAA President is doing nothing on his own that warrants mentioning or there is a lot of political interference at CAR if the President (who I think last published a serious academic accounting paper in the middle of the final decade of the last century) devotes most of his column to CAR.

Am I being catty???? My cat Wilson has welcomed me to the species!!!!!

The fall of normative accounting research

In what I found to be an interesting article, July 2013’s Accounting, Organizations and Society features a paper, “The tale of ARIA” (its full, more provocative title is “Accounting academic elites: The tale of ARIA” – I wonder whether authors coming up with titles like this just want to ensure their readership is limited to the critical school – but I digress).

ARIA – Accounting Researchers International Association – was founded by a group of normative accounting researchers in 1974.  No doubt it was a reaction to the onslaught of positive empirical research that was rising steadily in the then handful of accounting journals in the first round of paradigm wars to sweep our discipline.

These were researchers such as Yuji Ijiri, Robert Sterling and the like, who wanted to reason about what accounting should be from a deductive, principles-based approach with as little a nod to empirical research as they could get away with.

This earned them the title, from the positivist empirical camp, of the “arm-chair researchers.”

Anyhow, the story of how this group – in their day, some of the leading lights of the academic accounting profession – attempted to maintain their research tradition in the face of the positivist revolution in accounting research is instructive.

It also gives me, at least, some insight into why an earlier generation of positivist researchers reacted so negatively to qualitative-methodology-based research. More on this later.

In any case, the 19-year history of ARIA is an interesting read, albeit, as usual, I recommend a quick scan of the theory section unless you want an incomplete tutorial on Bourdieu’s social theory. Pages 370 onward tell and analyze the story of this very interesting organization, which was dedicated to what became a rearguard action as all but a couple of the leading accounting journals came to be completely dominated by positivist researchers.

Meaningless manipulation checks

As I have been teaching a doctoral course this fall I have found many recent articles where the published manipulation check was “do you recall what your instructions were?”

or “did you remember what we told you about your case?”

Really????  That’s all?????

When I first studied the predictive validity framework under Bob Libby in 1989, I remember clearly that manipulation checks, pretests, etc. were aimed at providing evidence that the participants in experiments interpreted the manipulations the way the authors/researchers had meant them to be interpreted – NOT that the participants could parrot back that they were in an incentive condition where they were paid $2 per correct answer on a test.

What is the difference, you ask? Well, if the incentive is about motivating greater performance, why not have a measure of motivation and see if those that received the incentive exhibit greater motivation than those that did not? There are many very succinct scales that could be employed to quickly gather this information as part of a debriefing instrument at the end of the experiment.
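The idea can be sketched in a few lines. This is a minimal, hypothetical illustration (the scores and scale are made up, and I use a hand-rolled Welch t statistic rather than any particular statistics package): compare the debriefing-scale motivation scores of the incentive and no-incentive groups, rather than asking participants to recall their condition.

```python
# Hypothetical manipulation-check analysis: instead of asking participants
# to parrot back their condition, compare a motivation scale across conditions.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = variance(a), variance(b)
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

# Illustrative (made-up) 7-point motivation scores from a debriefing instrument.
incentive    = [6, 7, 5, 6, 7, 6, 5, 7]
no_incentive = [4, 5, 3, 4, 5, 4, 3, 4]

t = welch_t(incentive, no_incentive)
print(f"mean(incentive) = {mean(incentive):.2f}, "
      f"mean(control) = {mean(no_incentive):.2f}, t = {t:.2f}")
```

A clearly positive t here would be evidence that the incentive manipulation actually moved motivation, which is what the manipulation was supposed to do – something a “do you recall your instructions?” question can never show.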

An interpretation-focused manipulation check is even more important when the author/researcher is attempting to create a manipulation that informs different participants about different states of the world. For example, in an experiment I could manipulate whether the CFO had a professional accounting designation or not as part of a multi-item manipulation of earnings management expertise within the company. While it is nice to know that the participant remembered the CFO had a professional accounting designation, it provides no evidence whatsoever that I was successful in my manipulation of relative differences in earnings management expertise across treatment conditions. To do that I would have to ask something like “The firm has strong technical expertise in accounting” and “The firm’s management has the ability to implement complex accounting schemes.” Now, I would have to pretest these questions to ensure they were not leading the witness, but they would provide me with some confidence that I was manipulating what I thought I was manipulating, in a way that merely recalling whether the CFO had a professional accounting designation does not.

Young reviewers seem to miss this as well. No wonder some very questionable manipulations are not being caught by reviewers if the reviewer is happy with what I like to call “factoid checking” as manipulation checks.

Surely we are better trained than that!!!!

Tightening up our data standards

Journals in other disciplines are beginning to require more disclosure about back-of-house activities in the research process. Some economics journals require that the final data set be posted. Some psychology journals are requiring disclosure of any manipulations that were omitted from the final data analysis, or of how many times different variants of the same experiment were run before a significant result was found!!!

I find the statement that the archival data used in our research are from publicly available sources – with that being the end of the disclosure about data availability – to be one of the most disingenuous statements ever made. Especially by a group of scholars who are often studying the transparency of others’ disclosures.

It is especially worrisome as one sees paper after paper after paper eliminating large portions of the total population due to missing data, outliers, winsorizing, inability to match observations across data sets, etc. Let’s make it a required disclosure that the initial data set used be disclosed, so that others can replicate the exclusions and see if they are warranted.
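What a replicable sample-selection disclosure might look like, mechanically, is simple enough to sketch. This is a hypothetical illustration (the field names and filters are invented for the example, not drawn from any actual study): start from the initial data set, apply each named exclusion in order, and log the sample size after every step so a reader can audit exactly where observations went.

```python
# A minimal sketch of a replicable sample-selection log, assuming the "first"
# archival data set is a list of dicts (field names here are illustrative).
def apply_filters(rows, filters):
    """Apply named exclusion filters in order, logging the sample size after each."""
    log = [("initial sample", len(rows))]
    for name, keep in filters:
        rows = [r for r in rows if keep(r)]
        log.append((name, len(rows)))
    return rows, log

data = [
    {"firm": "A", "assets": 120.0, "roa": 0.05},
    {"firm": "B", "assets": None,  "roa": 0.02},   # missing data
    {"firm": "C", "assets": 40.0,  "roa": 2.50},   # extreme value
]

final, log = apply_filters(data, [
    ("non-missing assets",  lambda r: r["assets"] is not None),
    ("roa within [-1, 1]",  lambda r: -1 <= r["roa"] <= 1),
])
for step, n in log:
    print(f"{step}: n = {n}")
```

Publishing the initial data set together with a log like this would let anyone re-run the exclusions and judge for themselves whether each one is warranted.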

I will elaborate on this theme in my next post or two with a concrete example from our literature.

Editing a paper to “death”

Every now and again I come across strange cases where one reviewer recommends acceptance of a paper and the other reviewer keeps pushing for changes. The editor, instead of drawing the matter to a close, continues to let the process run on and on! What happens??? The paper in the end never satisfies the reluctant reviewer, and gets to the point where the reviewer who recommended acceptance either changes his/her mind or withdraws from the process, taking the favourable voice out of the process.

You know what happens in this story don’t you???? Yep, the paper gets rejected!!! Strike another blow for strong editing!

Not mysterious enough

Yet again I have encountered what I call one of the silliest reasons for rejecting a paper – it is not complex or mysterious enough!

No it was not one of my papers but it has happened to two different sets of authors I know.

May I ask editors and reviewers what is wrong with clear straightforward and comprehensible writing in our papers??

One senior academic who is an editor at a major journal actually believes that scholarly reading should be “hard work.” Excuse me, I thought doing research was hard work, but I did not think we needed to make reading the results of those labours difficult.

The worst thing is that these folks are so certain of their position that no amount of talking with them will change their minds.

But maybe if I find a complex and difficult way to write about it they might listen! Nah!!!!!
