Evidence-informed accounting standard setting – lessons from corona

Posted by Thorsten Sellhorn - May 23, 2020

This blog post originally appeared on the website of the German accounting research community TRR 266 Accounting for Transparency, funded by the German Research Foundation (Deutsche Forschungsgemeinschaft – DFG).

Lots of new accounting research findings are published every year in countless academic journals, on pre-print servers, and as working papers on SSRN. Many of these studies are potentially policy-relevant, i.e., speaking to issues on accounting standard setters’ agendas. However, even field experts find it difficult to just stay on top of what’s new. And there is another challenge: Not all available research findings are equally pertinent to a given question, as studies face different threats to validity. In a recent presentation delivered to the European Financial Reporting Advisory Group’s Technical Expert Group (EFRAG TEG), Thorsten Sellhorn reflects on these validity challenges for empirical goodwill accounting research. Using analogies from the current corona crisis, he discusses how researchers and standard setters can address these challenges as they work towards more evidence-informed standard setting.

In the EFRAG TEG-CFSS webcast meeting on 25 March 2020, I summarized the conclusions that can be drawn from published empirical studies on goodwill accounting for evidence-informed standard-setting on this issue. My objective was to assess the evidence presented in these studies from two perspectives: First, can the studies inform current policymaking, e.g., at the IASB? And second, how valid are the empirical findings? In this post, I will focus on the validity challenges discussed in my talk, and how, despite these challenges, we, as a field, can work towards evidence-informed standard setting. I draw on the role of research in the current corona crisis for a few illustrative examples.

Evidence-informed vs. evidence-based standard setting

The term evidence-based was first coined in the 1990s in the context of medical research, and defined in Eddy (1990) as “explicitly describing the available evidence that pertains to a policy and tying the policy to evidence.” As much as we would like financial reporting and disclosure regulation to be evidence-based, there are some differences between the fields of accounting (as part of social science) and medicine that we cannot ignore. Outside of lab experiments, we struggle to deliver the same kind of ‘hard’ causal evidence as medical researchers strive to provide as a basis for policymaking. 

For example, rarely do we get to randomly assign people or firms to treatment and control groups, let alone give the control groups a placebo. Convincing estimates of causal treatment effects are therefore difficult to obtain. Likewise, managers’ incentives are not randomly assigned, and thus their effect on accounting decisions may be confounded by other factors – again making causal inferences very challenging. Furthermore, meta-analyses and systematic reviews are rare in accounting research, due to a lack of replication and reproduction studies.


To illustrate these issues for the goodwill accounting context: Standard setters would like to know which accounting treatment of goodwill is “better” – amortization or the impairment-only approach. In addition to having to decide what “better” means, and how to measure it, researchers struggle to empirically test this research question in a clean “apples-to-apples” comparison.

Ideally, one would like to randomly assign amortization to one group of firms and impairment-only to another, otherwise identical, group of firms. Even better, both the firms and the researchers should be unaware of which firm is in which group; in other words, a double-blind study. Subsequently, one would observe and compare the outcomes of interest in both groups to infer whether amortization or impairment-only is “better”. Sounds impracticable? Right.
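To make this thought experiment concrete, here is a minimal simulation sketch in Python. It is purely illustrative: the firms, the assignment, and the outcome are all made up, and no real goodwill study can be run this way. It simply shows why random assignment would make the comparison so clean that a simple difference in group means would estimate the causal effect.

```python
# Purely illustrative sketch: a hypothetical randomized assignment of firms to
# goodwill accounting "treatments", as the ideal (but impracticable) design
# described above would require. All data are simulated.
import numpy as np

rng = np.random.default_rng(seed=42)
n_firms = 1_000

# Random assignment: half the firms to amortization, half to impairment-only
treatment = rng.permutation(np.repeat(["amortization", "impairment_only"], n_firms // 2))

# A hypothetical outcome of interest (some measure of reporting usefulness),
# simulated with a small built-in difference between the two groups
outcome = rng.normal(loc=0.0, scale=1.0, size=n_firms)
outcome[treatment == "impairment_only"] += 0.1

# Under random assignment, a simple difference in group means estimates the
# causal effect of impairment-only relative to amortization
effect = (outcome[treatment == "impairment_only"].mean()
          - outcome[treatment == "amortization"].mean())
print(f"Estimated treatment effect: {effect:.3f}")
```

In practice, of course, firms self-select into accounting choices and regimes change for everyone at once, which is precisely why such clean comparisons are unavailable.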

Evidence-based standard setting clearly seems a bridge too far for our field. Nonetheless, we can (and should) aim for evidence-informed policy decisions. Former IASB research director Alan Teixeira (2014) used this term in the accounting context to describe decisions that are made by taking into consideration the already available studies and their findings, and then combining this research with the experience and expertise of those involved. To reach evidence-informed standard setting, two conditions must be met. First, it is important that the findings are valid. Second, we should focus on the findings generated by the body of work comprising a field, instead of focusing on a single study.

Validity

So what do I mean by “validity”? Of course, I am not arguing that the published goodwill accounting literature as a whole generally lacks validity. Rather, my point is that drawing concise, unambiguous recommendations from the results of a given study is rarely feasible. Instead, a number of key research design decisions (which standard setters should be aware of) influence the types of conclusions that can safely be drawn from a given study. Failing to appreciate these nuances could lead to a communication failure caused by an “expectations gap” between academic researchers and standard setters. To illustrate, let me draw on a current example from coronavirus research, and then try to draw parallels to goodwill accounting research.


An example from coronavirus research

A recent CNN headline reads, “Chloroquine, an old malaria drug, may help treat novel coronavirus, doctors say.” In one of his daily podcast interviews on German radio station NDR, Christian Drosten, chief virologist at Berlin’s Charité hospital, points out flaws in the underlying study that render the evidence for a causal effect of chloroquine on corona-related health outcomes doubtful, at best.

He makes two main points: First, the treatment and control groups in the study are very different, rendering the comparison “apples-to-oranges”. Hence, the treatment effect of chloroquine on the health outcomes under study is likely confounded by other factors like patient age and severity of the infection. Second, the main health outcome under study – virus concentration in the throat – is hardly relevant for the current debate, which centers on virus concentration in the lungs, the severity of the disease course, and lethality.


An example from goodwill research

Similar concerns can, in principle, be raised for many empirical accounting studies, including those on goodwill. For example, standard setters may currently be very interested in this (fictitious) headline: “Goodwill impairment-only approach is less useful to investors than amortization, study shows.” If accounting research were headline material (which, sadly, it is not), this is how I imagine a journalist might have summarized an actual paper published in the respectable European Accounting Review.

The two concerns raised about the coronavirus research can also be brought up here. First, the paper compares data from before IFRS adoption in the EU (when most countries required amortization) with data from after adoption (the impairment-only approach). This is a pre-post design that, in itself, does not support causal inference. The researchers are of course aware of this, but a better experimental design was not available to them.

Second, the outcome under study is ‘usefulness to investors’, or “accounting quality”. The authors, along with a long tradition of prior papers, empirically measure this as “value relevance”, the statistical association of goodwill-related accounting amounts with stock prices. Is “value relevance” as an outcome relevant to standard setters? We don’t really know. In academia, battles have been fought over this.
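For readers who want a concrete picture of what such a test looks like, here is a stylized sketch in Python. All variable names and data are hypothetical and the specification is deliberately simplified; actual value relevance studies use real market and accounting data, deflators, controls, and more careful designs. The idea is simply that share prices are regressed on accounting amounts, and the strength of the goodwill coefficient (here allowed to differ before and after IFRS adoption) serves as the proxy for usefulness.

```python
# Stylized, hypothetical sketch of a "value relevance" regression: share prices
# regressed on per-share accounting amounts, with the goodwill association
# allowed to differ between the amortization and impairment-only regimes.
# All data below are simulated; nothing corresponds to an actual study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=0)
n = 500

df = pd.DataFrame({
    "book_value": rng.normal(10.0, 2.0, n),   # book value of equity per share
    "earnings":   rng.normal(1.5, 0.5, n),    # earnings per share
    "goodwill":   rng.normal(2.0, 1.0, n),    # goodwill per share
    "post_ifrs":  rng.integers(0, 2, n),      # 1 = impairment-only regime
})

# Simulated share price: associated with the accounting amounts plus noise
df["price"] = (0.8 * df.book_value + 3.0 * df.earnings
               + 0.5 * df.goodwill + rng.normal(0.0, 2.0, n))

# The interaction term captures whether goodwill's association with price
# differs after IFRS adoption - the kind of "value relevance" comparison
# the fictitious headline would be based on
model = smf.ols("price ~ book_value + earnings + goodwill * post_ifrs", data=df).fit()
print(model.summary().tables[1])
```

Note that such a regression only documents an association; whether that association is the outcome standard setters actually care about is exactly the open question raised above.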

Mind you: these validity challenges do not arise because researchers do not know what they are doing! Rather, they stem from the multi-faceted challenges of setting up clean causal studies that measure phenomena of interest to standard setters. These include data availability issues, a dearth of quasi-experimental settings (where assignment to treatment and control groups is as good as random), and limited knowledge (for many reasons) of exactly what outcomes standard setters are interested in.

And let’s be honest: they also reflect the fact that “applied” studies geared towards the information needs of policymakers have lower odds of getting published in the top journals that define researchers’ reputations, status, resources (including DFG funding), and salaries. All of this becomes problematic if research findings are communicated imprecisely, taken out of context, or applied to issues they don’t really speak to – as is currently happening to some corona-related research.

No silver bullet

The second condition that must be met concerns focusing on the body of studies generated by a field. We should not expect a single study to deliver the one causal ‘silver bullet’. It may be more productive to think of causal statements as being made by a field, i.e., a wide range of diverse papers that bring different methods and data to bear – slowly and steadily building evidence that ends up suggesting a plausible causal effect. Let’s not forget: the research linking smoking to lung cancer built up over decades, with no single causal study settling the issue once and for all.

As such, it is important for standard setters to base their decisions on the information delivered by an entire field. This means that they need to keep up with the body of literature available, and we researchers should be clear about the implications of our studies, bridging that “expectations gap”.   

From research to standard setting

So, how can we as researchers support standard setters in becoming (even) more evidence-informed?

First, we need to understand the questions that standard setters need evidence on, and the outcome measures they are interested in. For you as standard setters: you could clearly specify the outcomes of interest. What exactly would you like to see researched? Are you more interested in the value relevance or in the degree of conservatism of accounting amounts? And how do you propose to measure these outcomes?

Second, we should be tuned into the timing of standard setters’ deliberations. Standard setters are not inclined to wait for years until a relevant study gets published in a top journal. Again, the corona crisis shows: research projects, publication processes and regulatory approval procedures can be sped up, if needed! Accounting standard setters, too, want the evidence when they need it.

Third, we need to communicate carefully, i.e., avoid suggesting causal effects when all we have to offer is statistical associations. Of course, a theoretically plausible and highly significant statistical association is more suggestive of an actual causal effect than some chance pattern observed in a big data mining exercise. But still, we should not mix up correlation and causation, especially when communicating with standard setters.

This brings me to my last point: our research approaches need to broaden further. The goodwill accounting field, for one, would benefit from the insights of more qualitative studies, including surveys and small-sample case studies. These could shed light on the motives, the actual behavior, and the interactions of the people involved, hopefully leading towards a deeper understanding of goodwill accounting decision-making in practice.

To bring these ideas together: Perhaps standard setters should take greater advantage of the possibility of commissioning research. Why not get together with a team of domain experts from academia and practice to “custom-build” a study to your specifications? You can then make sure to specifically ask for validity assessments, allowing you to draw clear implications from the results. You may also ask researchers to specifically consider the “real effects” of accounting treatments in an economic impact assessment.

And finally, you can help us researchers with our data collection. Where possible, make relevant contacts and field data available: for example, by making yourselves available for (research) interviews, by participating in and forwarding our surveys, and by attending our events aimed at knowledge exchange.

This blog post is loosely based on a presentation Sellhorn gave at the European Financial Reporting Advisory Group (see here).

