Uncovering where the econometric skeletons are buried
By Lars Syll
A rigorous application of econometric methods in economics presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. Parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption, however, one has to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
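The point can be made concrete with a small simulation sketch (illustrative only; the numbers and the data-generating process are made up). A slope is estimated in one ‘context’ and then exported to another in which the underlying parameter has shifted:

```python
# A minimal sketch of the 'exported parameter' problem: the slope is estimated
# in one regime and applied in another where the true causal parameter has
# shifted. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=500)
beta_old, beta_new = 2.0, 0.5          # assumed structural break in the 'true' parameter
y_old = beta_old * x + rng.normal(scale=0.5, size=500)
y_new = beta_new * x + rng.normal(scale=0.5, size=500)

# OLS slope estimated in the old context (no intercept, for simplicity)
beta_hat = (x @ y_old) / (x @ x)

# The exported parameter predicts poorly in the new context
mse_in  = np.mean((y_old - beta_hat * x) ** 2)
mse_out = np.mean((y_new - beta_hat * x) ** 2)
print(f"estimated beta: {beta_hat:.2f}, "
      f"in-context MSE: {mse_in:.2f}, exported-context MSE: {mse_out:.2f}")
```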
Invariance assumptions need to be made in order to draw causal conclusions from non-experimental data: parameters are invariant to interventions, and so are errors or their distributions. Exogeneity is another concern. In a real example, as opposed to a hypothetical, real questions would have to be asked about these assumptions. Why are the equations ‘structural,’ in the sense that the required invariance assumptions hold true? Applied papers seldom address such assumptions, or the narrower statistical assumptions: for instance, why are errors IID?
The tension here is worth considering. We want to use regression to draw causal inferences from non-experimental data. To do that, we need to know that certain parameters and certain distributions would remain invariant if we were to intervene. Invariance can seldom be demonstrated experimentally. If it could, we probably wouldn’t be discussing invariance assumptions. What then is the source of the knowledge?
‘Economic theory’ seems like a natural answer, but an incomplete one. Theory has to be anchored in reality. Sooner or later, invariance needs empirical demonstration, which is easier said than done.
David Freedman: Statistical Models – Theory and Practice (CUP 2009:187)
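Freedman’s closing question (why are errors IID?) can at least be probed empirically, even if it can never be settled. A minimal sketch, assuming a made-up data-generating process whose errors are serially dependent:

```python
# A minimal sketch of probing the IID-errors assumption: fit OLS, then check
# the lag-1 autocorrelation of the residuals. Data and model are assumed
# purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)

# Errors that are NOT independent (an AR(1) process), violating IID
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + rng.normal(scale=0.5)
y = 1.0 + 2.0 * x + e

X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

# Near-zero autocorrelation would be consistent with IID; here it is large
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(f"lag-1 residual autocorrelation: {r1:.2f}")  # large -> IID suspect
```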
Since econometrics aspires to explain things in terms of causes and effects, it needs loads of assumptions. Invariance is not the only limiting assumption that has to be made. Equally important are the ‘atomistic’ assumptions of additivity and linearity.
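Another small sketch (again with an assumed, purely illustrative data-generating process) shows what is at stake: when the true mechanism is non-additive, a linear additive specification may capture essentially nothing of it:

```python
# A minimal sketch of the additivity/linearity point: the true relation here
# is multiplicative (an interaction), but the fitted model is linear and
# additive. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = x1 * x2 + rng.normal(scale=0.1, size=n)   # non-additive 'true' mechanism

X = np.column_stack([np.ones(n), x1, x2])     # additive, linear specification
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

# The additive model explains essentially none of the systematic variation
r2 = 1 - resid.var() / y.var()
print(f"coefficients: {np.round(beta_hat, 2)}, R^2: {r2:.2f}")  # R^2 near zero
```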
Limiting model assumptions in economic science always have to be closely examined. If we are to show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that they do not change when we ‘export’ them to our target systems, we have to show that they hold not only under ceteris paribus conditions. If they do not, they are of limited value for explaining and predicting real economic systems.
Unfortunately, real-world social systems are usually not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms to be invariant, atomistic and additive. But when causal mechanisms operate in the real world, they mostly do so in ever-changing and unstable ways. If economic regularities obtain, they do so as a rule only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent.
Another prominent trouble with econometrics is the way the so-called error term is interpreted. Mostly it is seen as representing the effect of the variables that were omitted from the model. The error term is somehow thought to be a ‘cover-all’ term representing omitted content in the model, necessary to include in order to ‘save’ the assumed deterministic relation between the other random variables in the model. Error terms are usually assumed to be orthogonal (uncorrelated) to the explanatory variables. But since they are unobservable, this assumption is also impossible to test empirically. And without justification of the orthogonality assumption, there is as a rule nothing to ensure identifiability:
With enough math, an author can be confident that most readers will never figure out where a FWUTV (facts with unknown truth value) is buried. A discussant or referee cannot say that an identification assumption is not credible if they cannot figure out what it is and are too embarrassed to ask.
Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value of the level of aether? If the author can make up an imaginary variable, “because I say so” seems like a pretty convincing answer to any question about its properties.
Paul Romer: The Trouble With Macroeconomics
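The orthogonality problem can be illustrated with another toy simulation (the data-generating process is an assumption made for the example, not anyone’s estimate). An omitted variable buried in the error term biases the estimate, while the fitted residuals, which are orthogonal to the regressor by construction, flag nothing:

```python
# A minimal sketch of the orthogonality problem: an omitted variable z is
# swept into the error term, but it is correlated with the regressor x, so
# the 'error' is not orthogonal and the OLS slope is biased. Illustrative
# numbers only.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
z = rng.normal(size=n)                            # omitted cause
x = 0.9 * z + rng.normal(scale=0.5, size=n)       # regressor correlated with z
y = 1.0 * x + 2.0 * z + rng.normal(scale=0.5, size=n)   # true effect of x is 1.0

X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"OLS estimate of the effect of x: {beta_hat[1]:.2f}  (true value: 1.0)")

# Nothing in the data itself flags the bias: the fitted residuals are
# orthogonal to x by construction of OLS, so the crucial structural
# assumption cannot be tested from within the model.
```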
The theoretical conditions that have to be fulfilled for regression analysis and econometrics to really work are nowhere even close to being met in reality. Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science and economics. Although regression analysis and econometrics have become the most used quantitative methods in the social sciences and economics today, the fact remains that the inferences made from them are usually of questionable validity.
From: p.2 of World Economics Association Newsletter 6(6), December 2016
https://www.worldeconomicsassociation.org/files/Issue6-6.pdf
This is an issue well known to econometricians. Economics is a social science whose subject matter is constantly changing; because of that, there are always new ways to model economic reality, not only in econometrics but also in theory. All of that makes economics so exciting!
Economics has interested me since childhood, when I read about perfect markets in my brother’s accounting textbook and told my classmate and friend Clive Granger (the Economics Nobel Prize winner) that I thought it neat mathematically but too good to be true. I tell myself that may have been Clive’s initiation into his career as an economist.
Your thoughts on invariants interested me. I ran a research programme to study organisations as information systems. It began while I was working in the steel industry and flourished at the London School of Economics when I joined a group there setting up teaching and research in Information Systems Analysis and Design (ISAD). I was then finishing a book on the nature of information in organisations, which framed the research programme. Of course, I asked my newfound colleagues in the Economics Department about their views on information. Unanimously, they asserted that price was the only information you needed and END OF CONVERSATION. Horrified, I have never ceased wondering when economics would wake up. I detect some stirrings.
Our research adopted the Refutationist methodology of Sir Karl Popper, then one of my new and famous colleagues. That was wise. Our results are well-founded scientifically and they include some invariants, which are not all that easy to find in the social sciences. The invariants are the cause of some huge successes in applications to business problems.
I’m hoping to find, one day, in the field of economics, some interest in the modelling of systems of social norms. We do that using a strict formalism that can be used to generate a computer-based model for simulation, for exploring an institution’s structures or, in business, for an ICT application to support the activity specified by the social norms we’ve modelled. I have a feeling that our tools can be used to model various economic regimes.
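Purely as an illustration of what machine-executable social norms can look like (a toy sketch in the ‘whenever/if/then/deontic’ style sometimes used in organisational semiotics, not the formalism referred to above; all names and rules here are hypothetical):

```python
# A toy sketch of a behavioural norm: when a triggering condition holds in the
# current state, an agent is obliged/permitted/prohibited to perform an action.
# Names and rules are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    condition: Callable[[dict], bool]   # when the norm is triggered
    agent: str
    deontic: str                        # 'obliged' | 'permitted' | 'prohibited'
    action: str

norms = [
    Norm(lambda s: s["invoice_received"] and not s["invoice_paid"],
         agent="accounts", deontic="obliged",
         action="pay the invoice within 30 days"),
]

state = {"invoice_received": True, "invoice_paid": False}
for n in norms:
    if n.condition(state):
        print(f"{n.agent} is {n.deontic} to {n.action}")
```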
We have had huge practical success with the invariant semantic schemas that form the foundations for all the other norms being modelled: large productivity increases and improvements in an organisation’s adaptability to evolving circumstances. That Semantic Normal Form is not the only invariant.
We have found an Organisational Kernel, a minimal structure that must not change when you reorganise how the work is done (your bureaucracy) unless you wish to change the business itself. I know you economists look at much larger structures, but I think the institutional ‘logic’ is essentially the same.
There are also invariants about processes of innovation that may be of use to your profession. Experience has taught us that concrete problems are the best sources of inspiration for advancing theory; we made frequent use of legal norms from legislation to contracts; particular organisations of a dazzling diversity were important; now I’d like to see what we might do with non-corporate institutions.
Thank you for an interesting paper. Ronald Stamper
(Ex-LSE; retired professor of Information Management, University of Twente, NL; still documenting the theoretical details of that research programme.)
My colleagues seem to have dubbed me founder of the new discipline of Organisational Semiotics, which held its 17th annual conference, an IFIP event, in Brazil last year.
Finally, an acknowledgment of the obvious: the economics profession needs to focus on proving the underlying assumptions governing its subject before moving into the deep waters of econometrics.