Game Theory — A Severe Case of ‘as if’ Model Platonism
By Lars Syll
The critic may respond that the game theorist’s victory in the debate is at best Pyrrhic, since it is bought at the cost of reducing the propositions of game theory to the status of ‘mere’ tautologies. But such an accusation disturbs the game theorist not in the least. There is nothing a game theorist would like better than for his propositions to be entitled to the status of tautologies, just like proper mathematical theorems.
Ken Binmore
When applying deductivist thinking to economics, game theorists like Ken Binmore set up ‘as if’ models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is, of course, that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant to the real world, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often do not. When addressing real-world systems, the idealizations and abstractions necessary for the deductivist machinery to work simply do not hold.
If the real world is fuzzy, vague and indeterminate, then why should our models be built on a desire to describe it as precise and predictable? The logic of idealization is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap.
Seen from a deductive-nomological perspective, typical economic models (M) usually consist of a theory (T) — a set of more or less general (typically universal) law-like hypotheses (H) — and a set of (typically spatio-temporal) auxiliary assumptions (A). The auxiliary assumptions give ‘boundary’ descriptions such that it is possible to deduce logically (meeting the standard of validity) a conclusion (explanandum) from the premises T & A. Using this kind of model, game theorists are (portrayed as) trying to explain (predict) facts by subsuming them under T, given A.
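Schematically (a minimal restatement of the structure just described, writing E for the explanandum; the notation is the editor’s, not Syll’s):

```latex
% Deductive-nomological schema: the model M combines theory T
% (law-like hypotheses H_1,...,H_n) with auxiliary assumptions A,
% and the explanandum E is deduced from the conjunction.
\[
  \underbrace{H_1 \wedge H_2 \wedge \dots \wedge H_n}_{T}
  \;\wedge\; A \;\vdash\; E
\]
```

The deduction is valid by construction; whether it explains anything depends entirely on whether T and A are true of the situation at hand.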
“Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …
A further possibility for immunizing theories consists in simply leaving open the area of application of the constructed model so that it is impossible to refute it with counter examples. This of course is usually done without a complete knowledge of the fatal consequences of such methodological strategies for the usefulness of the theoretical conception in question, but with the view that this is a characteristic of especially highly developed economic procedures: the thinking in models, which, however, among those theoreticians who cultivate neoclassical thought, in essence amounts to a new form of Platonism.”
Hans Albert
An obvious problem with the formal-logical requirements of what counts as H is the often severely restricted reach of the ‘law.’ In the worst case, it may not be applicable to any real, empirical, relevant situation at all. And if A is not true, then M does not really explain (although it may predict) at all. Deductive arguments should be sound – valid and with true premises – so that we are assured of having true conclusions. Constructing game-theoretical models that assume ‘common knowledge’ and ‘rational expectations’ says nothing about situations where knowledge is ‘non-common’ and expectations are ‘non-rational.’
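As a minimal illustration (a hypothetical Python sketch with made-up payoffs, not any particular theorist’s model): in a one-shot prisoner’s dilemma, the rationality-plus-common-knowledge assumptions deliver an exact prediction, but only for players who actually satisfy those assumptions.

```python
import random

# Hypothetical one-shot prisoner's dilemma payoffs for the row player:
# mutual cooperation 3, mutual defection 1, lone defector 5, lone cooperator 0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def rational(rng):
    """Under rationality and common knowledge, defection strictly dominates."""
    return "D"

def coin_flipper(rng):
    """A player outside the model's assumptions: chooses at random."""
    return "C" if rng.random() < 0.5 else "D"

def average_payoff(strategy_a, strategy_b, rounds=10_000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(rounds):
        total += PAYOFF[(strategy_a(rng), strategy_b(rng))]
    return total / rounds

print(average_payoff(rational, rational))          # 1.0: T & A predict mutual defection
print(average_payoff(coin_flipper, coin_flipper))  # ~2.25: outside T & A, no prediction
```

With rational players the deductive prediction is exact; with coin-flippers the very same theorem, being tautologically true within T & A, licenses no prediction at all.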
Building theories and models that are ‘true’ in their own very limited ‘idealized’ domain is of limited value if we cannot supply bridges to the real world. ‘Laws’ that only apply in specific ‘idealized’ circumstances — in ‘nomological machines’ — are not the stuff of which real science is made.
When confronted with the massive empirical refutations of almost all models they have set up, many game theorists react by saying that these refutations only hit A (the Lakatosian ‘protective belt’), and that by ‘successive approximations’ it is possible to make the models more readily testable and predictively accurate. Even if T & A1 do not have much empirical content, if by successive approximation we reach, say, T & A25, we are to believe that we can finally reach robust and true predictions and explanations.
Hans Albert’s ‘Model Platonism’ critique shows that there is a strong tendency for modellers to use the method of successive approximations as a kind of ‘immunization,’ taking for granted that there can never be any faults with the theory; explanatory and predictive failures hinge solely on the auxiliary assumptions. That the kind of theories and models used by game theorists should all be held non-defeasibly corroborated seems, however — to say the least — rather unwarranted.
Retreating — as Ken Binmore and other game theorists do — into looking upon their models and theories as some kind of ‘conceptual exploration,’ and giving up any hope whatsoever of relating theories and models to the real world, is pure defeatism. Instead of trying to bridge the gap between models and the world, they simply decide to look the other way.
To me, this kind of scientific defeatism is equivalent to surrendering our search for understanding and explaining the world we live in. It cannot be enough to prove or deduce things in a model world. If theories and models do not directly or indirectly tell us anything about the world we live in – then why should we waste any of our precious time on them?
[Originally published here on the RWER Blog]
From: pp. 2-3 of WEA Commentaries 8(2), April 2018
https://www.worldeconomicsassociation.org/files/Issue8-2.pdf
Well done. The limits of philosophy ‘As If’ and promoting fictions were developed by Hans Vaihinger (1910, 1922) and influenced both Freud and Jung in their theories of displacement and denial. G. E. Moore’s Platonism also influenced Russell and the early Wittgenstein in their logical atomist presumption that algebra reflecting the least reducible element of meaning in language could reflect realities in ‘truth functions’, which was then repudiated by Wittgenstein after the encounter in which Sraffa asked Wittgenstein for the meaning of a Neapolitan gesture which had no single meaning. Logical atomism was paralleled, with similar errors, in the perfect competition assumptions of ‘atomistic competition.’ Samuelson then repeated the early Wittgenstein error by claiming that language and mathematics were identical and could represent ‘true analysis’, while thereby stripping psychology – and uncertainty – from Keynes’ key concepts.
Stuart Holland
Wow! Thanks Stuart. That’s meaty commentary that cuts to the marrow. It also gives me a new angle of view on the schizoid derangement of metamaths, long lost in the limbo of deliberately uncertain, post-Cartesian+Newtonian-post-Einsteinian-post-Heisenbergian agnosticism. I will fine-tune my forthcoming paper (on RH and Metamathematics: Theory, metatheory and proofs) accordingly. I would also be very grateful for your impression and critique of my new theory of post-economic bio-ethical ecometrics. If you like, you can find a rather dated draft at the “Awareness and Value” page of The Greenbook blogsite > mm-greenbook. It’s a Blogspot site-in-progress. You can also get to it via a link on the “unMoney” page of my WordPress blogsite-in-progress, entitled EcotectureNOW. Any contribution of content or assistance will be acknowledged in the final publication, initially at Archivx.
This is one of the most encouraging articles I’ve seen in years. However, it reminds me that the source of related pain and sorrow is implicit. Who cares? Who in the mainstream of academic economics or professional plutonomics would want to upset their apple cart and be elbowed off the gravy train? RWER may as well have remained the Post-Autistic Economics Review for all the good it has or has not done. And who knows? How do we quantify the ongoing outcome of that heady, promising, passionate movement of Young Turks (of France, etc.)? One way that may work is using my update of the Green Rating Form devised by the late Malcolm Wells, one of the first pioneers of ecotecture and sane planning. You can find it on the homepage of my EcotectureNOW blogsite. It’s a WordPress site. Another possibility is using my ecometric equations and formulas for translating real-world qualitative realities into quantitative values and effects. You can find that content at The Greenbook (mm-greenbook) Blogspot site, on the Awareness and Value page. If you do, bear in mind it needs updating and revision. It’s a work in progress. So, your comments, suggestions and critiques are very welcome. Thanks
I am an engineer and not a trained economist. Econometrics should serve as a tool for planning and predicting the performance of national economies. Economists of the USSR and other socialist countries, as I understand, were fairly successful in doing this, despite the frequent reports about their massive failures. Debates on econometrics, and evaluation of the tools and methodologies used, should, in my view, be based on this concrete experience in the planning and evaluation of socialist economies. And, as I remember, pieces published in the good old STP were quite relevant to the subject under debate. Why is WEA silent on this?