Addressing the malaise in neoclassical economics: a call for partial models
Abstract
Economics is currently experiencing a climate of uncertainty regarding the soundness of its theoretical framework and even its status as a science. Much of the criticism comes from within the discipline, and emphasizes the alleged failure of the neoclassical viewpoint. This article proposes the deployment of partial modeling, utilizing Boolean networks (BNs), as an inductive discovery procedure for the development of economic theory. The method is presented in detail and then linked to the Semantic View of Theories (SVT), closely identified with Bas van Fraassen and Patrick Suppes, in which models are construed as mediators that creatively negotiate between theory and reality. It is suggested that this approach may be appropriate for economics and, by implication, for any science in which there is no consensus theory and a wide range of viewpoints compete for acceptance.
Ron, if you are still interested in this topic, look at what I found about macroeconomics, by the author Sharov:
http://jedsnet.com/vol-6-no-3-september-2018-current-issue-jeds
I appreciate Herbert’s suggestion that I take a look at Alexander Shavrov’s article in the Journal of Economics and Development Studies. The article emphasizes a systems-theory approach, but is confusingly written, and could have benefited from a strong editorial hand. On the positive side, it is possible that Shavrov’s model, emphasizing labor, activities, and resources, may indeed be valuable. This example thus serves to reinforce my view that a computational approach which synthesizes competing models is much needed in economics during this time of scientific uncertainty.
I’m interested in this methodology to compare models/theories, but I’d like more detail on how it would work on an economic example, and particularly in the field of macroeconomics rather than finance. For instance, could it evaluate the Smets and Wouters 2007 DSGE model of the US economy (Smets, F. and R. Wouters (2007). “Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach.” American Economic Review 97(3): 586-606) against my stylized model of Minsky’s Financial Instability Hypothesis? I can’t post either model here, but the former is available at http://dept.ku.edu/~empirics/Emp-Coffee/smets-wauters_aer07.pdf while the latter is detailed on slides 37-40 of https://www.patreon.com/file?h=24555362&i=3266457.
The issues I’d like more information on are (1) whether the methodology works with aggregative models rather than agent-interaction ones; and (2) how it handles comparing two models where one uses discrete time (S&W here) and the other is in continuous time (my model).
In terms of the impact of this comparison methodology itself, it would never shake the Neoclassical school, since they’re driven by internal consistency far more than they are by empirical relevance (even after 2008), but it might help to validate non-Neoclassical approaches to the growing cohort of new students who are critical of the mainstream.
I would like to thank Steve Keen for his queries regarding partial modeling, and his comment regarding Neoclassical Economics.
Regarding the two queries, I should probably first emphasize that my approach to economic modeling has been strongly influenced by systems theory in general, and by molecular biology in particular. It would be fair to describe my approach as bio-inspired. Viewed from that standpoint, the persisting problem of aggregation in macroeconomics strikes me as similar to the problem of trial models in Boolean network methodology (the method described in the article). When drafting a tentative BN model (or “wiring diagram”), it is common practice to combine components (nodes) based on available information or, sometimes, frankly, on guesswork. The model is then run and tested against empirical data to assess its predictive power. The combined nodes (“aggregates”) may prove empirically justified, or it may be necessary to decompose them into smaller (i.e., less encompassing) nodes. The important question that Professor Keen has posed thus translates, in my view, into a creative aspect of modeling strategy. On the issue of discrete versus continuous models, an issue which is currently being vigorously debated in computational molecular biology, it is essential to concede at the outset that BNs are composed of discrete-time, discrete-state components; thus, in their basic form they cannot represent continuous processes in the manner of ordinary differential equations (ODEs). However, the type of study that Professor Keen suggests should be possible through the use of hybrid models, for which toolkits are available that convert selected BN nodes into ODEs.
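To make the trial-model workflow concrete, here is a minimal sketch in Python. The three nodes and their update rules are hypothetical, invented purely for illustration; a real wiring diagram would be drawn from the competing economic models under study.

```python
# Minimal sketch of the trial-model ("wiring diagram") step: encode the
# Boolean update rules, run the network from every initial state, and
# read off the attractors, which would then be compared with data.
from itertools import product

def update(state):
    # Hypothetical nodes: credit availability (c), investment (i), output (y).
    c, i, y = state
    return (
        y,             # credit loosens when output is high
        c and not y,   # investment responds to credit, damped at high output
        c or i,        # output is driven by credit or investment
    )

def attractor(state):
    """Iterate the synchronous, deterministic BN until a state repeats;
    the repeating cycle is the attractor (a steady state or oscillation)."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = update(state)
    return tuple(seen[seen.index(state):])

for init in product((False, True), repeat=3):
    print(init, "->", attractor(init))
```

In a partial-modeling exercise it is the attractors, rather than individual trajectories, that are matched against stylized facts; aggregated nodes that yield empirically implausible attractors become candidates for decomposition.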
Addressing Keen’s final comment, I am of course uncertain, as is everyone else, regarding the fate of Neoclassical Economics. Perhaps Keen is right: proponents of the mainstream theory will remain impervious to critique. But I remain hopeful that the current heterodox turmoil will give rise to model-based strategies, and to a new body of theory more relevant to economic life.
Your paper presents various interesting insights. However, as also noted by Steve Keen, more attention should be paid to a number of theoretical issues which have been at the centre of the debate in recent decades.
I will now briefly offer some remarks (we have addressed these aspects in more detail at the WEA online conference, https://the2008crisistenyearson.weaconferences.net/ ).
The typical assumptions of the new-Keynesian theories (in reality, neoclassical theories with rigidities) can be summarized as follows: (i) if prices and wages were perfectly flexible, no unemployment would exist; (ii) public spending can increase GDP in the short run, but in the long run it will crowd out private investment, reduce GDP and increase prices.
Such unrealistic ideas are more or less explicitly related to the general equilibrium model (GEM) put forward by Léon Walras. However, what mainstream economists forget is that Walras, in his Studies in Social Economics, expounded the notion that the GEM can apply only in very simple markets, like that of bricolage (and, in my view, the GEM and the related neoclassical approach are inadequate even in these instances). In all other cases, including monopoly, oligopoly and public ownership of the land (which he strongly advocated), active public intervention was required. I have addressed these aspects in “The Studies in Social Economics of Léon Walras and His Far-Reaching Critique of Laissez Faire”, International Journal of Pluralism and Economics Education, vol. 7(1): 59-76, 2016.
Now I will add some further comments on points (i) and (ii).
As for (i), the reason why prices are not flexible is simply that, normally, it is more profitable for firms to apply sticky prices. One central reason for this strategy lies in the phenomenon of satiation for a large category of products, which, by lowering the price elasticity of demand, makes it unprofitable for firms to cut prices.
For instance, if, for a given period, we need one car, five pullovers and ten kilos of oranges, it is rather unlikely that we will buy double those amounts if prices halve. For the same reason, it is even more unlikely that a price decrease of, say, 90% would induce us to buy nine times more than the initial quantity.
The same reasoning can be applied to wages. Fully flexible wages (especially downward), by further weakening the contractual power of workers, would not spur firms to hire more workers. Rather, such a course would induce firms to reduce wages further and to prolong working hours.
More generally, if prices and wages were wildly flexible, no reliable social life would be possible.
Another central and related issue is whether it is fair or expedient that wages should be equal to the marginal revenue product of labour (MRPL). This matters because, for a given amount of fixed capital, the MRPL will in general decrease beyond a certain point on the “production function” (however defined).
The result is that, with every additional worker, the gap between the MRPL and the average revenue product of labour (ARPL) widens. Orthodox economists would say that the MRPL criterion should nonetheless be applied to all workers, and that, if workers do not accept these conditions, they prefer voluntary unemployment.
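A small worked example makes the widening gap concrete; the quadratic production function below is purely illustrative (it is not drawn from the article or from any model discussed here), with the output price normalized to one:

```latex
% Illustrative quadratic production function (hypothetical), price p = 1.
\[
  Q(L) = 10L - \tfrac{1}{2}L^{2}, \qquad
  \mathrm{MRPL}(L) = \frac{dQ}{dL} = 10 - L, \qquad
  \mathrm{ARPL}(L) = \frac{Q(L)}{L} = 10 - \tfrac{1}{2}L,
\]
\[
  \text{so that} \qquad \mathrm{ARPL}(L) - \mathrm{MRPL}(L) = \tfrac{1}{2}L .
\]
```

Each additional worker thus widens the gap by half a unit: remunerating every worker at the MRPL of the last hire transfers a growing share of the average product to the firm.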
However, it is also true that the choice of moving along a decreasing path of marginal product is usually taken by the firm’s management, without any involvement of the workers. For instance, if a firm chooses to hire an additional worker at an MRPL of 1 penny per day (perhaps as a strategy for lowering all wages), is it fair to reduce the wages of all the workers accordingly?
In this regard, I think it would be much fairer that workers be remunerated according to the ARPL.
In this sense, while it is true that wages (and profits) cannot be considered independent variables, it is also true that there are no necessary and efficient laws of wage determination. This is all the more so when, under the appearance of a “perfect market”, firms have much stronger contractual power than the individual worker.
Hence, all these matters find their settlement in the institutional, cultural, political and psychological dimensions of socio-economic relations.
Hypothesis (ii) is also very unrealistic. In particular, the orthodox tenet of the crowding-out effect does not hold true in real economies. In fact, a reduction of public spending (even if deemed useful) is rarely replaced by a corresponding increase of private spending. This is one reason why the ratio of public spending to GDP has increased among the OECD countries over the long run: from 10-20% in the 1910s-1920s, to 20-30% in the 1950s-1960s, to 40-50% in recent decades.
The same applies to credit creation as a central means of financing consumption and investment, and thus of creating effective demand. This takes place especially when, as is very often the case, the process of debt repayment is slow and imperfect. Here too, the ratio of private debt to GDP has sharply increased in the long run, now ranging across the various OECD countries from 150% to 400% of GDP (with the highest values occurring in the most developed countries).
Another area on which heterodox economics can cast a better light is the inflation-targeting policies of the early 1980s, which led to a sharp increase in real interest rates.
Now, while it is highly doubtful that these policies were effective and relevant for curbing inflation, what more realistically took place was that the sharp increase in real interest rates depressed real economies and fuelled the widespread financialisation of the system. On that account, high real interest rates have a negative effect on GDP, because the cost of borrowing for firms increases and the Keynesian marginal efficiency of capital (the difference between the expected rate of profit and the real interest rate) shrinks. This comes about also because high real interest rates are often accompanied by “credit rationing”, a phenomenon that, by favouring stronger companies and putting small and medium-sized firms at a disadvantage, has pushed the concentration of economic activities into a few dominant groups. And, last but not least, high real interest rates have contributed to disseminating the “psychology of financial rent” across all social classes.
For these reasons, then, lowering real interest rates would be beneficial for promoting investment. To this can be added the idea, put forward by Keynes in the final chapter of the General Theory, that a diminution of the real interest rate (which he associated with the “euthanasia of the rentier”) would also lessen the rate of profit. Hence a parallel reduction of inequalities would ensue. This kind of approach shifts attention, in the analysis of investment incentives, from the all-important role attributed to profits to ways of enlivening “animal spirits”, namely a tendency to undertake initiatives not strictly related to the “economic motive”. Hence, in order to foster positive behaviour in the economic and social spheres, measures aimed at improving motivation and participation will play a relevant role.
These measures, besides their intrinsic worth, would foster a virtuous circle in the wider social context. As a matter of fact, better participation, by improving the process of social valuing (namely, a better understanding of the motivations behind the various policy options), would also help overcome its most conflictual aspects.
I thank the contributor for the commentary regarding foundational concepts of neoclassical economics.
This commentary is surely valuable, but I am afraid that it misses the main point. As I sought to make clear in the introduction to the article, my objective was not a critique of neoclassical economics, a task which has been ably performed by David Colander and others. Rather, I hoped – and still hope – to persuade a new generation of economists that a model-based strategy, along the lines philosophically argued by Bas van Fraassen, and computationally applied through Boolean networks and cellular automata, is appropriate for economics during this period of theoretical crisis. My objective was thus methodological. As specifically noted in the article, I was not offering up one more heterodox critique, nor defending the mainstream theory. Indeed, if I may briefly venture into heresy, I do not find it at all inconceivable that neoclassical constructs within some restricted economic domain could be deployed as input models alongside heterodox concepts in a partial-modeling strategy. The result might well be a novel, synthetic body of theory which neither traditional economists nor their most vigorous heterodox critics can presently anticipate.
Dear Ron, I understand your point, “Rather, I hoped – and still hope – to persuade a new generation of economists that a model-based strategy, along the lines philosophically argued by Bas van Fraassen, and computationally applied through Boolean networks and cellular automata, is appropriate for economics during this period of theoretical crisis. My objective was thus methodological.”
However, I still believe that such an approach can be appropriate not as a way to bypass controversies in economics but as a means to cast a better light on the most debated aspects.
For instance, when you say that “Volatility does not result, therefore, from irrationality and swarming in the HFT micro-world, but is primarily due to the extraneous over-corrections of individual investors to dramatic economic events (e.g., the subprime mortgage crisis)”, are you sure that this is the only sensible explanation of financial crises? And that other powerful economic imbalances, widely analysed in the economic literature, have not played (and do not still play) a central role in fuelling the massive financialisation of economic systems in recent decades?
The same applies to your idea that “neoclassical constructs within some restricted economic domain could be deployed as input models alongside heterodox concepts in a partial-modeling strategy. The result might well be a novel, synthetic body of theory which neither traditional economists nor their most vigorous heterodox critics can presently anticipate”.
In order to do all this, your “model-based strategy, along the lines philosophically argued by Bas van Fraassen, and computationally applied through Boolean networks and cellular automata” can certainly be useful, but needs to be more focused on the economic debate surrounding these issues.
The article by Ron Wallace “proposes the deployment of partial modeling, utilizing Boolean networks (BNs), as an inductive discovery procedure for the development of economic theory.” The central argument in favour of partial models is well made; while I agree with this aspect of the paper, and with the conclusion that models should serve as “cognitive instruments in a regime of exploration,” I have a number of concerns about the proposed strategy and the example of BNs.
The paper states that a theory “should be tested for its ability to predict an actual economy,” and notes that BNs have been applied to areas including systems biology, “frequently yielding results with high predictive power.” The implication is that a technique which is predictive in systems biology may also be useful for predicting the economy. However (speaking as someone who works in the area), in systems biology the word “predict” tends to be used rather loosely. It often just means that a result which is already known and/or unsurprising can be reproduced, which is not the same as the usual meaning (e.g. predicting a financial crisis). A typical usage, for example, is the title of the paper “Boolean Network Model Predicts Cell Cycle Sequence of Fission Yeast”. And what researchers call “testing against empirical data to assess predictive power” (as in the response to Keen’s comment) usually means calibration, unless it is done in a blind-tested fashion, which is extremely rare. This is an important distinction for this paper, because a sufficiently complicated model is very flexible and can be made to match known data, but may be poor at making non-trivial predictions (examples are given below). The article notes that “BNs are remarkably flexible”, which is not necessarily a good thing.
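A toy illustration of the calibration/prediction distinction (my own sketch, using made-up data, not anything from the article): a flexible model can be fitted closely to known data yet still fail out of sample, which only a blind-test-style split reveals.

```python
# Fit a parsimonious and a highly flexible polynomial to noisy data,
# then compare in-sample ("calibration") error with error on a
# held-out segment that the fit never saw.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, x.size)  # synthetic "empirical" data

train, test = slice(0, 30), slice(30, 40)  # hold out the final quarter

for degree in (3, 12):  # simple model vs flexible model
    coeffs = np.polyfit(x[train], y[train], degree)
    fit_rmse = np.sqrt(np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2))
    pred_rmse = np.sqrt(np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2))
    print(f"degree {degree:2d}: calibration RMSE {fit_rmse:.2f}, "
          f"held-out RMSE {pred_rmse:.2f}")
```

The flexible fit typically wins on calibration error and loses, often dramatically, on the held-out segment.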
The article describes a method whereby BNs can be combined to form a larger model; however, this relies on “simplification of the partial models to avoid an intractable result when they are combined”, such as “excluding node values that will produce multiple steady states. In addition, feedback loops are excluded because they can frequently yield oscillations.” An advantage of the strategy is “the ability of BNs to include system components (e.g. cultural or religious variables) for which quantitative data are minimal or lacking, without significant loss of predictive power.” Furthermore, “it is possible to convert a BN into a continuous dynamical system configured as ordinary differential equations (ODEs).”
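For readers unfamiliar with that last step, the sketch below shows one standard way of doing it: multilinear (“BooleCube”) interpolation of the Boolean rules, as used by toolkits such as Odefy. The two-node mutual-repression network is my own hypothetical example, not one from the article.

```python
# Convert a Boolean network to ODEs by multilinear interpolation of each
# Boolean rule over the unit cube; each continuous node then relaxes
# toward its interpolated target. Integrated here with forward Euler.
from itertools import product
import numpy as np

# Hypothetical two-node network: each node switches on when the other is off.
rules = [lambda s: not s[1],
         lambda s: not s[0]]

def multilinear(rule, x):
    """Multilinear ('BooleCube') interpolation of a Boolean rule on [0,1]^n."""
    total = 0.0
    for corner in product((0, 1), repeat=len(x)):
        weight = np.prod([xj if cj else 1.0 - xj for cj, xj in zip(corner, x)])
        total += rule(corner) * weight
    return total

def derivatives(x, tau=1.0):
    return np.array([(multilinear(rules[i], x) - x[i]) / tau
                     for i in range(len(x))])

x = np.array([0.9, 0.2])   # continuous initial condition
dt = 0.01
for _ in range(2000):
    x = x + dt * derivatives(x)
print("continuous steady state:", np.round(x, 3))
```

Odefy’s HillCube variant replaces the linear ramps with sigmoidal Hill functions to restore switch-like behaviour; either way, the continuous version inherits the structure, and the structural omissions, of the underlying BN.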
The idea is therefore to patch together simple models to create a larger and more sophisticated model, while pruning features which create problems, and adding new nodes for missing information. This seems a sensible thing to try; however, if models are viewed as patches, there is no reason to think that combining the patches will give a better result, or that simply adding a node to a network is enough to account for missing information or dynamics.
To give a few examples from different contexts: in a model of gene regulation in yeast (Ramsey et al., 2006), it turned out that the main (experimentally verified) prediction concerned stochastic effects that were invisible to any ODE model, no matter how many equations or parameters were added. Predictive (and blind-tested) models used by drug companies to optimise cancer treatments rely on the careful modelling of dynamical cell-population effects of the sort that cannot be captured by either BNs or ODEs, so instead a combined ODE/agent-based approach is used (https://www.physiomics-plc.com/technology/), which nonetheless limits parameters to things that can be measured or estimated experimentally. A simple model, based on just a few parameters, outperformed large-scale biophysical “gold standard” cardiac models containing hundreds of parameters at predicting the cardiac toxicity of drug compounds (Mistry, 2018). In cognitive science, quantum decision theory shows that decisions are inconsistent with classical probability: interference effects require not more detail but a different kind of probability (Yukalov & Sornette, 2015). In economics, as Bezemer (2012) notes, the money system “is alien to the (DS)GE models structure and trying to introduce it undermines key model properties.” It isn’t therefore enough to add a “finance node” to a model (general equilibrium or otherwise). In all these cases the proposed strategy would fail, because extending the model doesn’t address the problem, which is not model size (in fact small is often better) but model structure (which is never perfect, because a complex system doesn’t reduce to equations) and the difficulty of identifying parameter values.
More generally, as these examples also illustrate, the strategy does not address the main practical limitations of modelling complex living systems, from a cell to an economy. The first is that as further detail is added to a model (e.g. extra nodes or equations), the number of unknown parameters increases, as does uncertainty about model structure, resulting in “sloppy parameters” which cannot be determined from data (Gutenkunst et al., 2007). Secondly, such systems are also characterised by opposing positive and negative feedback loops (which the paper notes are sometimes omitted during model integration because they are destabilising). These are extremely hard to tease out from data because they are usually hidden (they are often in a state of tension, so seem to cancel out), and they also lead to complex behaviour. The combination of feedback loops and uncertain parameters means that models are unstable, so a small change in parameters can give a very different result (Orrell, 2007: 266). What the cited paper (Schlatter et al., 2012) calls “the final goal of a comprehensive dynamic model” may therefore remain elusive in systems biology as in other fields. (General equilibrium models were born out of neoclassical economists’ intention to build such a “comprehensive dynamic model” of the economy, though the dynamics were of the equilibrium sort.)
One result of these limitations is that, paradoxically, simple models usually give the best predictive results (Makridakis & Hibon, 2000), but at the same time never give a complete picture. I therefore agree completely that models should be viewed as partial approximations, and I found the discussion of this very interesting. It is also certainly the case that economics can learn much from systems biology, which uses a variety of models including ODEs, stochastic models, agent-based models, machine learning, BNs, and so on. However, the argument of the paper, and in particular the use of the BN method, points towards the goal of a comprehensive model patched together using semi-automatic techniques; and it isn’t clear how this strategy (joining and extending models while pruning features) can address the main problems with economics models, which are that (a) all the difficult features, such as feedback loops, human behaviour, money, and so on, have already been pruned out to avoid conflicts or an “intractable result”, and (b) at the same time there are already too many “sloppy parameters” that can’t be determined from data (Romer, 2016).
In light of the structural and predictive limitations of models, I have argued for a different, “dashboard” approach, which uses a range of model types (as in systems biology) but keeps them mostly separate, on the understanding that none is correct but each captures one complementary aspect of the underlying reality (see Orrell, 2018: 254; Orrell, 2017: 21-22). Model size should be limited in order to avoid the above-mentioned problems associated with dimensionality. So instead of seeing a BN as a partial model to which another BN is added in order to form a larger (and less partial?) BN, one treats a BN or other model as a partial model to be complemented by different models or approaches.
Finally, one should consider the role of incentives. In an area such as systems biology, which is relatively unconstrained by the need to make blind-tested predictions, there is a tendency to build ever-larger models of the sort critiqued in Mistry (2018). For something like “theoretical controversies related to high-frequency trading”, on the other hand, where, unlike in systems biology, the debate over regulation is a matter of considerable interest to the financial sector, I think a first step before combining models would be to investigate influences such as the source of funds for the various modelling approaches, and consider how these might affect the underlying assumptions (Wilmott & Orrell, 2017: 194).
References:
Bezemer DF (2012) Finance and Growth: When Credit Helps, and When it Hinders. https://www.ineteconomics.org/uploads/papers/bezemer-dirk-berlin-paper.pdf
Davidich MI, Bornholdt S (2008) Boolean Network Model Predicts Cell Cycle Sequence of Fission Yeast. PLoS ONE 3(2): e1672.
Gutenkunst RN, Waterfall JJ, Casey FP, Brown KS, Myers CR, Sethna JP (2007) Universally Sloppy Parameter Sensitivities in Systems Biology Models. PLoS Comput Biol 3(10): e189.
Makridakis S and Hibon M (2000) The M3-Competition: results, conclusions and implications. International Journal of Forecasting, 16, 451–76.
Mistry HB (2018). Complex versus simple models: ion-channel cardiac toxicity prediction. PeerJ 6:e4352.
Orrell D (2007). The Future of Everything: The Science of Prediction. New York: Basic Books.
Orrell D (2017). A Quantum Theory of Money and Value, Part 2: The Uncertainty Principle, Economic Thought, 6 (2), 14-26.
Orrell D (2018). Quantum Economics: The New Science of Money. London: Icon.
Ramsey SA, Smith JJ, Orrell D, Marelli M, Petersen TW, de Atauri P, Bolouri H, Aitchison JD (2006). Dual feedback loops in the GAL regulon suppress cellular heterogeneity in yeast. Nature Genetics 38 (9), 1082.
Romer P (2016) The Trouble with Macroeconomics. Available at https://paulromer.net/wp-content/uploads/2016/09/WP-Trouble.pdf
Wilmott P, Orrell D (2017). The Money Formula: Dodgy Finance, Pseudo Science, and How Mathematicians Took Over the Markets. Chichester: Wiley.
Yukalov VI, Sornette D (2015). Preference reversal in quantum decision theory. Frontiers in Psychology, 6, 1–7.
I wish to thank Arthur Hermann and David Orrell for their constructive commentaries.
In response to Arthur’s views, I am in complete agreement that partial modeling (PM)—or, for that matter, any computational strategy deployed in economics—needs to be informed by a focus on key theoretical issues. It is not my objective (and in my enthusiasm I hope I did not suggest that it is) to “bypass” the traditional exchange of ideas which economics needs today more than ever. On the contrary, I am hopeful that issues-oriented discussions will refine the models utilized in PM, and that the theories generated by PM will inform the exchange of ideas. In short, I envision a symbiotic and productive relation between debate and computational method.
David lucidly addresses the problems of modeling complex systems, taking special note of computational cell biology, the platform which inspired my discussion of PM. I appreciate his observation that “[patching] together simple models to create a larger and more sophisticated model while pruning features which create problems…seems a sensible thing to try.” He then goes on to note (correctly) that computationally based prediction in cell biology often refers to the similarity of the model to empirical data from cell preparations, and is thus not prediction in the ordinary sense. In addition, he points out (again, correctly) that the addition of layers in a Boolean-network (BN) model can have a destabilizing effect. On the first point, I would emphasize that the conceptually loose variant of “prediction” encountered in computational molecular cell biology is often associated with models of very large molecular networks. Smaller models, as David points out (supplying an example), are much better at prediction, using that word in its ordinary sense. This distinction is important because the strategy I am offering would utilize smaller models (e.g., a market-efficiency model of high-frequency trading in a hedge fund, or perhaps on a single exchange) and not models of an entire economy. Regarding the second issue (the instability of multi-layered BNs), my view is that the problem will likely be solved within a short time. Multi-level BNs are the focus of intensive research, and the issue of instability is receiving much attention. Thus Cozzo et al. (2012) report that a multi-level BN “provides a mechanism for the stabilization of the dynamics of the whole system even when individual layers work on the chaotic regime, therefore identifying new ways of feedback between the structure and the dynamics of these systems.” The implication of the study is that future versions of PM may approach feedback in a very different way. Finally, regarding model structure, and as discussed in the paper, I recognize that there are several promising alternatives to BNs, including cellular automata (CA) and especially agent-based models (ABMs). These models, or hybrid variants thereof, could very possibly be utilized in the emerging PM platform.
Source:
Cozzo, E., Arenas, A., and Moreno, Y. (2012). Stability of multi-level Boolean networks. Physical Review E, 86, 036115, 1-4.