Complexity, Power Laws and a Humean Argument in Risk Management: The Fundamental Inadequacy of Probability Theory as a Foundation for Modeling Complex Risk in Banking

Abstract

Whenever risk managers are confronted with deep uncertainty and organized complexity, probabilistic inference methods are used, since these appear to allow for crisp inputs and precise results. However, as several thinkers have noted (e.g., Hayek, 1967; Weaver, 1948), such methods cannot be applied effectively in these situations. This might sound like old wine in new bottles and, indeed, objections to, and limitations of, conventional (i.e., probabilistic) risk modeling are anything but unheard of in the banking and finance literature. However, this paper introduces for the first time an argument, inspired by reflections on the old riddle of induction, from which the limited suitability of probability can be derived. It demonstrates that any choice of a particular probability distribution for a given risk management purpose is necessarily arbitrary: it is grounded not in the data but in the choices of the statistician or risk manager, and it cannot be justified by appeal to something more objective. The paper thereby unmasks the illusion that financial data and extreme losses are well described by non-standard probability functions such as power laws, which have been embraced at the expense of bell curves in the aftermath of the global financial crisis of 2008. Moreover, although we do not propose a positive solution, we believe that articulating the real, and as yet unnoticed, source of the problem is a key step towards developing a principled and tractable response.

Posted for comments on 12 Nov 2019, 2:10 pm.

Comments (2)

  • Ragupathy Venkatachalam says:

    The paper attempts to deal with the implications of a foundational problem of inference for modelling risk in financial markets and banking. The authors argue that there are serious limitations to using classical probability theory – along the lines axiomatised by Kolmogorov (1933) – for modelling risk in financial markets. They point out that modern financial systems are highly “complex” in their organisation and that such complex systems pose challenges to applying classical probability notions. They argue that modelling complex risks lies outside the scope of classical probability theory. Based on the arguments that the choice of a specific probability distribution is arbitrary, that the assumption concerning the stability of past phenomena into the future is questionable, and that the financial system is complex in nature, the authors claim that the use of probability concepts in risk modelling is inadequate. Such inadequacy, they further note, cannot be overcome by tinkering with the nature of the probability distributions (e.g. moving from normal to power law) or by the availability of more data.

    Overall Comments

    The paper is well-written, interesting and deals with an important issue concerning epistemological limits. This referee is certainly sympathetic to the overall argument – i.e., the limits of using classical probability concepts in modelling risk in financial markets – even if some of it is familiar criticism.

    This paper makes an eloquent case about how using power law distributions or any such incremental refinements are akin to epicycles and are inadequate to bridge certain fundamental philosophical issues.

    Despite the overall importance of the theme, there are areas in which this paper requires improvement. I am overall positive about the paper. My suggestions/comments are minor/suggestive and relatively straightforward to address, if the authors choose to do so.

    The main shortcomings of this paper have to do with the fact that some of the conclusions are fairly well known, others slightly exaggerated, and at times the mathematical claims/concepts are imprecise.

    Specific comments

    The title has “… The Fundamental Inadequacy of Probability Theory as a Foundation for Modeling Complex Risk in Banking”. Although banking is mentioned in the title and a few other places, the discussion is largely about financial markets in general and there is no serious discussion of the specificities of banking practice other than some generic statements. It is perhaps worth removing this and shortening the title.

    Abstract: “non-standard probability functions” – this is not a precise characterisation, since power law distributions are not non-standard in a strict mathematical sense. Typically, non-standard probabilities refer to those with non-Archimedean properties or those related to non-standard analysis.
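    A minimal sketch may help make this terminological point concrete (it is not from the paper under review; the tail index below is an assumption chosen purely for illustration): a power-law (Pareto) distribution is a perfectly standard probability distribution in Kolmogorov's measure-theoretic sense – its density integrates to one – and differs from a Gaussian mainly in how heavy its tail is.

    ```python
    # Illustrative sketch: a Pareto (power-law) distribution is "standard" in the
    # measure-theoretic sense -- its density integrates to one -- it is simply
    # heavy-tailed compared with a Gaussian of matching mean and variance.
    from scipy import stats, integrate

    alpha = 2.5                        # assumed tail index, chosen only for illustration
    power_law = stats.pareto(b=alpha)  # support [1, inf)
    gaussian = stats.norm(loc=power_law.mean(), scale=power_law.std())

    total_mass, _ = integrate.quad(power_law.pdf, 1, float("inf"))
    print(f"total probability mass of the power law: {total_mass:.6f}")  # ~ 1.0

    # Tail comparison: probability of exceeding ten standard deviations above the mean
    threshold = power_law.mean() + 10 * power_law.std()
    print(f"P(X > {threshold:.1f}) under the power law: {power_law.sf(threshold):.2e}")
    print(f"P(X > {threshold:.1f}) under the Gaussian : {gaussian.sf(threshold):.2e}")
    ```

    Both functions obey the same Kolmogorov axioms; the disagreement is confined to how much mass sits in the tail, which is why “heavy-tailed” rather than “non-standard” seems the more accurate label.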

    It is unclear what aspects of modelling that authors take issue with. Since models could be geared for explanatory or predictive purposes, it may be worth clarifying.

    p. 2: The authors note that “probability theory will lose its claim to be capable of guiding the practical management of complex risk in banking and other financial institutions.” It is unclear whether the criticism concerns the usefulness of probability theory for theoretical modelling of risk or for practical management. If it is the latter, such limitations are rather obvious.

    p. 7: “…standard formalism concerned with probability as axiomatized by Kolmogorov (1933)”. It is important to note that Kolmogorov himself was dissatisfied with aspects of his axiomatic characterisation of measure-theoretic probability – in particular the frequency aspects, which squarely relate to the themes of this paper, probability and complexity. His later contributions include Kolmogorov complexity, which in turn underpins algorithmic probability (developed by Ray Solomonoff) and the modern theory of inductive inference (see McCall, 2004, on induction).

    P. 8, footnote 10: Bruno de Finetti is not mentioned in this context. Ramsey and de Finetti both came up with their formulations of subjective probability independently. In fact, Ramsey’s seminal paper, though written earlier, was published after de Finetti’s.

    P. 9: “complex situations” – what do the authors mean precisely by complex? The terms disorganised complexity and organised complexity (including the characterisations provided in the appendix) are not clearly defined or made precise in a mathematical sense.

    p. 10: dynamic complexity, stable random, truly complex – these terms are either not clearly defined or used imprecisely.

    p. 12: Interesting discussion. However, surely all modelling activity requires primitives – in this case, one input can be the probability distribution. It may be a subjective choice by the modeller, but that does not necessarily imply that it is arbitrary. Also, propositions, results or inferences derived using such models would be conditional statements on the primitives. This in itself does not make the exercise illegitimate; it certainly requires caution in interpreting the output of such a model (and questions of model stability come in). On the other hand, if practical considerations of risk management alone matter, then such models can be seen merely as instrumental vehicles to be judged solely by the accuracy of their predictions. It would be worth qualifying these arguments.
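    A minimal sketch of this conditionality (the data below are synthetic and the parameters purely hypothetical): fitting a Gaussian and a Student-t to the identical sample of returns yields materially different tail-risk figures, so whatever value-at-risk number the model reports is a conditional statement on the chosen distributional primitive.

    ```python
    # Illustrative sketch: the same (synthetic) return sample, two candidate
    # distributional primitives, two quite different 99.9% value-at-risk figures.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=0)
    returns = stats.t.rvs(df=3, scale=0.01, size=2500, random_state=rng)  # hypothetical daily returns

    # Fit two candidate primitives to the identical data set
    mu, sigma = stats.norm.fit(returns)
    nu, loc, scale = stats.t.fit(returns)

    # 99.9% value-at-risk (loss quantile) implied by each fitted distribution
    var_gaussian = -stats.norm.ppf(0.001, mu, sigma)
    var_student_t = -stats.t.ppf(0.001, nu, loc, scale)

    print(f"99.9% VaR under the Gaussian fit : {var_gaussian:.4f}")
    print(f"99.9% VaR under the Student-t fit: {var_student_t:.4f}")  # typically much larger
    ```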

    p. 16: “Von Hayek (1967)”, also in the reference list – it should be just Hayek (1967), since Hayek himself dropped the “von”, including in the book in question.

    p. 17 – “real dynamics”: is this meant in contrast to complex dynamics, i.e., dynamics defined on C rather than R spaces?

    p. 19 – “This is partly because making sense of and understanding the behavior of complex systems is hardly possible because they are non-linear and have too many interdependent variables” – This may be partly incorrect or misleading. Neither the presence of nonlinearity nor a large number of interdependent variables is a necessary condition for a system to exhibit complex behaviour: coupled linear systems can exhibit complex behaviour, and so can low-dimensional non-linear systems (i.e., systems with few variables).
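    A minimal sketch of this point (illustrative only, not taken from the paper): the logistic map has a single variable and a single quadratic nonlinearity, yet for r = 4 two trajectories started a hair's breadth apart diverge completely, i.e., complex behaviour without many interdependent variables.

    ```python
    # Illustrative sketch: sensitive dependence on initial conditions in a
    # one-dimensional nonlinear system (the logistic map with r = 4).
    def logistic_map(x0: float, r: float = 4.0, steps: int = 50) -> list[float]:
        """Iterate x_{n+1} = r * x_n * (1 - x_n) and return the full trajectory."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = logistic_map(0.2)
    b = logistic_map(0.2 + 1e-9)  # a tiny perturbation of the initial condition

    for n in (0, 10, 25, 50):
        print(f"n = {n:2d}   |difference| = {abs(a[n] - b[n]):.6f}")
    # By n = 50 the two trajectories bear no resemblance to one another,
    # even though the system has only one variable.
    ```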

    p. 21: ‘organized and complex system whereas in situations of disorganized complexity … large numbers, the Central Limit Theorem and other so-called “properties of stochasticity” (Martin-Löf, 1966: 604) hold.’ – This reference is taken out of context.

    p. 21: What do the authors mean by “high degrees of genuine randomness”? I do not think the authors are referring to algorithmic complexity theory (Martin-Löf, Schnorr, Solomonoff) or Turing reducibility, in which case this characterisation is too loose.

    p. 24: “Our central argument is valid and the rationale beneath the premises ought to ensure that they are correct (or at least, very plausible) and that, therefore, the whole argument is sound.” True it may be, but it would be good to let the reader be the judge in this case. I suggest removing this phrase.

  • Stephanos Papadamou says:

    This paper underlines the fundamental inadequacy of probability theory as a foundation for modelling complex risk in banking.

    I think that in some parts the writing takes a markedly negative tone towards the application of probability theory in banking, while in the conclusion the authors are much more measured. I think that the title should be expressed a little differently: something like “the cautious use of probability theory in banking”, for example.

    On page two the authors refer to several theorists but in parentheses mention only Weaver (1948) – I think they should add some further examples here.

    In the introduction they describe the aim of the paper, but I think it needs a better description of the motivation: the main research questions they are trying to answer should be stated more clearly.

    I agree with the many weaknesses described by the authors, but I would expect not just a general conclusion about being cautious; the authors should also provide some policy implications of their findings about the use of probabilistic theory in specific cases.