Article

Classical Econophysics

Authors: Paul Cockshott, Allin Cottrell, Gregory Michaelson, Ian Wright, Victor Yakovenko

Abstract

This monograph examines the domain of classical political economy using the methodologies developed in recent years both by the new discipline of econophysics and by computing science. This approach is used to re-examine the classical subdivisions of political economy: production, exchange, distribution and finance. The book begins by examining the most basic feature of economic life – production – and asks what it is about physical laws that allows production to take place. How is it that human labour is able to modify the world? It looks at the role that information has played in the process of mass production and the extent to which human labour still remains a key resource. The Ricardian labour theory of value is re-examined in the light of econophysics, with agent-based models in which the Ricardian theory of value appears as an emergent property. The authors present models giving rise to the class distribution of income and the long-term evolution of profit rates in market economies. Money is analysed using tools drawn both from computer science and from the recent Chartalist school of financial theory. Combining techniques drawn from three areas – classical political economy, theoretical computer science and econophysics – to produce models that deepen our understanding of economic reality, this title will be of interest to doctoral and research students, as well as scientists working in the field of econophysics.


... Other works that the interested reader should consult are Cottrell (1994, 1998); Cockshott, Cottrell, and Michaelson (1995); Julius (2005); Puty (n.d.); Barnes (1986, 1990); Webber (1987); Webber and Rigby (1986, 1996). Instead, we mention what is perhaps the most significant contribution to appear since the 2008 conference, Classical Econophysics (Cockshott, Cottrell, Michaelson, Wright, & Yakovenko, 2009). ...
... 23. Besides the discussion in Cockshott et al. (2009), readers should consult the original paper, published in Physica A (Wright, 2005). ...
Chapter
Full-text available
Popular understandings of the financial crisis tend to focus on the rents extracted by elite personnel in the financial sector. Professional discussions, however, have addressed the faulty assumptions underlying theory and practice – in particular, the assumption that returns to financial assets follow the Gaussian distribution, in the face of much empirical evidence that these have power law distributions with far higher kurtosis. It turns out that the power law tails of returns to financial assets are also a feature of the distribution of company rates of profit, a discovery that stems from proposals to ‘dissolve’ the traditional transformation problem by abandoning the condition of a uniform rate of profit and instead considering its distribution. Marx himself was aware of the importance of considering the distributional properties of economic variables, based on his reading of Quetelet. In fact, heavy-tailed distributions characterise a wide range of variables in capitalist economies, the best-known probably being the Paretian tail component in distributions of income and wealth. Nor is this simply an empirical fact – such distributions emerge readily from a range of agent-based simulations. Capitalist economies are, in a particular technical sense, complex self-organising systems perpetually on the brink of crisis. This modern understanding is prefigured in Marx’s discussion of how the compulsive character of social relations emerges from the atomistic exercise of human free will in commercial society. The developing literature of probabilistic Marxism successfully applies these insights to the wider fields of econophysics and complexity, demonstrating the continuing relevance of Marx’s thought. Keywords: Quetelet, Farjoun and Machover, econophysics, financial crisis, complexity, agent-based modelling
... In this paper, we make the case that econophysics deserves more credit than these critiques give it, as an enterprise that is at least sometimes successful in its main goals of predicting and explaining economic phenomena of certain kinds. Our strategy will not be to address the general criticisms just described head-on, and we do not mean to ... [Footnote 1: For more on the relationship between physics, finance, and econophysics, see Weatherall (2013); for further technical details and overviews of recent work, see Mantegna and Stanley (1999), McCauley (2004), and Cottrell et al. (2009). There is also a small literature in philosophy of science dealing with econophysics, including Rickles (2007) and Thébault et al. (2017). Footnote 2: Despite the prevalence of this sort of criticism, it is far from clear that physics is more guilty of oversimplification than economics when it is applied to economic facts.] ...
Article
Full-text available
We study the Johansen–Ledoit–Sornette (JLS) model of financial market crashes (Johansen et al. in Int J Theor Appl Financ 3(2):219–255, 2000). On our view, the JLS model is a curious case from the perspective of the recent philosophy of science literature, as it is naturally construed as a “minimal model” in the sense of Batterman and Rice (Philos Sci 81(3):349–376, 2014) that nonetheless provides a causal explanation of market crashes, in the sense of Woodward’s interventionist account of causation (Woodward in Making things happen: a theory of causal explanation. Oxford University Press, Oxford, 2003).
... The very long research tradition on international trade patterns has been enriched by a stream of work which, starting almost 20 years ago from Snyder & Kick (1979), applied social network analysis (Kick & Byron, 2001; Kim & Shin, 2002; Mahutga, 2006; Rauch, 1999, 2001; Roth & Dakhli, 2000; Smith & Nemeth, 1988; Smith & White, 1992; Snyder & Kick, 1979; Su & Clawson, 1994; van Rossem, 1996). Further, during the second half of the past decade, a number of papers investigated the world trade web from the so-called econophysics perspective (Barigozzi et al., 2010a, 2010b; Bhattacharia et al., 2008a, 2008b; Chakrabarti et al., 2006; Cockshott et al., 2009; Fagiolo et al., 2009; Garlaschelli & Loffredo, 2004a, 2004b; Li et al., 2003; Ruzzenenti et al., 2010; Serrano & Boguñà, 2003; Serrano, 2007; Serrano et al., 2007), which basically is the application of advanced mathematical and statistical methodologies to social sciences, and especially to economics (Chatterjee & Chakrabarti, 2008). In the coming years we will likely, and hopefully, witness a fruitful cross-breeding between the two perspectives, as is the spirit of this and other recent papers (Arribas et al., 2009; Kastelle, 2009; Reyes et al., 2008, 2009). ...
Chapter
Full-text available
Do trading countries also collaborate in R&D? This is the question dealt with here, in the face of a number of methodological problems. Studying and comparing the international trade network and the R&D collaboration network of European countries in the aerospace sector, social network analysis offers a wide spectrum of methods and criteria either to make them comparable or to evaluate their similarity. International trade is a 1-mode directed and valued network, while the EU-subsidized R&D collaboration is an affiliation (2-mode) undirected and unvalued network, whose elementary units are organizations rather than countries. Therefore, with the aim of making these two networks comparable, this paper presents and discusses a number of methodological problems and the solutions offered to solve them, and provides a multi-faceted comparison in terms of various statistical and topological indicators. A comparative analysis of the two networks' structures is made at the aggregate and disaggregate level, and it is shown that the common centralization index is definitively inappropriate and misleading when applied to multi-centered networks like these, and especially to the R&D collaboration network. The final conclusion is that the two networks resemble each other in some important aspects, but differ in some minor traits. In particular, they are both shaped in a core-periphery structure, and in both cases important countries tend to exchange or collaborate more with marginal countries than between themselves.
... For a derivation of this rate see (Cockshott et al., 2009). ...
Article
Full-text available
This paper reviews the articles by Pan and by Zhu on the China Model. The review of Pan is critical, that of Zhu sympathetic. Pan is criticized for taking an unquestioning attitude towards state supporting ideologies and failing to adequately account for the effects of changes in family structure and class structure in China over the past 50 years. The reviewer broadly agrees with Zhu's comments about a future steady state economy. The article provides statistical data from the recent economic and demographic histories of China and Japan to back up the general conclusions drawn by Zhu.
Article
Full-text available
We discuss the role of heterodox economics in opening new perspectives, the question of scalability of socio-economic order, the heritage of the “socialist calculation debate” and its ongoing relevance for discussions on “post-capitalism” today and finally the potentials of computational simulation and agent-based modelling for the exploration of alternative socio-economic approaches. The contributions to our special issue address these aspects and topics in different ways and therefore underline the fruitfulness of these discussions, especially in regard to the development of more just and sustainable socio-economic structures. Faced with the contemporary polycrisis, we can no longer afford “capitalist realism”.
Article
This article surveys computational approaches to classical‐Marxian economics. These approaches include a range of techniques—such as numerical simulations, agent‐based models, and Monte Carlo methods—and cover many areas within the classical‐Marxian tradition. We focus on three major themes in classical‐Marxian economics, namely price and value theory; inequality, exploitation, and classes; and technical change, profitability, growth and cycles. We show that computational methods are particularly well‐suited to capture certain key elements of the vision of the classical‐Marxian approach and can be fruitfully used to make significant progress in the study of classical‐Marxian topics.
Article
Economic systems produce robust statistical patterns in key state variables including prices and incomes. Statistical equilibrium methods explain the distributional properties of state variables as arising from specific institutional, environmental, and behavioral postulates. Two broad traditions have developed in political economy with the complementary aim of conceptualizing economic processes as irreducibly statistical phenomena, but they differ in their methodologies and interpretations of statistical explanation. These conceptual differences broadly mirror the methodological divisions in statistical mechanics, but also emerge in distinct ways when considered in the context of the social sciences. This paper surveys the use of statistical equilibrium methods in analytical political economy and identifies the leading methodological and philosophical questions in this growing field of research.
Chapter
Sociology and other social sciences employed network analysis earlier than management and organization sciences, and much earlier than economics, which was the last to adopt it systematically. Nevertheless, the development of network economics during the last 15 years has been massive, along three main research streams: strategic network formation modeling, (mostly descriptive) analysis of real economic networks, and optimization methods for economic networks. The main reason why this enthusiastic and rapidly diffused interest of economists came so late is that the most essential network properties, like externalities, endogenous change processes, and nonlinear propagation processes, definitely prevent the possibility of building a general – and indeed even a partial – competitive equilibrium theory. Because this paradigm dominated economics in the last century, this incompatibility acted as a hard brake and presented network analysis as an inappropriate epistemology. Further, being intrinsically (and often, until recent times, also radically) structuralist, social network analysis was also antithetical to radical methodological individualism, which was – and still is – economics' dominant methodology. Though culturally and scientifically influenced by economists in some fields, like finance, banking and industry studies, scholars in management and organization sciences were free from “neoclassical economics chains”, and therefore more ready and open to adopt the methodology and epistemology of social network analysis. The main and early field through which its methods were channeled was the sociology of organizations, and in particular group structure and communication, because this is a research area largely shared between sociology and management studies. Currently, network analysis is becoming more and more diffused within management and organization sciences. Mostly descriptive until 15 years ago, all the fields of social network analysis have a great opportunity to enrich and develop their methods of investigation through statistical network modeling, which offers the possibility to develop network formation and network dynamics models. These are a good compromise between the much more powerful agent-based simulation models and the usually descriptive (or poorly analytical) methods.
Article
Full-text available
This paper explores some relationships between economics and some of the current paradigms that define the methodologies and models of artificial intelligence. The approach highlighted is the mathematical-principles paradigm of machine learning, as well as the contribution of computational economics and complexity economics to agent-based models within the biological-principles paradigm. The paper presents information schemes that distinguish a standard machine learning model from conventional econometrics, and then develops the two visions. Finally, it explains the importance of accuracy in machine learning classification models in the technology industry.
Article
Microfoundations proposed for macroeconomics often include strong counterfactual assumptions about the knowledge and foresight of agents and about the pervasiveness of equilibrium exchange. This paper explores and improves the social-architecture model, an agent-based macromodel that discards such assumptions. In this monetary exchange economy, individuals transact at disequilibrium prices in shopping-based goods markets and search-based labor markets. GDP and unemployment distributions are emergent outcomes of the individual-level interactions. These distributions expose some problems in the original model. Modest model amendments largely address these problems. Some apparently central ingredients of the model prove to have little influence on the simulation results.
Article
Full-text available
Modern fundamental and applied challenges in economics increasingly acquire an interdisciplinary nature, drawing on concepts and methods from sociology, demography, psychology, the theory of nonlinear dynamics and even physics. One such problem is the shortage of labor resources in the regions of the Russian Far East. To estimate and predict the discrepancy between the demands of the economy and demographic trends, we suggest using the employment-level indicator, defined as the ratio of the employed population of a certain age group (cohort) to the total population of that age group. The analysis and forecasting of the employment rate can then be reduced to constructing an economic-mathematical model of the dynamics of the number of employed and a corresponding demographic model. Here, an econophysical model of competition between specialists of different ages and a continuous analogue of the Lefkovich demographic model are used. Thus, for the first time, a comparative analysis of the employment rate has been made for particular regions of the Far East from the viewpoint of nonlinear dynamics. Modeling the changes in the employed population and the demographic dynamics in the regions of the Far East supports several important conclusions. Employment in the 30-49 age group in Khabarovsk and Primorsky Krai, which are considered the actively developing regions of the Far East, is close to saturation; consequently, the number of employed in this age group can grow only through migration. At the same time, a significant increase in the size of the 30-49 cohorts has been registered relative to the increase in the number of employed in this age group. This is related to a possible increase in migration inflows, as the region is intended to become one with special social support for the population and a developing economy. Such an effect may be observed in the Khabarovsk and Primorye Territories with the active and successful implementation of large investment projects, and it may lead to a high unemployment rate among migrants. In general, owing to the lack of labor resources, young people under the age of 30 are actively involved in labor activity. Further perspectives of the study may involve more detailed agent-oriented models, which will allow describing the behavior strategies of individual people and integrating them into homogeneous groups. Such modeling will allow a more detailed study of the motivation of the population and its influence on the general dynamics of employment.
Chapter
Full-text available
This paper tries to clarify the logical structure of the relationship between labor values and prices from an axiomatic perspective. The famous “transformation problem” is interpreted as an impossibility result for a specific interpretation of value theory based on specific assumptions and definitions. A comprehensive review of recent literature is provided, which shows that there are various theoretically relevant and logically consistent alternative interpretations based on different assumptions and definitions.
Article
Full-text available
This paper tries to clarify the logical structure of the relationship between labor values and prices from an axiomatic perspective. The famous “transformation problem” is interpreted as an impossibility result for a specific interpretation of value theory based on specific assumptions and definitions. A comprehensive review of recent literature is provided, which shows that there are various theoretically relevant and logically consistent alternative interpretations based on different assumptions and definitions.
Article
Recent studies have shown that imitation and adaptation are the dominant mechanisms of a positive feedback loop that leads to a dramatic amplification of stock prices. In this research, relative wealth concerns have been taken into account as the primary origin of the positive feedback effect. Specifically, relative wealth concerns alongside wealth inequality would change the risk attitude of each stock-trading agent to catch up with their peers’ wealth, by imitating and adapting their trading strategies. We simulate an artificial stock market via an agent-based modeling approach, which allows us to observe what happens to each agent’s relationships, providing a more insightful view than the traditional economic model. This research demonstrates how relative wealth concerns can affect today’s financial mechanism, by means of positive feedback effects.
Article
Motivated by classical political economy we detail a probabilistic, 'statistical equilibrium' approach to explaining why even in equilibrium, the equalization of profit rates leads to a non-degenerate distribution. Based on this approach we investigate the empirical content of the profit rate distribution for previously unexamined annual firm level data comprising over 24,000 publicly listed North American firms for the period 1962-2014. We find strong evidence for a structural organization and equalization of profit rates on a relatively short time scale both at the economy wide and one- and two-digit SIC industry levels into a Laplace or double exponential distribution. We show that the statistical equilibrium approach is consistent with economic theorizing about profit rates and discuss research questions emerging from this novel look at profit rate distributions. We also highlight the applicability of the underlying principle of maximum entropy for inference in a wide range of economic topics.
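As a rough illustration of the statistical equilibrium approach described above, the following Python sketch fits a Laplace distribution to a profit-rate sample by maximum likelihood. The sample and its parameters are invented stand-ins, not the paper's firm-level panel.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for firm-level profit rates, not the paper's data.
r = rng.laplace(loc=0.08, scale=0.12, size=24_000)

# Maximum-likelihood Laplace fit: location = sample median,
# scale = mean absolute deviation around that median.
mu = np.median(r)
b = np.abs(r - mu).mean()
print(f"Laplace fit: mu = {mu:.3f}, b = {b:.3f}")

The Laplace form is the maximum-entropy density under a constraint on the mean absolute deviation, which is why it arises naturally from the principle of maximum entropy the paper highlights.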
Article
The nature of monetary arrangements is often discussed without any reference to their detailed construction. We present a graph representation which allows for a clear understanding of modern monetary systems. First, we show that systems based on commodity money are incompatible with credit. We then study the current chartalist systems based on pure fiat money, and we discuss the consolidation of the central bank with the Treasury. We obtain a visual explanation of how commercial banks are responsible for endogenous money creation, whereas the Treasury and the central bank are in charge of the total amount of net money. Finally we draw an analogy between systems based on gold convertibility and currency pegs to show that fixed exchange rates can never be maintained.
Article
Full-text available
Dobb was the most prominent Marxian political economist in Britain during the middle years of the 20th century. He was actively writing from the early 1920s to the 1970s. In this short book, Brian Pollitt has brought together a number of publications from the last period of Dobb's life, some of which have never appeared before in English. Taken together they give a revealing insight into the thoughts of an erudite Western Marxist as he tried to report on and come to terms with the economic debates going on in Eastern Europe at that time. It is impossible now to read these articles without a certain sense of pathos engendered by hindsight. Dobb died in 1976, a mere 15 years or so before the final crisis of hitherto existing socialism. The topics that he discusses in these essays – the role of markets versus plans, centralisation versus decentralisation, pricing policies, etc. – came to the fore during the Gorbachev period and the counter-revolutionary process of the late 1980s. One cannot read Dobb's commentary from the 1960s and '70s without reflecting on the eventual political trajectory of the 'reform' currents that he discusses. At the same time one is brought face to face with the real limits on conceptualisation and policy that existed in those days. One comes to see just how the arguments of the reform school, arguing for the relaxation of planning disciplines and a greater role for market forces within socialist economies, would have seemed plausible even to Dobb. I say even to Dobb because in previous decades he had been a strong advocate of the benefits of socialist planning, and because the last page of his article on planning reveals that his concession to the market remained reluctant. In the rest of my review I will criticise what, in retrospect, seem weaknesses in Dobb's arguments. But this does not mean that you should not read his book. It is well worth reading, in order to understand the debate on socialist economics thirty or forty years ago.
Article
Full-text available
The report attempts to apply econophysics concepts to the Eurozone crisis. It starts by examining the idea of conservation laws as applied to market economies. It formulates a measure of financial entropy and gives numerical simulations indicating that this tends to rise. We discuss an analogue for free energy released during this process. The concepts of real and symbolic appropriation are introduced as a means to analyse debt and taxation. We then examine the conflict between the conservation laws that apply to commodity exchange and the exponential growth implied by capital accumulation, and how these have necessitated a sequence of evolutionary forms for money. We go on to present a simple stochastic model for the formation of rates of interest and a model for the time evolution of the rate of profit. Finally we apply the conservation law model to examining the Euro Crisis and the European Stability Pact, arguing that if the laws we hypothesise actually hold, then the goals of the pact are unobtainable.
Article
In recent years several empirical studies have found that deviations of market prices from labour values are quite small. However, most of these articles do not offer a detailed reason for this result. In this paper two theoretical justifications of the labour theory of value are brought together with data concerning labour values, prices of production and market prices, on the basis of German input-output tables from 2000 and 2004. In addition, the statistical characteristics of profit rates are analyzed. Both of the theoretical arguments are much in line with the empirical observations, because there is only a slight transformation tendency and at the same time profit rates and capital intensity are negatively correlated. Moreover, during the period under observation the German economy seems to be in a state of statistical equilibrium.
Article
In this editorial guide for the special issue on econophysics, we give a unique review of this young but quickly growing discipline. A suggestive taxonomy of the development is proposed by making a distinction between classical econophysics and modern econophysics. For each of these two stages of development, we identify the key economic issues whose formulations and/or treatments have been affected by physics or physicists, which includes value, business fluctuations, economic growth, economic and financial time series, the distribution of economic entities, interactions of economic agents, and economic and social networks. The recent advancements in these issues of modern econophysics are demonstrated by nine articles selected from the papers presented at the Econophysics Colloquium 2010 held at Academia Sinica in Taipei.
Article
Full-text available
This Colloquium reviews statistical models for money, wealth, and income distributions developed in the econophysics literature since the late 1990s. By analogy with the Boltzmann-Gibbs distribution of energy in physics, it is shown that the probability distribution of money is exponential for certain classes of models with interacting economic agents. Alternative scenarios are also reviewed. Data analysis of the empirical distributions of wealth and income reveals a two-class distribution. The majority of the population belongs to the lower class, characterized by the exponential ("thermal") distribution, whereas a small fraction of the population in the upper class is characterized by the power-law ("superthermal") distribution. The lower part is very stable, stationary in time, whereas the upper part is highly dynamical and out of equilibrium.
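A minimal Python sketch of one class of models the Colloquium reviews (the fixed-transfer money game of Dragulescu and Yakovenko; parameters here are illustrative): agents trade a fixed amount in random pairs under a no-debt constraint, and the money distribution relaxes to the exponential Boltzmann-Gibbs form.

import numpy as np

rng = np.random.default_rng(0)
N, STEPS, m0, dm = 1000, 500_000, 100.0, 1.0
money = np.full(N, m0)                  # everyone starts with the same amount

for _ in range(STEPS):
    a, b = rng.integers(N, size=2)
    if a != b and money[a] >= dm:       # payer cannot go into debt
        money[a] -= dm                  # total money is conserved
        money[b] += dm

# The stationary distribution approaches P(m) = (1/T) exp(-m/T),
# with "temperature" T equal to the conserved mean money per agent.
print(money.mean(), np.median(money))   # median is about T ln 2 for an exponential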
Article
Full-text available
The use of labour values as a basis for economic calculation in a socialist economy is defended. A resource allocation mechanism is outlined that uses a combination of labour value calculation with market clearing prices for consumer goods. Conditions for full employment are specified. A type theoretic analysis of economic calculation is presented. Information theory is used to estimate the information content of real price vectors. It is demonstrated that both price calculations and value calculations are type theory equivalent and that both involve information loss. It is shown that modern computer technology is capable of computing up to date labour values with comparative ease.
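The claim that labour values can be computed "with comparative ease" rests on the fact that total (direct plus indirect) labour content solves a linear system: v = l + vA, so v = l(I − A)⁻¹. A Python sketch on a hypothetical three-sector input-output table (the coefficients are invented for illustration):

import numpy as np

# Hypothetical technical coefficients: A[i, j] = units of good i per unit of good j.
A = np.array([[0.20, 0.10, 0.05],
              [0.05, 0.25, 0.10],
              [0.10, 0.05, 0.15]])
l = np.array([0.5, 0.8, 0.3])   # direct labour hours per unit of gross output

# v = l + vA  is equivalent to  (I - A)^T v = l
v = np.linalg.solve((np.eye(3) - A).T, l)
print(v)                        # direct plus indirect labour content per unit

For a real economy the same solve applies to a large sparse table, where iterative methods keep the computation tractable.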
Chapter
Full-text available
This paper examines a stochastic multiplicative process with reset events to explain the power law in the tail of the personal income distribution. The tail of the income distribution in post-war Japan persistently exhibits a power law with an exponent around 2. We find that a multiplicative process with reset events can explain this pattern. Using the default rate on corporate funding as the hazard rate of the reset event, we obtain the correct exponent for the power law in Japanese incomes.
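A Python sketch of such a process with illustrative parameters (assumptions for illustration, not the paper's estimates): incomes grow by random lognormal factors and are reset to a baseline with hazard probability q per period. The stationary tail is then a power law P(x) ∝ x^(−1−α), with α solving (1 − q)·E[a^α] = 1; the values σ = 0.1 and q = 0.02 put α near 2, the exponent reported for Japan.

import numpy as np

rng = np.random.default_rng(1)
N, T, q, x0, sigma = 100_000, 400, 0.02, 1.0, 0.1

x = np.full(N, x0)
for _ in range(T):
    x *= rng.lognormal(mean=0.0, sigma=sigma, size=N)  # multiplicative growth
    x[rng.random(N) < q] = x0                          # reset event (e.g. default)

# For lognormal factors E[a^alpha] = exp(alpha^2 sigma^2 / 2), so the
# predicted tail exponent is alpha = sqrt(2 ln(1/(1-q))) / sigma, about 2.0 here.
print(np.sqrt(2 * np.log(1 / (1 - q))) / sigma)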
Article
Full-text available
Inspired by work of both Widom and Mandelbrot, we analyze the Compustat database comprising all publicly traded United States manufacturing companies in the years 1974–1993. We find that the distribution of company size remains stable for the 20 years we study, i.e., the mean value and standard deviation remain approximately constant. We study the distribution of sizes of the “new” companies in each year and find it to be well approximated by a log-normal. We find (i) the distribution of the logarithm of the growth rates, for a fixed growth period of T years, and for companies with approximately the same size S, displays an exponential “tent-shaped” form rather than the bell-shaped Gaussian one would expect for a log-normal distribution, and (ii) the fluctuations in the growth rates — measured by the width of this distribution σT — decrease with company size and increase with time T. We find that for annual growth rates (T = 1), σT ∼ S^−β, and that the exponent β takes the same value, within the error bars, for several measures of the size of a company. In particular, we obtain β = 0.20 ± 0.03 for sales, β = 0.18 ± 0.03 for number of employees, β = 0.18 ± 0.03 for assets, β = 0.18 ± 0.03 for cost of goods sold, and β = 0.20 ± 0.03 for property, plant, and equipment. We propose models that may lead to some insight into these phenomena. First, we study a model in which the growth rate of a company is affected by a tendency to retain an “optimal” size. That model leads to an exponential distribution of the logarithm of growth rate, in agreement with the empirical results. Then, we study a hierarchical tree-like model of a company that enables us to relate β to parameters of a company structure. We find that β = −ln Π / ln z, where z defines the mean branching ratio of the hierarchical tree and Π is the probability that the lower levels follow the policy of higher levels in the hierarchy. We also study the output distribution of growth rates of this hierarchical model. We find that the distribution is consistent with the exponential form found empirically. We also discuss the time dependence of the shape of the distribution of the growth rates.
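A Python sketch of the scaling analysis on synthetic data (assumed, not the Compustat sample): sizes are log-normal, growth rates are given the tent-shaped Laplace form with dispersion shrinking as S^(−β), and β is recovered by regressing log dispersion on log size within size bins.

import numpy as np

rng = np.random.default_rng(2)
beta_true = 0.18
S = rng.lognormal(mean=4.0, sigma=2.0, size=50_000)   # synthetic firm sizes
g = rng.laplace(scale=S**(-beta_true), size=S.size)   # tent-shaped growth rates

logS = np.log(S)
edges = np.quantile(logS, np.linspace(0, 1, 21))      # 20 equal-count size bins
idx = np.digitize(logS, edges[1:-1])
xb = [logS[idx == k].mean() for k in range(20)]       # mean log size per bin
yb = [np.log(g[idx == k].std()) for k in range(20)]   # log growth dispersion per bin

slope = np.polyfit(xb, yb, 1)[0]
print("fitted slope, approximately -beta:", slope)    # close to -0.18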
Article
Full-text available
Wegner and Eberbach have argued that there are fundamental limitations to Turing Machines as a foundation of computability and that these can be overcome by so-called super-Turing models such as interaction machines, the π-calculus and the $-calculus. In this article, we contest the Wegner and Eberbach claims.
Article
Full-text available
General equilibrium theory in economics defines the relative prices for goods and services, but does not fix the absolute values of prices. We present a theory of money in which the value of money is a time dependent "strategic variable," to be chosen by the individual agents. The idea is illustrated by a simple network model of monopolistic vendors and buyers. The indeterminacy of the value of money in equilibrium theory implies a soft "Goldstone mode," leading to large fluctuations in prices in the presence of noise.
Article
Full-text available
It is widely believed that the rate of profit across industrial sectors, while not in fact uniform as stipulated in the theory of prices of production, is independent of the sectoral organic composition of capital. It follows that the simple labour theory of value must be systematically in error as a predictor of actual sectoral aggregate prices. We offer empirical evidence from the US economy (1987 input-output table) suggesting that this is not so: there is a substantial and statistically significant negative association between organic composition and profit rate across sectors.
Article
Full-text available
This paper extends the empirical investigation of the relation between labour values and different price forms in the case of the Greek economy. Subjecting the labour theory of value to empirical tests with data from various countries helps in the derivation of general conclusions regarding its empirical validity and practical usefulness. Our results on the closeness of values and prices as measured by their absolute deviation and correlation, the shape of the wage–profit curves, the predictive power of labour values over market prices compared with other 'value bases', and the comparison of fundamental Marxian categories when estimated in value and price terms provide further support for the empirical strength of the labour theory of value.
Article
Full-text available
This paper examines two aspects of Hayek's business cycle theory in the early 1930s: his methodological approach to the analysis of the cycle, and his substantive analysis of the role of changes in the 'structure of production' over the course of the cycle. The examination of the first aspect is developed by means of a comparison between Hayek's approach and those subsequently adopted by, first, Keynes and, second, Robert Lucas. The second aspect is investigated with the help of a formal example of the sort of 'structural transition' which Hayek envisaged: this is designed to shed some light on the question of which aspects of Sraffa's critique of Hayek are valid and which miss the mark.
Article
Full-text available
Contrary to what is expected from indirect evidence and two-commodity hypothetical examples, the evidence provided in this paper supports David Ricardo's empirical proposition that relative prices of production are mainly determined by labour-value ratios. The results obtained indicate that a 1 percent change in the profit rate will cause relative prices to change up to 2 percent, and as the profit rate on gross capital is approximately 5 percent one ends up with a "90 percent labor theory of value." The empirical evidence refers to the Yugoslav economy, but the factors that determine the deviations considered turn out to be of the same order of magnitude as in other economies.
Article
Full-text available
This paper offers a reassessment of the socialist calculation debate, and examines the extent to which the conclusions of that debate must be modified in the light of the subsequent development of the theory and technology of computation. Following an introduction to the two main perspectives on the debate which have been offered to date, we examine the classic case mounted by von Mises against the possibility of rational economic calculation under socialism. We discuss the response given by Oskar Lange, along with the counter-arguments to Lange from the Austrian point of view. Finally we present what we call the 'absent response', namely a re-assertion of the classic Marxian argument for economic calculation in terms of labour time. We argue that labour-time calculation is defensible as a rational procedure, when supplemented by algorithms which allow consumer choice to guide the allocation of resources, and that such calculation is now technically feasible with the type of computing machinery currently available in the West and with a careful choice of efficient algorithms. Our argument cuts against recent discussions of economic planning which continue to assert that the task is of hopeless complexity.
Article
Full-text available
We investigate the wealth evolution in a system of agents that exchange wealth through a disordered network in the presence of an additive stochastic Gaussian noise. We show that the resulting wealth distribution is shaped by the degree distribution of the underlying network, and in particular we verify that scale-free networks generate distributions with power-law tails in the high-income region. Numerical simulations of wealth exchanges performed on two different kinds of networks show the inner relation between the wealth distribution and the network properties and confirm the agreement with a self-consistent solution. We show that empirical data for the income distribution in Australia are qualitatively well described by our theoretical predictions.
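A minimal Python sketch under stated assumptions: the exchange rule below (each period every agent ships a fixed fraction of its wealth, split equally over its links, plus additive Gaussian noise) is an illustrative diffusion dynamic, not necessarily the authors' exact specification. On a Barabási–Albert scale-free graph the hubs accumulate wealth, so the upper tail inherits the power law of the degree distribution.

import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
N, T, frac, eps = 2000, 500, 0.05, 0.01

G = nx.barabasi_albert_graph(N, m=2, seed=3)   # scale-free trading network
A = nx.to_scipy_sparse_array(G, format="csr")  # adjacency matrix
deg = A.sum(axis=1)                            # node degrees (all >= 2 here)
w = np.ones(N)                                 # initial wealth

for _ in range(T):
    out = frac * w                             # wealth shipped out this period...
    w = w - out + A @ (out / deg)              # ...split equally over links
    w += eps * rng.standard_normal(N)          # additive Gaussian noise
    w = np.maximum(w, 0.0)                     # no negative wealth

print(np.corrcoef(deg, w)[0, 1])               # wealth tracks degree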
Article
In the 21st century, scientists will introduce a manufacturing strategy based on machines and materials that virtually make themselves. Called self-assembly, it is easiest to define by what it is not. A self-assembling process is one in which humans are not actively involved, in which atoms, molecules, aggregates of molecules and components arrange themselves into ordered, functioning entities without human intervention. In contrast, most current methods of manufacturing involve a considerable degree of human direction. We, or machines that we pilot, control many important elements of fabrication and assembly. Self-assembly omits the human hand from the building. People may design the process, and they may launch it, but once under way it proceeds according to its own internal plan, either toward an energetically stable form or toward some system whose form and function are encoded in its parts. In the next few decades, materials scientists will begin deliberately to design machines and manufacturing systems explicitly incorporating the principles of self-assembly. The approach could have many advantages. It would allow the fabrication of materials with novel properties. It would eliminate the error and expense introduced by human labor. And the minute machines of the future envisioned by enthusiasts of so-called nanotechnology would almost certainly need to be constructed by self-assembly methods.
Article
We study size and growth distributions of products and business firms in the context of a given industry. Firm size growth is analyzed in terms of two basic mechanisms, i.e., the increase of the number of new elementary business units and their size growth. We find a power-law relationship between size and the variance of growth rates for both firms and products, with an exponent between −0.17 and −0.15, with a remarkable stability upon aggregation. We then introduce a simple and general model of proportional growth for both the number of a firm's independent constituent units and their size, which conveys a good representation of the empirical evidence. This general and plausible generative process can account for the observed scaling in a wide variety of economic and industrial systems. Our findings contribute to shedding light on the mechanisms that sustain economic growth in terms of the relationships between the size of economic entities and the number and size distribution of their elementary components.
Article
Patterned self-assembled monolayers (SAMs) of alkanethiolates on gold films were used as constraining elements in forming shapes, in a strategy based on minimizing interfacial free energy. Circular right cylinders, catenoids, and other related shapes having centimeter dimensions were formed from poly(dimethylsiloxane) (PDMS) in a system comprising patterned SAMs and an aqueous solution of sodium chloride whose density equaled that of the polymer. These shapes were fabricated without using complementary, three-dimensional molds: the final form adopted by the PDMS was a minimum free energy shape with certain features of the shape set by the wetting of the pattern in the SAM by the PDMS. Using previously formed polymeric shapes and patterned SAMs as constraining elements, a cylinder fused with a catenoid, a cylinder fused with a truncated cone, two truncated cones fused together, and a truncated cone fused with a hemisphere were fabricated. Applying a magnetic field gradient influenced the final shape of the polymer by generating an effective spatial gradient in the density of the solution. Without using SAMs as constraining elements, convex-concave and double convex lenses were formed at interfaces of two immiscible liquids. Shapes with micrometer dimensions were fabricated by microcontact printing of patterned SAMs and self-assembly of a polymer on these patterns. These procedures produced shapes such as arrays of channel waveguides (with width of a few micrometers) and microlenses (with diameter of 1-2 μm).
Article
Previous writers on the Sraffa–Hayek exchange have tended to view it in four ways. First, as an ambiguous position that represents a half-way house between Sraffa's early and later work. Secondly, as representing the analytical basis for the most elaborate analysis by Keynes, in The general theory, of the ‘essential properties of interest and money’. Thirdly, as Sraffa's opening shots in his long critique of subjectivism. Fourthly, as an early discussion of the true problems associated with the attempt to integrate money into a Walrasian general equilibrium model. Our article adds another interpretation that focuses primarily upon those issues being explicitly discussed by Sraffa and Hayek. This raises many interesting issues, not least in providing us with some new insights on Sraffian scholarship. …when the definitive history of economic analysis during the nineteen-thirties comes to be written, a leading character in the drama (and it was quite a drama) will be Professor Hayek (Hicks, 1967: 103). The term ‘fascination’, though perhaps slightly unacademic, aptly describes the effect of the first impact of Professor Hayek's ideas on economists trained in the Anglo-Saxon tradition…to whom it suggested aspects of the nature of capitalistic production they were never taught to think of. This was the first impact. On second thoughts the theory was by no means so intellectually satisfying as it appeared at first. There were admitted gaps here and there in the first published account which was merely intended as rudimentary, and when one attempted to fill these gaps, they became larger instead of smaller, and new and unsuspected gaps appeared – until one was driven to the conclusion that the basic hypothesis of the theory, that scarcity of capital causes crises, must be wrong (Kaldor, 1942: 359). Nor should we be surprised that a Sraffa, with his taste for the concrete and his characteristic irony, has at the same time put us on our guard against a certain loose manner of conducting politics and tackling economic questions (Napolitano, 1978: 67).
Article
This paper presents the concurrency control strategy of SDD-1. SDD-1, a System for Distributed Databases, is a prototype distributed database system being developed by Computer Corporation of America. In SDD-1, portions of data distributed throughout a network may be replicated at multiple sites. The SDD-1 concurrency control guarantees database consistency in the face of such distribution and replication. This paper is one of a series of companion papers on SDD-1 [4, 10, 12, 21].
Article
Both theoretical and applied economics have a great deal to say about many aspects of the firm, but the literature on the extinctions, or demises, of firms is very sparse. We use a publicly available database covering some 6 million firms in the US and show that the underlying statistical distribution which characterises the frequency of firm demises—the disappearances of firms as autonomous entities—is closely approximated by a power law. The exponent of the power law is, intriguingly, close to that reported in the literature on the extinction of biological species.
Article
The rules of deduction which are usually used for many-sorted equational logic in computer science, for example in the study of abstract data types, are not sound. Correcting these rules by introducing explicit quantifiers yields a system which, although it is sound, is not complete; some new rules are needed for the addition and deletion of quantifiers. This note is intended as an informal, but precise, introduction to the main issues and results. It gives an example showing the unsoundness of the usual rules; it also gives a completeness theorem for our new rules, and gives necessary and sufficient conditions for the old rules to agree with the new.
Article
This study investigates the empirical strength of the labour theory of value and its relation to profit rate equalisation. It replicates tests from previous studies, using input-output data from 18 countries spanning from 1968 to 2000. The results are broadly consistent: labour values and production prices of industry outputs are highly correlated with their market prices. The predictive power is compared to alternative value bases. Furthermore, the empirical support for profit rate equalisation, as assumed by the theory of production prices, is weak.
Article
We apply methods and concepts of statistical physics to the study of economic organizations. We identify robust, universal characteristics of the time evolution of economic organizations. Specifically, we find the existence of scaling laws describing the growth of the size of these organizations. We study a model assuming a complex evolving internal structure of an organization that is able to reproduce many of the empirical findings.
Article
The SHUNYATA program contains heuristics which are related to reasoning processes of mathematicians and guide the search for a proof. For example, a heuristic applies the method of reductio ad absurdum to prove the negation of a proposition. Another heuristic generates formulas and sets which form the central “ideas” of significant proofs. Some heuristics control the application of other heuristics, for example, by time limits which interrupt a heuristic if it achieves no result. The architecture and the mode of operation of SHUNYATA are illustrated in detail by SHUNYATA's proof of Gödel's incompleteness theorem which says that every formal number theory contains an undecidable formula, i.e., neither the formula nor its negation are provable in the theory. In this proof, SHUNYATA constructed several closed formulas on the basis of elementary rules for the formation of formulas and proved that one of these formulas is undecidable. Further experiments with a learning procedure suggest that an automatic construction of SHUNYATA's heuristics on the basis of a universal set of elementary functions is feasible.
Article
In the social sciences, there is increasing evidence of the existence of power law distributions. The distribution of recessions in capitalist economies has recently been shown to follow such a distribution. The preferred explanation for this is self-organised criticality. Gene Stanley and colleagues propose an alternative, namely that power law scaling can arise from the interplay between random multiplicative growth and the complex structure of the units composing the system. This paper offers a parsimonious model of the US business cycle based on similar principles. The business cycle, along with long-term growth, is one of the two features which distinguishes capitalism from all previously existing societies. Yet, economics lacks a satisfactory theory of the cycle. The source of cycles is posited in economic theory to be a series of random shocks which are external to the system. In this model, the cycle is an internal feature of the system, arising from the level of industrial concentration of the agents and the interactions between them. The model—in contrast to existing economic theories of the cycle—accounts for the key features of output growth in the US business cycle in the 20th century.
Article
Following findings by Ormerod and Mounfield (Physica A 293 (2001) 573), Wright ('The duration of recessions follows an exponential not a power law', cond-mat/0311585) raises the problem of whether a power law (Ormerod and Mounfield, 2001) or an exponential law (Wright) describes the distribution of occurrences of economic recession periods. In order to clarify the controversy, a different set of GDP data is examined here. The conclusion that recession periods follow a power-law distribution seems the better one, though the matter is not entirely settled. The case of prosperity duration is also studied and is found to follow a power law. Universal but also non-universal features between the recession and prosperity cases are emphasized. Considering that the economy is basically a bistable (recession/prosperity) system, we may derive a characteristic (de)stabilisation time.
Article
We consider the computational complexity of the market equilibrium problem by exploring the structural properties of the Leontief exchange economy. We prove that, for economies guaranteed to have a market equilibrium, finding one with maximum social welfare or maximum individual welfare is NP-hard. In addition, we prove that counting the number of equilibrium prices is #P-hard.
Article
The concepts of transaction and of data consistency are defined for a distributed system. The cases of partitioned data, where fragments of a file are stored at multiple nodes, and replicated data, where a file is replicated at several nodes, are discussed. It is argued that the distribution and replication of data should be transparent to the programs which use the data. That is, the programming interface should provide location transparency, replica transparency, concurrency transparency, and failure transparency. Techniques for providing such transparencies are abstracted and discussed. By extending the notions of system schedule and system clock to handle multiple nodes, it is shown that a distributed system can be modeled as a single sequential execution sequence. This model is then used to discuss simple techniques for implementing the various forms of transparency.
Article
A dynamic computational model of a simple commodity economy is examined and a theory of the relationship between commodity values, market prices and the efficient division of social labour is developed. The main conclusions are: (i) the labour value of a commodity is an attractor for its market price; (ii) market prices are error signals that function to allocate the available social labour between sectors of production; and (iii) the tendency of prices to approach labour values is the monetary expression of the tendency of a simple commodity economy to allocate social labour efficiently. The model demonstrates that, in the special case of simple commodity production, Marx's law of value can naturally emerge from multiple local exchanges and operate 'behind the backs' of actors solely via money flows that place budget constraints on their local evaluations of commodity prices, which are otherwise subjective and unconstrained.
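A toy two-sector Python sketch in the spirit of conclusions (i) and (ii); the numbers and the reallocation rule are illustrative assumptions, not the paper's actual model. Consumers spend fixed money budgets, prices clear each period's market, and labour drifts toward the sector paying the higher money income per hour, so the relative price converges to the labour-value ratio.

import numpy as np

l = np.array([2.0, 5.0])           # labour values: hours per unit of each good
L = np.array([50.0, 50.0])         # labour currently allocated to each sector
budget = np.array([300.0, 200.0])  # money spent on each good per period

for _ in range(200):
    q = L / l                      # sector outputs
    p = budget / q                 # market-clearing prices
    wage = budget / L              # money income per hour in each sector
    # price-mediated error signal: labour moves toward the better-paid sector
    shift = 0.1 * (wage[0] - wage[1]) / wage.sum() * L.sum()
    L = L + np.array([shift, -shift])

print(p[0] / p[1], l[0] / l[1])    # relative price converges to the value ratio (0.4)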
Article
The paper addresses Kliman's criticisms of the observed correlations between prices and labour values. It argues that the notion of spurious correlation is not relevant in this case. It examines Kliman's own simulations and shows that his statistical correction techniques involve dividing through by the signal to leave the noise.
Article
This study replicates findings that sectoral prices and values are highly correlated cross-sectionally, and that deviations between them are small. Yet after controlling for variations in industry size that produce 'spurious correlation', I find no reliable evidence that relative values have any influence upon relative prices. The smallness of price–value deviations thus does not result from such an influence; it is shown instead to result from a lack of dispersion in the data. Values turn out to be no better predictors of prices than any other random variable with the same probability distribution.
Article
This paper provides empirical support for the ‘law of value’, understood as the proposition that embodied labour time is conserved in exchanges of commodities. Market prices are well correlated with the sum of direct and indirect labour content. Is it possible to produce equally good correlations by taking the sum of direct and indirect x-content, where x is some input other than labour time? We repeat the analysis for electricity, iron and steel, and oil and show that the answer is no. The high correlations in the case of labour time are, therefore, not a statistical artefact.
Article
Ormerod and Mounfield analysed GDP data of 17 leading capitalist economies from 1870 to 1994 and concluded that the frequency of the duration of recessions is consistent with a power-law. But in fact the data is consistent with an exponential (Boltzmann-Gibbs) law.
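The disagreement is ultimately a model-selection question, which can be posed directly by comparing maximised log-likelihoods. A Python sketch on synthetic durations (stand-ins for the GDP data; durations are treated as continuous for simplicity):

import numpy as np

rng = np.random.default_rng(4)
d = rng.exponential(scale=1.5, size=60).round() + 1  # synthetic recession durations

lam = 1.0 / d.mean()                         # exponential MLE
alpha = 1.0 + d.size / np.log(d).sum()       # continuous power-law MLE, x_min = 1

loglik_exp = d.size * np.log(lam) - lam * d.sum()
loglik_pow = d.size * np.log(alpha - 1.0) - alpha * np.log(d).sum()
print("exponential:", loglik_exp, " power law:", loglik_pow)

On data that are genuinely exponential the exponential log-likelihood typically comes out higher; a likelihood-ratio or Vuong-type test then decides whether the difference is significant.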
Article
Power law distributions of macroscopic observables are ubiquitous in both the natural and social sciences. They are indicative of correlated, cooperative phenomena between groups of interacting agents at the microscopic level. In this paper we argue that when one considers aggregate macroeconomic data (annual growth rates in real per capita GDP in the seventeen leading capitalist economies from 1870 through to 1994), the magnitude and duration of recessions over the business cycle do indeed follow power-law-like behaviour for a significant proportion of the data (demonstrating the existence of cooperative phenomena amongst economic agents). Crucially, however, there are systematic deviations from this behaviour when one considers the frequency of occurrence of large recessions. Under these circumstances the power law scaling breaks down. It is argued that it is the adaptive behaviour of the agents (their ability to recognise the changing economic environment) which modifies their cooperative behaviour.
Revisiting the Song monetary revolution: A review essay
  • Von Glahn, R.