An Interview with Venkat Venkatasubramanian, author of How Much Inequality Is Fair?: Mathematical Principles of a Moral, Optimal, and Stable Capitalist Society

This week, our featured book is How Much Inequality Is Fair?: Mathematical Principles of a Moral, Optimal, and Stable Capitalist Society. Today, we are happy to present an exclusive interview with the author, Venkat Venkatasubramanian.

Q: The topic is obviously very important, but why a chemical engineer? How did you get interested in this problem?

Venkat: That’s a fair question. When I started this journey, I wasn’t thinking about income inequality at all. It all started with a question I had asked myself in 1983, as I was writing my doctoral thesis on a topic in statistical thermodynamics. Now, statistical thermodynamics is the conceptual framework for predicting the macroscopic behavior of a thermodynamic system given the properties of the millions of individual entities (e.g., molecules) that make up the system. As “prisoners” of Newton’s laws and conservation principles, molecules do not have control over their “fates,” that is, their dynamical trajectories; their dynamical evolution is driven by thermal agitation from incessant intermolecular collisions. Nor do the molecules have “free will” that would make them “desire” one path over another.

But what if they did? What if the entities possessed free will and had the ability to decide what to do next? What would be a statistical mechanics-like framework for predicting the macroscopic behavior of a large collection of such rational, intelligent, goal-driven agents? This is the question I had asked myself.  

As I couldn’t find the framework I was looking for in the literature at that time, I went about developing one in the following years, not realizing that this odyssey would take me thirty-four years! This led to a series of papers that culminated in this book in 2017. Progress came only in fits and starts, in a series of six key insights, which I call my “happy thought” moments, with years of quiescence in between.

What I was after (and I am still on this quest) were the fundamental principles and the mathematical framework that could help predict the macroscopic properties and behavior of a teleological system given the microscopic properties of its rational entities – i.e., going from the parts to the whole. We know how to do this, in many cases but not all, when the entities are non-rational or purpose-free (such as molecules) – that is what statistical mechanics is all about. But what about rational entities, which exhibit purposeful, goal-driven behavior?

Q: So, how did this lead to income inequality?

Venkat: After nearly two decades of almost no progress, I had my first break, in 2003, when I started thinking about the design of self-organizing adaptive complex networks for optimal performance in a given environment. Our work on this topic gave me the first insight: how the microstructure of a network, i.e., the entity- or part-level properties such as the vertex degree in a graph, is related to its macroscopic, system-level properties, such as its survival purpose, and how the environment plays a critical role in determining, through network topology, the optimal balance between efficiency and robustness trade-offs in design.

This study in turn led me, in 2004, to investigate the maximum entropy framework in the design of such networks. This resulted in the formulation of my initial theoretical ideas about a new constructionist or emergentist framework, in 2007, which I named statistical teleodynamics.

Soon I realized that the true essence of entropy is fairness in a distribution, which got me interested in fairness in income distribution. This resulted in my papers in 2009 and 2010, wherein I proved that the lognormal distribution is the fairest distribution of pay in an ideal free market at equilibrium, which is determined by maximizing entropy, i.e., by maximizing fairness. This was a statistical mechanics and information-theoretic perspective, and I knew that there should be a game-theoretic answer to this as well. So I started working on that right away, and when I moved to Columbia in 2011, I initiated a collaboration with Professor Jay Sethuraman, a game theory expert, and my doctoral student Yu Luo. This resulted in our 2015 paper.

Essentially, my long quest for a statistical mechanics-like framework for rational intelligent agents started in 1983 and ended with our 2015 paper. In that paper, however, I had not fully explored the connection between the philosophical theories of human societies and the statistical teleodynamics perspective, even though I had touched upon those connections in my earlier papers. This book was written to address that gap as well as to present all the results from my past papers, and some new results, in a unified conceptual framework.

Q: Can you give us an example of what you mean by fair income inequality?

Venkat: Sure. Inequality per se is not bad; in fact, some inequality is inevitable, even desirable, in a free-market society. As different people have different talents and skills, and different capacities for work, they make different contributions to society, some more, others less. Therefore, it is only fair that those who contribute more earn more.

But how much more?

In other words, at the risk of sounding oxymoronic, what is the fairest inequality of income? This critical question is at the heart of the inequality debate. The debate is not so much about inequality per se as it is about fairness.

Consider a simple example to illustrate this point. John performs an odd job for one hour and makes $100. Jane performs the same job, but works for two hours and makes $200. Is there inequality in their incomes? Of course, there is. But is the inequality fair? Yes, of course. Jane earned more because she contributed more. Their incomes are not equal, but equitable. Their economic rewards, after accounting for John’s and Jane’s contributions, are the same — both are paid at the same rate per hour of contribution, which is the basis for fairness here.

In this simple case, it was easy to ensure equity. But how do we accomplish this, in general, in a free market consisting of millions of workers of varying degrees of talent, skill, and capacity for work? The fairest outcome is the one where everyone’s economic reward, after accounting for his or her contribution, is the same, similar to the John-Jane example. So, for the general case, is there a measure of fairness that can guide us to accomplish this? Given the complexity of the problem, one might anticipate the answer to be no. But, surprisingly, the answer is yes. Furthermore, what that measure turns out to be is an even bigger surprise, as we discuss below.
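As a minimal sketch of this notion of equity, using only the John-Jane numbers above, the check below verifies that two unequal incomes are nevertheless equitable because the reward per hour of contribution is identical:

```python
# Equity check for the John-Jane example: the incomes are unequal, but the
# pay rate per hour of contribution is the same, which is the basis for
# fairness here.
workers = {"John": {"hours": 1, "income": 100},
           "Jane": {"hours": 2, "income": 200}}

rates = {name: w["income"] / w["hours"] for name, w in workers.items()}
print(rates)                                          # {'John': 100.0, 'Jane': 100.0}
print("equitable:", len(set(rates.values())) == 1)    # True
```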

Q: Why does extreme inequality matter?

Venkat: Extreme inequality in the United States, and elsewhere, is deeply troubling on a number of fronts. First, there is the moral issue. For a country explicitly founded on the principles of liberty, equality, and the pursuit of happiness, protected by the “government of the people, by the people, for the people,” extreme inequality raises troubling questions of social justice that get at the very foundations of our society. We seem to have a “government of the 1% by the 1% for the 1%,” as the economics Nobel laureate Joseph Stiglitz wrote in his Vanity Fair essay.

As Harvard philosopher Tim Scanlon argues, extreme inequality is bad for the following reasons: (1) economic inequality can give wealthier people an unacceptable degree of control over the lives of others; (2) economic inequality can undermine the fairness of political institutions; (3) economic inequality undermines the fairness of the economic system itself; and (4) workers, as participants in a scheme of cooperation that produces national income, have a claim to a fair share of what they have helped to produce.

As Robert Reich warns: “Third and finally, people who feel subjected to what they consider to be rigged games often choose to subvert the system in ways that cause everyone to lose. . . . In summary, when people feel that the system is unfair and arbitrary and that hard work does not pay off, we all end up losing. . . . Together, these responses impose incalculable damage on an economic system. They turn an economy and a society into what mathematicians would call a ‘negative sum’ game. When capitalism ceases to deliver economic gains to the majority, it eventually stops delivering them at all — even to a wealthy minority at the top. It is unfortunate that few of those at the top have yet to come to understand this fundamental truth.”

Q: What does economics have to say about the question of how much inequality is fair?

Venkat: This critical question has remained unanswered in economics for over two centuries, largely because it seemed mathematically intractable. As a result, mainstream economics has offered little guidance on fairness and the ideal distribution of income in a free-market society. Most people think this question is too difficult, and perhaps too vague, to be answered. The typical answer is one of two: (i) it all depends on what you mean by fair; and/or (ii) fair inequality is whatever income distribution the free market delivers. In other words, most people think this question cannot even be posed mathematically, let alone answered. That, I believe, is why it has remained an open question for over two centuries. Another reason is that mainstream economics has generally preferred to deal with efficiency and growth, not fairness.

Meanwhile, political philosophy has much to say about fairness and inequality. Consider the ground-breaking theories on fairness by John Rawls, Robert Nozick, Ronald Dworkin, and others, in the 1970s and 1980s. However, all these contributions are purely qualitative, and therefore do not make quantitative predictions about income distributions that can be verified by empirical data.

On the empirical front in economics, there has been exciting recent progress, as seen in the outstanding contributions of Atkinson, Piketty, Saez, and others. Empirical observations are obviously very important, but it is equally important to complement them with a theoretical framework that provides a deeper understanding and analytical insight. Many feel intuitively that the current level of inequality is unfair, is morally and economically unjustified, and is potentially destabilizing. Can we convert this intuitive, qualitative understanding into a more formal quantitative theory? That is, can we address this intuition mathematically and develop an analytical framework that can model and explore this understanding in more precise, quantitative terms? As we take steps to address extreme inequality, we need to know what the desired target inequality is — and for this we need a quantitative, testable theory of fairness for free-market capitalism. This is what I have done with my theory.

Q: What is new about your theory? What are the central ideas?

Venkat: The very approach itself is new, unusual, and unorthodox in economics and in political philosophy. I have proposed a transdisciplinary theory that integrates foundational principles from disparate disciplines into a unified conceptual and mathematical framework that includes the key perspectives on this question — the perspectives of political philosophy, economics, statistical mechanics, information theory, game theory, and systems engineering.

One might be surprised to see that the answer to the fair inequality question requires concepts and techniques from statistical mechanics, information theory, and systems engineering, disciplines that seem unrelated to, perhaps even incompatible with, economics and political philosophy. However, a free-market system is, after all, a system of millions of rational agents interacting and making decisions, thereby exhibiting certain features of stochastic dynamical systems long-studied in these disciplines. As I show, these concepts and techniques are not only relevant, but they are, in fact, quite indispensable to answer our question.

My theory integrates the principles of liberty, equality, and fairness from political philosophy with utility maximization by rational, self-interested agents from economics; the agents’ dynamic interactions with one another and with their environment are modeled using entropy and potential from statistical mechanics, information theory, and game theory, subject to the efficiency and robustness criteria of systems engineering. The framework unifies the concept of entropy from statistical mechanics and information theory with the concept of potential from game theory, and proves that both represent the concept of fairness in economics and political philosophy. This deep and surprising insight is one of the two key concepts in my unified framework, which I have named statistical teleodynamics.

The other key insight is that when one maximizes fairness, all agents enjoy the same overall or effective utility, after accounting for everyone’s contribution, at equilibrium in an ideal free-market society. This is an important result, because with this outcome we have proved that everyone is rewarded equitably in an ideal free market – similar to the John-Jane example. You are rewarded more for contributing more, but everyone is rewarded at the same rate per unit of contribution. By effective utility, I mean the term h_i (see the book for details), which accounts for the contribution, i.e., the cost or effort (the disutility term, v_i) incurred by a worker in order to enjoy the benefits derived from her salary S_i (i.e., u_i), as well as the utility of better future prospects (w_i). The theory proves that the fairest inequality in wage income, achieved at equilibrium when entropy (i.e., fairness) is maximized, is a lognormal distribution in an ideal free market. Since maximizing fairness leads to an equitable outcome for all workers, my theory provides the moral justification for the free-market economy in mathematical terms.
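For readers who want to see the mechanics, here is a minimal numerical sketch. The specific functional forms used below (u = α ln S for the utility of salary, v = β (ln S)² for the disutility of effort, and w = −γ ln N for the future-prospects term, where N is the number of workers in a salary category) are illustrative assumptions in the spirit of the description above; the book gives the exact definitions. Under these forms, equalizing the effective utility h = u − v + w across salary categories forces a lognormal occupation of the categories, which the script checks by showing that h is numerically constant under that distribution:

```python
import numpy as np

# Assumed illustrative forms (see the book for the exact definitions):
#   utility of salary     u =  alpha * ln(S)
#   disutility of effort  v =  beta  * (ln S)**2
#   future prospects      w = -gamma * ln(N)   (N = workers in the salary category)
# Effective utility: h = u - v + w.
alpha, beta, gamma = 1.0, 0.1, 0.5
total_workers = 1_000_000

# With salary categories of equal width in ln(S), equalizing h across
# categories requires a lognormal occupation with
#   sigma^2 = gamma / (2*beta)  and  mu = alpha / (2*beta).
sigma2 = gamma / (2 * beta)
mu = alpha / (2 * beta)

ln_s = np.linspace(mu - 3 * np.sqrt(sigma2), mu + 3 * np.sqrt(sigma2), 60)
weights = np.exp(-(ln_s - mu) ** 2 / (2 * sigma2))    # normal in ln(S), i.e., lognormal in S
n_workers = total_workers * weights / weights.sum()   # workers per salary category

h = alpha * ln_s - beta * ln_s ** 2 - gamma * np.log(n_workers)
print("spread of effective utility across categories:", float(h.max() - h.min()))  # ~0
```

Under these same illustrative assumptions, the flat profile of h across salary categories plays the role of the equal chemical potentials discussed next.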

The deep connection between statistical teleodynamics and statistical thermodynamics reveals the essential nature of economic equilibrium. Economic equilibrium is often cast in a classical mechanics-like framework, as the balance between supply and demand forces, which treats it as a mechanical equilibrium. However, my theory shows that it is more like a chemical equilibrium, which is achieved when the chemical potentials of the different phases are equal. We proved that the equilibrium income distribution is reached when the effective utilities are the same in all salary categories – this is analogous to the equality of chemical potentials, so effective utility corresponds to chemical potential. Thus, economic equilibrium is not a mechanical or thermal equilibrium but something more like a chemical equilibrium, which is an important insight.

Q: What does entropy have to do with anything in economics? Isn’t it a concept in physics?

Venkat: Good question. This question has been a major conceptual challenge for nearly a century and has prevented the successful use of statistical mechanics concepts and techniques in economics.

The concept of entropy is a hard one to grasp and is somewhat mysterious to many. Even 150 years after its discovery, it is still one of the most misunderstood, even maligned, concepts in science. Its common association with randomness, uncertainty, and disorder – with doom, as the economics Nobel laureate Amartya Sen put it – has been a major conceptual stumbling block for its proper recognition and use in economics. It is a historical accident that the concept of entropy was first discovered in the context of thermodynamics and, therefore, it has generally been identified as a measure of randomness or disorder.

However, as my analysis reveals, the true essence of entropy is fairness, which appears with different masks in different contexts. In thermodynamics, being fair to all accessible phase space cells at equilibrium under the given constraints — i.e., assigning equal probabilities to all the allowed microstates — projects entropy as a measure of randomness or disorder. This is a reasonable interpretation in this particular context, but it obscures the essential meaning of entropy as a measure of fairness.

It is important not to think of entropy as a physics concept merely because it was first discovered there. In other words, it is not like energy or momentum, which are physics-based concepts. Entropy is really a concept in probability and statistics, an important property of distributions, whose applications have proved useful in physics and information theory. In this regard, it is more like variance, which is also a statistical property of distributions, with applications in a wide variety of domains.
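A small, self-contained illustration of this point (the probability vectors below are arbitrary examples): Shannon entropy is simply a property of a distribution, and it is maximized by the uniform distribution, that is, by the assignment that treats every accessible state equally.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy of a discrete distribution (in nats)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Four ways of assigning probability to five "accessible states" (or, equally,
# income shares to five groups). The uniform assignment, being equally fair to
# every state, has the highest entropy; the more lopsided the assignment, the
# lower the entropy.
candidates = {
    "uniform": [0.20, 0.20, 0.20, 0.20, 0.20],
    "mild":    [0.30, 0.25, 0.20, 0.15, 0.10],
    "skewed":  [0.60, 0.20, 0.10, 0.05, 0.05],
    "extreme": [0.96, 0.01, 0.01, 0.01, 0.01],
}
for name, p in candidates.items():
    print(f"{name:8s} entropy = {shannon_entropy(p):.3f} nats")
```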

In the past, this confusion presented an insurmountable obstacle in neoclassical economics, as Philip Mirowski observes: “As Bausor put it, whereas classical thermodynamics ‘approaches entropic disintegration, [neoclassical theory] assembles an efficient socially coherent organization. Whereas one articulates decay, the other finds constructive advance.’ ” This obstacle goes away once we realize that entropy is a measure of fairness, not randomness, and that maximizing entropy produces a self-organized, coherent categorization of employees, not decay or entropic disintegration.

This critical insight about entropy as fairness has not been explicitly recognized and emphasized in prior work in statistical mechanics, information theory, or economics. Despite several attempts in the past, entropy has played, by and large, only a marginal role in economics, and even that was met with strong objections from leading practitioners. Its pivotal role in economics and in free-market dynamics has never been recognized in all these years. By correctly interpreting entropy as a measure of fairness, I solve this conceptual problem and show how central entropy is to the functioning of an ideal free-market system.

Q: Why did you name your theory Statistical Teleodynamics?

Venkat: The name comes from telos, a Greek word meaning goal or purpose or end. Just as the dynamics of molecules are driven by thermal agitation, leading to thermodynamics, the dynamics of rational entities (such as people) are driven by their goals, giving us teleodynamics. Because of the stochastic nature of this dynamics, we have statistical teleodynamics.

It turns out that Professor Terrence Deacon of the University of California, Berkeley, had also coined the term teleodynamics independently. However, I was the first to define and use the term statistical teleodynamics, as well as to formulate its postulates mathematically.

Q: What is the connection between game theory and statistical mechanics?

Venkat: As we know, statistical mechanics or statistical thermodynamics is the theory of the interactions and emergent behavior of millions of goal-free entities, namely, molecules. This is a theory of chance interactions – probability theory is at its foundation. At the other extreme, for goal-driven strategic interactions, the standard conceptual framework for modeling is game theory. Game theory arose from the study of games of strategy, such as chess, whereas probability theory arose from the study of games of pure chance, as in gambling. In our 2015 paper, we showed that for a class of games called potential games, played by millions of rational agents, the game of strategy becomes essentially equivalent to a game of chance as the number of players grows large, thereby unifying game theory and statistical mechanics and yielding statistical teleodynamics.
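Here is a toy sketch of that equivalence, with invented payoffs rather than the model of the 2015 paper: many agents choose among a few categories, the payoff of a category falls as it becomes crowded, and simple best-response moves drive the population to a state where payoffs are equalized and the category shares take the Gibbs-like exponential form familiar from statistical mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy potential game (illustrative numbers only): N agents each occupy one of
# k categories; the payoff of category j is h_j = a_j - gamma * ln(n_j),
# which falls as the category gets crowded.
N, k, gamma = 20_000, 5, 1.0
a = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
n = rng.multinomial(N, np.ones(k) / k).astype(float)   # random initial occupancy

def payoffs(n):
    return a - gamma * np.log(n)

# Best-response dynamics: one agent at a time leaves the worst-paying category
# for the best-paying one, until no single move helps.
for _ in range(3 * N):
    h = payoffs(n)
    worst, best = int(np.argmin(h)), int(np.argmax(h))
    if h[best] - h[worst] < 1e-3:
        break
    n[worst] -= 1
    n[best] += 1

h = payoffs(n)
gibbs = np.exp(a / gamma) / np.exp(a / gamma).sum()     # exponential (Gibbs-like) shares
print("payoff spread at equilibrium:", float(h.max() - h.min()))
print("final category shares:       ", np.round(n / N, 3))
print("Gibbs-like prediction:       ", np.round(gibbs, 3))
```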

Q: How do the theory’s predictions compare with real-world data on inequality?

Venkat: Comparing our theory’s predictions to the actual inequality data of different countries, we find some surprising results.

The theory predicts a lognormal distribution for the ideal 1-class society (i.e., all workers have the same preference parameters α, β, and γ for the utility function; please see the book for details) and a mixture of two different lognormals for a 2-class society (i.e., there are two different sets of parameters for the two different classes). Empirical data show that the bottom 95-97% of the population follows a lognormal distribution, but the top 3-5% follows a power law. But our 2-class model suggests that the top 3-5% is really a truncated lognormal that can easily be misidentified as a power law. So, the theory correctly predicts the lognormal for the bottom 95-97%, but offers an intriguing possibility regarding the top population.
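To see how easy that misidentification is, here is a small simulation with arbitrary lognormal parameters: sample incomes from a single lognormal, keep only the top 3%, and fit a straight line to the complementary CDF on log-log axes. The fit is close enough that the tail could easily pass for a Pareto (power-law) tail.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative numbers only: draw "incomes" from one lognormal, keep the top 3%,
# and fit a straight line to the complementary CDF on log-log axes.
incomes = rng.lognormal(mean=10.5, sigma=0.8, size=1_000_000)
top = np.sort(incomes)[int(0.97 * incomes.size):]

n = top.size
log_income = np.log(top)
log_ccdf = np.log((n - np.arange(n)) / n)   # fraction of the top slice above each income

slope, intercept = np.polyfit(log_income, log_ccdf, 1)
fit = slope * log_income + intercept
r2 = 1 - np.sum((log_ccdf - fit) ** 2) / np.sum((log_ccdf - log_ccdf.mean()) ** 2)
print(f"apparent Pareto exponent: {-slope:.2f}, R^2 of straight-line fit: {r2:.3f}")
```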

The theory proposes a new measure of inequality, the Non-ideal Inequality Coefficient, ψ, which measures deviations from ideal inequality. If ψ = 0, then that society has the ideal, fairest inequality, which is the most desirable outcome. We, of course, don’t expect to see this for any real-life society. Even though unrealistic, this target is nevertheless quite useful because it defines the gold standard against which real-life societies can be compared and measured. This is the target to shoot for. In all the current discussions about inequality, no one has offered any notion of what it should be ideally. Most people seem to agree that the current level is unfair, but no one is able to say what it should be. This is the first time we have that target identified, defined, and measured.

If ψ is close to 0, then that society has near-ideal inequality. It is a pleasant surprise to find this outcome for an overwhelming majority of the population (the bottom 99%) in Norway, Sweden, Denmark, and Switzerland, which have achieved income shares close to the ideal values over the past 25 years.

What is even more surprising is that these societies did not know, a priori, what the ideal, theoretically fairest, distribution was, and yet they seem to have “discovered” a near-ideal outcome empirically on their own.  One should take this as encouraging news for the other societies that may strive to accomplish this goal.

It is no big surprise to learn that the United States is at the other extreme, with a markedly unfair distribution of income, but the measure ψ tells us by how much – the bottom 90% are making about 24% less than their fair share. This is a big deal because it amounted to about $800 billion/year in 2010. This has a large negative impact on GDP growth. Most other Western democracies are in between the two extremes of Norway (near-ideal fairness) and the United States (markedly unfair).

Q: How can this theory help with reducing income inequality?

Venkat: I do not provide any policy prescriptions, for it is best left to experts such as Krugman, Piketty, Reich, Stiglitz, and others, who have written extensively about the possible cures in their op-eds, papers, and books. For example, the late Anthony Atkinson, widely regarded as the father of the modern approach to economic inequality, discussed his fifteen-point policy proposal to address the problem in his recent book. While these experts may differ on the details, they all generally agree that the cure involves an appropriate mix of progressive taxation on the wealthy and corporations, a living minimum wage, universal health care, inheritance tax, improved corporate governance, assistance for education and skills development, and so on. Put simply, we need to restore fairness and upward social mobility by adopting policies that empower all individuals to continue to innovate and grow a free-market economy. In some sense, the cures are all well known. In fact, economists have known them for quite a while, but what is lacking is the social conscience and political will to get it done.

Even though I do not provide policy prescriptions, I believe my theory can still help in an important way. By defining the ideal, theoretically possible free-market society in quantitative terms, this theory has identified the target to shoot for regarding distributive justice. The target society where everyone is equitably rewarded at equilibrium, all enjoying the same effective utility, is the best that a free-market society can deliver to its citizens.

Since we now know that the ideal income distribution is lognormal, with ψ = 0, we can design our tax-and-transfer policies such that, after taxes and transfers, the final income distribution has a low ψ, thereby reducing unfair inequality and getting as close to the ideal as desired. Such an approach could be used to design better executive compensation packages as well. Thus, instead of guessing what the different tax brackets should be, we now have a scientific basis for applying the fairness criterion across the compensation spectrum.
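As a rough sketch of how such a design loop might look, the snippet below uses a simple decile-share deviation as a stand-in for ψ (it is not the book's definition) and invented numbers throughout: it computes the gap between actual and ideal decile income shares, applies a crude tax-and-transfer, and checks that the gap shrinks.

```python
import numpy as np

rng = np.random.default_rng(2)

def nonideality(incomes, ideal_incomes, bins=10):
    """Illustrative stand-in for a non-ideality coefficient (not the book's
    exact definition of psi): mean absolute gap between actual and ideal
    decile income shares."""
    def shares(x):
        x = np.sort(np.asarray(x, dtype=float))
        return np.array([g.sum() for g in np.array_split(x, bins)]) / x.sum()
    return np.abs(shares(incomes) - shares(ideal_incomes)).mean()

# Ideal reference: a lognormal population (parameters are arbitrary here).
ideal = rng.lognormal(mean=10.5, sigma=0.7, size=100_000)

# Toy "actual" population: the same lognormal body, but with an inflated top decile.
actual = np.sort(rng.lognormal(mean=10.5, sigma=0.7, size=100_000))
actual[-10_000:] *= 3.0

print("before transfers:", round(nonideality(actual, ideal), 4))

# A crude tax-and-transfer: tax the top decile at 30% and split the proceeds
# equally across the whole population.
after = actual.copy()
revenue = 0.30 * after[-10_000:].sum()
after[-10_000:] *= 0.70
after += revenue / after.size

print("after transfers: ", round(nonideality(after, ideal), 4))
```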

Remember to enter our book giveaway by Friday at 1 PM for a chance to win a free copy!
