A Liberal Radicalism Mechanism for Public Goods

In a recent paper, Vitalik Buterin, Zoë Hitzig, and E. Glen Weyl propose what they refer to as a Liberal Radicalism mechanism for handling the provision of public goods.

The difference between a public good and a private good has to do with rivalry and excludability. Private goods are rivalrous and excludable while public goods are non-rivalrous and non-excludable. A good is rivalrous if one person consuming it inhibits another person’s ability to consume it. For example, a sandwich is highly rivalrous because if someone eats it then no one else gets to eat it, but information is non-rival because one person learning something doesn’t inhibit someone else’s ability to learn that thing. A good is excludable if there are effective ways of preventing people from consuming it. For example, most of your personal belongings are hopefully excludable in the sense that you can effectively control who gets to use them, but a fireworks show is relatively non-excludable because it’s not really feasible for the people who put on the display to selectively choose who gets to see it. Of course, all goods really exist on a spectrum for both of these traits, so there is no line in the sand at which something becomes a public good, but, in general, the more something behaves like a public good the harder it is for traditional markets to handle.

This has to do with the fact that these markets rely on property rights. The less rivalrous something is, the more its benefits will flow to people besides the person who paid for it, and thus less of the good’s actual benefit will be taken into account by the market. The less excludable a good is, the harder it will be to define and enforce property rights in the first place. The combination of these two effects leads to what is commonly known as the free rider problem, where individuals are incentivized to free-ride off of others’ contributions to public goods rather than contributing themselves, which leads to the goods being under-provided overall.

There are a variety of methods that have been used to try to handle this problem, such as developing new technologies to make the good excludable (e.g. copyrights for information), having the good be provided or subsidized by the state, assurance contracts (in which people agree to contribute conditional on others contributing), or using moral, cultural, or social motives to compel individuals to contribute. One naive approach you might try if you’re a government or an influential philanthropist is to ask people how much they value the good, compare the total amount that people value it with the amount currently being paid for it, and supply the difference. The problem with this is that people won’t be properly incentivized to tell you how much they really value the good. In fact, in such a situation people would most likely be incentivized to say they valued the good as much as possible, since there’s no cost to giving a higher stated value.

One attempt to deal with this is matching programs, where organizations agree to match individual contributions at some ratio such as 1:1 (usually only up to some cap for each individual). While it seems reasonable to try to infer some information about how much people value the good from their contributions and compensate based on that, there is still arbitrariness in choosing the matching ratio and cap, and there is no way to guarantee that the optimal amount is being provided. Ideally, we would like to set up the system so that we can infer how much individuals value the good from how much they contribute, so that we know what the socially optimal amount is. The new mechanism being proposed in the paper attempts to do exactly this.

It works by having each individual make a contribution and then having the organization implementing the mechanism make an additional contribution so that the total amount contributed equals the following:

C_T = (√C_1 + √C_2 + √C_3 + … + √C_N)^2

(where C_T is the total contribution and C_i is the contribution of the i-th individual out of N total people)

A couple of things to note initially about this system:

  1. The total contribution will always be greater than or equal to the sum of the individual contributions
  2. If there is only one individual then this behaves like a private good because the total contribution is equal to their individual contribution
  3. The additional contribution made by the organization will be monotonic in all of the individual contributions (which means someone will never cause the total to decrease by contributing more themselves)
  4. If each individual multiplies their contribution by X then the total will be multiplied by X as well (this means the mechanism will be unaffected by irrelevant factors such as the currency used, how goods are split or combined, and how often the mechanism is run)
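To make the formula and these properties concrete, here is a minimal Python sketch; the function name lr_total is mine, not something from the paper:

```python
from math import sqrt, isclose

def lr_total(contributions):
    """Total funding: the square of the sum of the square roots of the contributions."""
    return sum(sqrt(c) for c in contributions) ** 2

contributions = [1.0, 4.0, 9.0]
total = lr_total(contributions)          # (1 + 2 + 3)^2 = 36
print(total)

# Property 1: the total is at least the sum of the individual contributions.
assert total >= sum(contributions)

# Property 2: with a single contributor the total equals their contribution.
assert isclose(lr_total([5.0]), 5.0)

# Property 4: scaling every contribution by X scales the total by X.
assert isclose(lr_total([2 * c for c in contributions]), 2 * total)
```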

These all seem like desirable properties, but does this mechanism really guarantee that it will pick the socially optimal allocation? It achieves this in the sense that the Nash Equilibrium strategy for each individual involves them paying amounts such that the total amount of the public good that is provided is the amount at which the sum of everyone’s marginal benefits equals the marginal cost of providing the good.

To see why let’s look at an example.

Let’s imagine we have a good x that is perfectly non-rival and costs $60 per unit to produce, along with two individuals whose true willingness to pay can be described using the following functions:

B_1(x) = 50x – x^2/4                               B_2(x) = 50x – x^2

(So, for example, the second individual would be indifferent between losing $49 and gaining the first unit of the good, since B_2(1) = 49.) We can get the marginal benefit each person receives from an additional unit of the good by taking the derivatives of these functions:

MB_1(x) = 50 – x/2                                  MB_2(x) = 50 – 2x

Because we are assuming that the good is perfectly non-rival, one of the individuals using it does not impair the other’s ability to use it at all so the total marginal benefit will simply be the sum of the individuals’ marginal benefits:

MB_T(x) = 100 – 5x/2

We said that each unit costs $60, so the marginal cost will just be $60:

C(x) = 60x                                               MC(x) = 60

If we set the total marginal benefit of the good equal to the marginal cost, we get x = 16. This is significantly greater than what the market equilibrium quantity would be (which in this case is actually x = 0!). But now let’s think about what would happen under this new mechanism. Since the mechanism’s formula determines how much of the good both people get, we can get new functions for their benefit by substituting that formula into their old benefit functions (writing C_1 and C_2 for the two individuals’ contributions):

B_1 = 50(√C_1 + √C_2)^2 – (√C_1 + √C_2)^4/4

B_2 = 50(√C_1 + √C_2)^2 – (√C_1 + √C_2)^4

Taking the derivative of each person’s benefit with respect to their own contribution and setting it equal to the marginal cost again, we get the following two equations:

50(√C_1 + √C_2)/√C_1 – 0.5(√C_1 + √C_2)^3/√C_1 = 60

50(√C_1 + √C_2)/√C_2 – 2(√C_1 + √C_2)^3/√C_2 = 60

Each of these equations tells us how each individual can best respond to the other’s contribution, so by solving the system of equations we can find how much they will contribute when they are best responding to each other’s contributions (i.e. the Nash equilibrium). Doing this we find that C_1 = 196/25 and C_2 = 36/25. Plugging these back into the equation for the mechanism, we find that the total quantity will be 16, which is exactly the optimal amount!
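If you want to check those numbers yourself, here is a small numerical sketch. It assumes scipy is available, the function names are mine, and each first-order condition has been multiplied through by the corresponding square root so the equations stay polynomial:

```python
from scipy.optimize import fsolve

def best_response_conditions(roots):
    u, v = roots                          # u = sqrt(C_1), v = sqrt(C_2)
    s = u + v
    eq1 = 50 * s - 0.5 * s**3 - 60 * u    # individual 1's first-order condition
    eq2 = 50 * s - 2.0 * s**3 - 60 * v    # individual 2's first-order condition
    return [eq1, eq2]

u, v = fsolve(best_response_conditions, x0=[2.0, 1.0])
print(u**2, v**2)                         # ~7.84 (= 196/25) and ~1.44 (= 36/25)
print((u + v) ** 2)                       # ~16.0, the total quantity provided

# For comparison, the social optimum from setting MB_T(x) = MC(x), i.e. 100 - 5x/2 = 60:
print((100 - 60) / 2.5)                   # 16.0
```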

Up until this point I’ve been assuming that the good in question was perfectly non-rival, and this is why the Liberal Radicalism (LR) mechanism was achieving the efficient allocation. However, as mentioned previously, rivalry comes on a spectrum and most things that fall into the fuzzy category of public goods are not perfectly non-rivalrous. Luckily, this can be accommodated by making the mechanism provide an amount of the good that is a weighted average of what the LR mechanism and a traditional market would provide:

C_T = α(√C_1 + √C_2 + √C_3 + … + √C_N)^2 + (1 – α)(C_1 + C_2 + C_3 + … + C_N), where 0 ≤ α ≤ 1
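As a small sketch of this weighted version (reusing the toy numbers from the earlier snippet; the function name is mine), α = 1 recovers the full LR total and α = 0 recovers a plain sum of contributions, as a traditional market would:

```python
from math import sqrt

def weighted_lr_total(contributions, alpha):
    lr_part = sum(sqrt(c) for c in contributions) ** 2
    market_part = sum(contributions)
    return alpha * lr_part + (1 - alpha) * market_part

print(weighted_lr_total([1.0, 4.0, 9.0], alpha=1.0))    # 36.0, the full LR total
print(weighted_lr_total([1.0, 4.0, 9.0], alpha=0.0))    # 14.0, just the sum
print(weighted_lr_total([1.0, 4.0, 9.0], alpha=0.5))    # 25.0, halfway between
```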

They also offer some alternative versions of the mechanism in the paper to deal with budget caps and negative contributions. There are some rather standard concerns someone might have about such a mechanism, including:

  1. Is social welfare (in the sense of the sum of people’s willingness to pay) what we should be optimizing?
  2. Will real people using this mechanism actually tend to move towards the Nash equilibrium strategy?
  3. How can we ensure that people aren’t able to collude and how robust is the mechanism to collusion?

Additionally, it’s worth noting that if this mechanism were being used by a state and funded through taxation, then it would need to be the case that increasing the amount the government is contributing doesn’t directly lead to the people in the mechanism being taxed more, as this would change the incentives. I’ll be excited to see it tested out (starting out either in the lab or in small-scale real-world settings, presumably) and I’m happy to take it as a reminder that there is still progress to be made in designing mechanisms to improve cooperation and collective outcomes.

The Mother of All Big Data

An excerpt from the book Scale by Geoffrey West on the Large Hadron Collider and the importance of incorporating prior knowledge when using machine learning techniques:

The entire project represents an unprecedented engineering achievement whose output is the mother of all big data; nothing comes close. There are about 600 million collisions per second monitored by about 150 million individual sensors in each detector. This produces about 150 million petabytes of data a year or 150 exabytes a day (a byte being the basic unit of information). Let me give you a sense of what the scale of this means. The Word document containing this entire book, including all of its illustrations, is less than 20 megabytes (20MB, meaning 20 million bytes). This MacBook Air can store 8 gigabytes of data (8GB, meaning 8 billion bytes). All of the films stored by Netflix amount to less than 4 petabytes, which is 4 million GB, or about half a million times larger than the capacity of this laptop. Now to the big one: each day the total amount of data produced by all of the world’s computers and other IT devices taken together amounts to 2.5 exabytes; an exabyte is 10^18 bytes, or a billion gigabytes (GB).

This is awesome and is often touted as a measure of the big data revolution. But here’s what’s truly awesome: it pales in comparison to the amount of data produced by the LHC. If every one of the 600 million collisions occurring each second were recorded it would amount to about 150 exabytes a day, which is about sixty times greater than the entire amount of data produced by all of the computational devices in the world added together. Obviously this means that the strategy of naively letting the data speak for itself by devising machine-learning algorithms to search for correlations that would eventually lead to the discovery of the Higgs mechanism is futile. Even if the machine produced a million times less data it is extremely unlikely this strategy could succeed. How, then, did physicists discover the proverbial needle in this mammoth haystack?

The point is that we have a well-developed, well-understood, well-tested conceptual framework and mathematical theory that guides us in where to look. It tells us that almost all of the debris resulting from almost all of the collisions are actually uninteresting or irrelevant as far as searching for the Higgs particle is concerned. In fact, it tells us that out of the approximately 600 million collisions occurring every second, only about 100 are of interest, representing only about 0.00001 percent of the entire data stream. It was by devising sophisticated algorithms for focusing on only this very special tiny subset of the data that the Higgs was eventually discovered.

An Introduction to Reinforcement Learning

Much of machine learning, and particularly supervised learning, is focused on prediction problems: given some training data drawn from some distribution, can you predict what other data points from that same distribution will be like? However, the subfield of machine learning known as reinforcement learning focuses on what is in many ways a fundamentally harder problem. Reinforcement learning is all about decision-making: given that you’re in some environment where taking certain actions in certain situations tends to give you some amount of either positive or negative reward, how do you maximize the total amount of reward you receive? While many applications of reinforcement learning may seem rather narrowly focused, such as playing chess or simple video games, many of the principles underlying it are extremely general. In fact, in some ways reinforcement learning can be thought of as simply the study of bounded instrumental rationality: what’s the optimal way for agents with limited computational resources to go about maximizing their values in uncertain environments?

One reason why this is a harder problem than some other learning problems is that the decisions you make impact not only the rewards you receive but also the information that you gain. This leads to what is known as the explore-exploit tradeoff. You have to find a way to balance exploring (taking the actions that you expect will give you the most useful information about how to maximize reward) and exploiting (taking the actions that you expect will maximize reward based on what you currently know). There are a variety of techniques that can be used to manage this tradeoff but let’s put this issue to the side for the purposes of this post.

Another reason why this is a harder problem than a typical prediction problem is that the feedback you receive from the environment isn’t the main thing you’re trying to learn. What you want to know is which action is best to take in a given situation, but the feedback you actually get is of the form: having taken a certain action in a certain situation, how much reward did you receive at that time step and what situation are you in at the next time step? One thing I should clarify is that when I refer to the “situation” you’re in, I don’t necessarily just mean the characteristics of the environment that are currently observable to you. In order to learn accurately, the description of the current state of the environment needs to also include all relevant information from the past that you’ve observed. For example, at any point during a game of chess all of the relevant information about the current state of the environment is contained in the current state of the board, which is fully observable to both players. In other words, when a player observes the current state of the board they don’t need to know any additional information about past board states to know what strategies are good to use moving forward. In most environments though, when you’re trying to predict the consequences of your actions, information about the past isn’t entirely screened off by the information you’re currently observing. Finding compact descriptions of states of the environment that contain all of the relevant information about the past can be a challenging problem in and of itself, but let’s also put this aside and assume we’re using some encoding of the states of the environment that accomplishes this.

So how do we transform the information we receive into information we can actually use to select actions? We can categorize different approaches to this problem by first thinking about the different aspects of the problem we can model. There are four main aspects we might want to model:

The Environment: We can model the environment as a function E(A,S) that takes in the action A that you take and the current state S and outputs a new state S’ and the reward R that you received during that time step.

State Values: We can model how valuable it is to be in a given state with a function V(S) that takes in a state S and outputs the expected amount of reward you will receive from that state moving forward. (The expected amount of reward is usually calculated with discounting so rewards received later on are valued less than rewards received sooner. This may be a practically useful thing to do in order to make learning easier even if you don’t actually value later rewards less)

Action-State Values: We can model how valuable we expect taking a certain action in a given state will be with a function Q(A,S) that takes in an action A and a state S and outputs the expected amount of future reward (usually discounted as above).

Policy: This is your decision-making procedure. We can model it with a function π(S) that takes in a state S and outputs an action A that you should take.
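In code, one way to picture these four objects is simply as function signatures. This is only an illustrative sketch that assumes discrete states and actions; all of the type names here are mine:

```python
from typing import Callable, Tuple

State = int      # a stand-in; in practice states use whatever encoding was discussed above
Action = int
Reward = float

EnvironmentModel = Callable[[Action, State], Tuple[State, Reward]]   # E(A, S) -> (S', R)
StateValue       = Callable[[State], float]                          # V(S)
ActionStateValue = Callable[[Action, State], float]                  # Q(A, S)
Policy           = Callable[[State], Action]                         # π(S) -> A
```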

How do we learn these functions from the data? One approach is to use neural networks. I won’t get into the details here, but the important point is that neural networks can act as function approximators, so if we have a neural network for each of the functions we’re using then we can learn those functions from the data. In order to learn these functions, though, we also need examples of the inputs and outputs. This is fairly straightforward for E(A,S) since the feedback we receive is actually the output of the function. While modeling the environment often isn’t strictly necessary in order to solve a reinforcement learning problem, it is often extremely useful since once a decent model of the environment is obtained, simulation-based learning can happen much faster than reality-based learning and the two can be used to supplement each other. Approaches that try to learn E(A,S) are known as model-based while those that don’t are referred to as model-free.

Things are a little trickier for the other three because, as I said before, we’re not directly getting information about which action was best to take and we’re also not directly receiving feedback about expected discounted values. There are two main approaches to learning Q(A,S) or V(S) from the information we are directly receiving at each time step, but first I should note something. The expected value of a state or a state-action pair depends upon what policy you’re following. Whether the current arrangement of pieces on the chessboard is good for you depends partially on what strategy you are planning to follow from that point forward, and whether your next move has a high expected value depends on what future moves you will make (unless you’re making a move that ends the game). To put this in the terminology we’ve been using, there’s only a single correct Q(A,S) and V(S) for a given π(S) (even if we’re not explicitly modeling π(S)). This means that if we learn Q(A,S) or V(S) for a given policy and then change how we select actions based on what we learned, then our approximation of Q(A,S) or V(S) will become less accurate. Luckily, if we are only making incremental changes to our policy and we continuously train our approximations of Q(A,S) or V(S), then we can usually maintain good approximations of the expected values even as we change our behavior.

One way to estimate the expected value of a state is to repeatedly play the game forward from that state. Each time you do this you can see what the pattern of rewards is from that point to the end of the game and use that to calculate the discounted reward for that state. This means that each time we play a game, we can update our function approximator of V(S) for any state S that we visited so that its prediction of that state’s expected value is closer to the actual amount of discounted reward that was received after that state. This is known as Monte Carlo (MC) learning.
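Here is a minimal sketch of a Monte Carlo value update, assuming an episode is stored as a list of (state, reward) pairs and using a plain table in place of a neural network; the names and the tabular setup are mine, for illustration only:

```python
from collections import defaultdict

def monte_carlo_update(episode, V, gamma=0.99, learning_rate=0.1):
    """Nudge V(S) toward the discounted return actually observed after each visit to S."""
    G = 0.0
    # Walk the episode backwards so the return following each state is easy to accumulate.
    for state, reward in reversed(episode):
        G = reward + gamma * G
        V[state] += learning_rate * (G - V[state])
    return V

V = defaultdict(float)                                   # a table standing in for the function approximator
episode = [("s0", 0.0), ("s1", 0.0), ("s2", 1.0)]        # (state, reward received that time step)
monte_carlo_update(episode, V)
print(dict(V))
```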

This can be very useful, but it would be nice to be able to learn from one time step to the next rather than waiting until the end of a game to update these values. This can be done using Temporal Difference (TD) learning. This kind of learning relies on the following relationship: V(S) = R + d*E[V(S’)] (the value of a state should be equal to the reward received at that time step plus the expected value of the next state times some discounting factor d). We can use this equation by observing how much reward R we received after a state S and which state S’ we moved to, and then using our estimate of V(S’) to update our estimate of V(S). While the bootstrapping nature of TD learning (the fact that you’re using your own predicted values to update your other values) may seem questionable, generally the values will converge towards the true values over time. (Also, when we’re doing TD learning there is no reason why we have to only compare states that are one time step apart, and there are more advanced techniques we can use to average over the observed differences in expected rewards for multiple time gaps.)
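A single TD(0) update, sketched in the same tabular style as the Monte Carlo snippet above (again, the names are mine and the table stands in for whatever function approximator is actually used):

```python
def td_update(V, state, reward, next_state, gamma=0.99, learning_rate=0.1):
    """One TD(0) update based on the relationship V(S) = R + d*E[V(S')]."""
    td_target = reward + gamma * V[next_state]    # bootstrapped estimate of V(state)
    V[state] += learning_rate * (td_target - V[state])
    return V
```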

Another concern related to TD learning is that many environments have sparse rewards. In chess, the only thing you ultimately care about is whether you win or lose the game, but if you use this as your reward function then you won’t receive any feedback about how well you’re playing until the end of the game. In addition to appearing inefficient, this should also seem counter-intuitive, since if you were to play chess against someone who flipped over the board halfway through the game, you could still learn from that experience. Most approaches to tackling this problem involve giving the agent instrumental values in some sense (e.g. rewarding the chess player for taking its opponent’s pieces because this tends to increase its chances of winning). There are some clever ways of having an agent construct these instrumental rewards themselves, such as Hindsight Experience Replay and Curiosity-Driven Exploration. A related approach, known as Hierarchical Reinforcement Learning, involves having the agent break the goal that it wants to achieve into sub-goals. Breaking problems into sub-problems and realizing the instrumental value of things can come fairly naturally to people, but it can be surprisingly tricky to construct general procedures for doing this.

All of what I just said about using Monte Carlo and Temporal Difference learning for learning V(S) can also be applied to learning Q(A,S), but what about learning π(S)? One thought that might occur to you is that if we have Q(A,S), why can’t we just look at the expected value of each action in our current state and select the one with the highest expected value (or the one that’s best given the explore-exploit tradeoff)? This is known as Q-learning and it can be effective in many settings and comes with the upside of only needing to approximate one function. Similarly, if we have both E(A,S) and V(S), then we can know both which states different actions will tend to send us to and how valuable those states are, so again it doesn’t seem like we need to model π(S) explicitly.
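A tabular Q-learning sketch, assuming a small discrete action set and leaving exploration (e.g. epsilon-greedy) out for brevity; as elsewhere, the names are mine:

```python
from collections import defaultdict

Q = defaultdict(float)              # maps (action, state) pairs to estimated values
actions = [0, 1, 2]                 # a small discrete action set, for illustration

def select_action(state):
    """Greedy selection; a real agent would mix in some exploration."""
    return max(actions, key=lambda a: Q[(a, state)])

def q_learning_update(state, action, reward, next_state, gamma=0.99, learning_rate=0.1):
    best_next = max(Q[(a, next_state)] for a in actions)
    target = reward + gamma * best_next
    Q[(action, state)] += learning_rate * (target - Q[(action, state)])
```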

However, in situations where there are a large number of actions available at each time step or a continuous space of actions, these methods can become intractable. In order to handle these situations it can be useful to model π(S) as well. One way to do this is to use an advantage function, which is simply: Advantage(A,S) = Q(A,S) – V(S). We can update π(S) so that it’s more likely to recommend actions that have a high advantage and less likely to recommend ones with a low advantage. The advantage function can be approximated using a variation on Temporal Difference learning with Advantage(A,S) = R + d*V(S’) – V(S). The intuition behind this is that we want to update in favor of actions that allowed us to do unexpectedly well and away from actions that led us to do unexpectedly badly.
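The TD-style advantage estimate itself is a one-liner; how the resulting nudge is applied to π(S) depends on how the policy is parameterized, so that part is only gestured at in the comment (this is a sketch, not a full actor-critic implementation):

```python
def advantage_estimate(V, state, reward, next_state, gamma=0.99):
    """Advantage(A,S) ~ R + d*V(S') - V(S): did the action do better than V(S) predicted?"""
    return reward + gamma * V[next_state] - V[state]

# A positive advantage means nudging pi(S) toward recommending that action more often;
# a negative one means nudging it away.
```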

There are some more advanced setups that I won’t get into here such as off-policy learning, Actor-Critic, and Tree Search methods but I’ve tried to give the basic layout of techniques to the best of my knowledge. I’ve also tried to flag important problems that arise when trying to find tractable methods for optimal decision-making including:

  • How do we find compact representations of the relevant information we have about the environment?
  • How do we tradeoff between trying to gain more information about which actions are best and trying to do what’s best based on what we currently know?
  • What are good methods for systematically breaking problems down into sub-problems and determining things’ instrumental worth?

Hopefully, I’ve also managed to convey the general importance of these challenges. If not, then I would advise you to pay close attention when you’re making decisions in your daily life. I think you’ll find yourself navigating these challenges far more often than when you’re in front of a chess board, to say the least.

Cognitive Hierarchy Theory

Imagine you’re playing the following game with a thousand other people: Each person simultaneously names a real number between zero and one hundred. The numbers will then be averaged and the person who guessed the number that is closest to two-thirds of the average will win (i.e. receive some monetary prize). What number would you submit in order to maximize your chance of winning?

If this were a situation where everyone was rational and this was common knowledge, then each player would follow the Nash Equilibrium strategy, which is everyone submitting zero. This is because there will always be some people whose guess is greater than or equal to the average, and (as long as the average is above zero) those people could do better by lowering their guess toward two-thirds of the average, which they can always do except when everyone guesses zero. Unfortunately, knowing this does not seem very useful if you were actually playing this game with a group of real people.

One more practical way to handle such a situation is to use Cognitive Hierarchy Theory, which assumes that different agents reason at different levels:

Level 0 Player: Doesn’t reason strategically at all and is usually assumed to be selecting an action at random

Level 1 Player: Chooses the option that would be their best response given that everyone else is reasoning like level 0 players

Level 2 Player: Chooses the option that would be their best response given that everyone else is reasoning like level 1 players

….

Level N Player: Chooses the option that would be their best response given that everyone else is reasoning like level N-1 players

So if you were reasoning about the game using Cognitive Hierarchy Theory, you would take your best guess of how many people are reasoning at each level and use this to calculate what the average will be based on the answer that each level of reasoner would give. In this game, a level 0 player would guess 50 on average, so a level 1 player would guess 33.33, a level 2 player would guess 22.22, etc. Of course, this is only meant to be a simple approximation for how people will tend to reason. After all, whatever guess you end up making yourself won’t be the same as one of the levels unless you really think that everyone else is reasoning at one particular level. However, its simplicity means it can be useful in strategic situations in which you don’t have a lot of information about the other people involved.

Additionally, when people do play games like the one I described in experimental settings, their choices do tend to cluster around the ones that different levels would make. For the game I described, about 37% behaved closest to a level 0 player, 47% closest to a level 1, 15% closest to a level 2, and only 1% behaved closest to a level 3 or above. This would lead you to expect the average to be around 37.6 and therefore to guess about 25, which is close to what the actual winning guess was. Similar results have been found in other experiments, showing that it’s most common for people to act similarly to level 1 players, with significant portions acting like level 0 and level 2 players and almost no one acting like a level 3 or above player. This means that people who act like level 2 players tend to come out ahead.
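Here is a small sketch of the level-k arithmetic for this game, using the experimental level frequencies quoted above and assuming (reasonably, with a thousand players) that any one person’s own guess barely moves the average:

```python
def level_guess(level, level0_guess=50.0):
    """A level-k player best responds as if everyone else guesses like level k-1."""
    return level0_guess * (2 / 3) ** level

# The experimental shares quoted above for levels 0 through 3+.
shares = {0: 0.37, 1: 0.47, 2: 0.15, 3: 0.01}
expected_average = sum(share * level_guess(level) for level, share in shares.items())
print(round(expected_average, 1))            # ~37.6
print(round(2 / 3 * expected_average, 1))    # ~25.1, roughly the winning guess
```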

However, there is an important caveat to this, which is that it only applies to simultaneous games with a single round. When these games are played for multiple rounds, people tend to follow strategies that correspond to higher and higher levels as they respond to what people did in previous rounds. Usually this means that things are gradually pushed towards the Nash Equilibrium over time. This illustrates how the notion of Nash Equilibrium can still be important in situations where there isn’t common knowledge of rationality: as long as the situation is simple enough that people can find a strategy close to the best response to their opponents’ previous actions, they will tend to be pushed toward the Nash Equilibrium over time.

Alien Forces of Our Own Creation

This is a passage from a blog post by Statistics Professor Cosma Shalizi that I wanted to share. It’s responding to a Marxist criticism of capitalism, which laments the cold and impersonal nature with which the “invisible hand” of the market confronts every individual within a society.

There is a fundamental level at which Marx’s nightmare vision is right: capitalism, the market system, whatever you want to call it, is a product of humanity, but each and every one of us confronts it as an autonomous and deeply alien force. Its ends, to the limited and debatable extent that it can even be understood as having them, are simply inhuman. The ideology of the market tells us that we face not something inhuman but superhuman, tells us to embrace our inner zombie cyborg and lose ourselves in the dance. One doesn’t know whether to laugh or cry or run screaming.

But, and this is I think something Marx did not sufficiently appreciate, human beings confront all the structures which emerge from our massed interactions in this way. A bureaucracy, or even a thoroughly democratic polity of which one is a citizen, can feel, can be, just as much of a cold monster as the market. We have no choice but to live among these alien powers which we create, and to try to direct them to human ends. It is beyond us, it is even beyond all of us, to find “a human measure, intelligible to all, chosen by all”, which says how everyone should go. What we can do is try to find the specific ways in which these powers we have conjured up are hurting us, and use them to check each other, or deflect them into better paths. Sometimes this will mean more use of market mechanisms, sometimes it will mean removing some goods and services from market allocation, either through public provision or through other institutional arrangements. Sometimes it will mean expanding the scope of democratic decision-making (for instance, into the insides of firms), and sometimes it will mean narrowing its scope (for instance, not allowing the demos to censor speech it finds objectionable). Sometimes it will mean leaving some tasks to experts, deferring to the internal norms of their professions, and sometimes it will mean recognizing claims of expertise to be mere assertions of authority, to be resisted or countered.

These are all going to be complex problems, full of messy compromises. Attaining even second best solutions is going to demand “bold, persistent experimentation”, coupled with a frank recognition that many experiments will just fail, and that even long-settled compromises can, with the passage of time, become confining obstacles. We will not be able to turn everything over to the wise academicians, or even to their computers, but we may, if we are lucky and smart, be able, bit by bit, to make a world fit for human beings to live in.

Creating Strategy-Proof Mechanisms

Disclaimer: It’s probably a good idea to read this post first. Also, I apologize for using a phrase as clunky as “willingness to pay” so many times but I thought it was necessary to make it clear what was actually being referred to.

When thinking about what makes for a good mechanism, one criterion we might care about is maximizing social welfare. By this I mean that the mechanism selects outcomes that maximize the sum of everyone’s willingness to pay for that outcome. Someone’s willingness to pay for an outcome is the amount of money they would have to lose or receive along with getting that outcome in order to make them indifferent towards the change. This is nice because people’s willingness to pay gives us a way to quantify the strength of their preferences and aggregate them across individuals. Of course, people’s willingness to pay is also determined by how much money they have, so maximizing social welfare won’t lead us to good outcomes in cases where there is already a lot of wealth inequality, but I guess that’s a discussion for another day.

However, given that we want to try to maximize social welfare, it’s hard to tell how well this criterion is being met if the people participating in the mechanism are being dishonest about how much they value different outcomes. This means that if we want to maximize social welfare we should look for mechanisms that are strategy-proof in the sense that they at least don’t incentivize people to be dishonest about their preferences. This can be achieved if being truthful about their preferences is a weakly dominant strategy for all people participating in the mechanism. Unfortunately, Gibbard’s Theorem shows us that if we assume agents can have any possible preferences then any Social Choice Function with at least three outcomes that is implemented by a mechanism that makes being truthful a weakly dominant strategy will be dictatorial.

Luckily, we can get around Gibbard’s Theorem if we make some assumptions about what kind of preferences people have. One specific assumption that is useful to make is that people have quasi-linear preferences, which means that we can model their preferences with a utility function where the amount of utility they have depends linearly on the amount of money they lose or gain. This will probably not always be a sensible assumption to make, but if we’re in a situation where it is reasonable then we can use monetary transfers to induce people to act truthfully.

In fact, there is a specific category of these kinds of mechanisms that are strategy-proof and lead to outcomes that maximize social welfare when this assumption holds. These are known as VCG (Vickrey-Clarke-Groves) mechanisms and they essentially have two components. In the first component, people are given a set of options and they express their preferences for each one by stating their willingness to pay for that option to occur (which can be negative if they would be willing to pay to have it not occur). Then the option is selected which maximizes the sum of everyone’s declared willingness to pay, which, if they were being honest, would maximize social welfare. The second component is where the monetary payments come in. Each agent pays the difference between the sum of the other agents’ declared willingness to pay for the option that would have been selected if that agent wasn’t involved in the mechanism and the sum of the other agents’ declared willingness to pay for the option that actually was selected.

VCG mechanisms are sometimes called Pivotal mechanisms because the only people who are affected by the monetary transfers are the ones who are pivotal in the sense that removing them from the mechanism would change the outcome that it selects. One way to think about what the monetary transfers are doing is that they are making people pay their social cost because how much money each person has to pay or receive is equal to how much worse or better off everyone else is made by including that person in the mechanism. But what does making people pay their social cost have to do with making the mechanism strategy-proof?  Well, it’s a little subtle but think about this…

Because what option would be selected if someone wasn’t involved is outside of their control, in a VCG mechanism a person can maximally satisfy their preferences if the outcome selected is the one that has the highest sum of their own true willingness to pay for that outcome added to the sum of everyone else’s declared willingness to pay for that outcome. Because the way a VCG mechanism selects an outcome is by picking the one with the highest sum of their own declared willingness to pay added to everyone else’s declared willingness to pay, making their declared willingness to pay equal to their true willingness to pay will always be the best that they can do strategically. In other words, making their declared willingness to pay equal to their true willingness to pay will be a weakly dominant strategy for everyone in the mechanism. I think this shows what is so clever about VCG mechanisms, because while it seems like the first component is all about social welfare maximization and the second is all about making it strategy-proof, it actually takes both of the components working off one another to achieve either of these things.

Let’s try to make things more concrete by looking at an example: imagine we’re running a sealed-bid auction where some good is being auctioned off and people simultaneously write down how much they would be willing to pay for it and then this information is used to decide who gets the good. I think the most intuitive way to set up this kind of auction is to give the person with the highest bid the good and then have them pay their bid. This is known as a first price auction but it certainly isn’t strategy-proof since people are incentivized to bid lower than their true willingness to pay, trading off between paying less for the good and increasing their probability of winning.

How would we set this up as a VCG mechanism though? Well, if we assume people aren’t willing to pay for someone else to win, then the first component of the mechanism will tell us to give the good to the person with the highest declared willingness to pay. How would the monetary payments be set up? The only pivotal person is the one with the highest declared willingness to pay, because the outcome would be the same if anyone else were removed. The sum of everyone else’s willingness to pay for the selected outcome would be zero since they didn’t get anything, but if the winner of the auction were excluded then the second-highest bidder would win, so the amount of money the actual winner has to pay is equal to the second-highest bid. This is known as a second-price auction. I would invite you to try to imagine different scenarios in your mind to confirm that a bidder would never benefit from bidding above or below their true willingness to pay. (On a side note, it’s been shown that given some assumptions the expected amount of revenue for the auctioneer in first-price and second-price auctions is the same, because the effect of people bidding below their willingness to pay cancels out with the effect of paying the second-highest price.)
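Here is a minimal sketch of the VCG payment rule applied to this single-item sealed-bid auction, assuming bids are each bidder’s declared willingness to pay and that nobody values someone else winning; the function name is mine:

```python
def vcg_single_item(bids):
    """Return (winner index, payment) for a single-item VCG (second-price) auction."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    # What the others would get if the winner were excluded: the second-highest bid would win.
    welfare_without_winner = max(bid for i, bid in enumerate(bids) if i != winner)
    # What the others get under the actual outcome: nothing, i.e. zero.
    welfare_of_others_actual = 0
    return winner, welfare_without_winner - welfare_of_others_actual

print(vcg_single_item([30, 55, 42]))    # (1, 42): the highest bidder wins and pays the second-highest bid
```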

VCG mechanisms can also be useful in a variety of other domains, such as deciding whether to build a public project and network routing. It was also the inspiration behind the idea of a C.O.S.T., which I discussed here. However, it does definitely have its limitations. VCG mechanisms won’t tend to work as well in situations where agents are affected by incentives outside of the mechanism to be strategic, where agents don’t have close to quasi-linear preferences, or where collusion between agents is possible. Also, in some situations the payments that it requires can seem like an unreasonable amount of money for the mechanism to pay out or can seem unfairly large for agents to pay. And of course there’s the question of how much we should care about maximizing social welfare in the first place.

Reverse Game Theory and the Revelation Principle

In my last post, I introduced the concept of a Social Choice Function. I would now like to introduce a related concept called a mechanism. In this context, a mechanism is a system that makes some set of actions available to each agent participating in it and has some mapping between the actions taken by the agents and, in the most general case, probability distributions over possible outcomes. A voting method is one specific type of mechanism where the set of actions available to agents are the different ways they can fill out their ballots and the outcomes are the different combinations of winners of the election that can occur. This might sound very similar to Social Choice Functions, but those are mappings between preferences and outcomes, not actions and outcomes.

The connection between the two concepts is that a mechanism is said to implement a Social Choice Function if, when all the agents participating in that mechanism rationally maximize their preferences, the mapping between the agents’ preferences and outcomes that occurs is the same as the mapping of that Social Choice Function. More precisely, for a given setting where agents have some private and public information and for a given set of outcomes, a mechanism implements a Social Choice Function in dominant strategies if every agent following a dominant strategy leads to that Social Choice Function arising, and it implements a Social Choice Function in Bayes-Nash Equilibrium if there exists a Bayes-Nash Equilibrium that leads to that Social Choice Function arising. An additional distinction regarding implementation is whether it is direct or indirect: with direct implementation agents act simultaneously in a one-round game (e.g. most voting mechanisms, sealed-bid auctions, games like rock-paper-scissors), but with indirect implementation there can be sequences of decisions and agents can gain information about the previous decisions of other agents (e.g. the iterated prisoner’s dilemma, games like chess and poker).

The study of the properties of mechanisms is known as mechanism design. It is also sometimes referred to as reverse game theory because instead of taking the mechanism as given and trying to find the agents’ optimal strategic behavior, it takes the agents’ pursuit of their optimal strategies as given and tries to find mechanisms that will produce better overall outcomes.

Now that we’ve gotten some terminology out of the way, we can discuss some ideas from mechanism design. One of the most important concepts in mechanism design is the Revelation Principle, which has two versions:

  1. Any Social Choice Function that can be implemented in dominant strategies by any mechanism can be implemented in dominant strategies by a direct mechanism where being truthful is a weakly dominant strategy
  2. Any Social Choice Function that can be implemented in Bayes-Nash Equilibrium by any mechanism can be implemented in Bayes-Nash Equilibrium by a direct mechanism where agents aren’t made any worse off by being truthful given that the other agents are being truthful

This means that wanting truthful mechanisms or wanting direct mechanisms aren’t inherently limiting conditions in terms of what Social Choice Functions can be implemented. This may seem surprising at first but the intuition behind it is actually rather simple.

The basic idea can be summed up as follows: it’s never required that agents be strategic or take part in multiple steps, because there is always a mechanism that can do that for them. In other words, we can take any arbitrary mechanism that agents have to interact with using a strategy that maximizes their preferences and create a new mechanism where the agents tell the mechanism their preferences and other private information and it then implements their optimal strategy in the original mechanism for them. (As a caveat to the second version, the set of Bayes-Nash Equilibria for the truthful and direct mechanism will not necessarily be the same as they were for the original mechanism being considered.)

[Figure: the intuition behind the Revelation Principle]

An imperfect but useful analogy is someone who is representing themselves in a court case. They are participating in a very complicated indirect mechanism and it certainly won’t always be in their best interest to be truthful. But if they were to get a lawyer to represent them, and they trust the lawyer to follow their best strategy, then it is in their best interest to simply tell the lawyer everything they know.

While perhaps interesting conceptually, it doesn’t seem obvious that the Revelation Principle should be of any practical use. However, it is actually extremely useful for proving what properties Social Choice Functions implemented by mechanisms can have. This is because if you can show that any Social Choice Function that can be implemented by a direct and truthful mechanism has a certain property, then it follows that any Social Choice Function that can be implemented at all has that property.

In order to see an example of this, let’s consider how the Revelation Principle is connected to Gibbard’s Theorem. As I’ve previously discussed, Gibbard’s Theorem shows that if we assume agents can have any possible combination of preferences, then a Social Choice Function that has at least three outcomes can be implemented by a mechanism that makes being truthful a weakly dominant strategy for all agents if and only if that Social Choice Function is dictatorial. But wait: if the Revelation Principle tells us that any Social Choice Function that can be implemented in dominant strategies can be implemented by a mechanism that makes being truthful a weakly dominant strategy, then this means that any Social Choice Function that has at least three outcomes and can be implemented in dominant strategies is dictatorial. In other words, if you want to set up a system that isn’t a dictatorship and people have more than two options, then it’s impossible to make it one where everyone has a dominant strategy.

Gibbard’s Theorem seems like a pretty devastating blow for hopes of creating mechanisms that incentivize honesty, but there are some ways to try to get around it, such as focusing on Bayes-Nash Equilibria where everyone is being honest rather than worrying about dominant strategies. Many mechanisms have multiple Bayes-Nash Equilibria, though, so it can be difficult to guarantee that the system will end up and stay in the desired one. The most common approach to avoiding the theorem is to make some restricting assumption about agents’ preferences. I discussed one example of this in the previous post regarding one-dimensional single-peaked preferences, but there’s a much more general assumption that can be made about how people value money that leads to the possibility of efficient and truthful mechanisms across many domains, which is what I will discuss next.