The proxy failure scenario of the target article posits a system – social, ecological, or physiological – monitored or controlled by a regulator, who selects an observable measure that proxies for the system's current state, or for the distance between that state and a goal state, as the basis for allocating rewards to an agent who acts upon the system in response to the incentives provided. When agent and regulator have differing objectives and are capable of mutual representation and manipulation – as they often are in a human organization – their relative levels of sophistication significantly shape the dynamics of proxy failures.
Let us consider human agents in organizations such as businesses or institutions. They vary in their abilities to classify, observe, predict, and manipulate their environments, which include the regulators to whom they "report." A salesperson can manipulate a commission-based compensation plan in which her quarterly earnings grow nonlinearly with the revenue she generates: by closing all of her sales for the year in a single quarter, she realizes a much higher "cut" than she would have received had the closings been evenly spaced. If her quarterly commission is 10% on the first $1,000,000 sold, 15% on the next $1,000,000, and 20% on anything over $2,000,000, and she expects to close some $4,000,000 in sales over the year, then the difference between piling the year's revenue into one quarter (commission = $650,000) and smoothing it evenly over the four quarters (commission = $400,000) is $250,000 – more than 5% of the total revenue she has brought in. The cost to the system (the business) is the additional value of the salesperson's earnings, plus the resulting "spikiness" of the revenue, which increases the volatility of the firm's equity and lowers its value.
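The arithmetic is easy to verify. Here is a minimal sketch in Python of the tiered schedule above, comparing the lumped and smoothed closing strategies (the tier widths and rates are those of the example; nothing else is assumed):

```python
def quarterly_commission(revenue: float) -> float:
    """Tiered quarterly commission: 10% on the first $1M,
    15% on the next $1M, and 20% on anything above $2M."""
    tiers = [(1_000_000, 0.10), (1_000_000, 0.15), (float("inf"), 0.20)]
    commission, remaining = 0.0, revenue
    for width, rate in tiers:
        portion = min(remaining, width)  # revenue falling into this tier
        commission += portion * rate
        remaining -= portion
        if remaining <= 0:
            break
    return commission

lumped = quarterly_commission(4_000_000)        # all closings in one quarter
smoothed = 4 * quarterly_commission(1_000_000)  # closings spread evenly
print(lumped, smoothed, lumped - smoothed)      # 650000.0 400000.0 250000.0
```

The convexity of the schedule does all the work: any payout that is convex in per-period revenue rewards lumping revenue into as few periods as possible.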
This is a simple example of an agent being just a little more sophisticated than her regulator. Because it is simple, it tempts us to think the regulator can repair the incentive scheme: impose a minimum quarterly revenue, make the commission a fixed percentage of revenue generated, use options on equity to incentivize the salesperson to take actions that benefit the business as a whole, and so on. That is what we see both in studies of relational contracting and in the "common law" generated by employment and compensation agreements. But can an agent more sophisticated than the regulator "almost always" game a proxy measure to her own advantage? In our example, she could "play around" with standard revenue recognition metrics to have more revenue counted toward her incentive plan than the company registers; or "slag" other salespeople in the same organization to their clients in order to make those clients her own; and so on.
The agent's services are retained and contracted for in the first place because she has a higher level of skill in performing a task than the regulator does. A skill is a mapping of effort and ability onto outcomes in the context of a task that the skill enables the agent to carry out. Because of this structure of the regulator–agent problem, the agent has a sophistication advantage over the regulator: she will almost always be able to produce outcomes that satisfy the regulator's assessments while "gaming" the incentive scheme those assessments drive. To make the sophistication mismatch sharper, we distinguish between:
• Informational sophistication: Perception, registration, encoding, and remembering. The agent "sees more," makes more distinctions, sees more clearly, and remembers more reliably. If the agent is an expert who makes predictions about system-relevant events, a regulator may incentivize her with an "A + B log(p(e_i))" contract (Moldoveanu, 2011), which pays her a fixed wage A and subtracts a fine proportional to −log(p(e_i)) when event e_i occurs: the lower the probability she assigned to the event that actually happens, the larger the fine. This scheme is "optimal" (Bernardo, 1979) provided the regulator and the agent share the same state space of events {e_k}. But defining the "right event space" is precisely what the regulator needs the agent for. A wily agent can articulate "event descriptions" malleable enough to encompass most of what she thinks the regulator can "see" (say, broad categories in a market or technology forecast) and assign them high probabilities, minimizing her log(p(e_i)) fines while making no useful prediction (see the first sketch after this list).
• Computational sophistication: Calculation and behavior. The agent "thinks more quickly," "sees further into the future," and "takes more actions." If regulators attempt to remedy proxy failures via "patches" that locally address failures of the measure to reflect the system's objective, then the agent–regulator "game" becomes a sequence of moves in which "the quick" kill off "the slow" (Moldoveanu, 2009). In a typical CEO–Chairperson (CE–CH) game in which CH tries to oust CE for poor performance or fraudulent behavior, CE, knowing that CH needs to persuade board members individually of the necessity of the move, can map out the "influence networks" through which she can persuade crucial board members of CH's faulty reasoning or bias and, using her speed advantage, marshal the board clique necessary to pre-emptively oust CH. Both inferential speed (thinking one move ahead of CH) and behavioral agility (making the requisite number of calls) count – but those are also core skills for which CE was hired (see the second sketch after this list).
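Two minimal sketches in Python make these bullets concrete. The first shows how malleable event descriptions defeat the log-scoring contract; the wage A, slope B, and the probabilities are arbitrary illustrative assumptions, but the payment rule is the A + B log(p(e_i)) contract described above.

```python
import math

def payment(A: float, B: float, p_assigned: float) -> float:
    """Log-scoring contract: wage A plus B*log(p); since log(p) <= 0,
    the fine grows as the probability assigned to the realized event shrinks."""
    return A + B * math.log(p_assigned)

A, B = 100_000, 10_000  # illustrative contract parameters

# Sharp expert: picks out one of ten fine-grained events, assigning it p = 0.6.
sharp = payment(A, B, 0.6)

# Wily agent: folds those events into one broad "category" with p = 0.95;
# the description is malleable enough that almost anything that happens "counts."
vague = payment(A, B, 0.95)

print(round(sharp), round(vague))  # 94892 99487: vagueness pays more
```

The second is a stylized toy model of the CE–CH race for the board – not a formalism from the target article; the board size and per-round persuasion rates are assumptions – showing how a per-round speed advantage lets CE assemble a majority clique first.

```python
def board_race(n_board: int = 9, ce_calls: int = 3, ch_calls: int = 1) -> str:
    """Each round, CE persuades ce_calls undecided board members and CH
    persuades ch_calls; the first to hold a majority clique wins."""
    undecided, ce, ch = n_board, 0, 0
    majority = n_board // 2 + 1
    while undecided > 0:
        take = min(ce_calls, undecided)  # CE moves first: one move ahead of CH
        ce, undecided = ce + take, undecided - take
        if ce >= majority:
            return "CE pre-empts CH"
        take = min(ch_calls, undecided)
        ch, undecided = ch + take, undecided - take
        if ch >= majority:
            return "CH ousts CE"
    return "deadlock"

print(board_race())  # 'CE pre-empts CH': CE holds 6 of 9 seats by round 2
```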
What adjustments and recourses can a regulator access? A "sophisticated" regulator is not one who matches the agent's sophistication in the domains where he needs her expertise. Rather, he understands the dynamics of proxy failures given mismatches in sophistication, and knows that he cannot repair them by himself. So, he can:
• Hire, as subregulators, former agents who understand the games most agents play in a niche – as we see in the case of venture capital and private equity firms.
• Create high-powered incentives for agents (e.g., high CEO equity stakes) that allow them to become principals and regulators themselves if they deliver on the results that condition their discharge.
• Hire experts (e.g., consulting firms) that help keep sophisticated agents honest in the face of temptations to realize gains at the expense of the system (shareholders).
None is fail-safe against a wily, sophisticated agent. The “system” (of “capital preservation”) ultimately relies for resilience on the desire of agents to “one day” themselves become principals and regulators (as owners of capital) – and not just to “win the game.”
Financial support
This work was supported by the Desautels Centre for Integrative Thinking, Rotman School of Management, University of Toronto.
Competing interest
None.