I can see the appeal in the commitment to consumption. We might just do that if it inspires trust in the market. Then again, it would send a weird signal if not even we wanted to use our own system to sustain our operation. “Dogfooding” would also allow us to experience the system from the user side and notice problems with it even when no one reports them to us.
Also people are routinely trusted not to make callous decisions even if it’d be to their benefit. For example, charities are trusted to make themselves obsolete if at all possible. The existence of the Against Malaria Foundation hinges on there being malaria. Yet we trust them to do their best to eliminate malaria.
Charities often receive exploratory grants to allow them to run RCTs and such. They’re still trusted to conduct a high-quality RCT and not manipulate the results even though their own jobs and the ex post value of years of their work hinge on the results.
Even in my own case: I used to run a charity that was very dear to me, but when we became convinced that the program was suboptimal and found that we couldn’t change the bylaws of the association to accommodate a better one, we shut it down.
I think the signalling benefit from providing ultimate consumers outweighs the cost of failing to signal that there are speculators. I think speculators are logically downstream of consumers, and impact markets are bottlenecked by lack of clarity about whether there will be consumers.
(I’m also quite unclear whether we’ve worked out enough of the fundamentals of how to avoid bad incentives that it’s good to establish trust in impact markets … I guess you’d want to handle this by saying that people shouldn’t buy impact for any work trying to establish them at the moment, since it’s ex ante risky?)
I guess you’d want to handle this by saying that people shouldn’t buy impact for any work trying to establish them at the moment, since it’s ex ante risky?
Quick comment here; thanks for chipping in!

I personally agree with the general gist of this (something like not selling the impact of working on impact markets in the short term, probably years or decades, maybe forever). I was going to make a statement of my own long-term intention along these lines at some point, when I got around to personally responding to one of Ofer’s comments; the way you put it further solidifies my sense that this would probably be prudent. I have more to say but will bow out for now due to personal needs. I’d prefer to have these discussions in a space dedicated to curiously examining downsides, which I will make a separate post for.
I think the signalling benefit from providing ultimate consumers outweighs the cost of failing to signal that there are speculators. I think speculators are logically downstream of consumers, and impact markets are bottlenecked by lack of clarity about whether there will be consumers.
That sounds sensible to me. Two considerations that push a bit against it in my mind are:
Making a binding commitment to a particular consumption schedule burns option value. If trust in the consumption is the bottleneck and it fluctuates, I would like to retain the option to increase the consumption rate when trust drops. It feels a bit too early to think about the mechanics here, though, since the market is still so illiquid that we can’t easily measure such fluctuations in the first place.
A source of trust in impact markets could also stem from particular long-term commitments such as windfall clauses. In that case the consumption schedule would have to be tuned such that the windfall funder can still buy and consume the certificates, and it’s usually unclear when, or whether, the windfall will happen. So maybe the consumption schedule should always be something asymptotic, along the lines of consuming half the remaining certificates by some date, then half of the remainder again, and so on.
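To make the asymptotic idea concrete, here is a minimal sketch in Python of the “halve the remainder each period” schedule; the period count is a hypothetical illustration value, not a proposed parameter.

```python
# Hypothetical sketch of an asymptotic consumption schedule: each period,
# consume half of the certificates that are still unconsumed.

def remaining_fraction(periods_elapsed: int) -> float:
    """Fraction of the original certificate pool still unconsumed."""
    return 0.5 ** periods_elapsed

for n in range(5):
    print(f"After {n} halving period(s): {remaining_fraction(n):.1%} remain")
# After 0 halving period(s): 100.0% remain
# After 1 halving period(s): 50.0% remain
# After 2 halving period(s): 25.0% remain
# ...
```

One nice property of such a schedule is that the pool is never fully exhausted, so a windfall funder who shows up late can still buy and consume certificates.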
I guess you’d want to handle this by saying that people shouldn’t buy impact for any work trying to establish them at the moment, since it’s ex ante risky?
Hmm, I don’t understand this? Can you clarify what you’re referring to?
Our strategy mostly rests on Attributed Impact, an operationalization of how we value the impact of an action that someone performs. (This is a short summary.)
Its key features include that it addresses moral trade (including the distribution mismatch problem) by making sure that the impact of actions that are negative in ex ante expectation is worthless regardless of how great they turn out or how great they are for some group of moral patients. (In fact it uses the minimum of ex ante and current expected value, so it can go negative, but we don’t have a legal handle on issuers to make them pay up unless we can push for mandatory insurance or staking.)
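For illustration, here is a minimal sketch of the min rule as I read it; the function name and the example numbers are hypothetical, purely to show how an ex ante negative action stays worthless (or negative) however well it turns out.

```python
# Hypothetical sketch of the "minimum of ex ante and current expected value" rule.

def attributed_impact(ex_ante_ev: float, current_ev: float) -> float:
    """Value an action at the minimum of its ex ante and current expected value."""
    return min(ex_ante_ev, current_ev)

# An ex ante negative gamble stays negative no matter how well it turned out:
print(attributed_impact(-5.0, 100.0))  # -5.0

# An ex ante reasonable bet that failed is capped at its low current value:
print(attributed_impact(10.0, 0.0))    # 0.0
```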
Another key feature is that it requires issuers to justify that their certificate has positive Attributed Impact. It also has a feature that is meant to prevent threats against retro funders. Those, in combination with our commitment to buy according to Attributed Impact, will (or so we hope) start a feedback cycle: issuers are vocal about Attributed Impact to sell their certs to the retro funders, prospective investors scrutinize the certs to see whether they think we will be happy with the justification, and generally everyone uses it by default, just as people now use the keyword “longtermism” to appeal to funders. (Just kidding.) That will hopefully make Attributed Impact the de facto standard for valuing impact, so that even less aligned new retro funders will find it easier to go along with the existing norms than to try to change them, especially since they will probably also appreciate the anti-threat feature.
But we have a few more lines of defense against Attributed Impact drift (as it were), such as “the pot.” It’s currently too early in my view to try to implement them.
I’ve recently been wondering, though: many of these risks apply to all prize contests, not only certificate-based ones. Anyone out there, any unaligned millionaire, is free to announce big prizes for things we would disapprove of. Our goal has so far been to build an ecosystem that is so hard to abuse that these unaligned millionaires will choose to stay away and run their prize contests elsewhere. But that only shifts around where the bad stuff happens.
Perhaps there are even mechanisms that could attract the unaligned millionaires and ever so slightly improve the outcomes of their prize contests. But I haven’t thought about how that might be achieved.
Conversely, the right to retro funding could be tied to a particular first retro funder to eliminate the risk of other retro funders joining later. But that probably also just shifts where the bad stuff happens, so I’m not convinced.
I’d be curious if you have any thoughts on this!