If you end up reselling impact that you’ve purchased with a grant from the Future Fund Regranting Program, where does the money go?
We didn’t think about this because we’re not planning to do this at all. But we’re in the process of forming a public benefit corporation. Our benefit statement is “Increase contributions to public and common goods by developing and deploying innovative market mechanisms.” The PBC will be the one doing the purchases, so if we ever sell the certs again, the returns will flow back to the PBC account and will be used in line with the benefit statement.
That’s sort of like when a grant recipient buys furniture for an office but then, a few years later, moves to a group office with existing furniture and sells their own (now redundant) furniture on eBay. Those funds then also flow back to the account of the grant recipient unless they have some nonstandard agreements around their furniture.
But of course we can run this by FTX if it ever becomes an action-relevant question.
Thanks for the info!
If the shareholders of the public benefit corporation will be able to receive dividends, I think this setup has a conflict-of-interest problem. The Impact Markets team will probably need to make high-stakes decisions under great uncertainty. (E.g., should an impact market be launched? Should the impact market be decentralized? Should a certain person be invited to serve as a retro funder? How should they navigate the tradeoff between explaining the safety rules thoroughly and writing more engaging posts that are more conducive to gaining traction?) It’s a big problem if the decision makers can end up making a lot of money via a (future) impact market as a result of making certain decisions.
Therefore, I think it’s better to commit to “consume/open” (i.e. never sell) the certificates that you purchase with the grant.
I can see the appeal of a commitment to consumption. We might just do that if it inspires trust in the market. Then again, it sends a weird signal if even we don’t want to use our own system to sustain our operation. “Dogfooding” would also allow us to experience the system from the user side and notice problems with it even when no one reports them to us.
Also people are routinely trusted not to make callous decisions even if it’d be to their benefit. For example, charities are trusted to make themselves obsolete if at all possible. The existence of the Against Malaria Foundation hinges on there being malaria. Yet we trust them to do their best to eliminate malaria.
Charities often receive exploratory grants to allow them to run RCTs and such. They’re still trusted to conduct a high-quality RCT and not manipulate the results even though their own jobs and the ex post value of years of their work hinge on the results.
I myself used to run a charity that was very dear to me, but when we became convinced that the program was suboptimal and found that we couldn’t change the association’s bylaws to accommodate a better one, we shut it down.
I think the signalling benefit from providing ultimate consumers outweighs the cost of failing to signal that there are speculators. Speculators are logically downstream of consumers, and impact markets are bottlenecked by a lack of clarity about whether there will be consumers.
(I’m also quite unclear whether we’ve worked out enough of the fundamentals of how to avoid bad incentives that it’s good to establish trust in impact markets … I guess you’d want to handle this by saying that people shouldn’t buy impact for any work trying to establish them at the moment, since it’s ex ante risky?)
Quick comment here—thanks for chipping in!
I personally agree with the general gist of this (something like not selling the impact of working on impact markets in the short term, probably years or decades, maybe forever). I was going to make a statement of my own long-term intention along these lines at some point when I got around to personally responding to one of Ofer’s comments; the way you put it further solidifies my sense that this would probably be prudent. I have more to say but will bow out for now due to personal needs. I’d prefer to have these discussions in a space dedicated to curiously examining downsides, which I will make a separate post for.
That sounds sensible to me. Two considerations that push a bit against it in my mind are:

1. A binding commitment to a particular consumption schedule burns option value. If trust in the consumption is the bottleneck and if it fluctuates, I would like to retain the option to increase the consumption rate when trust drops. It feels a bit too early to think about the mechanics here, though, since the market is still so illiquid that we can’t easily measure such fluctuations in the first place.
2. A source of trust in impact markets could also stem from particular long-term commitments such as windfall clauses. In that case the consumption schedule would have to be tuned such that the windfall funder can still buy and consume the certificates, and it’s usually unclear when (or whether) the windfall will happen. So maybe the consumption schedule should always be something asymptotic, along the lines of consuming half the remaining certificates by some date, then half of the remainder again, and so on.
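The asymptotic schedule sketched here ("consume half of the remainder each period") can be made concrete with a line of arithmetic. The function name and the period counts below are purely illustrative assumptions, not part of any actual proposal:

```python
def remaining_fraction(halvings: int) -> float:
    """Fraction of certificates still unconsumed after a given number of
    halving periods, under a 'consume half the remainder each period'
    schedule. The length of a period is a free design choice."""
    return 0.5 ** halvings

# After three halving periods, 1/8 of the certificates remain unconsumed,
# so a late-arriving windfall funder can still buy and consume something.
print(remaining_fraction(3))  # 0.125
```

The point of the geometric shape is that the remaining fraction never reaches zero, so the option to serve a windfall funder at an unknown future date is preserved at every point in the schedule.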
I guess you’d want to handle this by saying that people shouldn’t buy impact for any work trying to establish them at the moment, since it’s ex ante risky?

Hmm, I don’t understand this. Can you clarify what you’re referring to?
Our strategy mostly rests on Attributed Impact, an operationalization of how we value the impact of an action that someone performs. (This is a short summary.)
Its key features include that it addresses moral trade (including the distribution mismatch problem) by making sure that the impact of actions that are negative in ex ante expectation is worthless regardless of how great they turn out or how great they are for some group of moral patients. (In fact it uses the minimum of ex ante and current expected value, so it can go negative, but we don’t have a legal handle on issuers to make them pay up unless we can push for mandatory insurance or staking.)
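The valuation rule described in this paragraph (take the minimum of ex ante and current expected value) can be sketched in a few lines. This is a minimal illustration of the rule as stated here, not the full Attributed Impact definition; the function name and the numbers are hypothetical:

```python
def attributed_impact(ex_ante_ev: float, current_ev: float) -> float:
    """Sketch of the min rule: an action's value is capped by its ex ante
    expected value, so an action that was negative in ex ante expectation
    stays non-positive no matter how well it happens to turn out."""
    return min(ex_ante_ev, current_ev)

# A gamble that was -10 in ex ante expectation but paid off at +100 is
# still valued at -10 (in practice worthless to the issuer, since there
# is no legal handle to collect absent mandatory insurance or staking).
print(attributed_impact(-10.0, 100.0))  # -10.0
```

This is what blocks the distribution mismatch problem: an ex post lucky outcome cannot launder an ex ante reckless bet into positive retro funding.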
Another key feature is that it requires issuers to justify that their certificate has positive Attributed Impact, and it includes a feature meant to prevent threats against retro funders. In combination with our commitment to buy according to Attributed Impact, these will, or so we hope, start a feedback cycle: issuers are vocal about Attributed Impact to sell their certs to the retro funders, prospective investors scrutinize the certs to judge whether we will be happy with the justification, and generally everyone uses it by default, just as people now use the keyword “longtermism” to appeal to funders. (Just kidding.) That will hopefully make Attributed Impact the de facto standard for valuing impact, so that even less aligned new retro funders will find it easier to go along with the existing norms than to try to change them, especially since they will probably also appreciate the antithreat feature.
But we have a few more lines of defense against Attributed Impact drift (as it were), such as “the pot.” It’s currently too early in my view to try to implement them.
I’ve recently been wondering though: Many of these risks apply to all prize contests, not only certificate-based ones. Also anyone out there, any unaligned millionaire, is free to announce big prizes for things we would disapprove of. Our goal has so far been to build an ecosystem that is so hard to abuse that these unaligned millionaires will choose to stay away and do their prize contests elsewhere. But that only shifts around where the bad stuff happens.
Perhaps there are even mechanisms that could attract the unaligned millionaires and ever so slightly improve the outcomes of their prize contests. But I haven’t thought about how that might be achieved.
Conversely, the right to retro funding could be tied to a particular first retro funder to eliminate the risk of other retro funders joining later. But that probably also just shifts where the bad stuff happens, so I’m not convinced.
I’d be curious if you have any thoughts on this!