It’s not that hard to pick out highly risky projects retroactively, relative to identifying them prospectively.
Do you mean that, if a project ends up being harmful, we have Bayesian evidence that it was ex-ante highly risky? If so, I agree. But that fact does not alleviate the distribution mismatch problem, which is caused by the prospect of a risky project ending up going well.
Impact markets don’t solve the problem of funders being able to fund harmful projects. But they don’t make it differentially worse (they empower funders generally, but I don’t expect you would argue that grantmakers are net negative, so this still comes out net-positive).
If the distribution mismatch problem is not mitigated (and it seems hard to mitigate), investors are incentivized to fund high-stakes projects while regarding potential harmful outcomes as if they were neutral. (Including in anthropogenic x-risks and meta-EA domains.) That is not the case with EA funders today.
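To make the mismatch concrete, here is a minimal numeric sketch (all probabilities and payoffs below are invented purely for illustration) of how a project can be strongly net negative in expectation while still being a good bet for an investor whose downside is capped at zero:

```python
# Illustrative only: invented numbers, not a claim about any real project.
# Outcomes: (probability, impact in arbitrary units)
outcomes = [
    (0.60, 0),      # project fizzles
    (0.30, +10),    # project goes well; retro funders pay for the impact
    (0.10, -1000),  # project causes serious harm
]

# Society's expected value counts the harm.
expected_impact = sum(p * v for p, v in outcomes)

# The investor's payoff is roughly proportional to max(0, impact): retro
# funders pay for good outcomes, but nobody pays negative money for bad
# ones, so harm is priced as if it were neutral.
expected_investor_payoff = sum(p * max(0, v) for p, v in outcomes)

print(expected_impact)           # -97.0 -> strongly negative in expectation
print(expected_investor_payoff)  # 3.0   -> still profitable to fund
```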
There are some other, possibly concerning, cultural effects of making money flows more legible, but I’m not super worried about negative-EV projects being run.
I think this is a highly over-optimistic take about cranking up the profit-seeking lever in EA and the ability to mitigate the effects of Goodhart’s law. It seems that when humans have an opportunity to make a lot of money (without breaking laws or norms) at the expense of some altruistic values, they usually behave in a way that is aligned with their local incentives (while convincing themselves it’s also the altruistic thing to do).
I do think it makes sense not to rush into creating a decentralized, unregulatable system, on general principles of caution; we should certainly watch the operation of a more controllable one for some time before moving towards that.
If you run a fully controlled (Web2) impact market for 6-12 months, and the market funds great projects/posts and there’s no sign of trouble, will you then launch a decentralized impact market that no one can control (in which people can sell the impact of recruiting additional retro funders, and the impact of establishing that very market)?
If the distribution mismatch problem is not mitigated (and it seems hard to mitigate), investors are incentivized to fund high-stakes projects while regarding potential harmful outcomes as if they were neutral.
I thought a bunch more about this, and I do think there is something here worth paying attention to.
I am not certain the pool of projects in the category you’re worried about is large enough to offset the benefits of impact markets, but we would be incentivising the ones that do exist, and that has a cost.
If we’re limited to accredited investors, as Scott proposed, we have some pretty strong mitigation options. In particular, we can let oracular funders pay to mark projects as having been strongly net negative, and have this detract from the ability of those who funded that project to earn on their entire portfolio. Since accounts will be hard to generate and available only to accredited investors, creating a fresh account for each project is not an available option.
I think I can make some modifications to the Awesome Auto Auction to include this fairly simply. The AAA also does not allow selling as an action, which removes the other risk of people dumping their money and provides a natural structure for limiting withdrawals (just cut off their automatic payments until the “debt” is repaid).
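As a rough sketch of the bookkeeping I have in mind (the class and method names below are placeholders I’m making up, not the actual AAA mechanics):

```python
# Hedged sketch, not a spec: an oracular funder can mark a project as
# strongly net negative, which puts a pro-rata "debt" on everyone who
# funded it; their automatic payouts across the whole portfolio are then
# withheld until that debt is repaid.
from collections import defaultdict

class Market:
    def __init__(self):
        self.stakes = defaultdict(dict)  # project -> {investor: amount staked}
        self.debt = defaultdict(float)   # investor -> outstanding penalty

    def fund(self, investor, project, amount):
        self.stakes[project][investor] = self.stakes[project].get(investor, 0) + amount

    def mark_net_negative(self, project, penalty):
        """Split the penalty pro rata among everyone who funded the project."""
        total = sum(self.stakes[project].values())
        for investor, amount in self.stakes[project].items():
            self.debt[investor] += penalty * amount / total

    def pay_out(self, investor, amount):
        """An automatic payment from any item in the portfolio; withheld until the debt is cleared."""
        offset = min(amount, self.debt[investor])
        self.debt[investor] -= offset
        return amount - offset  # what the investor actually receives

m = Market()
m.fund("alice", "risky_project", 100)
m.fund("alice", "good_project", 100)
m.mark_net_negative("risky_project", penalty=50)
print(m.pay_out("alice", 30))  # 0  -> fully withheld against the 50 of debt
print(m.pay_out("alice", 30))  # 10 -> debt cleared, payments resume
```

The point is that the penalty attaches to the account rather than to a single certificate, so a bad bet can’t be walled off from the rest of the portfolio.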
Would this be sufficient mitigation? And if not, what might you still fear about this?
If you run a fully controlled (Web2) impact market for 6-12 months, and the market funds great projects/posts and there’s no sign of trouble, will you then launch a decentralized impact market that no one can control (in which people can sell the impact of recruiting additional retro funders, and the impact of establishing that very market)?
I don’t see much benefit to a Web3 market assuming we can do microtransactions on a Web2 one, so I’d be fine with either not doing the Web3 version at all, or only doing it after several years of running a Web2 market without any of those restrictions and with nothing going badly wrong (retaining the option to restrict new markets for it at any time).
In particular, we can let oracular funders pay to mark projects as having been strongly net negative, and have this detract from the ability of those who funded that project to earn on their entire portfolio.
I think this approach has the following problems:
Investors will still be risking only the total amount of money they invest in the market (or place as collateral), while their potential gain is unlimited.
People tend to avoid doing things that directly financially harm other individuals. Therefore, I expect retro funders would usually not use their power to mark a project as “ex-ante net negative”, even if it was a free action and the project was clearly ex-ante net negative (let alone if the retro funders need to spend money on doing it; and if it’s very hard to judge whether the project was ex-ante net negative, which seems a much more common situation).
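To put rough numbers on these two problems together (reusing the invented figures from the earlier sketch; the penalty size and the probability of a project actually being marked are likewise made up):

```python
# Illustrative only: the outcome distribution is the same invented one as
# above, and the penalty and marking probability are also made up.
p_good, payout_if_good = 0.30, 10
p_bad = 0.10
p_marked_given_bad = 0.2   # retro funders are reluctant to punish (problem 2)
penalty = 20               # bounded by the stake/collateral at risk (problem 1)

expected_value_to_investor = p_good * payout_if_good - p_bad * p_marked_given_bad * penalty
print(round(expected_value_to_investor, 2))  # 2.6 -> still worth funding
```

Under numbers like these, the penalty would need to be much larger, or the marking much more reliable, for the expected value to flip sign.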
Seems essentially fine. There’s a reason society converged to loss-limited companies being the right thing to do, even though there is unlimited gain and limited downside, and that’s that individuals tend to be far too risk averse. Exposing them to a risk to the rest of their portfolio should be more than sufficient to make this not a concern.
Might be a fair point, but remember, this is in the case where some project was predictably net negative and then actually was badly net negative. My guess is at least some funders would be willing to step in and disincentivise that kind of activity, and the threat of it would keep people off the worst projects.
There’s a reason society converged to loss-limited companies being the right thing to do, even though there is unlimited gain and limited downside, and that’s that individuals tend to be far too risk averse.
I think the reason that states tend to allow loss-limited companies is that it causes them to have larger GDP (and thus all the good/adaptive things that are caused by having larger GDP). But loss-limited companies may be a bad thing from an EA perspective, considering that such companies may be financially incentivized to act in net-negative ways (e.g. exacerbating x-risks), especially in situations where lawmakers/regulators are lagging behind.
Yes, and greater GDP maps fairly well to greater effectiveness of altruism. I think you’re focusing too strongly on downside risks. They exist, and they are worth mitigating, but inaction due to fear of them will cause far more harm. Inaction due to a heckler’s veto is not a free outcome.
Companies not being loss-limited would not cause them to stop producing x-risks, when the literal death of all the humans involved is already an insufficient motivation to discourage them. It would reduce a bunch of other categories of harm, but we’ve converged on accepting that risk to avoid crippling risk aversion in the economy.