I think there’s an argument for the thing you were saying, though… Something like… If one marketplace forbids most foundational AI public works, another marketplace will pop up with a different negative-externality estimation process, and it won’t go away; most charities and government funders still aren’t EA and don’t care about undiscounted expected utility, so there’s a very real risk that that marketplace would become the largest one.
I guess there might not be many people who are charitably inclined, who could understand, believe in, and adopt impact markets, but who also don’t believe in tail risks. There are lots of people who meet one of those criteria, but I’m not sure there are any who meet all three.