The price of a certificate tracks the maximum amount of money that any future retro funder will be willing to pay for it
I get that. I call that retro funder alignment (actually, Dony came up with the term :-)) in analogy with AI alignment, where it’s also not enough to align just one AI, or all current AIs, or some other subset of all the AIs that’ll ever come into existence.
Our next experiment is actually not time-bounded, but we’re the only buyers (retro funders), so the risk is masked again.
I wonder, though: when I play this through in my mind, I can hardly see any investor investing anything but tiny amounts into a project on the promise that there might at some point be a retro funder for it. It’s a bit like name squatting of domains or BitClout user names. People buy tens of thousands of them in the hope of reselling a few at a profit, so they buy them only while they are still very, very cheap (or particularly promising). One place sold most of them at $50–100, so they must’ve bought them even cheaper. One can’t do a lot of harm (at the margin) with that amount of money.
Conversely, if an investor wants to bet a lot of money on a potential future unaligned retro funder, they need to be optimistic that the retro funding they’ll receive will be so massive that it makes up for all the time they had to stay invested. Maybe they’ll have to stay invested for 5 or 20 years, and even then they only have a tiny, tiny chance that the unaligned retro funder, even conditional on showing up, will want to buy the impact of that particular project. Counterfactually, they could’ve made a riskless 10–30% APY all the while. So it seems like a rare thing to happen.
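To make that concrete, here’s a rough back-of-the-envelope sketch in Python. The APY and holding period come from the scenario above; the 1% chance of a buyout is an invented illustrative assumption:

```python
# Break-even for a bet on an unaligned retro funder showing up.
# The APY and holding period are from the scenario above; the
# buyout probability is an invented illustrative number.

apy = 0.20        # counterfactual riskless return (10-30% in the text)
years = 20        # how long the investor may have to stay invested
p_buyout = 0.01   # assumed chance the funder buys *this* project's impact

# What the riskless alternative would have grown to:
counterfactual_multiple = (1 + apy) ** years  # about 38x

# For the certificate to beat that in expectation, the payout
# conditional on a buyout must exceed:
required_multiple = counterfactual_multiple / p_buyout

print(f"riskless alternative: {counterfactual_multiple:.0f}x")
print(f"required payout given a buyout: {required_multiple:,.0f}x")
```

At those numbers the unaligned retro funder would have to pay on the order of thousands of times the original investment for the bet to have been worth it, which is part of why this looks like a rare event to me.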
But I could see SafeMoon-type things happening in more than extremely unlikely cases. Investors invest not because of any long-term promises of unaligned retro funders decades later but because they expect that other investors will invest because those investors also expect other investors to invest, and so on. They’ll all try to buy in before most others buy in and then sell quickly before the others sell, so they’ll just create a lot of volatility and redistribute assets rather randomly. That seems really pointless, and some of the investors may suffer significant losses, but it doesn’t seem catastrophic for the world. People will probably also remember such an event for a year or so, so it can only happen about once a year.
Or can you think of places where this happens in established markets? Penny stocks, yield farming platforms? In both cases the investors seem either small, unsophisticated, and of little consequence for the world, or sophisticated and very quickly in and out, also with little effect on the world.
I think the concern here is not about “unaligned retro funders” who consciously decide to do harmful things. It doesn’t take malicious intent to misjudge whether a certain effort is ex-ante beneficial or harmful in expectation.
I wonder, though: when I play this through in my mind, I can hardly see any investor investing anything but tiny amounts into a project on the promise that there might at some point be a retro funder for it.
Suppose investors were able to buy impact certificates of organizations like OpenAI, Anthropic, Conjecture, EcoHealth Alliance etc. These are plausibly very high-impact organizations. Out of 100 aligned retro funders, some may judge some of these organizations to be ex-ante net-positive. And it’s plausible that some of these organizations will end up being extremely beneficial. So it’s plausible that some retro funders (and thus also investors) would pay a lot for the certificates of such orgs.
Okay, but if you’re not actually talking about “malicious” retro funders (a category in which I would include actions that are not typically considered malicious today, such as defecting against minority or nonhuman interests), the difference between a world with and without impact markets becomes very subtle and ambiguous in my mind.
Like, I would guess that Anthropic and Conjecture are probably good, though I know little about them. I would guess that early OpenAI was very bad and that current OpenAI is probably bad. But I feel great uncertainty about all of that. And I’m not even taking into account all the considerations I’m aware of, because we still don’t have a model of how they interact. I don’t see a way in which impact markets could systematically prevent (as opposed to somewhat reduce) investment mistakes that today not even funders as sophisticated as Open Phil can avoid.
Currently, all these groups receive a lot of funding from altruistic funders directly. In a world with impact markets, the money would first come from investors, but not much would change at all. In fact, I see most of the benefits here in the incentive alignment with employees.
In my models, each investor makes fewer grants than funders currently do because they specialize more and are more picky. If investors are only as picky as (or less picky than) current funders, my math doesn’t work out: it doesn’t show that they can plausibly make a profit.
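As a minimal sketch of the kind of math I mean (the hit rates and the payout multiple are invented for illustration, not outputs of my actual models):

```python
# Expected gross return per $1 invested: hit_rate * payout_multiple.
# An investor only profits if the certificates they pick get retro
# funded often enough to cover all the ones that don't.

payout_multiple = 5.0  # assumed retro payout per $1 on a winning certificate

for label, hit_rate in [
    ("as picky as current funders", 0.10),
    ("more picky, specialized investor", 0.30),
]:
    gross = hit_rate * payout_multiple
    verdict = "profit" if gross > 1 else "loss"
    print(f"{label}: {gross:.1f}x gross per $1 -> {verdict}")
```

At a funder-like 10% hit rate the investor loses money; only the pickier investor comes out ahead, and that’s before counting the opportunity cost of staying invested.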
So I could see a drop in sophistication as relatively unskilled investors enter the market. But then they’d have to improve or get filtered out within a few years as they lose their capital to more sophisticated investors.
Relatively speaking, I think I’m more concerned about the problem you pointed out where retro funders get scammed by issuers who use p-hacking-inspired tricks to make their certificates seem valuable when they are not. Sophisticated retro funders can probably address that about as well as top journals can, which is already not perfect, but more naive retro funders and investors may fall for it.
One new thing that we’re doing to address this is to encourage people to write exposés of malicious certificates and sell their impact. Eventually, of course, I also want people to be able to short issuer stock.
Okay, but if you’re not actually talking about “malicious” retro funders (a category in which I would include actions that are not typically considered malicious today, such as defecting against minority or nonhuman interests), the difference between a world with and without impact markets becomes very subtle and ambiguous in my mind.
I think it depends on the extent to which the (future) retro funders take into account the ex-ante impact, and evaluate it without an upward bias even if they already know that the project ended up being extremely beneficial.
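A made-up numerical example of the difference: suppose a project had a 10% chance of producing 100 units of value and a 90% chance of producing −5.

```python
# Invented numbers: ex-ante expectation vs. realized outcome.
p_good, v_good = 0.10, 100   # lucky, extremely beneficial outcome
p_bad,  v_bad  = 0.90, -5    # likely, mildly harmful outcome

ex_ante_value = p_good * v_good + p_bad * v_bad
print(f"{ex_ante_value:.1f}")  # 5.5
```

A retro funder who evaluates the ex-ante impact without hindsight bias should value the certificate at 5.5 even after watching the lucky 100 materialize; paying 100 would reward the gamble rather than the expectation.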
Yes, that’ll be important!