Thanks for sharing this, Linch. I found it a useful complement to the marginal grant thresholds post, which I recommend to those who enjoyed this post.
Thanks, Joel, for your thoughtful comment, which I’d like to build on.
I was thinking about how we can get funders to make calculated bets on applicants who have been passed over elsewhere, and get rewarded when those bets turn out to be right. Isn’t AI Safety Impact Markets trying to solve some of the issues with adverse selection through that kind of mechanism? Sorry for the lack of depth, but I think others can weigh in better.
Yeah, agreed! I haven’t thought about impact markets through Linch’s particular lens. (I’m cofounder of AI Safety Impact Markets.)
Distinguishing different meanings of costly: Impact markets make applying for funding more costly in terms of reputation, in the sense that people might write public critiques of proposals. But they make applying less costly in terms of time, in the sense that you can post one standardized application rather than a bespoke application per funder.
But most people I’ve talked to don’t consider the reputational kind of cost to be a cost at all: they’re eager to get feedback that improves their plans, and rejections from funders rarely include any. (Critical feedback would then reflect badly on the previous draft but not on the latest one.)
Conversely, I’ve also heard of funders reasoning along the lines of: “This project falls within the purview of the LTFF, so if it hasn’t been funded by them, there’s probably something wrong with it, and I shouldn’t fund it either.” Public feedback like “We decided not to fund this project because we couldn’t find an expert in the field to assess its merits” would actually be “negatively costly,” i.e. beneficial, in terms of reputation. It could also help with unwarranted yellow flags, because impact markets are all about aggregating and amplifying specialized local knowledge. If, for example, the rumor mill claims that someone is a drug addict, a long-term flatmate could make a symbolic donation and clarify that the person in question only microdoses LSD and takes no hard drugs. That could silence the incorrect rumor. The flatmate would thus become an early donor to the project and reap an outsized reward (relative to the size of the donation) in terms of their score or later Impact Marks for adding this information to the market.
Our scores will be based on evaluations of the outputs, so all the issues that stem from a lack of rigor, or from not publishing anything in the first place, are priced in. The issues of plagiarism, low integrity, and interpersonal harm are more concerning to me. I’ll consider adding a “whistle-blowing” tab to the comment section where users can post anonymously, to deter low-integrity actors from using the platform. We (GoodX) can also manually intervene if we become aware of bad actors.
Generally, my “bias” is to keep things public by default, with the funding ecosystem making exceptions in cases where that is not possible. The current norm seems to be secrecy by default, which strikes me as unnecessarily costly in multiple ways: reapplying to multiple funders in different formats, no feedback, poor coordination between funders, and few funding gaps left for small donors.