Thanks for sharing this, Linch. I found it a useful complement to the marginal grant thresholds post, which I recommend to those who enjoyed this one.
Thanks Joel for your thoughtful comment, which I'd like to build on.
I was thinking about how we can get funders to make calculated bets on people who have been passed over elsewhere, and get rewarded when those bets prove right. Isn't AI Safety Impact Markets trying to solve some of the issues with adverse selection through that kind of mechanism? Sorry for the lack of depth, but I think others can weigh in better.
Yeah, agreed! I haven't thought about impact markets through Linch's particular lens. (I'm cofounder of AI Safety Impact Markets.)
Distinguishing different meanings of "costly": Impact markets make applying for funding more costly in terms of reputation, in the sense that people might write public critiques of proposals. But they make applying less costly in terms of time, in the sense that you can post one standardized application rather than a bespoke one per funder.
But most people I've talked to don't consider the reputational kind of costly to be a cost at all, because they're eager to get feedback on their plans so they can improve them, and rejections from funders rarely include feedback. (Critical feedback would then reflect badly on the previous draft but not on the latest one.)
Conversely, I've also heard of funders reasoning like "This project falls into the purview of the LTFF, so if it hasn't gotten funded by them, there's probably something wrong with it, and I shouldn't fund it either." Public feedback like "We decided not to fund this project because we couldn't find an expert in the field to assess its merits" would actually be "negatively costly," i.e. beneficial, in terms of reputation. It could also help with unwarranted yellow flags, because impact markets are all about aggregating and amplifying specialized local knowledge. If, for example, the rumor mill claims that someone is a drug addict, a long-term flatmate could make a symbolic donation and clarify that the person in question only microdoses LSD, no hard drugs. That could silence the incorrect rumor. The flatmate would thus become an early donor to the project and reap an outsized (compared to the size of the donation) reward in terms of their score or later Impact Marks for adding this information to the market.
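To make that incentive concrete, here's a minimal sketch of how an early, well-informed donation could earn an outsized share of a project's eventual score. The 1/rank weighting, the names, and the `final_score` input are my illustrative assumptions for this comment, not the actual scoring rule of AI Safety Impact Markets.

```python
# Hypothetical sketch: split a project's final evaluation score among
# donors, weighting earlier donations more heavily, so a small early
# donation can earn a share of the score that exceeds its share of the
# money. The 1/(rank+1) weighting is an illustrative assumption only.
from dataclasses import dataclass

@dataclass
class Donation:
    donor: str
    amount: float  # donation size in dollars

def donor_scores(donations: list[Donation], final_score: float) -> dict[str, float]:
    """Allocate final_score across donors by donation size discounted by arrival order."""
    weights = [d.amount / (rank + 1) for rank, d in enumerate(donations)]
    total = sum(weights)
    return {d.donor: final_score * w / total for d, w in zip(donations, weights)}

# The flatmate's symbolic early $10 versus a later $1,000 donation:
donations = [Donation("flatmate", 10.0), Donation("late_funder", 1000.0)]
print(donor_scores(donations, final_score=100.0))
# {'flatmate': ~1.96, 'late_funder': ~98.04}
```

In this toy version the flatmate contributes about 1% of the money but receives about 2% of the score, because their donation arrived first; any weighting that rewards earliness relative to donation size produces the same qualitative effect.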
Our scores will be based on evaluations of the outputs, so all the issues around lack of rigor or not publishing anything in the first place are priced in. The issue of plagiarism, low integrity, and interpersonal harm is more concerning to me. I'll consider adding a "whistle-blowing" tab to the comment section where users can post anonymously, to deter low-integrity actors from using the platform. We (GoodX) can also manually intervene if we become aware of bad actors.
Generally, my "bias" is to keep things public by default; the funding ecosystem can then make exceptions in cases where that is not possible. The current norm seems to be secrecy by default, which strikes me as unnecessarily costly in multiple ways (reapplying to multiple funders in different formats, no feedback, poor coordination between funders, few funding gaps for small donors).