Thanks for the post, Gergo! Do you think donating to local AI safety groups results in more quality-adjusted research-years per $ than donating to BlueDot? I would estimate that proxy for cost-effectiveness as “quality of the future research of the additional participants” * “acceleration of the career of the additional participants in years” / “cost per participant”. I guess BlueDot has a higher bar for acceptance and a lower cost per participant, so local AI safety groups would need to accelerate the careers of participants significantly more to be as cost-effective as BlueDot.
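To make that proxy concrete, here is a minimal sketch with made-up numbers (none of these figures come from BlueDot or any local group; they only illustrate how the proxy would be computed):

```python
# Illustrative only: all numbers are hypothetical, not actual figures for either program.
def qarys_per_dollar(research_quality: float, career_acceleration_years: float, cost_per_participant: float) -> float:
    """Proxy cost-effectiveness: quality-adjusted research-years per $ for one additional participant."""
    return research_quality * career_acceleration_years / cost_per_participant

# A higher bar for acceptance -> higher research quality; a cheaper program -> lower cost per participant.
bluedot_like = qarys_per_dollar(research_quality=1.0, career_acceleration_years=0.5, cost_per_participant=300)
local_group = qarys_per_dollar(research_quality=0.8, career_acceleration_years=1.0, cost_per_participant=500)
print(bluedot_like, local_group)  # ~0.0017 vs ~0.0016 QARY/$ under these made-up inputs
```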
Sorry for the late reply, Vasco. Thanks for your comment! I echo some of Chris’s points, but wanted to add some scattered thoughts:
I think the cost-effectiveness of AIS groups vs BlueDot would definitely depend on the specific group, as I expect there to be large differences between groups. The best ones might be more cost-effective than BlueDot, but I doubt anyone has looked at this very rigorously (this analysis is great, but doesn’t include a BlueDot-like program). If you have the money, you likely want to fund both, even if one is slightly better than the other.
The volunteer-run initiatives are likely more cost-effective kind of by default (though see next point).
Another consideration is a tradeoff between impact and cost-effectiveness. E.g. if BlueDot is not funding-constrained, then it might make sense for them to optimize for the largest impact, even if that comes at the price of slightly decreased cost-effectiveness. An example of this could be diminishing returns on spending money on LinkedIn ads. This could mean that the marginal impact of money donated to them is smaller, but funders might still prefer this to having to spend time evaluating 10 AIS groups, due to time costs.
Thanks for elaborating, @gergo! I am tagging you because I have just expanded this comment.
Another consideration is a tradeoff between impact and cost-effectiveness.
Nitpick. Maximising cost-effectiveness and impact is equivalent holding spending constant. However, I understand you mean the cost-effectiveness will tend to decrease as the spending increases.
The best ones might be more cost-effective than BlueDot, but I doubt anyone has looked at this very rigorously (this analysis is great, but doesn’t include a BlueDot-like program).
I agree that analysis is great. I have just asked them whether they have considered estimating their cost-effectiveness in QARYs (quality-adjusted research years) per $, as done in that analysis. You may want to nudge them too.
This could mean that the marginal impact of money donated to them is smaller, but funders might still prefer this to having to spend time evaluating 10 AIS groups, due to time costs.
Great point. Funders should maximise “cost-effectiveness” = “impact”/“cost” = “impact”/(“financial cost” + “time cost”) = (“impact”/“financial cost”)/(1 + “time cost”/“financial cost”) = “cost-effectiveness neglecting the time cost”/(1 + “ratio between the time and financial cost”). Smaller grants have a higher ratio between the time and financial cost, so their cost-effectiveness neglecting the time cost has to be higher to clear a given cost-effectiveness bar.
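A minimal numeric sketch of how the time cost raises the bar for smaller grants (all figures below are hypothetical, including the $ value assigned to grantmaker time):

```python
# Illustrative only: hypothetical numbers showing why small grants face a higher bar.
def overall_cost_effectiveness(impact: float, financial_cost: float, time_cost: float) -> float:
    """Impact per unit of total cost, where total cost = financial cost + grantmaker time cost."""
    return impact / (financial_cost + time_cost)

time_cost = 2_000  # assumed $-equivalent of grantmaker time to evaluate one grant
small_grant = overall_cost_effectiveness(impact=100, financial_cost=10_000, time_cost=time_cost)
large_grant = overall_cost_effectiveness(impact=900, financial_cost=100_000, time_cost=time_cost)

# Equivalent form from the comment above:
# "cost-effectiveness neglecting the time cost" / (1 + "time cost"/"financial cost")
small_grant_alt = (100 / 10_000) / (1 + time_cost / 10_000)

print(small_grant, small_grant_alt, large_grant)  # ~0.0083, ~0.0083, ~0.0088 impact/$
```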
The volunteer-run initiatives are likely more cost-effective kind of by default (though see next point).
There could still be costs besides wages (such that the “financial cost” above would not be 0), although the “time cost” of the grantmakers may well be the driver of the overall cost.
Context: I’ve done local community building (running AI Safety ANZ), but also facilitated for BlueDot.
There are definitely a lot of advantages to being able to draw talent from anywhere in the world. I suspect that the competitiveness of local movement-building will vary massively by location. In terms of impact per dollar, groups at top global universities or in strategic locations (San Francisco, London, Washington, Brussels, etc.) are most likely to be competitive.
It’s also important to think on the margin rather than on average. You’d have to talk to the core BlueDot team to find out what they would do with marginal funding and how promising they think the folks they rejected are.
Thanks for sharing your thoughts, Chris.