Hi Vasco! I’m keen for you to paint me a persona. Specifically: who is the kind of person who thinks sinking 10k into a bet with an EA (i.e. you) is a better use of money than all the other ways to help make AI go better (i.e. donating it instead)?
Even if you were big on bets for signalling purposes, I think it’s easy to argue that making one of this size with an EA on a niche forum isn’t the way to do it (i.e. find someone more prominent and influential on X or similar).
Hi Yanni.
If the winner donates the profits, the bet in expectation moves donations from the organisations preferred by the loser to the ones preferred by the winner. So the bet would increase total social impact (not just the winner’s) in the view of anyone who thinks their preferred organisations (e.g. in AI safety) are more cost-effective than the animal welfare organisations I would donate my profits to.
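As a rough illustration of that expected-value logic (my own sketch; only the 10k stake comes from the discussion, and the subjective win probabilities are made-up numbers), each side can expect the bet to shift funding toward their preferred organisations under their own beliefs:

```python
# Toy sketch of the expected-value argument above. The win probabilities are
# illustrative assumptions; only the 10k stake is taken from the discussion.

def expected_shift(p_win: float, stake: float) -> float:
    """Expected net funding shift (in dollars) toward the bettor's preferred
    organisations, relative to the counterparty's, assuming the winner
    donates the profits."""
    to_my_orgs = p_win * stake           # I win and donate the profits
    to_their_orgs = (1 - p_win) * stake  # they win and donate theirs
    return to_my_orgs - to_their_orgs

# Both parties can expect a positive shift under their own beliefs, because
# they disagree about who is likely to win.
print(round(expected_shift(p_win=0.7, stake=10_000)))  # 4000, under my beliefs
print(round(expected_shift(p_win=0.6, stake=10_000)))  # 2000, under the counterparty's beliefs
```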
I have been messaging some prominent people who are worried about AI to propose similar bets, but with no success so far.
I suppose it depends on whether the counterfactual is that the two parties to the bet donate the 10k to their preferred causes now, donate the 10k inflation-adjusted in 2029, or don’t donate it at all. Insofar as we think donations now are better (especially for someone with short AI timelines), there might be a big difference between the value of money now and the value of money after (hypothetically) winning the bet.
Thanks for the comment, Oscar! Right, I am assuming the cost-effectiveness of donations does not vary much over time. Donors have an incentive to equalise the marginal cost-effectiveness of donations across time. If Open Philanthropy (OP) thought their marginal spending on AI safety in 2025 was more cost-effective than that in 2029, they should decrease their planned spending in 2029 to increase that in 2025. More broadly, money should be moved from the worst to the best years.
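As a toy illustration of that equalisation logic (entirely my own sketch; the budget and the diminishing-returns numbers are made up), the optimal split of a fixed budget across years is the one at which marginal cost-effectiveness is the same in every year:

```python
# Toy model (illustrative numbers only) of equalising marginal
# cost-effectiveness across years under a fixed budget.
import numpy as np

budget = 100.0                    # total to split between 2025 and 2029
scales = {2025: 3.0, 2029: 2.0}   # hypothetical cost-effectiveness scales

def impact(spend: float, scale: float) -> float:
    return scale * np.log1p(spend)  # concave, i.e. diminishing returns

def total_impact(s_2025: float) -> float:
    return impact(s_2025, scales[2025]) + impact(budget - s_2025, scales[2029])

# Grid search for the best split. At the optimum, the marginal
# cost-effectiveness 3/(1 + s) in 2025 equals 2/(101 - s) in 2029.
grid = np.linspace(0.0, budget, 10_001)
best = grid[np.argmax([total_impact(s) for s in grid])]
print(round(best, 1))  # ~60.2 to 2025, leaving ~39.8 for 2029
```

If the marginal values were not equal, moving a dollar from the lower-value year to the higher-value year would increase total impact.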
Good point. I agree that ideally that would be the case, but my impression (from the outside) is that OP is somewhat capacity-constrained, especially for technical AI grantmaking? If so, non-OP people who feel they can make useful grants now could still add a lot of value, given the likelihood that OP scales up and does more AI grantmaking in coming years. But all of that is speculation; I haven’t thought carefully about the value of donations over time, beyond deciding not to save all my donations for later personally.
My point holds across all types of spending. OP’s spending on expanding their team should be optimised to ensure that the marginal cost-effectiveness of their grants matches that of their internal spending, and that neither varies across time. I do not know whether OP is striking the right balance. However, I think one is implicitly claiming that OP is making some wrong decisions if one expects the marginal cost-effectiveness of OP’s AI safety grants to decrease across time.
I think it is more likely that people do not take my bet because they do not actually believe in short AI timelines.