Happy to see this being set up! It makes a lot of sense to me that our community gradually gets more and more competition from different funds. Over time we’ll get evidence on which ones seem to perform best.
A few questions:
1) Are you planning on coordinating information and decisions with other funds? In general, my impression is that there should be similar application procedures and perhaps some information sharing. That said, I could imagine ways in which coordination could come across badly.
2) Where do you expect to get most of your donations from? Individual EA donors?
3) Your Tier-1 and Tier-2 questions don’t seem exclusive to s-risks; they seem applicable to AI-risk concerns much more broadly. Does this sound correct to you? Are you seeking projects that focus specifically on s-risks within these areas?
Thanks! :)
1) The Long-Term Future Fund seems most important to coordinate with. Since I’m both a fund manager at the EAF Fund and an advisor to the Long-Term Future Fund, I hope to facilitate such coordination.
2) Individual EA donors, poker pros (through our current matching challenge), and maybe other large donors.
3) Yes, that sounds correct. We’re particularly excited to support researchers who work on specific s-risk-related questions within those areas, but I expect that the research we fund could also positively influence AI in other ways (e.g. much of the decision theory work might make positive-sum trade more likely and could thereby increase the chance of realizing the best possible outcomes). We might also fund established organizations like MIRI if they have room for more funding.