OAK intends to train people who are likely to have important impacts on AI, to help them be kinder or something like that. So I see a good deal of overlap with the reasons why CFAR is valuable.
I attended a 2-day OAK retreat. It was run in a professional manner that suggests they’ll provide a good deal of benefit to the people they train. But my intuition is that the impact will mainly be to make those people happier, and I expect OAK to have less effect on people’s behavior than CFAR does.
I considered donating to OAK as an EA charity, but have decided it isn’t quite effective enough for me to treat it that way.
I believe that the person who promoted that grant at SFF has more experience with OAK than I do.
I’m surprised that SFF gave more to OAK than to ALLFED.
Peter, thank you! I am slightly confused by your phrasing.
To benchmark, would you say that
(a) CFAR mainline workshops are aimed at training [...] “people who are likely to have important impacts on AI”;
(b) AIRCS workshops are aimed at the same audience;
(c) MSFP is aimed at the same audience?
Nearly all of CFAR’s activity is motivated by its effects on people who are likely to impact AI. As a donor, I don’t distinguish much between the various types of workshops.
There are many ways that people can impact AI, and I presume the different types of workshop are slightly optimized for different strategies and different skills, and differ a bit in how strongly they select for people with a high probability of doing AI-relevant work. CFAR likely can’t predict well in advance whether any individual person will prioritize AI, and we shouldn’t expect them to try to admit only those with a high probability of working on AI-related tasks.
Thank you, Peter. If you are curious, Anna Salamon connected the various types of activities with CFAR’s mission in the recent Q&A.