This summary is a really helpful resource, thanks for sharing!
Have there been any studies (or plans to study) the effect of FTX on EA's perception among people in social circles that overlap with EA and that we would expect to recruit from via warm networks? For example, the Bay Area tech scene, ML researchers, academics in relevant disciplines at top 10 schools, etc.?
My guess is that impressions of EA among those groups (who are probably “EA-aware” but not “EA”—the 1% who know about EA) could be (much) more negative than those of either existing community members or the general public.
I am not aware of any plans to do that. I think it would be useful to get a sense of how those communities view EA; in particular, I have heard various hypotheses about how AI safety projects should present themselves to the AI community, and it would be good for those to be grounded in more data. Thanks for the suggestion!