[Speaking in my personal capacity, not on behalf of the LTFF] I am also strongly in favor of there being an AI-safety-specific fund, but this is mostly unrelated to the recent negative press for longtermism. My reasons for support are primarily: a) people who aren’t EAs (and might not even know about longtermism) are starting to care a lot more about AI safety, and many of them might donate to such a fund; and b) EAs (who may or may not be longtermists) may prioritize AI safety over other longtermist causes (e.g., biosafety), so an AI-safety-specific fund may fit better with their preferences.
It’s true that the correlation between longtermism and the framings of the problem that socially overlap with it could turn out to be spurious! There are a lot of bells and whistles on longtermism that don’t need to be there, especially for the 99% of the work that needs to be done in which fingerprints never come up.