Thanks so much for sharing your thoughts and reasons for disillusionment. I found this section the most concerning. If it has even a moderate amount of truth to it (especially the bit about discouraging potential new near-termist EAs), then these kinds of fellowships might need serious rethinking.
“Once the fellowship is over, the people who stick around are those who were sold on the ideas espoused in weeks 4, 5, and 6 (existential risks, longtermism, and AI) either because their facilitators were passionate about those topics, they were tech bros, or they were inclined to those ideas due to social pressure or emotional appeal. The folks who were intrigued by weeks 1, 2, and 3 (animal welfare, global health, and cost-effectiveness) but dismissed longtermism, x-risks, or AI safety may (mistakenly) think there is no place for them in EA. Over time, the EA group continues to select for people with those values, and before you know it your EA group is now a factory that churns out x-risk reducers, longtermists, and AI safety prioritizers.”