But I sometimes have a fear in the back of my mind that some of the attendees who are intrigued by these ideas are later going to look up effective altruism, get the impression that the movement’s focus is just about existential risks these days, and feel duped. Since EA pitches don’t usually start with longtermist ideas, it can feel like a bait and switch.
To avoid the feeling of a bait and switch, I think one solution is to introduce existential risk in the initial pitch. For example, when introducing my student group Effective Altruism at Georgia Tech, I tend to say something like: “Effective Altruism at Georgia Tech is a student group that aims to empower students to pursue careers tackling the world’s most pressing problems, such as global poverty, animal welfare, or existential risk from climate change, future pandemics, or advanced AI.” It’s totally fine to mention existential risk: students still seem pretty interested and happy to sign up for our mailing list.
I think this is a great opening pitch!
Not every personality type is well-equipped to work in AI alignment, so I strongly feel the pitch should be about finding the field that best suits you, where you specifically can have your greatest impact, regardless of whether that ends up being a longtermist career, global health and poverty, or earning to give. As for which charity to give to, I’m personally not sure whether it’s better to donate to a GiveWell charity or to AI alignment research. I lean more toward the GiveWell charities, but I’m fairly new to the EA community and still forming my opinions…