Hi Dave,
Thanks for taking the time to write this. I had an almost identical experience at my university. I helped restart the club, with every intention of leading it, but I am no longer associated with it because of others' unwillingness to engage with criticisms of AI safety or to challenge their own beliefs regarding AI safety and existential risk.
I also felt that those in our group who prioritized AI safety had an advantage in getting recognition from more senior members of the city group, forming connections with other EAs in the club, and securing funding from EA orgs. I was quite certain I could have gotten funding from CEA too, as long as I lied and said I prioritized AI safety/existential risk, but I wasn't willing to do that. I also felt the money given to other organizers in the club was not necessary and had no positive outcomes beyond benefiting those individuals.
I am now basically estranged from the club (which sucks, because I genuinely enjoyed everyone's company), because I do not feel that my values, including the ones that originally drew me to EA (such as epistemic humility), exist in the space I was in.
I did manage to have a few somewhat productive conversations with people in the club about AI safety, and I am grateful for those people (one senior EA community member who works in AI safety in particular). But despite this, our club basically felt like an AI safety club. Almost every student involved (at least the consistent ones, and the president) was AI safety focused. In addition, they were mainly interested in starting AI safety reading groups, and most conversations led back to AI safety (other than in a philosophy group that my partner and I started, but eventually stopped running).
Thanks for writing this. This comment, in connection with Dave’s, reminds me that paying people—especially paying them too much—can compromise their epistemics. Of course, paying people is often a practical necessity for any number of reasons, so I’m not suggesting that EA transform into a volunteer-only movement.
I’m not talking about grift but something that has insidious onset in the medical sense: slow, subtle, and without the person’s awareness. If one believes that financial incentives matter (and they seemingly must for the theory of change behind paying university organizers to make much sense), it’s important to consider the various ways in which those incentives could lead to bad epistemics for the paid organizer.
If student organizers believe they will be well-funded for promoting AI safety/x-risk much more so than broad-tent EA, we would expect that to influence how they approach their organizing work. Moreover, reduction of cognitive dissonance can be a powerful drive—so the organizer may actually (but subconsciously) start favoring the viewpoint they are emphasizing in order to reduce that dissonance rather than for sound reasons. If a significant number of people filling full-time EA jobs were previously paid student organizers, the cumulative effect of this bias could be significant.
I don’t have a great solution for this given that the funding situation is what it is. However, I would err on the side of paying student organizers too little rather than too much. I speculate that the risk of cognitive dissonance, and of any pressure student organizers may feel to take certain positions, increases to some extent with the amount of money involved. While I don’t have a well-developed opinion on whether to pay student organizers at all, they should not be paid “an outrageous amount of money,” as Dave reports.
It seems like a lot of criticism of EA stems from concern about “groupthink” dynamics. At least, that is my read on the main reason Dave dislikes retreats. This is a major concern of mine as well.
I know groups like CEA and Open Phil have encouraged and funded EA criticism. My difficulty is that I don’t know where to find that criticism. I suppose the EA Forum frequently features criticisms, but fighting groupthink by reading the Forum seems counterproductive.
I’ve personally found a lot of benefit in reading Reflective Altruism’s blog.
What I’m saying is: I know EA orgs want to encourage criticism, and good criticisms do exist, but I don’t think orgs have found a great way to disseminate those criticisms yet. I would like criticism dissemination to be more of a focus.
For example, there is an AI safety reading list an EA group put out. It’s very helpful, but I haven’t seen any substantive criticism linked in that list, while arguments in favor of longtermism make up most of it.
I’ve only been to a handful of the conferences, but I’ve not seen a “Why you should be skeptical of longtermism” talk posted.
Has there been an 80k podcast episode that centers on longtermism skepticism? I know the topic has been addressed, but I think I’ve only seen it addressed relatively briefly, and by people who are longtermists or identify as EAs. I’d like to see more guests like the longtermism skeptics at GPI.
I’ve not seen an event centering longtermism/EA criticism put on by my local group. To be fair to the group, I’ve not browsed their events for some time.
On the rare occasions I have seen references to longtermism criticism, it’s usually something like a blog post by someone who agrees with longtermism but is laying out counterarguments to be rigorous. That is good of them to do, but genuine criticisms from people outside the community are more valuable, and I’d like to see more of them.
Something related to disseminating more criticism is including more voices from non-EAs. I worry when I see a list of references composed entirely of EAs. This seems common, even on websites like 80k’s.
If you’re an animal welfare EA, I’d highly recommend joining the wholesome refuge that is the newly minted Impactful Animal Advocacy (IAA).
Website and details here. I volunteered for them at the AVA Summit, which I strongly recommend as the premier conference and community-builder for animal welfare-focused EAs. The AVA Summit has some features I have long thought missing from EAGs, namely people arguing in good faith about deep disagreements (e.g., why don’t we ever see a panel with prominent longtermist and short-termist EAs arguing for over an hour straight at EAGs?). There was an entire panel addressing quantification bias, which turned into a discussion of the view, held by some, that EA has done more harm than good for the animal advocacy movement, and of how people are afraid to speak out against EA given that it has brought over 100 million dollars into animal advocacy. Personally, I loved there being a space for these kinds of discussions.
Also, one of my favourite things about the IAA community is that they don’t ignore AI: they take it seriously and try to think about how to get ahead of AI developments to help animals. It is a community where you’ll bump into people who can talk about x-risk and take it seriously, but who, for whatever reason, are prioritizing animals.