I think this is directionally right relative to my impression of the attitudes on the ground in today’s effective altruism local groups.
I also directionally agree with Thomas Kwa’s pushback on your post below.
EDIT:
If, as a community builder, you don’t think that more people should go into AI, you should not indicate that you think people need a reason not to go into AI!
People who don’t have their own views should feel very comfortable saying that it’s fine to not have a view yet!
More detail in the footnote. [1]
END OF EDIT
My actual overall take is that it is good that more people are taking the idea of going into AI seriously. I also think it’s incredibly important that the conclusion doesn’t feel pre-written, because that is so counterproductive to attracting the most inquisitive and curious minds, who do feel like there are plenty of reasonable objections.
The less free people feel to explore the objections, the more we accidentally select for people who are willing to believe things without thinking hard about them themselves. The more people feel they belong in this group regardless of whether they end up coming to a particular conclusion, the healthier our local groups’ epistemics are likely to be. People need to feel free to land on either “AI” or “not-AI” to think clearly about this.
Huge amounts of social pressure to come to a certain conclusion about the state of the world are bound to end in more yes-people and fewer people who are deeply curious about how to help others the most.
It is challenging to push certain “most promising candidates” while still making everyone who thinks really hard, has a nuanced understanding of effective altruism, and decides “I’ve thought really hard about it and that/none of those seem like my best option” feel that they fully belong in an effective altruism local group.
If you think AI is worth mentioning anyway, or that it’s worth the people you talk to thinking more seriously about it for other reasons, I think it is good to be upfront about exactly what you think and why.
Example of how to be upfront and why I think this is so important in community building
E.g. “I honestly haven’t formed a view on AI yet. I think it’s worth mentioning as something worth looking into anyway. The reason is that people who I think generally have robust reasons for their views, and who I agree with on other topics I’ve thought about more, think AI is a big deal. This makes me suspect I’ll end up thinking AI is important even if I don’t fully buy into the arguments at the moment.”
This community builder could then go on to discuss exactly who they respect and which of their ideas made them respect them. This brings the conversation around to things the community builder actually believes, which they can discuss passionately and in an inspiring way.
I think great community building conversations I’ve had with people new to the community happen when:
1. I change my mind;
2. they change their mind;
3. we realise we actually agree but were using different words to express it (and now both of us have more language to express our views to a wider range of people).
If I am summarising someone else’s view and I don’t make that clear, it is very hard for me to move the conversation onto something where one of the three things above happens. If neither I nor the person I’m talking to has fleshed-out views on something (because I’m deferring and they have never thought about it before), the conversation is much less likely to be the sort that builds more nuance into either of our views on the topic.
My initial mistake
My mistake when initially writing this comment was forgetting that I now have enough of a view on AI (though I didn’t always) to have these sorts of conversations, so if someone says this to me I should engage with their reasoning and encourage them to flesh out their views.
I still think that I should absolutely give people permission to take time to come to a view, with no reason needed other than “I haven’t thought about it enough to have a confident view”, and that I should not force people to talk about AI if they don’t want to!
But if they bring it up, which is what is happening in the context of this post, then I think it’s good for me, in that situation, to engage with their chosen topic of conversation, bringing both their thinking and mine, to see if we can start charting out the common ground and figuring out what the points of disagreement are.
When I didn’t have my own views on AI, I believed it was worth mentioning because other people thought it was worth mentioning. I hope I was able to be upfront about this when talking about AI but I know that memory paints a rosier picture in retrospect.
I can imagine myself having glossed over why I didn’t go into detail on AI when I didn’t have detailed views, because of cultural pressure to make it seem like effective altruism, and me as its representative, have everything figured out. I think that not being upfront about not buying into the arguments would have been a bad way of handling it, because it makes it seem like I buy into them but can’t make a nuanced case for them!
There are certainly components of the AI case that I don’t have views on, and if we hit those I think it’s good for me to be really upfront. I also have quite shallow views on biorisk, extreme climate change and nuclear weapons (but really in-depth views on a bunch of other topics). It is very hard to develop your own view on every topic, and it takes a lot of time; deferring to people who I think think well is often necessary, but it is important for me, and for all of us, to be as upfront as possible. Being upfront about when we’re deferring and when we buy into the arguments for something helps community building conversations go as well as they possibly can.
Holden Karnofsky also basically says “be so good they can’t ignore you” in the 80k podcast episode interviewing him on his career thoughts (as the title of the episode suggests, the advice was basically “build aptitudes and kick ass”).
From memory, he also said something like: for most people, going into AI straight away instead of just becoming really good at something you could be really good at is probably a mistake. Having said this, I’m not sure if his views have changed given the rapid recent developments in AI that made a lot of people think AI timelines were much shorter.
I actually love that you didn’t cite only EA sources. I think citing more outside sources is really good for keeping effective altruism connected to the rest of the world’s discourse on how to make the world better (both because it makes things easier for newcomers, with more material grounded in language more people understand and presented in ways that are more familiar, and for the object-level advantage of keeping our epistemics cleaner because we’re a little bit less in our echo chamber).
I also thought it was worth pointing out that people whom the EA community respects a lot seem to completely agree with you (the power of social proof from people within our in-group is totally a thing, which IMO means citing stuff from outside the community gets too little social reinforcement).