Yeah, I definitely think there are some multiplicative effects.
Now that I'm teasing out what I think in more detail, I'm starting to find the "median" and "tails" distinction, while useful, still maybe a bit too rough for me to decide whether we should do more or less of any particular strategy targeted at either group (which makes me hesitant to immediately put these thoughts in a top-level forum post until I've teased out my best guesses on how we should maybe change our behaviour if we think we live in a "multiplicative" world).[1]
Here are some more of the considerations/claims (that I'm not all that confident in) that are swirling around in my head at the moment 🙂.
tl;dr:
High fidelity communication is really challenging (and doubly so in broad outreach efforts).
However, broad outreach might thicken the positive tail of the effective altruism movement's impact distribution and thin the negative one, even if the median outcome might result in a "diluted" effective altruism community.
Since we are trying to maximize the effective altruism community's expected impact, and all the impact is likely to be at the tails, we actually probably shouldn't care all that much about the median outcome anyway.
High fidelity communication about effective altruism is challenging (and even more difficult when we do broader outreach/try to be welcoming to a wider range of people)
I do think it is a huge challenge to preserve the effective altruism community's dedication to:
caring about, at least, everyone alive today; and
transparent reasoning, a scout mindset and more generally putting a tonne of effort into finding out what is true even if it is really inconvenient.
I do think really narrow targeting might be one of the best tools we have to maintain those things.
Some reasons why we might want to de-emphasize filtering in existing local groups:
First reason
Focusing on this logic can sometimes be counter-productive because some filtering seems to just miss the mark (see my comment here for an example of how some filtering could plausibly be systematically selecting against traits we value).
Introducing a second reason (more fully fleshed out in the remainder of this comment)
However, the main reason I think that trying to leverage our media attention, trying to do broad outreach well and trying to be really welcoming at all our shop-fronts might be important to prioritise (even if it might sometimes mean community builders will have to spend less time focusing on the people who seem most promising) is not the median outcome of this strategy.
Trying to nail campground effects while simultaneously trying to keep effective altruism about effective altruism is really, really, really hard. However, we're not trying to optimize for the median outcome for the effective altruism community, we're trying to maximize the effective altruism community's expected impact. This is why, despite the fact that "dilution" effects seem like a huge risk, we probably should just aim for the positive tail scenario, because that is where our biggest positive impact might be anyway (and also aim to minimize the risks of negative tail scenarios, because that is also going to be a big factor in our overall expected impact).
"Median" outreach work might be important to increase our chances of a positive "tail" impact of the effective altruism community as a whole
It is okay, in most worlds, for the effective altruism community to have very little impact in the end.
We're not actually trying to guarantee some level of impact in every possible world. We're trying to maximize the effective altruism movement's expected impact.
We're not aiming for a "median" effective altruism community; we're trying to maximize our expected impact (so it's okay if we risk having no impact, if that is what we need to do to make positive tails possible or reduce the risk of extreme negative tail outcomes of our work).
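The "maximize expected impact, not median impact" point can be made concrete with a toy calculation. The numbers below are entirely made up for illustration: a "strategy B" whose median world has zero impact can still dominate a guaranteed-modest-impact "strategy A" on expected value, because the rare positive tail carries all the weight.

```python
def expected_impact(outcomes):
    """Expected value of a list of (probability_percent, impact) pairs.

    Probabilities are given as integer percents (summing to 100) so the
    arithmetic stays exact.
    """
    return sum(p * x for p, x in outcomes) / 100

# Hypothetical strategies, numbers chosen purely for illustration:
strategy_a = [(100, 1)]            # guaranteed modest impact of 1
strategy_b = [(99, 0), (1, 1000)]  # 99% of worlds: nothing; 1% tail: impact 1000

print(expected_impact(strategy_a))  # 1.0
print(expected_impact(strategy_b))  # 10.0 -- the median world for B is zero
                                    # impact, yet its expected impact is 10x A's
```

The same structure applies in reverse to negative tails: a small probability of a very large negative outcome can dominate the expectation, which is why reducing negative-tail risk matters even if it barely changes the median world.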
Increasing the chances of the positive tails of the effective altruism movement
I think the positive tail impacts are in the worlds where we've mastered the synergies between "tent" strategies and "campground" strategies: worlds where we find ways of keeping the "tent" on point while still making use of our power to spread ideas to a very large number of people (even if the ideas we spread to a much larger audience are obviously going to be lower fidelity, we can still put a tonne of effort into working out which lower fidelity ideas are the best ones to spread to make the long-term future go really well).
Avoiding the negative tail impacts of the effective altruism movement
This second thought makes me very sad, but I think it is worth saying. I'm not confident in any of this, because I don't like thinking about it much (it is not fun), so these thoughts are probably a lot less developed than my "happier", more optimistic thoughts about the effective altruism community.
I have a strong intuition that more campground strategies reduce the risk of negative tail impacts of the effective altruism movement (though I wish I didn't have this intuition, and I hope someone is able to convince me that this gut feeling is unfounded, because I love the effective altruism movement).
Even if campground strategies make it more likely that the effective altruism movement has no impact, it seems completely plausible to me that that might still be a good thing.
A small and weird "cabal" effective altruism, with a lot of power and a lot of money, makes people feel uncomfortable for good reason. There are selection effects, but history is littered with small groups of powerful people who genuinely believed they were making the world a better place and seem, in retrospect, to have done a lot more harm than good.
More people understanding what we're saying and why makes it more likely that smart people outside our echo chamber can push back when we're wrong. It's a nice safety harness to prevent very bad outcomes.
It is also plausible to me that a "tent" effective altruism movement might be more likely to achieve both its 95th-percentile-and-above very positive impact and its 5th-percentile-and-below very negative impact.
Effective altruism feels like a rocket right now, and rockets aren't very stable. When you do big, ambitious things in an unstable way, it intuitively feels easy to have a very big impact without being able to easily control the sign of that impact: there is a chance it is very positive or very negative.
I find it plausible that, if you're going to have a huge impact on the world, having a big negative impact is easier than having a big positive impact by a wide margin (doing good is just darn hard and there are no slam dunk answers[2]).[3] Even though we're thinking hard about how to make it good, I think it might just be really easy to make it bad (e.g. by bringing attention to the alignment problem, we might be increasing excitement and interest in the plausibility of AGI, and might therefore get to AGI faster than if no-one talked about alignment).
I might post a high-level post before I have finished teasing out my best guesses on what the implications might be, because my views change so fast that it is really hard to ever finish writing down what I think, and it is possibly still better for me to share some of my thoughts publicly than to share none of them. I often feel like I'm bouncing around like a yo-yo, and I'm hoping that at some point my thoughts will settle on an "equilibrium" view instead of continuously thinking up considerations that cause me to completely flip my opinion (and leave me saying inconsistent things left, right and center because I just don't know what I think quite yet 🙂). I have made a commitment bet with a friend to post something as a top-level post within two weeks, so I will have to either give a snapshot view then, settle on a view, or lose $$ (the only reason I finished the top-level short-form comment that started this discussion was because of a different bet with a different friend 🤷). At the very least, I hope that I can come up with a more wholesome (but still absolutely true) framing of a lot of the considerations I've outlined in this post as I think it over more.
I think it was Ben Garfinkel who said the "no slam dunk answers" thing in a post on suspicious convergence when it comes to arguments about AI risk, but I was too lazy to chase it up to link it (edit: I did go and try to chase up the link; I think my memory had merged/mixed together this post by Gregory Lewis on suspicious convergence and this transcript from a talk by Ben Garfinkel. I'm leaving both links in this footnote because Gregory Lewis' post is so good that I'll use any excuse to leave a link to it wherever I can, even though it wasn't actually relevant to the "no slam dunk answers" quote.)
Maybe I just took my reading of HPMOR too literally :P