Thanks Luke 🌞
Yeah, I definitely think there are some multiplicative effects.
Now that I’m teasing out what I think in more detail, I’m starting to find the “median” and “tails” distinction, while useful, still a bit too rough for deciding whether we should do more or less of any particular strategy targeted at either one (which makes me hesitant to immediately turn these thoughts into a top-level forum post until I’ve teased out my best guesses on how we should change our behaviour if we think we live in a “multiplicative” world).[1]
Here are some more of the considerations/claims (that I’m not all that confident in) that are swirling around in my head at the moment 😊.
tl;dr:
High-fidelity communication is really challenging (and doubly so in broad outreach efforts).
However, broad outreach might thicken the positive tail of the effective altruism movement’s impact distribution and thin the negative one, even if the median outcome is a “diluted” effective altruism community.
Since we are trying to maximize the effective altruism community’s expected impact, and most of that expected impact is likely to come from the tails, we probably shouldn’t care all that much about the median outcome anyway.
High-fidelity communication about effective altruism is challenging (and even more difficult when we do broader outreach/try to be welcoming to a wider range of people)
I do think it is a huge challenge to preserve the effective altruism community’s dedication to:
caring about, at least, everyone alive today; and
transparent reasoning, a scout mindset and more generally putting a tonne of effort into finding out what is true even if it is really inconvenient.
I do think really narrow targeting might be one of the best tools we have to maintain those things.
Some reasons why we might want to de-emphasize filtering in existing local groups:
First reason
One reason focusing on this logic can sometimes be counter-productive is that some filtering seems to just miss the mark (see my comment here for an example of how some filtering could plausibly be systematically selecting against traits we value).
Introducing a second reason (fleshed out more fully in the remainder of this comment)
However, the main reason I think that leveraging our media attention, doing broad outreach well and being really welcoming at all our shop-fronts might be important to prioritise (even if it sometimes means community builders have to spend less time focusing on the people who seem most promising) is not the median outcome of this strategy.
Trying to nail campground effects while simultaneously keeping effective altruism about effective altruism is really, really, really hard. However, we’re not trying to optimize for the median outcome for the effective altruism community; we’re trying to maximize its expected impact. This is why, even though “dilution” effects seem like a huge risk, we should probably just aim for the positive tail scenario, because that is where our biggest positive impact might be anyway (and also aim to minimize the risk of negative tail scenarios, because that is also going to be a big factor in our overall expected impact).
“Median” outreach work might be important to increase our chances of a positive “tail” impact of the effective altruism community as a whole
It is okay if, in most worlds, the effective altruism community ends up having very little impact.
We’re not actually trying to guarantee some level of impact in every possible world. We’re trying to maximize the effective altruism movement’s expected impact.
We’re not aiming for a “median” effective altruism community; we’re trying to maximize our expected impact (so it’s okay to risk having no impact if that is what we need to do to make positive tails possible or to reduce the risk of extreme negative tail outcomes of our work).
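To make the “most of the impact is at the tails” claim a bit more concrete, here is a toy expected value calculation (all the probabilities and impact numbers below are completely made up for illustration; I’m definitely not claiming these are our actual odds):

$$
\begin{aligned}
\mathbb{E}[\text{impact}] &= p_{\text{median}}\,x_{\text{median}} + p_{\text{+tail}}\,x_{\text{+tail}} + p_{\text{-tail}}\,x_{\text{-tail}} \\
&= 0.90 \times 1 + 0.05 \times 100 + 0.05 \times (-40) \\
&= 0.9 + 5 - 2 = 3.9.
\end{aligned}
$$

Even though the median outcome happens 90% of the time in this toy model, it contributes less than a quarter of the expected impact. A strategy that made the median outcome worse (say, a more “diluted” community with roughly zero impact) but shifted the positive tail probability from 0.05 to 0.08 and the negative tail probability from 0.05 to 0.03 would still come out well ahead: $0.89 \times 0 + 0.08 \times 100 + 0.03 \times (-40) = 6.8$.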
Increasing the chances of the positive tails of the effective altruism movement
I think the positive tail impacts are in the worlds where we’ve mastered the synergies between “tent” strategies and “campground” strategies: worlds where we find ways of keeping the “tent” on point while still making use of our power to spread ideas to a very large number of people (the ideas we spread to a much larger audience are obviously going to be lower fidelity, but we can still put a tonne of effort into working out which lower-fidelity ideas are the best ones to spread to make the long-term future go really well).
Avoiding the negative tail impacts of the effective altruism movement
This second thought makes me very sad, but I think it is worth saying. I’m not confident in any of this, partly because it isn’t fun to think about so I haven’t thought about it much; these thoughts are therefore probably a lot less developed than my “happier”, more optimistic thoughts about the effective altruism community.
I have a strong intuition that more campground strategies reduce the risk of negative tail impacts of the effective altruism movement (though I wish I didn’t have this intuition and I hope someone is able to convince me that this gut feeling is unfounded because I love the effective altruism movement).
Even if campground strategies make it more likely that the effective altruism movement has no impact, it seems completely plausible to me that that might still be a good thing.
A small and weird “cabal” effective altruism, with a lot of power and a lot of money, makes people feel uncomfortable for good reason. There are selection effects in which groups get remembered, but history is littered with small groups of powerful people who genuinely believed they were making the world a better place and who seem, in retrospect, to have done a lot more harm than good.
More people understanding what we’re saying and why makes it more likely that smart people outside our echo chamber can push back when we’re wrong. It’s a nice safety harness to prevent very bad outcomes.
It is also plausible to me that a “tent” effective altruism movement might be more likely to reach both extremes: its 95th-percentile-and-above very positive impact as well as its 5th-percentile-and-below very negative impact (there’s a toy sketch of this at the end of this section).
Effective altruism feels like a rocket right now, and rockets aren’t very stable. When you do big, ambitious things in an unstable way, it intuitively feels easy to have a very big impact without being able to easily control the sign of that impact: there is a chance it is very positive and a chance it is very negative.
I find it plausible that, if you’re going to have a huge impact on the world, having a big negative impact is easier than having a big positive impact by a wide margin (doing good is just darn hard and there are no slam dunk answers[2]).[3] Even though we’re thinking hard about how to make our impact good, I think it might just be really easy to make it bad (e.g. by bringing attention to the alignment problem, we might be increasing excitement about and interest in the plausibility of AGI, and therefore getting us to AGI faster than if no one had talked about alignment).
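Here is the toy sketch of the “rockets aren’t very stable” point I mentioned above (again with made-up numbers, just to show the shape of the claim): if you model the movement’s eventual impact as normally distributed around zero, then scaling up the spread fattens both tails at once, without changing the average at all:

$$
X \sim \mathcal{N}(0, 1^2):\ P(X > 2) = P(X < -2) \approx 0.02, \qquad
X \sim \mathcal{N}(0, 3^2):\ P(X > 2) = P(X < -2) \approx 0.25.
$$

So a higher-variance (“rocket-like”) movement is simultaneously much more likely to land in the very-good region and in the very-bad region, which is the sense in which I mean a “tent” movement might hit both its 95th-percentile and its 5th-percentile outcomes more often.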
I might post a high-level post before I have finished teasing out my best guesses on what the implications might be, because my views change so fast that it is really hard to ever finish writing down what I think, and it is possibly still better for me to share some of my thoughts publicly than to share none of them. I often feel like I’m bouncing around like a yo-yo, and I’m hoping that at some point my thoughts will settle down on an “equilibrium” view instead of me continuously thinking up considerations that cause me to completely flip my opinion (and leave me saying inconsistent things left, right and center because I just don’t know what I think quite yet 😝🤣😅). I have made a commitment bet with a friend to post something as a top-level post within two weeks, so I will have to either give a snapshot view then, settle on a view, or lose $$ (the only reason I finished the top-level short-form comment that started this discussion was a different bet with a different friend 🙃😶🤷🏼♀️). At the very least, I hope that I can come up with a more wholesome (but still absolutely true) framing of a lot of the considerations I’ve outlined in this comment as I think it over more.
[2] I think it was Ben Garfinkel who said the “no slam dunk answers” thing in his post on suspicious convergence when it comes to arguments about AI risk, but I’m too lazy to chase it up to link it (edit: I did go and try to chase up the link to this; I think my memory had merged/mixed together this post by Gregory Lewis on suspicious convergence and this transcript of a talk by Ben Garfinkel. I’m leaving both links in this footnote because Gregory Lewis’ post is so good that I’ll use any excuse to link to it wherever I can, even though it wasn’t actually relevant to the “no slam dunk answers” quote).
[3] Maybe I just took my reading of HPMOR too literally :P