EDIT: Witness the train-wreck below of me half-coherently figuring out what I maybe think as I go :P[1]
Yeah, I guess an intuition I have is that for some decisions we can gain a lot of ground by focusing our efforts where we are more likely to come across people who can create tail impacts over their lifetimes (e.g. by prioritising creating effective altruism groups in places with lots of people who have a pre-existing track record of achieving the things they set out to achieve). However, in some places, more marginal effort on targeting the people who could become tails hits sharply diminishing returns and comes with costs that might not be worth it. For example, once you have set up a group in a place full of people with exceptional track records of achieving the things they set their minds to, trying to figure out how much “tail potential” each person has from there can completely put off people who might have been tail potential if they had been guided in a helpful way.
[1] This entire thread is not actually recommended reading, but I’m keeping it here because I haven’t yet decided whether I endorse it, and I don’t see much disutility in leaving it up in the meantime while I think about this more.
I’m also not sure, once we’re already targeting people who have track records of doing the things they’ve put their minds to (which obviously won’t be a perfect proxy for tail potential, but it often seems better than no prioritisation of where the marginal group should go), how good we are at assessing someone’s “tail potential”. This seems especially hard because there are going to be big marginal returns to finding people who have a different comparative advantage to the existing community (if it is possible to communicate the key ideas and thinking to them with high fidelity), and those people will have more of an inferential gap to cross before communication is efficient enough for us to tell how smart they are or how much potential they have.
This impression comes from knowing people whose language I speak (metaphorically), where I also speak EA (so I can absorb a lot of EA content and translate it in a way they can understand), and who are pretty great at reasoning transparency and at updating in conversations with people with whom they have pre-established trust (which means that when miscommunications inevitably happen, the base assumption is still that I’m arguing in good faith). They can’t really demonstrate that reasoning transparency if the person they are talking to doesn’t understand their use of language or their worldview well enough to see that it is actually pretty precise, clear, and transparent once you understand what they mean by the words they use.
(I mainly have this experience with people who didn’t study maths or economics or anything that STEM-y, but with whom I share other “languages” that let me still cross inferential gaps reasonably efficiently.)
This is an existence proof for these kinds of people. It doesn’t really tell us all that much about what proportion of people without the backgrounds that make the EA language barrier a lot smaller (like philosophy, economics, and STEM) are actually good at the thinking processes we value very highly, which are taught a lot in STEM subjects.
I could have had this experience with people I know and it could still be the case that “treating people with a huge amount of charity, because some people might have the potential for a tail impact even if we’d never guess it when we first meet them” isn’t actually worth it overall. I’ve got a biased sample, but I don’t think it’s irrational that it informs my inside view, even though I’m aware the sample is likely to be heavily biased (I will only have built a common language and trust with people if there is something that fuels our friendship; the people I want to be friends with are not random! They are people who make me feel understood, or who say things I find thought-provoking, or who have a number of other qualities that make them a naturally very cherry-picked pool of people).
Basically, my current best guess is that once your group is at a place where pretty much everyone has demonstrated they are a tail person in one way or another (whether because of their personal traits or because of their fortunate circumstances), being really open-minded and patient with people will get us more people with the potential for a positive tail-end impact engaging with us enough for that potential to have a great shot at being realised.
EDIT: I copied and pasted this comment as a direct reply to Chris and then edited it to make more sense than it did the first time I wrote it (and to make it much nicer than my off-the-cuff, figuring-out-what-I-thought-as-I-went stream of consciousness). I left this here anyway, partly as context for the later comments and partly because I think it’s kind of fun to have a record (even if just for me) of how my thoughts develop as I write: teasing out what sounds plausibly true once I’ve written it, what doesn’t quite hit the mark of the intuition I’m attempting to articulate, and what ends up seeming obviously false once I’ve written it up.
I am not arguing that we should not target exceptional people. I think exceptionally smart and caring people are much better to spend a lot of one-on-one time with than people who care only an average amount about helping others and for whom there is a lot of evidence that they don’t yet have a track record of accomplishing the things they set their minds to.
My guess is that we sometimes filter too hard, too early, to capture the tail end of the effective altruism community’s impact.
It is easy for a person to form an accurate impression of another person who is similar to them. It is much harder to quickly form an accurate impression of someone who is really different. But because of diminishing returns, it seems much more valuable on the margin to get people who are exceptional in a different way from how the existing community tends to be exceptional than to get another person who thinks the same way and has the same skills.
(I am not confident I will reflectively endorse much of the above 24 hours from now; I’m just sharing my off-the-cuff vibes, which might solidify into more or less confidence when I let these thoughts sit for a bit more time.)
If my confidence in any of these claims substantially increases or decreases in the next few days, I might come back and clarify that (but if doing so becomes a bit of an ugh field, I’m not going to prioritise de-ughing it, because there are other ugh fields higher on my list 😝).
I think there’s a lot of value in people reaching out to people they know (this seems undervalued in EA, though maybe that’s intentional, since evangelism can turn people off). This doesn’t seem to trade off too substantially against more formal movement-building methods, which should probably filter more on which groups are going to be most impactful.
In terms of expanding the range of people and skills in EA, that seems to be happening over time: take, for example, the EA Blog Prize (https://effectiveideas.org/) or the increased focus on PAs (https://pineappleoperations.org/). I have no doubt that there are still many useful skills we’re missing, but there’s a decent chance that funding would be available if there were a decent team to work on the project.
Makes sense
I suspect that some of the ways we filter at existing groups’ events are good, and we should keep doing them.
I also suspect that some of the strategies and tendencies we have when filtering at the group level are counterproductive to finding and keeping high-potential people.
For example, filtering too fast based on how quickly someone seems to “get” longtermism might filter in the people who are more willing to defer, and who therefore seem like they get it more than they do.
It might filter out the people who are really trying to think it through: the people who seem more resistant to the ideas, or who are more willing to voice half-formed thoughts that haven’t yet developed into anything that deep (because thinking through all the different considerations to form an inside view takes a lot of time and involves voicing a lot of “dead-end” thoughts). Those higher-value people might systematically be classed as “less tractable” or “less smart” when, in fact, it is sometimes[1] that we have just forgotten that people who are thinking about these ideas really seriously, and who are smart enough to possibly have a tail-end impact, are going to say things that don’t sound smart as they navigate what they think. The further someone is from our echo chamber, the stronger I expect this effect to be.
Obviously I don’t know how most groups filter at the group level; this is so dependent on the particular community organisers (though there are maybe some cultural commonalities across the movement, which is why I find it tempting to make broad, sweeping generalisations that might not hold in many places).
[1] But obviously not always. (I don’t actually have a clear idea of how big a deal this issue is; I’m just trying to untangle my various intuitions so I can more easily scrutinise whether any of them contain a grain of truth on closer inspection.)
Hmm… Some really interesting thoughts. I generally try to determine whether people are actually making considered counter-arguments or just repeating clichés, but I take your point that a willingness to voice half-formed thoughts can cause others to assume you’re stupid.
I guess in terms of outreach it makes sense to cultivate a sense of practical wisdom, so that you can determine when to patiently continue a conversation and when to politely and strategically withdraw to save energy and avoid wasting time. This won’t be perfect and it’s subject to the biases you mentioned, but it’s really the best option available.
Hmm, I’m not sure I agree with the claim that “it’s really the best option available”, even though I don’t have a better solution already thought up. At the very least, I think that how to foster this culture might be worth a lot of strategic thought.
Even if there is a decent chance we’d end up concluding there isn’t all that much we can do, I think the payoff from finding a good way to manage this might be big enough to make up for all the possible worlds where the work ends up being a dead end.
Well, if you think of anything, let me know.
👍🏼
Oh, here’s another excellent example: the EA Writing Retreat.
😍
Yeah, this is happening! I also think it helps a lot that Sam BF has a really broad “spectrum of ideas” take on longtermism, which is really cool!