I enjoyed this article and found it useful, thanks for writing it! I think it could be interesting to think about how these ideas might apply to situations like running a local EA group, where it’s not just discussing EA when it comes up organically.
Isaac Dunn
I agree that there may be cases of “complex” (i.e. non-symmetric) cluelessness that are nevertheless resiliently uncertain, as you point out.
My interpretation of @Gregory_Lewis’ view was that rather than looking mainly at whether the cluelessness is “simple” or “complex”, we should look for the important cases of cluelessness where we can make some progress. These will all be “complex”, but not all “complex” cases are tractable.
I really like this framing, because it feels more useful for making decisions. The thing that lets us safely ignore a case of “simple” cluelessness isn’t the symmetry in itself, but the intractability of making progress. I think I agree with the conclusion that we ought to be prioritising the difficult task of better understanding the long-run consequences of our actions, in the ways that are tractable.
Thanks for writing this! I especially enjoyed the part where you described how donating has given you a sense of purpose and self-worth when things have been difficult for you—I can relate.
I think I have to disagree with your last point, though, because it seems to me that whenever we make a decision to spend resources, we are making a trade-off. A donation to an effective global health charity could in fact have gone to a different cause.
I don’t think that diminishes how worthwhile any donation is, but I think that the spirit of effective altruism is to keep asking ourselves whether there’s something else we could do that would be even better. What do you think?
I agree. How about just a right arrow? (🡲)
I suspect that many people don’t post on the forum because they’re worried about their post being poorly received and damaging their reputation in the EA community.
I believe this because I feel this way myself, because I’ve heard other people around me worrying a lot about posting to the forum, because Will MacAskill spoke on the 80,000 Hours podcast of being anxious about his reputation being damaged after posting on the forum, and because of the existence of Aaron Gertler’s talk “Why you (yes, you) should post on the EA Forum”.
Perhaps, by default, new posts could be anonymous until a certain karma threshold (say 30 karma) is met. After that post meets the karma threshold, the true author of the post could become visible.
That way, authors could post knowing that their reputation wouldn’t be damaged if their post wasn’t well received, but that they would get the credit if the post was well received.
I’d expect this to increase the number of posts (both good and bad) from hesitant new users, and I think that the increase in the number of mediocre new posts would be a cost worth paying. It’s good for people to contribute and feel valued for their contribution, especially if it encourages them to make more valuable contributions in the future.
I think it’d be important that the anonymous-until-threshold was the default (i.e. opt out), so that people didn’t feel embarrassed about using it.
Thanks for sharing this! I enjoyed the comments about picking the right scope for a project. I also liked the general nudge towards being transparent about reasoning and uncertainty rather than overstating how much evidence supports particular conclusions.
I think that it probably is worth the trouble to be more encouraging. I’d consider being specific about some things that have been done well, beginning and ending the feedback with encouraging words, and taking a final pass to word things in a way that implies that you’re glad they’ve done this work and you’re rooting for them. That said, it definitely seems much better to give unpolished feedback rather than no feedback, so if it’d be too high a burden then I’d go ahead with potentially discouraging feedback.
I agree that the EA community does try to be welcoming to new members, but I suspect that doing even more would probably be good, to counteract the shame and guilt that I think many people feel about not being good enough for a community that places such a high value on success.
I agree that it’s valuable to give honest feedback if you think that someone should consider trying something else, rather than just giving blithely positive feedback that might cause them to continue pursuing something that’s a bad fit.
It’s probably worth being especially thoughtful about how such feedback is framed. For example, feedback of type a) might seem more sincerely encouraging if it’s made constructive: rather than “it’s probably bad for you to do this kind of work”, saying “I actually think that you might not be as well suited to this kind of work as others in the EA community because others are better at [specific thing], but from [strength X] and [strength Y] that I’ve noticed, I wonder if you’ve considered [type of work T] or [type of work S]?” (I know that you were paraphrasing and wouldn’t say those actual phrases to people)
For feedback of type b), my gut reaction is that basically no one should be given feedback of that type because of the risk if you’re wrong as you say, but also because of the risk of exacerbating feelings that only sufficiently impressive people are welcome in EA. I guess it depends whether you mean “you’re a valued member of this community, but not competitive for a job in the community” or “you’re not good enough to be a member of this community”. I agree that some people should be given the first type of feedback if you’re sure enough, but I don’t think anyone should be told they’re not good enough to join the community.
Here’s a similar but slightly different suggestion: rather than there being one definitive regional organisation for each area, we just encourage the creation of more organisations that are in between local groups and large funders.
Some examples of these organisations:
A team that runs and is responsible for a few local groups (e.g. a successful local group expands locally)
An organisation that centralises certain specific group functions (e.g. marketing, organising talks, introductory programs), so that local groups can outsource
A team that specialises in seeding new groups, and provides significant help and support to new organisers
A national organisation that tries to coordinate and encourage collaboration across local groups, including keeping an eye on groups that are at risk of disappearing
Some reasons this could be good:
Local groups wouldn’t automatically be reliant on the one designated regional organisation
A designated regional organisation that is doing poorly is a bigger problem, because it’s more difficult to replace
Teams can specialise in the thing that they are good at, rather than having responsibility for every aspect of all local groups in their area
Teams can work at whatever regional scope makes most sense (could be just a few groups, could be global) - overlap in geographical scope between different kinds of these organisations is often fine, whereas every geographical location would need exactly one designated regional organisation
Some reasons that designated regional organisations could be better:
Who has responsibility for what would be clearer, so fewer things would fall through the cracks
The role of a regional organisation would be clearly defined and the same across regions, making evaluation easier, making knowledge sharing between regions easier, and making it more straightforward to start or join an organisation (i.e. a clearer career progression pipeline)
A regional organisation may be well placed to decide what kinds of functions to prioritise to best support its local region
These are quick thoughts, and I’m likely missing important considerations. I’m not sure which approach seems better, and they aren’t mutually exclusive, but I thought I’d share the thought.
A related example is multi-academy trusts in the UK school system, which are essentially organisations that run multiple schools. Schools can choose to join an existing trust, and trusts can start new schools. Rather than the central government funding each school individually, it funds trusts, which have responsibility for the schools under their control.
Thanks for the brilliant post, by the way, I’m really glad you wrote it!
I think this is a great question to ask.
As it happens, I think it probably is an effective use of money, in short because it’s an investment in the human capital of the community, which is probably one of the main bottlenecks to impact at the moment. That’s because there’s a large amount of money committed to EA compared to the number of people in the community working out the best ways to spend it. It’s true that there are global health charities that could absorb a lot more money, but there’s interest in finding even more impactful ways to spend money!
ETA: looks like Stefan got there slightly quicker with a very similar answer!
Thanks for writing this! I hadn’t thought about high engagement levels being more stable than medium or low ones, and that seems right to me. I agree that having people spend time with highly-engaged people is likely to be a good way to make them more engaged. And I definitely agree with your points about fidelity and epistemics being particularly important.
I’m uncertain about some of your suggestions, though. You suggest inviting a few “promising” people to socials where most people are highly-engaged. I worry that doing this could result in a “cool kids club” in-group vibe, where people who haven’t been invited to join might not feel welcome in or good enough for EA. There are benefits to this—it might make people more strongly desire to join the highly-engaged group—but it’s not obvious to me that it’s worth the cost of exclusivity.
Besides the “why am I not invited” cost, there’s another cost that you point out: only adding a few new people limits how quickly the group can grow. I agree that your approach would fairly reliably create new HEAs, but my guess is we’re early enough in working out how to grow EA that it’s worth looking for a more scalable approach that (a) isn’t exclusive and (b) has a better HEA to new person ratio. For example, 1:1 mentorship is somewhere in between your suggestion (several HEAs to each new person) and so-called “fellowships” (several new people to each HEA).
I want to add to the chorus of commenters praising this post—thank you for writing it! I think it’s really cool that you tried something you thought could have a high upside, actually checked if it worked, and shared your findings.
I agree that it’s well worth acknowledging that many of us have parts of ourselves that want social validation, and that the action that gets you the most social approval in the EA community is often not the same as the action that is best for the world.
I also think it’s very possible to believe that your main motivation for doing something is impact, when your true motivation is actually that people in the community will think more highly of you. [1]
Here are some quick ideas about how we might try to prevent our desire for social validation from reducing our impact:
We could acknowledge our need for social validation, and try to meet it in other areas of our lives, so that we care less about getting it from people in the EA community through appearing to have an impact, freeing us up to focus on actually having an impact.
We could strike a compromise between the part of us that wants social validation from the EA community, and the part of us that wants to have an impact. For example, we might allow ourselves to spend some effort trying to get validation (e.g. writing forum posts, building our networks, achieving positions of status in the community), in the full knowledge that they’re mainly useful in satisfying our need for validation so that our main efforts (e.g. our full-time work) can be focused on what we think is directly most impactful.
We could spend time emotionally connecting with whatever drives us to help others, reflecting on the reality that others’ wellbeing really does depend on our actions, proactively noticing situations where we have a choice between more validation or more impact, and being intentional in choosing what we think is overall best.
We might try to align our parts by noticing that although people might be somewhat impressed with our short-term efforts to impress them, in the long run we will probably get even more social status in the EA community if we skill up and achieve something that is genuinely valuable.
Relatedly, we might think about who we would ideally like to impress. Perhaps we could impress some people by simply doing whatever is cool in the EA movement right now. But the people whose approval we might value most will be less impressed by that, and more impressed by us actually being strategic about how best to have an impact. In other words, we might stop considering people pursuing today’s hot topic as necessarily cool, and start thinking about people who genuinely pursue impact as the cool group we aspire to join.
[1] I haven’t read it, but I think the premise of The Elephant in the Brain is that self-deception like this is in our own interests, because we can truthfully claim to have a virtuous motivation even if that’s not the case.
I like the idea of visualising important things to make them feel more salient, and it’s fun that this is linked to predictions on Metaculus! I also liked the visualisation of other predictions once I found them. Thanks for making it.
You mention that the purpose is to give doomy people a sense that there is hope and we can take action to survive. I would be very interested for you to find some of these people and do user interviews or similar to understand whether it has the effect you hope! And you might learn how to improve it for that goal. Have you done anything like this yet?
Another thought is that the title “x-risk tree” is slightly misleading:
The two things I think it visualises are drops in global population of 10% or 95% before 2100
So it doesn’t visualise the risk of extinction (although it does provide an upper bound)
It also doesn’t visualise existential risk (x-risk), which could be much higher than extinction risk, so the upper bound doesn’t hold
How about replacing the title with something like “How likely is a global catastrophe in our lifetimes?”
Thank you for sharing this post—it’s well written, well structured, relevant and concise. (And I agree with the conclusion, which I’m sure makes me like it more!)
Even though this post has no relevance for me, I really enjoyed reading it—mainly because it was well written, and partly because I was curious to hear about your experiences and takeaways. Thanks for writing it!
It seems to me that there’s a difference between financial investment and EA bets: returns on financial bets can then be invested again, whereas returns on most EA bets are not more resources for the EA movement but are direct positive impact that helps our ultimate beneficiaries. So we can’t get compounding returns from these bets.
So, except for when we’re making bets to grow the resources of the EA movement, I don’t think I agree that EA making correlated bets is bad in itself—we just want the highest EV bets.
Does that seem right to you?
I’m glad you’re running this! It seems valuable to have reading programs focused on such an important question.
But I wonder whether a better goal for the program might be to help people to engage with the ideas and figure out their views either way, rather than to increase people’s confidence that longtermism is correct[1].
I think that this is better even if your motivation is to increase the number of people who agree with longtermism—convincing people of specific conclusions seems worse for community epistemics, and might seem offputtingly dogmatic to some people.
[1] I understood “we hope to decrease the uncertainty about your belief in longtermism” to mean “we hope to increase your confidence in longtermism being correct”, although it is a bit ambiguous.
Sounds great!
I second what Alex has said about this discussion being very valuable pushback against ideas that have got some traction—at the moment I think that strong longtermism seems right, but it’s important to know if I’m mistaken! So thank you for writing the post & taking some time to engage in the comments.
On this specific question, I have either misunderstood your argument or think it might be mistaken. I think your argument is “even if we assume that the life of the universe is finite, there are still infinitely many possible futures—for example, the infinite different possible universes where someone shouts a different natural number”.
But I think this is mistaken, because the universe will end before you finish shouting most natural numbers. In fact, there would only be finitely many natural numbers you could finish shouting before the universe ends, so this doesn’t show there are infinitely many possible universes. (Of course, there might be other arguments for infinite possible futures.)
More generally, I think I agree with Owen’s point that if we make the (strong) assumptions that the universe is finite in duration, that it has finitely many possible states, and that time can be quantised, then it follows that there are only finitely many possible universes, so we can in principle compute expected value.
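To make the in-principle point concrete: with only finitely many possible futures, expected value is just a finite probability-weighted sum. This is a minimal sketch; the outcome labels, probabilities, and values are hypothetical numbers invented for illustration, not anyone’s actual estimates.

```python
# Hypothetical finite set of possible futures: label -> (probability, value).
# Probabilities sum to 1; values are on an arbitrary made-up scale.
outcomes = {
    "flourishing": (0.6, 100.0),
    "stagnation": (0.3, 10.0),
    "catastrophe": (0.1, -50.0),
}

# With finitely many outcomes, expected value is a straightforward finite sum.
expected_value = sum(p * v for p, v in outcomes.values())
```

The remaining question, as below, is whether this kind of calculation is appropriate in practice, given that the probabilities would have to be subjective.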
So I’d be especially interested if you have any thoughts on whether expected value is in practice an inappropriate tool to use (e.g. with subjective probabilities) even assuming in principle it is computable. For example, I’d love to hear when (if at all) you think we should use expected value reasoning, and how we should make decisions when we shouldn’t.