This was great, thank you. I’ve been asking people about their reasons for working on AI safety rather than other world-improving things, assuming they want to maximize the good they do. Wonderful when people write it up without me having to ask!
One thing that would have made this post/your talk clearer (at least for me) is more detail on how you define ‘AGI’, since all the cruxes depend on it.
Thank you for defining AGI as something that can do regularly smart human things, and then asking the very important question of how expensive that AGI is. But what are those regularly smart human things? What fraction of them would be necessary (though that depends a lot on how you define ‘task’)?
I still feel very confused about a lot of things. My impression is that AI is much better than humans at quite a few narrow tasks, though this depends on the definition. If AI were suddenly much better than humans at half of all the tasks humans can do, but terrible at the rest, then that wouldn’t count as artificial ‘general’ intelligence under your definition(?), but it’s unclear to me whether it would be any less transformative, though again this depends a lot on the cost. Now that I think about it, I don’t think I understand how your definition of AGI differs from the result of whole-brain emulation, apart from the fact that they arrive there by different routes. I’m also not clear on whether you use the same definition as other people, whether they usually use the same one as each other, and how much all the other cruxes depend on exactly how you define AGI.
I’m fairly surprised by this response, this doesn’t match what I have read. The Human Fertilisation and Embryology Authority imposes a limit for sperm and egg donors to donate to a maximum of ten families in the UK, although there is no limit on how many children might be born to these ten families (I’m struggling to link, but google ‘HFEA ten family limit’). But realistically, they won’t all want to have three children.
I’m curious whether you have a source for the claim that 99% of prospective sperm donors in the UK get rejected? I’m much less confident about this, but this doesn’t line up with my impression. I also didn’t have the impression they were particularly picky about egg donors, unlike in the US.
But yes, it’s true for sperm and egg donors alike that in the UK they can be contacted once the offspring turns 18.
There are also multiple medical and genetic appointments required in advance. I am currently undergoing the process to become an egg donor in the UK (though there is a good chance that I will be rejected) and the process is quite involved. To some extent, this is also true for sperm donors.
Adding to what Khorton said, it depends a lot on where you set the bar for doing good that you consider worth doing, and on what you consider ‘doing good’ to be.
In the UK, there is an egg and sperm donor shortage, so there is some chance you will cause children to exist that wouldn’t have existed otherwise (instead of just ‘replacing’ children).
No, I haven’t. Given the number of upvotes Phil’s comment received (from which I conclude that a decent fraction of people do find arguments in this space demotivating, which is important to know), I will probably read up on it again. But I very rarely write top-level posts, and the probability of this investigation turning into one is negligible.
Through thinking about these comments, I did remember an EA Forum thread from 4 years ago in which ii) and iii) were argued about: https://forum.effectivealtruism.org/posts/ajPY6zxSFr3BbMsb5/are-givewell-top-charities-too-speculative
It’s worth reading the comment section in full. Turns out my position has been consistent for the past 4 years (though I should have remembered that thread!).
I’ve been involved in the community since 2012 - the changes seem drastic to me, both based on in-person interactions with dozens of people as well as changes in the online landscape (e.g. the EA Forum/EA Facebook groups).
But that is not in itself surprising. The EA community is on average older than when it started. Youth movements are known for becoming less enthusiastic and ambitious over time, when it turns out that changing the world is actually really, really hard.
A better test is: how motivated are EAs today who match the demographic of long-term EAs back when EA started? I have the impression they are much less motivated. It used to be common in e.g. Facebook groups to see people write about how motivating they found it to be around other EAs. This is much rarer than it used to be. I’ve met a few new-ish early-20s EAs, and I don’t think I can name a single one who is as enthusiastic as the average EA was in 2013.
I wonder whether the lack of new projects being started by young EAs is partially caused by this (though I am sure there are other causes).
To be clear, I don’t think there has been as drastic a change since 2018, which is I think when you started participating in the community.
In principle you only need i) and iii), that’s true, but in practice I think ii) is usually also required. Humans are fairly scope-insensitive, and I doubt we’d see low community morale from ordinary do-gooding actions being less good by a factor of two or three. As an example, historical GiveWell estimates of how much it costs to save a life with AMF have differed by about this much, and that didn’t seem to have much of an impact on community morale. Not so now.
Our crux seems to be that you assume cluelessness, or ideas in that space, are a large factor in producing low community morale around doing good. I must admit I was surprised by this response: I personally haven’t found these arguments particularly persuasive, and most people around me seem to feel similarly about them, if they are familiar with them at all.
Yep, I agree that if i) you personally buy into the long-termist thesis, ii) you expect the long-term effects of ordinary do-gooding actions to be bigger than the short-term effects, and iii) you expect these long-term effects to be negative, then it makes sense to be less enthusiastic about your ability to do good than before.
However, I doubt most people who feel like I described in the post fall into this category. As you said, you were uncertain about how common this feeling is.
Lots of people hear about the much bigger impact you can have by focussing on the far future. Significantly fewer are well versed in the specific details and practical implications of long-termism.
While I have heard about people believing ii) and iii), I haven’t seen either argument carefully written up anywhere. I’d assume this is true for lots of people. There has been a big push in the EA community to believe i), this has not been true for ii) and iii) as far as I can tell.
Thinking about this further, one concern I have with this post, as well as Ollie’s comment, is that people could come away unduly underrating the amount of good the average Westerner can actually do.
If you have a reasonably high salary or donate more than 10% (and assuming donations don’t become much less cost-effective) to AMF or similarly effective charities, you can save hundreds of lives over your lifetime. Saving one life via AMF is currently estimated to cost only around £2,500. Even if you only ever earn the average graduate salary and donate just 10%, you can still save dozens of lives.
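The arithmetic behind the “dozens of lives” claim can be sketched with some illustrative numbers. The £2,500 figure is the estimate cited above; the salary, donation rate, and career length are assumed placeholders, not claims about any actual graduate:

```python
# Back-of-the-envelope lifetime-giving arithmetic (illustrative assumptions).
cost_per_life = 2500    # £ per life saved via AMF (GiveWell-style estimate cited above)
avg_salary = 35000      # £/year, hypothetical average graduate salary
donation_rate = 0.10    # donating 10% of income
working_years = 40      # assumed career length

lifetime_donations = avg_salary * donation_rate * working_years  # £140,000
lives_saved = lifetime_donations / cost_per_life                 # 56 lives

print(f"£{lifetime_donations:,.0f} donated over a career → ~{lives_saved:.0f} lives saved")
```

With these placeholder inputs the result lands in the “dozens of lives” range; a higher salary or a larger donation rate pushes it toward hundreds.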
For reference, Oskar Schindler saved 1200 lives and is now famous for it worldwide.
My words at the funeral of someone who saved dozens or even hundreds of lives would be a lot more laudatory than what was said about Dorothea.
Great post. I also think we could work more on the root cause of people feeling like this. Perhaps the message should be: “Doing good and having an impact is not about you. Doing good is for the world, its people and other living beings.”
This might be a slightly silly suggestion and I’m not sure how best to implement it, but I think it might be useful to remind potential attendees that attending EAG is not obligatory just because you are part of the EA Community and/or care a lot about doing good well. I heard from a few people who weren’t particularly excited about attending EAG, but still did it because that’s ‘what you do as an EA’. It seems sad that these people take up spots from people who are actually keen on EAG itself.
It only occurred to me fairly late last year that attending EAG is actually entirely optional. On a side note, rising ticket prices did help me come to the realisation that I did not actually want to go (and therefore didn’t take up a spot from someone who was more keen on going).
I don’t feel like I get more value out of large conferences and I’d be curious about seeing more data on this question. For me, having more people at a conference makes it harder to physically find the people I actually want to talk to. They make up a smaller fraction of attendees and are more spread out. I have also had the impression that conversations at large conferences are shorter. In combination, I get much less value out of very large events compared to small or medium sized ones.
The event size was one of the main reasons I decided not to attend EAG London this year for the first time. It is too big for me to get sufficient value out of it.
Point 5 also has a negative impact on people who are trying to decide between different career options and would actually be happy to hear constructive criticism. I often feel I cannot trust others to be honest in their feedback when I’m deciding between career options, because they prefer to be ‘nice’.
Well, I’d assume this is because the LTFF team has more time available than the Meta Fund team. Plausibly largely driven by one volunteer who is very happy to spend a lot of time on the LTFF.
I’m part of the Meta Fund committee and was the person who decided against giving feedback in the aforementioned cases.
Unfortunately, giving good feedback is very difficult and something the Meta Fund committee currently isn’t reliably able to provide. I have provided candidates with feedback when I felt I could give easily understandable practical suggestions that would actually lead to the project being more likely to be funded in the future (or explained why this was not likely to happen) and I could do this without investing more than a couple of hours per applicant.
In practice, this sadly means applicants are not provided with feedback very often (I would need to check, but it might be around 20% of cases). I think giving good feedback is very valuable, but it is unfortunately currently beyond our resources.
Thank you Max, strong upvoted. It sounds like you put a lot of thought into making the programme run better than in previous years and succeeded.
I agree with this. I think the setup of the CEA Summer Fellowship programme is a bit concerning.
Adding to the points you mentioned (little supervision, doesn’t provide good evidence for career paths outside EA organisations), it is also an unpaid programme that does not by default result in job offers being made to the best performers.*
I’m worried that students will think this programme will advance their future career, while I doubt this is true in most cases. Instead they might just pay high opportunity costs.
*At least this was true 1-2 years ago, I’m not entirely sure what the most recent iteration of the programme looks like.
Apart from contraception, I would expect global health interventions to be the most helpful in reducing deaths of unborn humans. Miscarriages and stillbirths are a much bigger deal than abortions, and in developing countries there is still a lot of room for health interventions to help for little money.
I would be surprised if other interventions to reduce unborn deaths were very cost-effective, even if you have a worldview which values embryos as much as newborns.*
I’d just be curious to see a write-up, especially of the impact of contraception access. Unborn humans don’t feature in traditional QALY-based cost-effectiveness analyses, and I’d be interested in how the results would change if they were included, even at a discounted rate. I am not expecting this to be a promising area for most people interested in effective altruism.
*An exception might be if you value pre-implantation blastocysts as much as born humans, in which case your priority could well be to sterilize everyone. See also Toby Ord’s paper The Scourge.