I try to take my moral obligations seriously.
Please chat with me about donation opportunities.
I appreciate that you are putting out numbers and explaining the current research landscape, but I am missing clear actions.
The closest you come to proposing any is here:
We need a concerted effort that matches the gravity of the challenge. The best ML researchers in the world should be working on this! There should be billion-dollar, large-scale efforts with the scale and ambition of Operation Warp Speed or the moon landing or even OpenAI’s GPT-4 team itself working on this problem.[17] Right now, there’s too much fretting, too much idle talk, and way too little “let’s roll up our sleeves and actually solve this problem.”
But that still isn’t an action plan. Say you convince me, most of the EA Forum, and half of all university-educated professionals in your city that this is a big deal. What, concretely, should we do now?
Thank you Quintin, this was very helpful for me as a non-ML person to understand the other side of Eliezer’s arguments. As your post is quite dense and it took me a while to work through it, I summarised it for myself. I occasionally had to check the context of the original interview (transcript here) to fully parse the arguments made. I thought the summary might also be helpful to share with others (and let me know if I got anything wrong!):
Eliezer thinks current ML approaches won’t scale to AGI, though due to the influx of money an approach might be found. Quintin is more optimistic that current ML approaches can scale to AGI. As current alignment techniques are focused on current ML approaches, they won’t help if something different gets us to AGI. Quintin counters that current ML capability improvements usually integrate well with previously used alignment approaches, which suggests they will keep doing so.
Eliezer is concerned that AI will show more ‘truly general’ intelligence than humans. Humans are not equally general across tasks, since evolution made them specialise in what was important in the ancestral environment, so an AI might outclass humans at other tasks. Quintin points out that the learning process evolution gave humans is pretty general (albeit biased towards what was useful in the ancestral environment), just as the learning process current ML paradigms use is pretty general. Different ML systems don’t actually differ by using different paradigms but by being trained on different data, so he doesn’t expect such a pattern. He also points out that scale is what makes humans smarter, just as scale is a big driver of how good ML systems are, and that humans are not any more constrained by their architecture than ML systems; both can modify themselves to an extent.
Eliezer considers a superintelligence to be something that can beat all humans at all tasks. Quintin finds this too high a bar, as you can have transformative systems that still have deficits.
Eliezer points out that mindspace is large and humans occupy only a tiny corner of it, so we should expect many different potential AI designs, which poses danger. Quintin thinks we should expect AI systems to occupy only a small corner of mindspace, similar to humans. An intuition pump for this is that most high-dimensional real-life data actually occupies only a small part of the space. Again, so far in practice ML systems use pretty similar processes to humans. They are also trained on data similar to the data humans are “trained on”, since ML systems are mostly trained on human-written text, which makes them more similar to humans as well.
Eliezer thinks it’s hard to align AIs not only on human values but even on much simpler goals like duplicating a strawberry. Quintin again thinks this isn’t actually all that hard in principle, but it requires starting out with an AI with more general goals, which would then be modified to aim for strawberry duplication. He points out that human value formation follows multiple, more general goals rather than something as single-minded as strawberry duplication, so we should let ML systems follow such a process of value formation. This will also be a lot easier, as ML systems can follow actual examples of such value formation processes in the data, and there is a lot more data on humans following complex goals than single-minded ones.
Eliezer thinks that we won’t be able to align AIs merely by using gradient descent. This is because the primary example of using an optimisation process like gradient descent to align a system is evolution, and we know that evolution failed to align humans to pursue inclusive genetic fitness in the modern environment. In the ancestral environment, desires such as the desire for sex were sufficient, but now humans have figured out contraception; people do not desire to maximise their inclusive genetic fitness for its own sake. Quintin thinks this is because ancestral humans didn’t have a concept of inclusive genetic fitness, so evolution couldn’t optimise its rewards for improving inclusive genetic fitness directly. Modern AI systems, however, will have an understanding of human values, as they are directly exposed to them during training.
Eliezer makes the same point about humans desiring ice cream. Quintin counters again that there was no ice cream in the ancestral environment, so evolution couldn’t punish humans for desiring ice cream. Modern ML researchers, however, can punish ML systems for doing things they aren’t supposed to do, i.e. things which are misaligned with human values.
Eliezer thinks aligning AI with gradient descent will be even harder than it was for evolution to align humans with natural selection, as gradient descent is blunter and less simple. Quintin isn’t convinced by this, and also points out that evolution was optimising over the human learning process only indirectly via the genome, which is a lot messier, while ML researchers train the whole ML system directly, so the comparison doesn’t make much sense.
Eliezer is worried that ML systems trained to predict e.g. human preferences will look for opportunities to make their predictions easier. Quintin thinks ML systems aren’t optimising to do well at long-term prediction by making things easier to predict: predicting things is something ML systems do, not something they want to do. He compares this to humans, who also don’t explicitly prioritise e.g. being able to see very well in the long term.
Eliezer considers it important to employ a ‘security mindset’, a term from computer security, for AI alignment: ordinary paranoia is insufficient for keeping a system secure; some deeper skills are required. Quintin thinks ML is unlike computer security, just as most fields are unlike computer security, and we don’t use a security mindset in most fields, including childrearing, which to him seems like an important analogue to training ML systems. This is because ML systems during the training process don’t have adversaries to the same extent as computer systems. They might have adversarial users during deployment, but ML systems themselves aren’t keen to be jailbroken. He also uses the opportunity to point out that Eliezer often compares AI to other fields like rocket science, but ML often works in a pretty different way from other fields; e.g. swapping individual components of ML systems often doesn’t change their functionality, while swapping rocket components would make the rocket fail.
Eliezer is concerned that AI optimists haven’t encountered real difficulties yet and that’s why they’re optimistic, the same way the attendees of the original AI conference in the 1950s thought they could solve in two months problems that ended up taking 70 years. Quintin counters that there were plenty of ML problems which turned out easier than expected, most notably easier than Eliezer and veterans of the AI field who have worked on AI since the early days predicted. Neither Eliezer nor the AI veterans expected neural networks to work as well as they do today. He mentions that Eliezer also stated in a different venue that he didn’t believe generative adversarial networks would work right away, yet they did. He expects the hardness of ML research to predict the hardness of ML alignment research, and since Eliezer seems to be poorly calibrated on the former, he will also be poorly calibrated on the latter.
Eliezer expects that for AI alignment to go well he will have to be wrong about some aspects of AI alignment, but he expects that where he is mistaken, this will make alignment even harder than he already thinks it is, as it would be really surprising for a new engineering project to be easier than you think. Quintin strongly disagrees with this framing: if Eliezer is wrong about how hard alignment is, he should expect alignment to be easier than he previously thought.
Eliezer points to how fast AI progress was in the game of Go as a reason for concern that superintelligent AI will suddenly kill humans without killing a somewhat smaller number of humans first. Quintin thinks Go is disanalogous to more general AI systems, as progress on more general systems is usually slower and smoother. Go also had a single objective function the AI could use to score itself, which will not be true for many other tasks; those will require human input, which slows down improvements.
Eliezer is even more concerned about AI systems which can self-improve and get smarter during inference (deployment), getting us to a fast takeoff. Quintin counters that we basically already have that: ChatGPT could train on user input, but it isn’t set up to do so because it wouldn’t be practical. ML training processes could also be changed so that the system could reasonably be said to self-improve during inference, as inference is also a part of training.
Eliezer thinks that people who are capable of breaking AI systems show more AI expertise than people who merely create functional AI systems, which is how it works in computer security. This is related to the security mindset claim above; maybe such people would be able to find ways to improve AI alignment. Quintin thinks the people who break things in computer security are only experts there because in computer security there are clear signs of whether a system is broken or not, which isn’t true for AI alignment. He discusses an example where Eliezer thinks an ML system is easily breakable because it will try to maximise its reward function, but Quintin thinks that simply maximising the reward function isn’t how realistic ML systems work. He discusses another example where he thinks ML systems are not easily broken.
Overall my take: Eliezer is concerned about AI that doesn’t look like modern ML systems. Quintin argues that modern ML systems don’t show the properties Eliezer is concerned more advanced AI will show, and thinks that more advanced versions of modern ML systems could already be real AGI. What I am confused about is why Eliezer is then so worried about the current state of AI if the thing he is worried about is so much more advanced/general in mindspace, or more specifically, why he considers current ML systems to be evidence that we are getting closer to the kind of AI he is worried about.
We end up seeming more deferential and hero-worshipping than we really are.
I feel like this post is missing something. I would expect age to be one of the strongest predictors of the aforementioned behaviours. Do you know any people in their thirties who are prone to hero-worshipping?
I don’t consider hero-worshipping an EA problem as such, but a young-people problem. Of course EA is full of young people!
Make sure people incoming to the community, or at the periphery of the community, are inoculated against this bias, if you spot it. Point out that people usually have a mix of good and bad ideas. Have some go-to examples of respected people’s blind spots or mistakes, at least as they appear to you.
This seems like good advice to me, but I expect it to work better if you are aware that you need to talk about these things with a young person because they are young.
Thank you! You lay out well the argument that if a former omnivore eats seafood at every meal where they previously ate meat, this will be harmful for animals.
What I’d like to see is some empirical backing for how much pescetarians actually replace meat with seafood, given what you claim in the title.
You discuss your own experience of eating 2 pounds of salmon weekly, but when I was pescetarian I had a fish meal once every month or two. If omnivores switch to a pescetarian diet like mine that still seems like a win for animal welfare.
Thank you, Lizka. You make a good point, and I have edited the comment above to no longer refer to a specific demographic group.
I would not want anyone to get the impression that Owen’s poor behaviour is merely a strong negative update on men. It is a strong negative update on the decency of everybody.
(Though I would expect women to show a lack of decency in slightly different ways than men.)
I still expect some decent people to exist. I just now think they are even more rare than I previously thought.
[hastily written]
Never ever would I have guessed this. You were living proof to me that at least some, if not many, decent people exist. I am completely devastated.
EA has been dying. But for me, this is the ultimate death blow.
[Edit: Comment was modified to no longer refer to a specific demographic group.]
Thank you, that was a beautiful response. I’m glad I asked!
I share the experience that my personal experiences and emotions sometimes affect how I view different causes. I do think it’s good to feel the impacts occasionally, though overall it leads me to be more strict about relying on spreadsheets.
Thank you, that was very interesting, Saulius. You talk a bit about comparisons with other cause areas, but I’m still not entirely sure which cause area you would personally prioritise the most right now?
But overall, I find that younger kids are much more physically draining, and older kids require much more emotional labor.
This is my experience as well (oldest is 12).
I often say that while small children aren’t easy, they are simple. It seems it should be easier to fulfil the needs of older children once you know what they are, but it’s much harder to figure out what the right thing to do is in the first place. I have a lot more doubt about whether I’m doing right by my oldest than when she was small.
I do agree with you that silence can hurt community epistemics.
In the past I also thought that people worried about missing out on job and grant opportunities if they voiced criticisms on the EA Forum were overestimating the risks. I am ashamed to say that I thought this was merely a result of their social anxiety and pretty irrational.
Then last year I applied to a (central) EA org that explicitly identifies as longtermist. They rejected me straight away, with the reason that I wasn’t bought into longtermism (as written up here, which is now featured in the EA Handbook as the critical piece on longtermism...). This was perfectly fine by me; my interactions with the org were kind and professional, and I had applied on a whim anyway.
But only later did I realise that this meant the people who say they are afraid to criticise longtermism, and potentially other bits of EA, because they are worried about losing out on opportunities were more correct than I previously thought.
I still think it’s harmful not to voice disagreements. But evidently there is more of a cost to individuals than I thought, especially to those who are financially reliant on EA funding or EA jobs, and I was unreasonably dismissive of this possibility.
I am a bit reluctant to write this. I very much appreciated being told the reason for the rejection and I think it’s great that the org invested time and effort to do so. I hope they’ll continue doing this in the future, even if insufficient buy-in to longtermism is the reason for rejection.
Most of the time when an upper bound is mentioned in job ads (e.g. on LinkedIn), it’s less than 1.5 times the lower bound. So I implicitly assume that the upper bound, though not mentioned, will be in the same ballpark.
Perhaps this is wrong and I’m supposed to interpret no upper bound as ‘very negotiable, potentially the sky is the limit’. But that possibility didn’t occur to me until you mentioned it.
I do interpret no range at all as a plausible ‘sky is the limit’ though.
I am a woman who could be very much interested in the role. But the lack of an upper bound for the compensation is putting me off a bit; it might help to include one.
On average I’d expect more men to be put off by this than women though!
Some people may be psychologically cut out for being a dedicate, but not have a high level of personal fit for any jobs where being a dedicate even makes sense as a thing to do. Not all dedicates go to an Ivy League school, but jobs like technical AI safety researcher, startup founder, program officer at a major foundation, or farmed-animal welfare corporate relations specialist all require very particular sets of abilities. If your abilities point you more in the direction of being (say) a teacher, then being a dedicate is probably not for you.
Do you not consider EtG a way to be a dedicate?
If someone pushes towards maximising their earnings, even if they don’t top out that high, I would consider them a dedicate. Being a teacher isn’t the best starting point for EtG, but there are highly paid adjacent options, and someone who would make a good teacher would probably also be able to find a different career.
If you think the moral concerns about abortion are more about the prevention of future people than about the value of the lives of the embryos, you should probably try to optimise for women having more children in the near term. It is not clear to me why you think preventing abortions is the best way to do so.
Thank you, I agree with a lot of the underlying motivation (once upon a time I wrote a research proposal about this, but never got into it). Where I disagree:
This has already been mentioned in the comments, but my understanding was that improved contraceptive access is one of the best ways to lower the number of abortions, so moral concerns about abortion push me towards supporting family planning charities.
Women will often not want to have children, so we should ensure they don’t conceive in the first place rather than terminating their pregnancies.
What I would add: something I find lacking in your description is how much more, in my view at least, fetuses come to matter morally over time. Merely terminating an unwanted pregnancy faster already has a lot of value. Many people seem to be oblivious to the drastic changes an embryo undergoes in the first trimester. Terminating at 4 weeks would mean aborting a being that is less than 1mm long and does not look particularly human. At 12 weeks you have a little one about 5cm long (not counting the legs!) which very much looks like a human baby.
My understanding was as well that improved contraceptive access in poor countries is one of the best things we can do to lower abortions.
Thank you so much for laying out this view. I completely agree, including with every single subpoint (except the ones about the male perspective, which I don’t have much of an opinion on). CEA has a pretty high bar for banning people. I’m in favour of lowering this bar, as well as of communicating more clearly that the bar is really high and that someone being part of the community therefore isn’t evidence they are safe.
Thank you in particular for point D. I’ve never been quite sure how to express the same point and I haven’t seen it written up elsewhere.
It’s a bit unfortunate that we don’t seem to have agreevote on shortforms.
Will the results of this research project be published? I’d really like to have a better sense of biosecurity risk in numbers.
I have also barely reported, despite keeping the pledge for 10 years. I will finally get my reckoning by missing out on the pin though...