Really great post, and I’m curious to see what others think of it. My personal answer to most of these questions is that EA isn’t as smart as we like to think it is, and that we should indeed “be more normal”.
One note: I’d like to challenge the assumption that EA is the “largest and smartest” expert group on “Might AI lead to extinction?”. I don’t think this is true? The question cuts across a ton of different disciplines and rests on many big guesses, and the people in EA and Rationality who work on it aren’t clearly better than others at those disciplines (and certainly not at all of them at once). EAs might have deliberated more on this question, but the motives for doing so make them a biased sample.
My impression is that others have thought so much less about AI x-risk than EAs and rationalists (and for generally bad reasons) that EAs/rats end up as the “largest and smartest” expert group basically by default, unfortunately with all the biases that come with that. I could be misunderstanding the situation, though.
Yeah, there’s almost certainly some self-selection bias there. If someone thinks that talk of AI x-risk is merely bad science fiction, they will either choose not to become an EA or will go into a different cause area (and are unlikely to spend significant further time thinking about AI x-risk or discussing their heterodox view).
For example, people in crypto have thought so much more about crypto than people like me . . . but I would not defer to the viewpoints of people in crypto about crypto. I would want to defer to a group of smart, ethical people who I had bribed so heavily that they were all willing to think deeply about crypto whether they thought it was snake oil or more powerful than AGI. People who chose to go into crypto without my massive bribery are much more likely to be pro-crypto than an unbiased sample of people would be.
I think this is true, and I only discovered in the last two months how attached a lot of EA/rat AI Safety people are to going ahead with creating superintelligence (even though they think the chances of extinction are high) because they want to reach the Singularity, whether ever or within their own lifetimes. I’m not particularly transhumanist, and this shocked me, since averting extinction and s-risk is obviously the overwhelming goal in my mind (not to mention the main thing these Singularitarians would talk about to others). It made me wonder if we could have sought regulatory solutions earlier, and didn’t because everyone was so focused on alignment or bust…
We’ve thought about it a lot, but that doesn’t mean we’ve produced anything worthwhile? It’s like saying that literal doom prophets are the best group to defer to about when the world will end, because they’ve spent the most time thinking about it.
I think maybe about 1% of publicly available EA thought about AI isn’t just science fiction. Maybe less. I’m much more worried about catastrophic AI risk than “normal people” are, but I don’t think we’ve made convincing arguments about how those catastrophes would happen, why, and how to tackle them.
I’d like to challenge the assumption that EA is the “largest and smartest” expert group on “Might AI lead to extinction?”. I don’t think this is true?
You seem to imply that there is another expert group which discusses the question of extinction from AI deeply (and you consider the possibility that this other group is in some sense “better” at answering the question). Who are these people?
I’m not necessarily implying that. EA is not an expert group on AI. There are some experts among us (many of whom work at big AI labs, doing valuable research), but most people here discussing it aren’t experts. Furthermore, discussing a question ‘deeply’ does not guarantee that your answer is more accurate (especially if there’s more than one ‘deep’ way to discuss it).
I would defer to AI experts, or to the world at large, more than I would to just EA alone. But either of those groups carries uncertainty and internal disagreement—and indeed, the best conclusion might just be that the answer is currently uncertain. And that we therefore need (as many experts outside EA have now come to support) to engage many more people and institutions in a collaborative effort to mitigate the possible danger.