My impression is that others have thought so much less about AI x-risk than EAs and rationalists, and for generally bad reasons, that EAs/rats are the “largest and smartest” expert group basically ‘by default’, unfortunately with all the biases that come with that. I could be misunderstanding the situation, though.
Yeah, there’s almost certainly some self-selection bias there. If someone thinks that talk of AI x-risk is merely bad science fiction, they will either choose not to become an EA or will go into a different cause area (and in either case are unlikely to spend significant time thinking any more about AI x-risk or discussing their heterodox view).
For example, people in crypto have thought so much more about crypto than people like me . . . but I would not defer to the viewpoints of people in crypto about crypto. I would want to defer to a group of smart, ethical people who I had bribed so heavily that they were all willing to think deeply about crypto whether they thought it was snake oil or more powerful than AGI. People who chose to go into crypto without my massive bribery are much more likely to be pro-crypto than an unbiased sample of people would be.
I think this is true, and I only discovered in the last two months how attached a lot of EA/rat AI Safety people are to going ahead with creating superintelligence, even though they think the chances of extinction are high, because they want to reach the Singularity (ever, or in their lifetime). I’m not particularly transhumanist and this shocked me, since averting extinction and s-risk is obviously the overwhelming goal in my mind (not to mention the main thing these Singularitarians would talk about to others). It made me wonder if we could have sought regulatory solutions earlier, and didn’t because everyone was so focused on alignment or bust…
We’ve thought about it a lot, but that doesn’t mean we’ve produced anything worthwhile. It’s like saying that literal doom prophets are the best group to defer to about when the world will end, because they’ve spent the most time thinking about it.
I think maybe about 1% of publicly available EA thought about AI isn’t just science fiction. Maybe less. I’m much more worried about catastrophic AI risk than ‘normal people’ are, but I don’t think we’ve made convincing arguments about how those catastrophes will happen, why, and how to tackle them.