Co-founder of Arb, an AI / forecasting / etc consultancy. Doing a technical AI PhD.
Conflicts of interest: ESPR, EPSRC, Emergent Ventures, OpenPhil, Infrastructure Fund, Alvea.
The main problem with Terminator is not that it is silly and made-up (though actually that has been a serious obstacle to getting the proudly pragmatic majority in academia and policy on board).
It’s that it embeds false assumptions about AI risk: “no risk without AGI malevolence, no risk without conscious AGI, no risk without greedy corporations, AGI danger is concentrated in androids”, etc. These have caused a lot of havoc.
If I could choose between a world where no one outside the field has ever heard of AI risk and a world where everyone has but as a degraded thought-terminating meme, I think I’d choose the first one.
On fancy credentials: most EAs didn’t go to fancy universities*. My guess is that around 4% of EAs dropped out of university entirely. Just the publicly known subset includes some of the most accomplished: Yudkowsky, Muehlhauser, Shlegeris?, Kelsey Piper, Nuno Sempere. (I know 5 others I admire greatly.)
On intelligence: You might be over-indexing on research, and on highly technical research. Within research and writing, the peak difficulty is indeed really high, but the average forum post seems manageable. You don’t need to understand stuff like Löb’s theorem to do great work; I presume most great EAs don’t understand formal results of this sort. I often feel dumb when following alignment research, but I can sure do ordinary science and data analysis and people management, and this counts for a lot.
On the optics of the above two things: seems like we could do more to make people feel welcome, and to appreciate the encouraging demographics and the world’s huge need for sympathetic people who know their comparative advantage. (I wanted to solve the education misconception by interviewing great dropouts in EA. But it probably would have landed better with named high-status interviewees.)
* The link is only suggestive evidence because I don’t have the row-level data.
The closing remarks about CH seem off to me.
Justice is incredibly hard; doing justice while also being part of a community, while trying to filter false accusations and thereby not let the community turn on itself, is one of the hardest tasks I can think of.
So I don’t expect disbanding CH to improve justice, particularly since you yourself have shown the job to be exhausting and ambiguous at best.
You have, though, rightly received gratitude and praise—which they don’t often get, maybe just because we don’t often praise people for doing their jobs. I hope the net effect of your work is to inspire people to speak up.
The data on their performance is profoundly censored. You simply will not hear about all the times CH satisfied a complainant, judged risk correctly, detected a confabulator, or pre-empted a scandal through warnings or bans. What denominator are you using? What standard should we hold them to? You seem to have chosen “being above suspicion” and “catching all bullies”.
It makes sense for people who have been hurt to distrust nearby authorities, and obviously a CH team which isn’t trusted can’t do its job. But just to generate some further common knowledge and ameliorate a distrust cascade: I trust CH quite a lot. Every time I’ve reported something to them, they’ve surprised me with the amount of skill they put in, and the hours they spend per case. (EDIT: Clarified that I’ve seen them work actual cases.)