Obvious point, but you could assign significant credence to this being the right take, and still think working on A.I. risk is very good in expectation, given exceptional neglectedness and how bad an A.I. takeover could be. Something feels sleazy and motivated about this line of defence to me, but I find it hard to see where it goes wrong.
Something feels sleazy and motivated about this line of defence to me, but I find it hard to see where it goes wrong.
I'm not sure if we're picking up on the same notion of sleaziness, and I guess it depends on what you mean by "significant credence" and "working on A.I. risk", but I think it's hard to imagine someone doing really good mission-critical research work if they come into it from the perspective of "oh, I don't think AI risk is at all an issue, but smart people disagree, and there's a small chance that I'm wrong, and the EV is higher than working on other issues." Though I think it's plausible my psychology is less well-suited to "grim determination" than that of most people in EA.
(Donations or engineering, in comparison, seem much more reasonable.)
Just my anecdotal experience, but when I ask a lot of EAs working in or interested in AGI risk why they think it's a hugely important x-risk, one of the first arguments that comes to people's minds is some variation on "a lot of smart people [working on AGI risk] are very worried about it". My model of many people in EA interested in AI safety is that they use this heuristic as a dominant factor in their reasoning, which is perfectly understandable! After all, formulating a view of the magnitude of risk from transformative AI without relying on any such heuristics is extremely hard. But I think this post is a valuable reminder that it's not particularly good epistemics for lots of people to think like this.
when I ask a lot of EAs working in or interested in AGI risk
Can I ask roughly what work they're doing? Again, I think it makes more sense if you're earning-to-give or doing engineering work, and less if you're doing conceptual or strategic research. It also makes sense if you're interested in it as an avenue to learn more.