Hang in there. I really hope that one day EA will be able to break out of its AI obsession and realize how flimsy and full of half-baked assumptions the case for AI x-risk actually is. I think a problem is that a lot of people like you are understandably going to get discouraged and just leave the movement, or not join in the first place, further ossifying the subtle groupthink going on here.
Thankfully EA is very open to criticism, so I’m hoping to slowly chip away at the bad reasoning. For example, relying on a survey where you ask people to give a chance of destruction as a percentage, which will obviously anchor people to the 1–99% range.
Interesting, I hadn’t thought of the anchoring effect you mention. One way to test this might be to poll the same audience about other, more outlandish claims, such as the probability of x-risk from an alien invasion, or from CERN accidentally creating a black hole.