How does XR weigh costs and benefits? Does XR consider tech progress default-good or default-bad?
The core concept here is differential intellectual progress. Tech progress can be bad if it reorders the sequence of technological developments for the worse, by making a hazard arrive before its mitigation. In practice, that applies mainly to gain-of-function research and to some, but not all, AI/ML research. There are lots of outstanding disagreements among rationalists about which AI/ML research is good vs. bad; when you zoom in on them, they turn out to be disagreements about AI timeline and takeoff forecasts, and about the feasibility of particular AI-safety research directions.
Progress in medicine (especially aging- and cryonics-related medicine) is seen very positively (though there’s a deep distrust of the existing institutions in this area, which bottoms out in a lot of rationalists doing their own literature reviews and wishing they could do their own experiments).
On a more gut/emotional level, I would plug my own Petrov Day ritual as attempting to capture the range of it: it’s a mixed bag with a lot of positive bits, and some terrifying bits, and the core message is that you’re supposed to be thinking about both and not trying to oversimplify things.
What would moral/social progress actually look like?
This seems like a good place to mention Dath Ilan, Eliezer’s fictional* universe which is at a much higher level of moral/social progress, and the LessWrong Coordination/Cooperation tag, which has some research pointing in that general direction.
What does XR think about the large numbers of people who don’t appreciate progress, or actively oppose it?
I don’t think I know enough to speak for the XR community broadly here, but as for me personally: I’m mostly frustrated that their thinking isn’t granular enough. There’s a huge gulf between saying “social media is toxic” and saying “it is toxic for the closest thing to a downvote button to be reply/share”, and I try to tune out/unfollow the people whose writing stays closer to the former.