Thanks! I think that’s a good summary of possible views.
FWIW I personally have some speculative pro-progress anti-xr-fixation views, but haven’t been quite ready to express them publicly, and I don’t think they’re endorsed by other members of the Progress community.
Tyler did send me some comments acknowledging that the far future is important in EV calculations. His counterargument is more or less that this still suggests prioritizing the practical work of improving institutions, rather than agonizing over the philosophical arguments. I’m heavily paraphrasing there.
He did also mention the risk of falling behind in AI development to less cautious actors. My own counterargument here is that this is a reason to both a) work very quickly on developing safe AI and b) work very hard on international cooperation. Though perhaps he would say those are both part of the Progress agenda anyway.
Ultimately, I suspect much of the disagreement comes down to there not being a real Applied Progress Studies agenda at the moment, and if one were put together, we would find it surprisingly aligned with XR aims. I won’t speculate too much on what such a thing might entail, but one very low-hanging recommendation would be something like:
Ramp up high skilled immigration (especially from China, especially in AI, biotech, EE and physics) by expanding visa access and proactively recruiting scientists