@ADS: I enjoyed your discussion of (1), but I understood the conclusion to be :shrug:. Is that where you’re at?
Generally, my impression is that differential technological development is an idea that seems right in theory, but the project of figuring out how to apply it in practice seems rather… nascent. For example:
(a) Our stories about which areas we should speed up and slow down are pretty speculative, and while I’m sure we can improve them, the prospects for making them very robust seem limited. DTD does not free us from the uncomfortable position of having to “take a punt” on some extremely high stakes issues.
(b) I’m struggling to think of examples of public discussion of how “strong” a version of DTD we should aim for in practice (pointers, anyone?).
Hey sorry for the late reply, I missed this.
Yes, the upshot from that piece is “eh”. I think there are some plausible XR-minded arguments in favor of economic growth, but I don’t find them especially compelling.
In practice, I think the particulars matter a lot. If you were to, say, make progress on a cost-effective malaria vaccine, it’s hard to argue that it’ll end up bringing about superintelligence in the next couple of decades. But it depends on your time scale: if you think AI is more on a 100-year time horizon, there might be more reason to be worried about growth.
Re: DTD, I think it depends far more on global coordination than EA/XR people tend to think.