I might well donate to this. You’ve got a good framework: long-run impacts are important but tough to know. I agree with investigating all five of these topics and with changing institutions to address unknown future risks; that seems at least as likely to work as direct mitigation of known ones. Your comment on the relative importance of different kinds of meta-research for the far future also seems spot on.
Some smaller points:
I’m with you on immigration, but for different reasons. I don’t see why increasing GDP is particularly great for maximizing long-run welfare since, as Nick Bostrom argues in his existential risk paper, what we really want to optimize for is safety. So my guess is that immigration’s biggest impact would be increasing good-faith cooperation between countries to avoid dangerous unilateral initiatives, rather than boosting human capital.
http://www.nickbostrom.com/papers/unilateralist.pdf
Some other things I think might be worth looking into:
1. Not only foresight but methods for communicating whatever is found to policymakers and, in democratic countries, the public. Situations might arise where we can predict outcomes, but only a few people know, and those few are ineffective at spreading the word. I’m thinking here of embryo selection. In general, I hope that, as Al Gore writes in his book “The Future,” we are able to “steer,” and especially to steer technological changes to suit current priorities instead of just having them drop into our laps out of nowhere. Or, as Paul Christiano says, we should increase the influence of human values over the far future. This is in contrast to Robin Hanson, who has actually written that voter foresight is bad.
http://www.overcomingbias.com/2011/01/against-voter-foresight.html
2. Lowering barriers to international trade, and maybe promoting democracy, since democratic countries tend to be more peaceful and internationally cooperative. But there might already be a lot of money flowing toward this.
http://longnow.org/seminars/02012/oct/08/decline-violence/
3. Whether we can really expect, in the case of AI, any current actions to persist into whatever future world could create a potentially dangerous, self-sufficient AI civilization. We already face high uncertainty about the efficacy of altering the political landscape now or in the near future, and the “track” leading to AI seems hugely volatile, adding a whole new layer of haze. This suggests to me that no action on this front is justified right now.
Lastly, as a practical matter: if you did start an organization, I would hope it could avoid taking a clear stance on the transhumanist vs. bioconservative question, since for me that could be a deal-breaker, unlike the points above. Unfortunately, this is why I don’t donate to FHI.