Let’s say you believe two things:

1. Growth will have flowthrough effects on existential risk.
2. You have a comparative advantage in effecting growth over x-risk.
You can agree with Bostrom that x-risk is important, and also think that you should be working on growth. This is something very close to my personal view on what I’m working on.
> Growth will have flowthrough effects on existential risk.
This makes sense as an assumption, but the post itself didn’t argue for this thesis at all.

If the argument were that the best way to help the long-term future is to minimize existential risk, and the best way to minimize existential risk is to increase economic growth, then you’d expect the post to primarily discuss how economic growth decreases existential risk. Instead, the post focuses on human welfare, which is important but secondary to the argument you are making.
> This is something very close to my personal view on what I’m working on.
Can you go into more detail? I’m also very interested in how increased economic growth impacts existential risk. This is a very important question because it could determine the impact of accelerating growth-inducing technologies such as AI and anti-aging.
It seems to me that there’s a background assumption among many global poverty EAs that human welfare has positive flowthrough effects on basically everything else.
> I’m also very interested in how increased economic growth impacts existential risk.
At one point I was focused on accelerating innovation, but I’ve come to be more worried about increasing x-risk (I have a question elsewhere on this post that gets at this).

I’ve since added a constraint to my innovation-acceleration efforts, and am now basically focused on “asymmetric, wisdom-constrained innovation.”
> It seems to me that there’s a background assumption among many global poverty EAs that human welfare has positive flowthrough effects on basically everything else.
If this is true, is there a post that expands on this argument, or is it something left implicit?
> I’ve since added a constraint to my innovation-acceleration efforts, and am now basically focused on “asymmetric, wisdom-constrained innovation.”
I think Bostrom has talked about something similar: namely, differential technological development (he talks about technology rather than economic growth, but the two are closely related). The idea is that fast innovation in some fields is preferable to fast innovation in others, and we should try to identify which areas are most worth speeding up.
No, I actually think the post is ignoring x-risk as a cause area to focus on now. That makes sense under certain assumptions and heuristics (e.g. if you think near-term x-risk is highly unlikely, or you’re using absurdity heuristics). I was mainly giving my argument for how this post could be compatible with Bostrom.