On the biological human side, since we have figured out how to grow our economies faster than our population, our standard of living has risen well beyond subsistence. Many would argue that even at subsistence, human existence was still net positive, but I think it is fairly clear that human existence in developed countries today is net positive. In the future, barring a global catastrophe, I think we could maintain or increase our standard of living (see my second comment here).
On the computer consciousness side, it is much less straightforward. Robin Hanson has written a lot about what the future might look like if there are many competing computer consciousnesses (e.g. link). Since it is so easy to copy software, he argues that the vast supply of labor will drive wages down to subsistence levels unless we somehow manage to regulate the process. I couldn't find exactly where, but I believe he argues that life at those subsistence levels might still be quite happy. The logic went something like: an optimally productive worker is generally a happy and highly motivated one, like a workaholic.
However, if there is a fast takeoff of an individual computer consciousness, that consciousness could become completely dominant. Making that a happy outcome is where MIRI comes in. I am currently pretty scared about our chances in this scenario. But now that even Bill Gates is concerned about it (though not donating yet), I am hopeful we can improve our odds soon.
Thanks for answering. I don’t really care about computer consciousnesses because I’m somewhat of a carbon chauvinist; I only care what happens to biological humans and other vertebrates who share my ancestry and brain architecture. I think the rest is just our empathy misfiring.
AI or em catastrophe would be terrible, but likely not hellish, so it would be merely a dead future, not a net-negative one.
The things I’m most concerned about are blind spots like animal suffering and political risks like irrational policies that cause more harm than benefit. If we include these, I think it’s plausible there is net-negative aggregate welfare even in developed countries. Technology might change these, but I think political risks and human biases (moral blind spots) can make any innovation useless or net harmful. I don’t know how to address these because I don’t believe advocacy actually works.