There is a growing shift within EA toward longtermism, which is a natural consequence of expanding our moral circle to include future sentient beings.
While not claiming to be an authority on longtermism, or anything close, my first impression so far is that this is yet another topic that appeals to academics because of its complexity, but serves mostly to distract us from the far simpler, more fundamental challenges we should be focused on. For example...
If we don’t take control of the knowledge explosion, there’s not going to be a long term, and thus no need for longtermism.
If I understand correctly, longtermism seems to assume that we can accept and largely ignore the status quo of an ever-accelerating knowledge explosion, defeat the multiplying threats that emerge from that explosion one by one by one without limit, and thus some day arrive at the long term we are supposed to be concerned about.
If that’s at least a somewhat accurate summary of longtermism, I’m not buying it.
I think the argument for longtermism is pretty straightforward: if we have a long future then most people who will ever exist will live in the future. If we value all people across all times equally, then we should care far more about the future than the present.
Also, what do you mean by ‘knowledge explosion’?
Hi Phil. I’m also not an authority on the topic, but I think your summary of longtermism is not accurate. You seem to be worried about the effects of the knowledge explosion, which means that you also care about the future. Maybe you disagree with strong longtermism (as I do, for the reasons above), or you think we should worry about the not-so-distant future; I would say that is still, to some extent, a fraction of longtermism. So even if you don’t buy the whole package, you may still agree with part of it.