From the comments in the NYT, two notes on communicating longtermism to people-like-NYT-readers:
1. Many readers are confused by the focus on humans.
2. Some readers are confused by the suggestion that longtermism is weird (Will: “It took me a long time to come around to longtermism”) rather than obvious.
Re 2, I do think it’s confusing to act like longtermism is nonobvious unless you’re emphasizing weird implications: our calculations being dominated by the distant future, x-risk, and things at least as weird as digital minds filling the universe.
Good points, though it’s worth noting that the people who comment on NYT articles are probably not representative of the typical NYT reader.
I’m also a bit surprised at how many of the comments are concerned about overpopulation. The most-recommended comment is essentially the tragedy of the commons. That comment’s tone—and the tone of many like it, as well as a bunch of anti-GOP ones—felt really fatalistic, which worries me. So many of the comments felt like variations on “we’re screwed”, which goes against the belief in a net-positive future upon which longtermism is predicated.
On that note, I’ll shout out Jacy’s post from about a month ago, which echoes those fears in a more EA way.
“which goes against the belief in a net-positive future upon which longtermism is predicated”
Longtermism per se isn’t predicated on that belief at all—if the future is net-negative, it’s still (overwhelmingly) important to make future lives less bad.
I can’t unread this comment:
“Humanity could, theoretically, last for millions of centuries on Earth alone.” I find this claim utterly absurd. I’d be surprised if humanity outlasts this century.
Ughh they’re so close to getting it! Maybe this should give me hope?
Basically, William MacAskill’s longtermism, or EA longtermism, is trying to solve the distributional shift issue. Most cultures with long-term thinking assume there is no distributional shift, i.e. that no key assumptions of the present turn out to be wrong. If that assumption were correct, we shouldn’t interfere with cultures, as they would reach local optima on their own. But it isn’t, and so longtermism has to deal with weird scenarios like AI and x-risk.
Thus the form EA longtermism takes is not obvious, since it can’t assume away distributional shift into out-of-distribution behavior. In fact, we have good reasons to think there will be massive distributional shifts. That’s the key difference between EA longtermism and other cultures’ longtermism.