Frances, your posts are always so well laid out with just the right amount of ease-of-reading colloquialism and depth of detail. You must teach me this dark art at some point!
As for the content of the post itself, it’s funny that recently the two big criticisms of longtermism in EA are that EA is too longtermist and that EA isn’t longtermist enough! I’ve always thought that means it’s about right, haha. You can’t keep everyone happy all of the time.
I’m one of those people you mention who only really interacts with the longtermist side of EA because it meshes well with my area of expertise, but to be honest I feel it’s about right at present. If EA were to become all-longtermist I think it would be a bit of an over-correction and we need other philosophies too in order to keep a broad palette, but if it did happen and if there was good reason for it—I’d get over it.
In regards to:
I personally disagree with this. As a counter-argument:
Longtermism, as a worldview, does not want present-day people to suffer; instead, it wants to work towards a future with as little suffering as possible, for everyone.
My one criticism of current longtermist thought is that sometimes people exclude short-term actions because it’s not ‘longtermist enough’, e.g. creating fertile ground for positive future actions, but this is potentially just in my sphere and isn’t representative of the greater philosophy. I just know from conversations I’ve had that some people are of the opinion that it’s the ‘big win in 50 years’ time’ or nothing at all, and don’t like the idea of positive short-term baby steps towards aligning a long-term future. However, the literature is fine with this and it seems to just be some people who aren’t, so perhaps it’s the company I keep :) You’re right though in that the worldview itself addresses this.

Great post!
Luke, thank you for always being so kind :)) I very much appreciate you sharing your thoughts!!
“sometimes people exclude short-term actions because it’s not ‘longtermist enough’”

That’s a really good point on how we see longtermism being pursued in practice. I would love to investigate whether others are feeling this way. I have certainly felt it myself in AI Safety. There’s some vague sense that current-day concerns (like algorithmic bias) are not really AI Safety research, although I’ve talked to some who think addressing these issues first is key in building towards alignment. I’m not even totally sure where this sense comes from, other than that fairness research is really not talked about much at all in safety spaces.
Glad you brought this up as it’s definitely important to field/community building.
“There’s some vague sense that current-day concerns (like algorithmic bias) are not really AI Safety research, although I’ve talked to some who think addressing these issues first is key in building towards alignment.”
Now don’t go setting me off about this topic! You know what I’m like. Suffice it to say, I think combating social issues like algorithmic bias is potentially the only way to realistically begin the alignment process. Build transparency, etc. But that’s a conversation for another post :D