Love it! And I love the series of posts you’ve written lately.
I think that the suggestions here, and most of the arguments, should apply to “Everyday EA”, which isn’t necessarily longtermist. I’d be interested in your thoughts on where exactly we should draw the distinction between everyday longtermist actions and non-longtermist everyday actions.
Some further suggestions:
Be more cooperative. (There are arguments about increasing cooperation, especially from people working on reducing S-risks, but I couldn’t find any suitable resource in a brief search)
Take a strong stance against narrow moral circles.
Have a good pitch prepared about longtermism and EA broadly. Balance confidence with adequate uncertainty.
Have a well-structured methodology for getting interested acquaintances more involved with EA.
Help friends in EA/longtermism more.
Strengthen relationships with friends who have a high potential to be highly influential in the future.
I agree that this could be a good suggestion for an action/proxy in line with everyday longtermism.
Two collections of resources on cooperation (in the relevant sense) are the Cooperation & Coordination tag and (parts of) EA reading list: moral uncertainty, moral cooperation, and values spreading.
Thanks! I didn’t see any post under that tag that made the type of argument I had in mind, but I think this article by Brian Tomasik is what I was thinking of (I found it just now via the reading list you linked).
Yeah, I should’ve said “resources on cooperation (including in the relevant sense)”, or something like that. The tag’s scope is a bit broad/messy (though I still think it’s useful—perhaps unsurprisingly, given I made it :D).
Yea, I think the tag is great! I was surprised that I couldn’t find a resource from the forum, not that the tag wasn’t comprehensive enough :)
It might be nice if someone collected resources from outside the Forum and published each one as a Forum linkpost, so that people could comment on and vote on them and they’d be archived on the Forum.
I’ve often thought pretty much exactly the same thought, but have sometimes held back because I don’t see that done super often and thus worried it’d be weird or that there was some reason not to. Your comment has made me more inclined to just do it more often.
Though I do wonder where the line should be. E.g., it’d seem pretty weird to just try to linkpost every single journal article on nuclear war that I found useful.
Maybe it’s easier to draw the line if this is mostly limited to posts by explicitly EA people/orgs? Then we aren’t opening the door to just trying to linkpost the entire internet :D
Yea, I don’t know. I think it may even be worthwhile to linkpost every such journal article if you also write up your notes on them and cross-link related articles, but I agree that it would be weird. I’m sure there must be a better way for EA to coordinate on this kind of knowledge building and management.
(Partly prompted by this thread, I’ve made a question post on whether pretty much all content that’s EA-relevant and/or created by EAs should be (link)posted to the Forum.)
💖
I basically like all of these. I think there might be versions which could be bad, but they seem like a good direction to be thinking in.
I’d love to see further exploration of these—e.g. I think any of your six suggestions could deserve a top-level post going into the weeds (& ideally reporting on experiences from trying to implement it). I feel most interested in #3, but not confidently so.
Gidon Kadosh, from EA Israel, is drafting a post with a suggested pitch for EA :)
I agree that quite a bit of the content seems not to be longtermist-specific. But I was approaching it from a longtermist perspective (where I think the motivation is particularly strong), and I haven’t thought it through so carefully from other angles.
I think the key dimension of “longtermism” that I’m relying on is the idea that the longish-term (say 50+ years) indirect effects of one’s actions are a bigger deal in expectation than the directly observable effects. I don’t think that that requires e.g. any assumptions about astronomically large futures. But if you thought that such effects were very small compared to directly observable effects, then you might think that the best everyday actions involved e.g. saving money or fundraising for charities you had strong reason to believe were effective.
Hmm. There are many studies of “friend of a friend” relationships (see, e.g., this on how happiness propagates through the friendship network). I think it would be interesting to research how certain moral behaviors or beliefs propagate through friendship networks (I’d be surprised if there isn’t a study on the effects of a transition to a vegetarian diet, say). Once we have a reasonable model of how that works, we could make a basic analysis of the impact of such daily actions. (Although I expect some non-linear effects that would make this very complicated.)
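As a toy illustration of the kind of model I have in mind, here is a minimal sketch of behavior adoption spreading through a friendship network, in the style of an independent-cascade model. Everything here (the network, the adoption probability, the function name) is made up for illustration; real parameter estimates would have to come from the sort of empirical studies mentioned above.

```python
import random


def simulate_spread(adjacency, seeds, p_adopt=0.1, rounds=5, rng=None):
    """Simulate a behavior spreading through a friendship network.

    adjacency: dict mapping each person to a list of their friends.
    seeds: people who already exhibit the behavior.
    p_adopt: per-round chance that each adopting friend independently
             converts a non-adopter (a made-up illustrative parameter).
    """
    rng = rng or random.Random(0)
    adopted = set(seeds)
    for _ in range(rounds):
        newly = set()
        for person, friends in adjacency.items():
            if person in adopted:
                continue
            # Each adopting friend gives an independent chance of adoption.
            for friend in friends:
                if friend in adopted and rng.random() < p_adopt:
                    newly.add(person)
                    break
        adopted |= newly
    return adopted


# Tiny hypothetical network: A is friends with B and C, etc.
network = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B", "E"],
    "E": ["D"],
}
final = simulate_spread(network, seeds={"A"}, p_adopt=0.5, rounds=10)
print(sorted(final))
```

Even a toy like this makes the non-linearity visible: running it over many random seeds, the number of eventual adopters is not proportional to `p_adopt`, because adoption cascades once enough of a person’s friends have converted.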