I really liked this post: both a lot of the specific ideas expressed, and the general style of thinking and communication used.
Vague example of the general style of thinking and communication seeming useful: when reading the "Safeguarding against naive utilitarianism" section, I realised that the basic way you described and diagrammed the points there was applicable to a very different topic I'd thought about recently, and provided a seemingly useful additional lens for that.
A few things that parts of this post made me think of:
Can the EA community copy Teach for America? (Looking for Task Y)
I think the question of what longtermism recommends doing in all sorts of everyday situations probably overlaps a lot with, though is somewhat distinct from, the question of what a good Task Y is.
Improving the future by influencing actors' benevolence, intelligence, and power
(Disclaimer: Written by me)
I think that that post could be seen as providing a useful framework/set of proxies for thinking about what longtermism recommends doing, which are applicable to anything from everyday scenarios to a smaller class of higher-leverage scenarios.
I also think that the framework aligns with or complements some of your points and examples.
E.g., you note that someone moving towards being more strategic without moving towards being more virtuous could substantially increase the harm they do, since it might move them towards working in higher-leverage domains.
My post somewhat similarly notes that increasing an actor's "intelligence" without increasing their "benevolence" could be harmful if the actor is below some threshold of benevolence, partly for a reason similar to the one you give, and partly because it might make the actor more effective in pursuing their harmful-in-expectation plan (not just changing what domain they make plans in).
E.g., you give as an example "Bob goes to work as a primary school teacher, and helps the children to perceive themselves as moral actors, as well as rewarding clear thinking."
This dovetails with the idea that it's fairly robustly good to increase actors' benevolence, as well as often good to increase their intelligence, especially if that's packaged with increases to their benevolence.
Illegible impact is still impact
I expect that many "everyday longtermist" actions would have relatively illegible impacts, due to being quite local, specific, or small-scale.
Thus, everyday longtermism might be more attractive and satisfying if combined with the general principle that illegible impact is still impact.
Are we neglecting education? Philosophy in schools as a longtermist area
That post seems to dovetail with some of the principles and examples you mention.
(Though I'm a bit skeptical of ideas in this direction, for reasons I expressed in comments on that post and will express in another comment on this post.)
The Secret of Our Success
Your section on "Similar optimisation processes [speculative]" reminded me of some points emphasised in that book, such as (from memory) that:
Historically, humans haven't so much skilfully created institutions, norms, beliefs, etc. tailored to achieving certain objectives, but rather somewhat randomly created a vast array of institutions, norms, beliefs, etc., with some then surviving and spreading.
"Culture" is therefore "smarter" than individual humans.
People often don't understand why or how the institutions etc. that they're used to achieve certain objectives, and might not be able to see the point of some components that actually are important. So people tend to just replicate whole packages, rather than picking and choosing components. And this is probably usually best overall, even if it means people replicate unnecessary components.
I really appreciate you highlighting these connections with other pieces of thinking; a better version of my post would have included more of this kind of thing.