Hard-to-reverse and hard-to-tweak decisions
Summary from the article Hard-to-reverse decisions destroy option value:
Some strategic decisions available to the effective altruism movement may be difficult to reverse. One example is making the movement's brand explicitly political. Another is growing large. Under high uncertainty, there is often reason to avoid or delay such hard-to-reverse decisions.
And even when we're confident we won't want to reverse a decision, we might want to be careful about locking in a decision that will be hard to tweak later, if it would've been even better to make a somewhat different version of the decision. Relatedly, you write:
The initial development of any curriculum is important as large deviations from this starting point are rare.
With this in mind, I think I'd want us to be quite cautious about pushing for or accelerating the widespread adoption of philosophy courses, about including fairly explicitly EA-related ideas in proposed courses, or about doing things that could lead to EA being seen as associated with proposed courses or the pushes for them.
It seems possible that associating EA with this could backfire in some way, like if pushes for philosophy courses later become a partisan issue and EA thus comes to be seen as associated with one side of politics. Or if it turns out that mandated philosophy courses end up turning students off whatever they learn in those courses, or giving them a bad first impression of EA somehow, or diluting what EA is seen as being about (e.g., making it seem to be all about obligations to do a certain set of things, rather than also including things like excitement and exploration of new areas).
It also seems possible that this wouldn't backfire, but that we'd achieve more impact if we thought more carefully first, and that it'd be very hard to change course once things are set in motion.
So I think what I'd be most excited about in this space is people:
Looking into this further
Trying to find ways to pilot this sort of thing on a relatively small scale, and thereby gather more info
Trying to influence the academic studies already being run on this sort of thing to include more relevant material (e.g., EA-relevant thought experiments) and measures (e.g., measures of expansion of moral circles to future generations; measures of perceptions of EA)
Getting into relevant positions and networks to later be able to influence how this stuff goes
Potentially influencing path-dependent decisions that would be made soon with or without us, if this is relevant
In such cases, it can sometimes be worth just using our best guesses rather than waiting and thinking more, as the window is closing and our best guesses may be better than what other people would do otherwise
But it may be worth staying out of it even in such cases, due to things like reputation risks or risks of coming to be perceived as partisan
I'd be less excited about increasing the chance that path-dependent decisions are made soon, e.g. by pushing for widespread adoption of philosophy courses.
But this is just a tentative view.
Thanks for this, all very fair points.
I share your concerns about EA being too associated with all of this. I'm not sure it has to be, though.
For example, when it comes to my suggestion of prominent EA academics (e.g. Peter Singer, Toby Ord, etc.) joining advocacy efforts to boost philosophy in schools, I don't mean they should do this with their EA hat on. Peter Singer could do this as "the world's most famous living ethicist" rather than as "the godfather of EA". Similarly, we wouldn't need EAs to say "please include these EA ideas in the curriculum"; we could just have them say "the ethics of eating meat is a huge issue that should be included". In short, EA doesn't have to explicitly come into this at all.
The inclusion of EA ideas in curricula was only one of my points anyway, and it may not be absolutely necessary. As I mentioned, explicit EA outreach is probably better done at the undergraduate level. Before uni, the most important thing is just philosophical learning.
Yes, I'd definitely guess that there'd be ways to do this, or versions of this, which wouldn't lead to people seeing this push as associated with EA. And that would reduce some risks.
(I didn't mean to imply my points should push against doing any version of this idea, just that they push against some versions of the idea, or push for particularly great caution regarding those versions. Also, it's possible that association with EA would actually be net positive by raising EA's profile or associating it with something concrete that many people end up liking; I'm just unsure, and think we should be cautious there.)
But counteracting those risks won't necessarily counteract the other sort of risk I mentioned, which is that rushing somewhat means a less good version of this is implemented than what could've been implemented, and once it's implemented it's extremely hard to change. So that's a separate reason to consider things like piloting and doing further research before pushing for widespread rollouts, even if the version of this that's being done isn't perceived as associated with EA at all.
(And that's not a critique of your post, as your post isn't a public campaign but rather a post to the EA Forum sharing an idea and soliciting input, which is definitely within the category of things I'd suggest at this stage.)
OK, thanks, that all makes sense. I would love for there to be further research and investigation. For example, some philosophers/education practitioners in the movement could have a look at the philosophy course I mention, to see if it's something worth supporting in addition to your suggestions in another comment.