Hard-to-reverse and hard-to-tweak decisions

Summary from the article Hard-to-reverse decisions destroy option value:

Some strategic decisions available to the effective altruism movement may be difficult to reverse. One example is making the movement’s brand explicitly political. Another is growing large. Under high uncertainty, there is often reason to avoid or delay such hard-to-reverse decisions.
And even when we’re confident we won’t want to reverse a decision outright, we might want to be careful about locking in one that will be hard to tweak later, in case a somewhat different version of the decision would have been better still. Relatedly, you write:
The initial development of any curriculum is important as large deviations from this starting point are rare.
With this in mind, I think I’d want us to be quite cautious about pushing for or accelerating the widespread adoption of philosophy courses, about trying to include fairly explicitly EA-related ideas in proposed courses, and about doing things that could lead to EA being seen as associated with proposed courses or the pushes for them.
It seems possible that associating EA with this could backfire in some way. For example, pushes for philosophy courses could later become a partisan issue, and EA could thus come to be seen as associated with one side of politics. Or mandated philosophy courses might end up turning students off whatever they learn in those courses, giving them a bad first impression of EA, or diluting what EA is seen as being about (e.g., making it seem to be all about obligations to do a certain set of things, rather than also including things like excitement and exploration of new areas).
It also seems possible that this wouldn’t backfire, but that we’d achieve more impact if we thought more carefully first, and that it’d be very hard to change course once things are set in motion.
So I think what I’d be most excited about in this space is people:

- Looking into this further
- Trying to find ways to pilot this sort of thing on a relatively small scale, and thereby gather more info
- Trying to influence the academic studies already being run on this sort of thing to include more relevant material (e.g., EA-relevant thought experiments) and measures (e.g., measures of expansion of moral circles to future generations; measures of perceptions of EA)
- Getting into relevant positions and networks to later be able to influence how this stuff goes
- Potentially influencing path-dependent decisions that would be made soon with or without us, if this is relevant
  - In such cases, it can sometimes be worth just using our best guesses rather than waiting and thinking more, as the window is closing and our best guesses may be better than what other people would do otherwise
  - But it may be worth staying out of it even in such cases, due to things like reputation risks or risks of coming to be perceived as partisan
I’d be less excited about increasing the chance that path-dependent decisions are made soon, e.g. by pushing for widespread adoption of philosophy courses.
But this is just a tentative view.

Thanks for this, all very fair points.
I share your concerns about EA being too associated with all of this, but I’m not sure it has to be.
For example, when it comes to my suggestion of prominent EA academics (e.g. Peter Singer, Toby Ord) joining advocacy efforts to boost philosophy in schools, I don’t mean they should do this with their EA hat on. Peter Singer could do this as “the world’s most famous living ethicist” rather than as “the godfather of EA”. Similarly, we wouldn’t need EAs to say “please include these EA ideas in the curriculum”; we could just have them say “the ethics of eating meat is a huge issue that should be included”. In short, EA doesn’t have to explicitly come into this at all.
The inclusion of EA ideas in curricula was only one of my points anyway, and it may not be absolutely necessary. As I mentioned, explicit EA outreach is probably better done at the undergraduate level. Before university, the most important thing is just philosophical learning.
Yes, I’d definitely guess that there’d be ways to do this, or versions of this, which wouldn’t lead to people seeing this push as associated with EA. And that would reduce some risks.
(I didn’t mean to imply my points should push against doing any version of this idea, just that they push against some versions of the idea, or push for particularly great caution regarding those versions of the idea. Also, it’s possible that association with EA would actually be net positive by raising EA’s profile or associating it with something concrete that many people end up liking; I’m just unsure, and think we should be cautious there.)
But counteracting those risks won’t necessarily counteract the other sort of risk I mentioned: that rushing means a somewhat worse version of this is implemented than what could’ve been, and that once it’s implemented it’s extremely hard to change. So that’s a separate reason to consider things like piloting and further research before pushing for widespread rollouts, even if the version being pursued isn’t perceived as associated with EA at all.
(And that’s not a critique of your post, as your post isn’t a public campaign but rather a post to the EA Forum sharing an idea and soliciting input, which is definitely within the category of things I’d suggest at this stage.)
OK, thanks, that all makes sense. I would love for there to be further research and investigation. For example, some philosophers/education practitioners in the movement could have a look at the philosophy course I mention to see whether it’s worth supporting, in addition to your suggestions in another comment.