Can you give a bit more of an explanation about the scoring in the Google sheet? E.g. time horizon, readiness, promisingness, etc.
I was slightly disappointed to see such low scores for my idea of philosophy in schools (but I guess I should have realised by now that it’s not cause X!). I’m not sure I agree with ‘time horizon’ being ‘very short’ though given that some of the main channels through which I hope the intervention would be good are in terms of values spreading (which you rate as ‘medium’) and moral circle expansion (which you rate as ‘long’). The whole point of my post was to argue for this intervention from a longtermist angle and it was partly in response to 80,000 Hours listing ‘broadly promoting positive values’ as a potential highest priority. So saying time horizon is ‘very short’ is a sign that you didn’t engage with the post at all, or (quite possibly!) that I’ve misunderstood something quite important. If you do have some specific feedback on the idea I’d appreciate it!
Can you give a bit more of an explanation about the scoring in the Google sheet?
A post about this is incoming.
With respect to philosophy in schools in particular:
Why I’m not excited about it as a cause area:
Your post conflicts with my personal experience of how philosophy in schools can be taught. (Spain has philosophy, ethics & civics classes for kids in its curriculum, and I remember them being pretty terrible. In a past life, I also studied philosophy at university and overall came away with a mostly negative impression.)
I know an EA who is doing something similar to what you propose re: EAs teaching philosophy and spreading values, but for maths in an ultra-prestigious school. Philosophy doesn’t seem central to that idea.
I believe that there aren’t enough excellent philosophy teachers for it to be implemented at scale.
I don’t give much credence to the papers you cite replicating at scale.
On the above two points, see Khorton’s comments in your post.
To elaborate a bit on that, there are some things in the class of “philosophy in schools” that scale really well, like, say, CBT. But I expect that “philosophy in schools” would scale like, say, Buddhist meditation (i.e., badly without good teachers).
Philosophy seems like a terrible field. It has low epistemic standards. It can’t come to conclusions. It has Hegel. There is simply a lot of crap to wade through.
Philosophy in schools meshes badly with religion and it’s easy for the curriculum to become political.
I imagine that teaching utilitarianism at scale in schools is not very feasible.
I’d expect EA to lose a political battle about teaching EA values (as opposed to, say, Christian values, or liberal values, or feminist values, etc.). I also expect this fight to be costly.
Why I categorized it as “very-short”:
If I think about how philosophy in schools would be implemented (and you can see this in Spain), I imagine it coming about as a result of a campaign promise, and lasting for a term or two (4 or 8 years) until the next political party comes in with its own priorities. In Spain we have had a problem with politicians changing education laws too often.
You in fact propose getting into party politics as a way to implement “philosophy in schools”.
When I think of trying to come up with a 100- or 1,000-year research program to study philosophy in schools, the idea doesn’t strike me as much superior to the 10-year version: review the existing literature on philosophy in schools and try to get it implemented. This is in contrast with other areas, e.g., a 100–1,000-year+ observatory for global priorities research or unknown existential risks, which does strike me as more meaningful.
One of your arguments was: “One reason why it might be highly impactful for philosophy graduates to teach philosophy is that they may, in many cases, not have a very high-impact alternative.” This doesn’t strike me as a consideration that will last for generations (though, you never know with philosophy graduates).
That said, I can also see why classifying it as longer term would make sense.
OK, thanks for this reply! I think some of this is fair and, as I say, I’m not clinging to this idea as being hugely promising. Some of your comments seem quite personal and possibly contentious, but then again I don’t know what the context of the scoring is, so maybe that’s sort of the idea at this stage.
A few specific thoughts.
Your post conflicts with my personal experience of how philosophy in schools can be taught. (Spain has philosophy, ethics & civics classes for kids in its curriculum, and I remember them being pretty terrible. In a past life, I also studied philosophy at university and overall came away with a mostly negative impression.)
OK, this seems fairly personal and anecdotal (as I said, maybe this is fine at this stage, but I hope this sort of analysis wouldn’t play a huge role in scoring at a later stage).
I know an EA who is doing something similar to what you propose re: EAs teaching philosophy and spreading values, but for maths in an ultra-prestigious school. Philosophy doesn’t seem central to that idea.
Not sure what point you’re making here (I also know this EA by the way).
I believe that there aren’t enough excellent philosophy teachers for it to be implemented at scale.
I don’t give much credence to the papers you cite replicating at scale.
Perhaps fair! We could always train more teachers though.
Philosophy seems like a terrible field. It has low epistemic standards. It can’t come to conclusions. It has Hegel. There is simply a lot of crap to wade through.
Hmm. Well, I at least feel fairly confident that a lot of people will disagree with you here. And any good curriculum designer should leave out the crap. My experience with philosophy has led me to go vegan, engage with EA and give effectively (think Peter Singer-type arguments). I’ve found it quite important in shaping my views, and I’m quite excited by the field of global priorities research, which is essentially econ and philosophy.
I imagine that teaching utilitarianism at scale in schools is not very feasible.
If you teach philosophy, you will probably spend at least a little bit of time teaching utilitarianism within that. Not really sure what you’re saying here.
I’d expect EA to lose a political battle about teaching EA values (as opposed to, say, Christian values, or liberal values, or feminist values, etc.). I also expect this fight to be costly.
It’s teaching philosophy, not teaching values. In the post I don’t suggest we include EA explicitly in the curriculum. In any case, EA is the natural conclusion of a utilitarian philosophy and I would expect any reasonable philosophy curriculum to include utilitarianism.
If I think about how philosophy in schools would be implemented (and you can see this in Spain), I imagine it coming about as a result of a campaign promise, and lasting for a term or two (4 or 8 years) until the next political party comes in with its own priorities. In Spain we have had a problem with politicians changing education laws too often.
OK, interesting. I didn’t really consider that its inclusion might just be overturned by another party. From my personal experience you don’t see subjects being dropped very often, so I was hopeful for staying power, but maybe this is a fair criticism.
When I think of trying to come up with a 100- or 1,000-year research program to study philosophy in schools, the idea doesn’t strike me as much superior to the 10-year version: review the existing literature on philosophy in schools and try to get it implemented. This is in contrast with other areas, e.g., a 100–1,000-year+ observatory for global priorities research or unknown existential risks, which does strike me as more meaningful.
OK, fine, this (and your later comments) was probably me just not knowing what you meant by ‘time horizon’.
Yeah, this is fair. Ideally I’d ask a bunch of people what their subjective promisingness was, and then aggregate over that. I’d have to somehow adjust for the fact that people from EA backgrounds might have gone to excellent universities and schools, and thus their estimate of teacher quality might be much, much higher than average, though.
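To make the “aggregate and adjust” idea concrete, here is a minimal sketch (the ratings, the size of the bias offset, and the median aggregation are all invented for illustration; nothing here comes from the actual scoring sheet):

```python
from statistics import median

# Hypothetical subjective promisingness ratings on a 1-5 scale, each
# tagged with whether the rater attended a highly selective school.
ratings = [
    (2, True),   # (rating, elite_background)
    (3, True),
    (1, False),
    (2, False),
    (4, True),
]

# Assumed correction: raters from elite backgrounds have seen unusually
# good teachers, so their implied estimate of teacher quality (and hence
# of the idea's promisingness) is shaded down a notch.
ELITE_BIAS = 1.0

adjusted = [r - ELITE_BIAS if elite else r for r, elite in ratings]

# Aggregate with a median to be robust to any single extreme rater.
print(f"aggregated promisingness: {median(adjusted):.1f}")
```

The awkward part, as the comment notes, is that the adjustment itself (ELITE_BIAS here) has to be guessed at rather than measured.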
I’m not sure why your instinct is to go by your own experience or ask some other people. This seems fairly ‘un-EA’ to me and I hope whatever you’re doing regarding the scoring doesn’t take this approach.
I would go by the available empirical evidence, whilst noting any likely weaknesses in the studies. The weaknesses brought up by Khorton (and which you referenced in your comment) were actually noted in the original empirical review paper, which said the following regarding the P4C process:
“Many of the studies could be criticized on grounds of methodological rigour, but the quality and quantity of evidence nevertheless bears favourable comparison with that on many other methods in education.”
“It is not possible to assert that any use of the P4C process will always lead to positive outcomes, since implementation integrity may be highly variable. However, a wide range of evidence has been reported suggesting that, given certain conditions, children can gain significantly in measurable terms both academically and socially through this type of interactive process.”
“further investigation is needed of wider generalization within and beyond school, and of longer term maintenance of gains”
My overall feeling on scale was therefore that it was ‘promising’ but still unclear. To be honest, I’m not impressed with just giving scale a rating of 1 based on personal feeling/experience. Your tractability points possibly seem more objective and justified.
I’m not sure why your instinct is to go by your own experience or ask some other people. This seems fairly ‘un-EA’ to me and I hope whatever you’re doing regarding the scoring doesn’t take this approach.
From where I’m sitting, asking other people is fairly in line with what many EAs do, especially on longtermist things. We don’t really have RCTs around AI safety, governance, or bio risks, so we instead do our best with reasoned judgements.
I’m quite skeptical of taking much from scientific studies on many kinds of questions, and I know this is true for many other members in the community. Scientific studies are often very narrow in scope, don’t cover the thing we’re really interested in, and often they don’t even replicate.
My guess is that if we were to show your previous blog post, as is, to several senior/respected EAs at OpenPhil/FHI and similar organisations, they’d be similarly skeptical to Nuño here.
All that said, I think there are more easily arguable proposals around yours (or arguably, modifications of yours). It seems obviously useful to make sure that effective altruists have good epistemics and that there are initiatives in place to help teach them these. This includes work in philosophy. Many EA researchers spend quite a while learning about philosophy.
I think people are already bought into the idea of teaching important people how to think better. If larger versions of this could be developed, they seem like they could be big cause candidates that there could be buy-in for.
For example, in-person schools seem expensive, but online education is much cheaper to scale. Perhaps we could help subsidize or pay a few podcasters or YouTubers or similar to teach people the parts of philosophy that are great for reasoning. We could also target whoever is most important, and carefully select the material that seems most useful. Ideally we could find ways to get relatively strong feedback loops, like creating tests that indicate one’s epistemic abilities and measuring educational interventions against such tests.
Hey, fair enough. I think overall you and Nuño are right. I did write in my original post that it was all pretty speculative anyway. I regret it if I was too defensive.
I think those proposals sound good, though they aim to achieve something different to what I was going for. I was mostly going for a “broadly promote positive values” angle at a societal level, which I think is potentially important from a longtermist point of view, as opposed to educating smaller pockets of people, although I think the latter approach could also be high value.
I can imagine reconsidering, but I don’t in principle have anything against using my S1. Because:
It is fast, and I am rating 100+ causes.
From past experience with forecasting, I basically trust it.
It does in fact have useful information. See here for some discussion I basically agree with.
OK, I mean you can obviously do what you want, and I appreciate that you’ve got a lot of causes to get through.
I don’t place that much stock in S1 when evaluating things as complex as how to do the most good in the world, especially when your S1 leads to comments such as:
Philosophy seems like a terrible field—I’d imagine you’re in the firm minority here, and when that is the case I’d imagine it’s reasonable to question your S1 and investigate further. Perhaps you should do a critique of philosophy on the forum (I’d certainly be interested to read it). There are people who have argued that philosophy does make progress and that it may not be as obvious, as philosophical progress tends to spawn other disciplines that then don’t call themselves philosophy. See here for a write-up of philosophical success stories. In any case, what I really care about in a philosophical education is teaching people how to think (e.g. Socratic questioning, Bayesian updating, etc.), not getting people to become philosophers.
I also studied philosophy at university and overall came away with a mostly negative impression—I mean, what about all the people who don’t come away with a negative impression? They seem fairly abundant in EA.
I know an EA who is doing something similar to what you propose re: EAs teaching philosophy and spreading values, but for maths in an ultra-prestigious school. Philosophy doesn’t seem central to that idea—I still don’t get this comment to be honest. In my opinion the EA you speak of isn’t doing something similar to what I propose, and even if they were, why would the fact that they don’t see philosophy as central to what they’re doing mean that teaching philosophy would obviously fail?
Anyway, I won’t labour the point much more. 43 karma on my philosophy in schools post is a sign it isn’t going to be revolutionary in EA, and I’ve accepted that; it’s not that I want you to rate it highly, it’s just that I’m sceptical of the process by which you rated it.
Let me try to translate my thoughts to something which might be more legible / written in a more formal tone.
From my experience observing this in Spain, the philosophy curriculum taught in schools is a political compromise, in which religion plays an important role. Further, if utilitarianism is taught at all (it wasn’t in my high-school philosophy class), it can be taught badly by proponents of some other competing theory. I expect this to happen because most people (and in expectation most teachers) aren’t utilitarian.
Philosophy doesn’t have high epistemic standards, as evidenced by the fact that it can’t come to a conclusion about “who is right”. Some salient examples of philosophers who continue to be taught and given significant attention despite having few redeeming qualities are Plotinus, Anaximenes, or Hegel. Although it can be argued that they do have redeeming qualities (Anaximenes was an early proponent of proto-scientific thinking, and Hegel has some interesting insights about history and has shaped further thought), paying too much attention to these philosophers would be the equivalent of coming to deeply understand phlogiston or aether theory when studying physics. I understand that grading the healthiness of a field can be counterintuitive or weird, but to the extent that a field can be sick, I think that philosophy ranks near the bottom (in contrast, development economics of the sort where you do an RCT to find out if you’re right would be near the top).
Relatedly, when teaching philosophy, too much attention is usually given to the history of philosophy. I agree that an ideal philosophy course which promoted “critical thinking” would be beneficial, but I don’t think that it would be feasible to implement, because: a) it would have to be the result of tricky political compromise and would have to be very careful around criticizing whoever is in power, and b) I don’t think that there are enough good teachers who could pull it off.
Note that I’m not saying that philosophy can’t produce success stories, or great philosophers, like Parfit, David Pearce, Peter Singer, arguably Bostrom, etc. (though note that all examples except Singer are pretty mathematical). I’m saying that, most of the time, the average philosophy class is pretty mediocre.
On this note, I believe that my own (negative) experience with philosophy in schools is more representative than yours. Google brings up that you went to Cambridge and UCL, so I posit that you (and many other EAs who have gone to top universities) have an inflated sense of how good teachers are (because you have been exposed to smart and at least somewhat capable teachers, who had the pleasure of teaching top students). In contrast, I have been exposed to average teachers who sometimes tried to do the best they could, and who often didn’t really have great teaching skills.
tl;dr/Notes:
I have some models of the world which lead me to think that the idea is unpromising. Some of them clearly have a subjective component. Still, I’m using the same “muscles” as when forecasting, and I trust that those muscles will usually produce sensible conclusions.
It is possible that in this case I had too negative a view, though not in a way which is clearly wrong (to me). If I were forecasting the question “will a charity be incubated to work on philosophy in schools” (surprise reveal: this is similar to what I was doing all along), I imagine I’d give it a very low probability, but that my teammates would give it a slightly higher probability. After discussion, we’d both probably move towards the center, and thus be more accurate.
Note that if we model my subjective promisingness as true promisingness + an error term, then if we pick the candidate idea at the very bottom of my list (in this case philosophy in schools, the idea under discussion and one of the four ideas to which I assigned a “very unpromising” rating), we’d expect it both to be unpromising (per your own view) and to have a large error term (I clearly don’t view philosophy very favorably).
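To make the selection effect behind that last point concrete, here is a minimal simulation sketch. The Gaussian model and all the numbers are illustrative assumptions rather than anything from the actual rating process: if each subjective score is true promisingness plus noise, the idea ranked at the very bottom will, on average, carry a large negative error term.

```python
import random

random.seed(0)

N_IDEAS = 100      # hypothetical number of cause candidates being rated
N_TRIALS = 10_000  # simulation repetitions
NOISE_SD = 1.0     # assumed spread of the subjective error term

mean_error = 0.0
for _ in range(N_TRIALS):
    # Each idea: (true promisingness, rater's error), both Gaussian.
    ideas = [(random.gauss(0, 1), random.gauss(0, NOISE_SD))
             for _ in range(N_IDEAS)]
    # The bottom-ranked idea minimizes the *subjective* score: true + error.
    _true, error = min(ideas, key=lambda t: t[0] + t[1])
    mean_error += error / N_TRIALS

# This average comes out strongly negative: the lowest-rated idea is, on
# average, rated worse than its true promisingness (regression to the mean).
print(f"average error term of the bottom-ranked idea: {mean_error:.2f}")
```

Under these toy assumptions, a “very unpromising” rating partly reflects genuine unpromisingness and partly the rater’s noise, which is exactly the “large error term” point.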
Thanks for the clarifications in your previous two comments. Helpful to get more of an insight into your thought process.
Just a few comments:
I strongly don’t think a charity working on philosophy in schools would be helpful, and I don’t like that way of thinking about it. My suggestions were having prominent philosophers join (existing) advocacy efforts for philosophy in the curriculum, more people becoming philosophy teachers (if this might be their comparative advantage), trying to shift educational spending towards values-based education, and more research into values-based education (to name a few).
This is a whole separate conversation that I’m not sure we have to get into too deeply right now (I think I’d rather not), but I think there are severe issues with development economics as a field, to the extent that I would place it near the bottom of the pecking order within EA. Firstly, the generalisability of RCT results is highly questionable (for example, see Eva Vivalt’s research). More importantly and fundamentally, there is the problem of complex cluelessness (see here and here). It is partly considerations of cluelessness that make me interested in longtermist areas such as moral circle expansion and broadly promoting positive values, along with x-risk reduction.
I’m hoping we’re nearing a good enough understanding of each other’s views that we don’t need to keep discussing for much longer, but I’m happy to continue a bit if helpful.
Acknowledged.