Yeah, this is fair. Ideally I’d ask a bunch of people what their subjective promisingness was, and then aggregate over that. I’d have to somehow adjust for the fact that people from EA backgrounds might have gone to excellent universities and schools, and thus their estimate of teacher quality might be much, much higher than average, though.
I’m not sure why your instinct is to go by your own experience or ask some other people. This seems fairly ‘un-EA’ to me and I hope whatever you’re doing regarding the scoring doesn’t take this approach.
I would go by the available empirical evidence, whilst noting any likely weaknesses in the studies. The weaknesses brought up by Khorton (and which you referenced in your comment) were actually noted in the original empirical review paper, which said the following regarding the P4C (Philosophy for Children) process:
“Many of the studies could be criticized on grounds of methodological rigour, but the quality and quantity of evidence nevertheless bears favourable comparison with that on many other methods in education.”
“It is not possible to assert that any use of the P4C process will always lead to positive outcomes, since implementation integrity may be highly variable. However, a wide range of evidence has been reported suggesting that, given certain conditions, children can gain significantly in measurable terms both academically and socially through this type of interactive process.”
“further investigation is needed of wider generalization within and beyond school, and of longer term maintenance of gains”
My overall feeling on scale was therefore that it was ‘promising’ but still unclear. To be honest, I’m not impressed with just giving a scale rating of 1 based on personal feeling/experience. Your tractability points seem possibly more objective and justified.
I’m not sure why your instinct is to go by your own experience or ask some other people. This seems fairly ‘un-EA’ to me and I hope whatever you’re doing regarding the scoring doesn’t take this approach.

From where I’m sitting, asking other people is fairly in line with what many EAs do, especially on longtermist things. We don’t really have RCTs around AI safety, governance, or bio risks, so we instead do our best with reasoned judgements.
I’m quite skeptical of taking much from scientific studies on many kinds of questions, and I know this is true for many other members of the community. Scientific studies are often very narrow in scope, don’t cover the thing we’re really interested in, and frequently don’t even replicate.
My guess is that if we showed your previous blog post, as is, to several senior/respected EAs at OpenPhil, FHI, and similar organizations, they’d be similarly skeptical to Nuño here.
All that said, I think there are proposals near yours (or arguably, modifications of yours) that are easier to argue for. It seems obviously useful to make sure that effective altruists have good epistemics, and that there are initiatives in place to help teach them these skills. This includes work in philosophy; many EA researchers spend quite a while learning about philosophy.
I think people are already bought into the idea of teaching important people how to think better. If larger versions of this could be fleshed out, they seem like they could be major cause candidates that people would buy into.
For example, in-person schools seem expensive, but online education is much cheaper to scale. Perhaps we could help subsidize or pay a few podcasters, YouTubers, or similar to teach people the parts of philosophy that are great for reasoning. We could also target whoever seems most important, and carefully select the material that seems most useful. Ideally we could find ways to get relatively strong feedback loops, like creating tests that indicate one’s epistemic abilities and measuring educational interventions against such tests.
Hey, fair enough. I think overall you and Nuño are right. I did write in my original post that it was all pretty speculative anyway. I regret it if I was too defensive.
I think those proposals sound good, but they aim to achieve something different from what I was going for. I was mostly going for a “broadly promote positive values” angle at a societal level, which I think is potentially important from a longtermist point of view, as opposed to educating smaller pockets of people, although I think the latter approach could also be high value.
I can imagine reconsidering, but I don’t in principle have anything against using my S1 (System 1, i.e., fast intuition). Because:
It is fast, and I am rating 100+ causes.
From past experience with forecasting, I basically trust it.
It does in fact have useful information. See here for some discussion I basically agree with.
OK, I mean, you can obviously do what you want, and I appreciate that you’ve got a lot of causes to get through.
I don’t place that much stock in S1 when evaluating things as complex as how to do the most good in the world. Especially when your S1 leads to comments such as:
Philosophy seems like a terrible field—I’d imagine you’re in a firm minority here, and when that is the case it seems reasonable to question your S1 and investigate further. Perhaps you should write a critique of philosophy on the forum (I’d certainly be interested to read it). There are people who have argued that philosophy does make progress, and that this may just not be obvious, since philosophical progress tends to spawn other disciplines that then don’t call themselves philosophy. See here for a write-up of philosophical success stories. In any case, what I really care about in a philosophical education is teaching people how to think (e.g. Socratic questioning, Bayesian updating, etc.), not getting people to become philosophers.
I also studied philosophy at university and overall came away with a mostly negative impression—I mean, what about all the people who don’t come away with a negative impression? They seem fairly abundant in EA.
I know an EA who is doing something similar to what you propose re: EAs teaching philosophy and spreading values, but for maths in an ultra-prestigious school. Philosophy doesn’t seem central to that idea—I still don’t get this comment to be honest. In my opinion the EA you speak of isn’t doing something similar to what I propose, and even if they were, why would the fact that they don’t see philosophy as central to what they’re doing mean that teaching philosophy would obviously fail?
Anyway, I won’t labour the point much more. 43 karma on my philosophy in schools post is a sign it isn’t going to be revolutionary in EA, and I’ve accepted that. So it’s not that I want you to rate it highly; it’s just that I’m sceptical of the process by which you rated it.
Let me try to translate my thoughts into something more legible, written in a more formal tone.
From my experience observing this in Spain, the philosophy curriculum taught in schools is a political compromise, in which religion plays an important role. Further, if utilitarianism is even taught (it wasn’t in my high school philosophy class), it can be taught badly by proponents of some other competing theory. I expect this to happen because most people (and thus, in expectation, most teachers) aren’t utilitarian.
Philosophy doesn’t have high epistemic standards, as evidenced by the fact that it can’t come to a conclusion about “who is right”. Some salient examples of philosophers who continue to be taught and given significant attention despite having few redeeming qualities are Plotinus, Anaximenes, and Hegel. Although it can be argued that they do have redeeming qualities (Anaximenes was an early proponent of proto-scientific thinking, and Hegel has some interesting insights about history and has shaped further thought), paying too much attention to these philosophers would be the equivalent of coming to deeply understand phlogiston or aether theory when studying physics. I understand that grading the healthiness of a field can be counterintuitive or weird, but to the extent that a field can be sick, I think that philosophy ranks near the bottom (in contrast, development economics of the sort where you run an RCT to find out if you’re right would be near the top).
Relatedly, when teaching philosophy, too much attention is usually given to the history of philosophy. I agree that an ideal philosophy course which promoted “critical thinking” would be beneficial, but I don’t think it would be feasible to implement, because: a) it would have to be the result of a tricky political compromise and be very careful around criticizing whomever is in power, and b) I don’t think there are enough good teachers who could pull it off.
Note that I’m not saying that philosophy can’t produce success stories, or great philosophers, like Parfit, David Pearce, Peter Singer, and arguably Bostrom (though note that all examples except Singer are pretty mathematical). I’m saying that most of the time, the average philosophy class is pretty mediocre.
On this note, I believe that my own (negative) experience with philosophy in schools is more representative than yours. Google brings up that you went to Cambridge and UCL, so I posit that you (and many other EAs who have gone to top universities) have an inflated sense of how good teachers are (because you have been exposed to smart and at least somewhat capable teachers, who had the pleasure of teaching top students). In contrast, I have been exposed to average teachers who sometimes tried to do the best they could, and who often didn’t really have great teaching skills.
tl;dr/Notes:
I have some models of the world which lead me to think that the idea was unpromising. Some of them clearly have a subjective component. Still, I’m using the same “muscles” as when forecasting, and I trust that those muscles will usually produce sensible conclusions.
It is possible that in this case I had too negative a view, though not in a way which is clearly wrong (to me). If I were forecasting the question “will a charity be incubated to work on philosophy in schools” (surprise reveal: this is similar to what I was doing all along), I imagine I’d give it a very low probability, but that my teammates would give it a slightly higher one. After discussion, we’d both probably move towards the center, and thus be more accurate.
Note that if we model my subjective promisingness as true promisingness + an error term, then if we pick the candidate idea at the very bottom of my list (in this case philosophy in schools, the idea under discussion and one of the four ideas to which I assigned a “very unpromising” rating), we’d expect it both to be unpromising (per your own view) and to have a large error term (I clearly don’t view philosophy very favorably).
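To make the selection effect concrete, here is a minimal simulation sketch (a toy model of my own; the normal distributions and the numbers of ideas and trials are arbitrary assumptions, not anything from this discussion):

```python
import numpy as np

# Toy model (illustrative assumption): subjective promisingness = true promisingness + error term.
rng = np.random.default_rng(seed=0)
n_trials, n_ideas = 10_000, 100  # hypothetical numbers, chosen for illustration

true_promisingness = rng.normal(0.0, 1.0, size=(n_trials, n_ideas))
error_term = rng.normal(0.0, 1.0, size=(n_trials, n_ideas))
subjective = true_promisingness + error_term

# In each simulated trial, pick the idea rated at the very bottom of the list.
bottom = subjective.argmin(axis=1)
rows = np.arange(n_trials)

print("avg true promisingness of bottom-ranked idea:", true_promisingness[rows, bottom].mean())
print("avg error term of bottom-ranked idea:", error_term[rows, bottom].mean())
```

Both averages come out well below zero: under these assumptions, the idea ranked last tends both to be genuinely unpromising and to carry an unusually large negative error term, which is the point above.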
Thanks for the clarifications in your previous two comments. It’s helpful to get more insight into your thought process.
Just a few comments:
I strongly doubt that a charity working on philosophy in schools would be helpful, and I don’t like that way of thinking about it. My suggestions were having prominent philosophers join (existing) advocacy efforts for philosophy in the curriculum, more people becoming philosophy teachers (if this might be their comparative advantage), trying to shift educational spending towards values-based education, and more research into values-based education (to name a few).
This is a whole separate conversation that I’m not sure we have to get into too deeply right now (I think I’d rather not), but I think there are severe issues with development economics as a field, to the extent that I would place it near the bottom of the pecking order within EA. Firstly, the generalisability of RCT results is highly questionable (for example, see Eva Vivalt’s research). More importantly and fundamentally, there is the problem of complex cluelessness (see here and here). It is partly considerations of cluelessness that make me interested in longtermist areas such as moral circle expansion and broadly promoting positive values, along with x-risk reduction.
I’m hoping we’re nearing a good enough understanding of each other’s views that we don’t need to keep discussing for much longer, but I’m happy to continue a bit if helpful.
Acknowledged.