I’m not sure why your instinct is to go by your own experience or to ask some other people. This seems fairly ‘un-EA’ to me, and I hope whatever you’re doing regarding the scoring doesn’t take this approach.
From where I’m sitting, asking other people is fairly in line with what many EAs do, especially on longtermist things. We don’t really have RCTs around AI safety, governance, or bio risks, so we instead do our best with reasoned judgements.
I’m quite skeptical of taking much from scientific studies on many kinds of questions, and I know this is true for many other members of the community. Scientific studies are often very narrow in scope, often don’t cover the thing we’re really interested in, and frequently fail to replicate.
My guess is that if we showed your previous blog post, as is, to several senior/respected EAs at OpenPhil, FHI, and similar organizations, they’d be similarly skeptical to Nuño here.
All that said, I think there are more easily defensible proposals adjacent to yours (or arguably, modifications of yours). It seems obviously useful to make sure that effective altruists have good epistemics and that initiatives are in place to help teach them. This includes work in philosophy; many EA researchers spend quite a while learning it.
I think people are already bought into the idea of, essentially, teaching important people how to think better. If larger versions of this could be fleshed out, they seem like cause candidates that there could be real buy-in for.
For example, in-person schools seem expensive, but online education is much cheaper to scale. Perhaps we could help subsidize or pay a few podcasters, YouTubers, or similar to teach people the parts of philosophy that are most useful for reasoning. We could also target the audiences that matter most and carefully select the material that seems most valuable. Ideally we could find ways to get relatively strong feedback loops: for instance, creating tests that indicate one’s epistemic abilities and measuring educational interventions against those tests (a rough sketch of what that measurement could look like is below).
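To make the feedback-loop idea concrete, here’s a minimal sketch in Python. Everything in it is assumed for illustration: the epistemics test, the score scale, and the effect sizes are all made up. It estimates an intervention’s effect as a difference-in-differences between a randomly assigned treatment group and a control group:

```python
import random
import statistics

random.seed(0)

def simulate_scores(n, pre_mean, gain):
    """Hypothetical pre/post scores (0-100) on an epistemics test.
    `gain` is the average score change between the two sittings."""
    pre = [random.gauss(pre_mean, 10) for _ in range(n)]
    post = [p + random.gauss(gain, 5) for p in pre]
    return pre, post

# Assumed numbers: the course adds ~8 points; mere retesting adds ~2.
treat_pre, treat_post = simulate_scores(100, pre_mean=60, gain=8)
ctrl_pre, ctrl_post = simulate_scores(100, pre_mean=60, gain=2)

# Difference-in-differences: the treatment group's change minus the
# control group's change, which nets out practice effects shared by both.
did = (statistics.mean(treat_post) - statistics.mean(treat_pre)) \
    - (statistics.mean(ctrl_post) - statistics.mean(ctrl_pre))
print(f"Estimated effect of the intervention: {did:.1f} points")
```

The control group is what makes the loop informative rather than just flattering: without it, ordinary retest gains would look like the intervention working.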
Hey, fair enough. I think overall you and Nuño are right. I did write in my original post that it was all pretty speculative anyway. I apologize if I was too defensive.
I think those proposals sound good, though they aim to achieve something different from what I was going for. I was mostly taking a “broadly promote positive values” angle at the societal level, which I think is potentially important from a longtermist point of view, as opposed to educating smaller pockets of people, although I agree the latter approach could be high value.