I think an important caveat/qualifier for EA is that it's really for people who are either a) highly educated/talented, b) wealthy, or c) both. Is that fair to say?
I think some of the principles are more broadly applicable: it makes sense for someone giving any amount of money to think in this way, and the suggested orgs are helpful.
But certainly in terms of choosing a career, someone without high intelligence, education, or resources probably won't be in a position to survey the landscape and pick up a job in whatever field they feel they're most needed.
So what should a person of average education and intelligence do? I'm not sure EA thinking is that helpful here, beyond serving as a guide to giving. How should they choose between teacher and social worker? And what about jobs like bin lorry driver, electrician, or other essential work? Individually, it's probably hard to measure their impact in terms of QALYs; it might even be negligible. But without these workers society wouldn't function, so their cumulative impact is massive.
This must have occurred to EA thinkers, and some of what I've read touches on these points, so I'm probably stating the obvious. I don't see this as particularly undermining EA, which I think is broadly useful and relevant, but I do think it's an important caveat that could be made more explicit and visible.
I'd be curious to hear what others think: has this occurred to people? Am I missing something, or have I misunderstood? Are there more ways EA thinking is relevant to the average person? How would someone assess the impact of a teacher versus a social worker? Is that even possible?
Thanks
Iain
I think this is an excellent point. Something I'd like to write into a forum post someday, if I reach any actual conclusions, is that EA seems to have some difficulties that are inherent in the mathematical realities of the movement.
On the one hand, EA wants to grow and advocate more publicly. This makes sense and is a good thing for a movement. While EA definitely favours slower, more sustainable growth, my understanding is that when EA talks about quality, it means quality of fit more so than explicitly targeting talent, money, or resources. We want people aligned with the movement; it's okay if they aren't hugely influential in their fields.
On the other hand... EA is essentially a love letter to the Pareto principle. The guiding principle of EA is that some interventions are much, MUCH more effective than others. The unfortunate truth is that in many fields, the same applies not just to organisations but to people. One Sam Bankman-Fried has the impact of thousands or tens of thousands of "ordinary" people. And even then, an "ordinary" person here is someone who earns a median or above-median income in a wealthy country and donates 10% of it per year; even THIS is not a low bar to cross!
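A quick illustrative sketch of that skew (my own toy numbers, not anything from EA data): if individual impact followed a heavy-tailed Pareto distribution, a tiny fraction of people would account for a large share of the total. The shape parameter below is an assumption chosen only to make the tail heavy.

```python
import random

# Toy model: sample "impact" from a Pareto distribution and check what
# share of the total comes from the top 1% of people.
# alpha is an assumed shape parameter; smaller alpha = heavier tail.
random.seed(0)
alpha = 1.2
impacts = sorted(
    (random.paretovariate(alpha) for _ in range(100_000)), reverse=True
)

top_1pct_share = sum(impacts[:1_000]) / sum(impacts)
print(f"Top 1% account for {top_1pct_share:.0%} of total impact")
```

With these assumed parameters the top 1% typically end up with somewhere around half the total impact. This is the shape of the dynamic described above, not a measurement of it.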
Research productivity isn't quite as skewed, but there are absolutely some people who have far more impact than others. EA badly needs people, but it badly needs people who meet a certain talent bar. Once again, the Pareto principle comes into play. If an EA organisation wants to double its headcount, one might think, "Oh, it'll be easy to get a job there!" And yet the objective standard is often very high. It might be very easy to find a job if you meet that standard (see https://www.lesswrong.com/posts/YDF7XhMThhNfHfim9/ai-safety-needs-great-engineers for an example), but meeting that standard is HARD.
The final issue is this. Earning to give and donating 10% to charity is a totally reasonable path that can save dozens or hundreds of lives. Basically any effective altruist would say this is a good thing, that you should be proud of it, and that it is a worthy contribution to the cause. But if you engage with EA materials, you will hear about this a few times, and far more about other causes. Why? Because there's just not that much to say about it, really. Once the research infrastructure is in place (which charities are effective, where to donate), the only updates in this area tend to be "Here's a way to get other people to give more", "Let's celebrate X Day", and "Hey, GiveWell found a new effective charity!". If we wanted to make a "10%-er post" every week, I suspect we'd run out of content pretty quickly.
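To make the "dozens or hundreds of lives" figure concrete, here's a hedged back-of-envelope version of that claim. Every number is an assumption for illustration: a rough median rich-country salary and a rough GiveWell-style cost-per-life-saved figure, not official estimates from either.

```python
# All figures below are illustrative assumptions, not data from this thread.
median_salary = 40_000   # assumed annual salary in USD
donation_rate = 0.10     # the 10% pledge discussed above
cost_per_life = 5_000    # assumed rough cost to save a life via a top charity
career_years = 40        # assumed length of a working career

annual_donation = median_salary * donation_rate   # 4,000 per year
lives_per_year = annual_donation / cost_per_life  # 0.8 lives per year
lives_per_career = lives_per_year * career_years

print(f"Roughly {lives_per_career:.0f} lives over a {career_years}-year career")
```

Under these assumptions a career of 10% giving lands in the low dozens of lives, which is consistent with the "dozens or hundreds" range above; a higher salary or a cheaper intervention pushes it toward the upper end.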
Therefore, when most people engage with places like the EA Forum, they find people talking about things that probably aren't relevant to them: things that are less obvious, still being fleshed out, or that require specific, often highly niche and hard-to-obtain talents. This isn't because EA is elitist and deliberately shutting out the plebeians who don't want to devote their whole career to EA; it's because these are the areas where new content is needed, and where new content won't repeat itself.
I don’t yet have any suggestions about this.