I think you might have misunderstood my comment.
I, as someone who is at least trying to be an EA, and who can speak two languages fluently and survive in 3 more, would “count” as an EA who is not from “Anglosphere West”, and who has read world literature. So yes, I know I exist.
My point is that EA, as a community, should encourage that kind of thing among its members, and it really doesn’t. Yes, people can do it as a personal project, but EA generally puts a lot of stock in people doing what are ultimately fairly difficult things (like self-directed study of AI) without providing a consistent community with accountability that would help them achieve those things. And I think the WEIRD / Anglosphere West / etc. demographic bias of EA is part of the reason why.
Yes, it is possible to want a perspective to survive in the future without being particularly well-versed in it. I theoretically would not want Hinduism to go extinct in 50 years and can want that without knowing a whole lot about Hinduism.
That said, in order to know what will allow certain worldviews, and certain populations, to thrive, you need to understand them at least a little. And if you’re going to try to maximize the good you do for people, that includes a LOT of people who are not Anglosphere West. If I genuinely thought that Hinduism was under threat of extinction and wanted to do something about it, trying to do that without learning anything about Hinduism would be really short-sighted of me.
Given that most human beings for most of history have not been WEIRD in the Henrich sense, and that a lot of currently WEIRD people are becoming less so (rising antidemocratic sentiment, the affordability crisis, growing inequality), it is reasonable to believe that the future people EA is so concerned with will not be particularly WEIRD. And if you want to do what is best for that population, more effort should go either into ensuring they will be WEIRD in some fashion[1] or into ensuring that EA interventions will help non-WEIRD people a meaningful amount, in ways that they will value. Which is more than just malaria nets.
And like… I haven’t seen that conversation.
I’ve seen allusions to it, but I haven’t really seen it. Nor have I seen EA engage particularly well with the “a bunch of philosophers and computer scientists got together and determined that the most important thing you can be is a philosopher or computer scientist” critique. Nor have I seen EA engage very well with the question of lowering barriers to entry. When I raised that question myself, the response I got was fairly unhelpful; it boiled down to “well, you go understand all of the EA projects you’re not involved in and create lower barriers to entry for all of them”, which comes back to the same problem: EA creates a community and then doesn’t seem to actually use it to do the things communities are good for.
So I think it’s kind of a copout to just say “well, you can care in this theoretical way about perspectives you don’t understand”, given that part of EA’s plan, and its success condition, is to affect those people’s lives meaningfully.
Not to mention the question of “promoting” vs “understanding”.
Should EA promote, I dunno, fascism, on a community level? Obviously not.
Should EA seek to understand fascism, and authoritarianism more broadly, as a concerning potential threat that has arisen multiple times and could arise yet again with greater technological and military force in the future? Fucking definitely.
The closest thing to this is the “liberal norms” political career path, as far as I’m aware. But I think both paths should be taken concurrently (that “or” is inclusive), and the second is largely neglected.
Great comment, thanks for clarifying your position. To be clear, I’m not particularly concerned about the survival of most particular worldviews as long as they decline organically. I just want to ensure that there’s a marketplace in which different worldviews can compete, rather than some kind of irreversible ‘lock-in’ scenario.
I have some issues with the entire ‘WEIRD’ concept and certainly wouldn’t want humanity to lock in ‘WEIRD’ values (which are typically speciesist). Within that marketplace, I do want to promote moral circle expansion and a broadly utilitarian outlook. I wouldn’t say this is as neglected as you claim: MacAskill discusses the value of the future (not just whether there is a future) extensively in his recent book, and there are EA organisations devoted to spreading moral values. It’s also partly why “philosopher” is recommended as a career in some cases.
If we want to spread those values, I agree with you that learning about competitor philosophies, ideologies, cultures and perspectives (I personally spend a fair bit of time on this) would be important, and that lowering language barriers could be helpful.
It could also be useful to explore whether there are interventions in cultures that we’re less familiar with that could improve people’s well-being even more than the typical global health interventions that are currently recommended. Perhaps there’s something about a particular culture which, if promoted more effectively, would really improve people’s lives. But maybe not: children dying of malaria is really, really bad, and that’s not a culture-specific phenomenon.
Needless to say, none of the above applies to the vast majority of moral patients on the planet, whether they’re factory-farmed land animals, fishes or shrimps. (Though if we want to improve, say, shrimp welfare in Asia, learning local languages could help us work and recruit more effectively as well as spread values.)
Wonderful! What specific actions could we take to make that easier for you (and others like you for whom this would be a worthwhile pursuit)?
Maybe a reading group that meets every week (or month). Or an asynchronous thread in which people post reviews of philosophical articles or world literature. Or a group of Duolingo “friends” (or friends on some other language-learning app of people’s choice). I have a variety of thoughts on which languages should be prioritized, but starting with something would be good, and Spanish-language EAs seem to be growing in number and organization.
Bhutan’s notion of Gross National Happiness, Denmark’s “hygge”, whatever it is that leads some people with schizophrenia in certain African cultures to hear voices that say nice things to them, indigenous practices of farming and sustainable hunting, and maybe the practice of “insulting the meat”, just off the top of my head, would probably be good things to make more broadly understood and build into certain institutions. Not to mention knowledge of cultural features that need to be avoided or mitigated (for example, overly strict beauty standards, which harm people in a variety of different cultures).
And, very importantly, it could allow you to discover new things to value, new frameworks, new ways of approaching a problem. Every language you learn comes with new intuition pumps, new frames upon which you can hang your thoughts.
Even if you think the vast majority of moral patients are non-human and our priorities should reflect that, there are ways of thinking about animals and their welfare that have been cultivated for centuries by less WEIRD populations that could prove illuminating to you. I don’t know about them, because I have my own areas of ignorance. But that’s the kind of thing that EA could benefit from aggregating somewhere.
I would be very interested in working on a project like that: aggregating non-EA perspectives into various packages for the convenience of individual EAs who want to learn about perspectives that are underrepresented in the community and may offer interesting insights.
There’s a half-joking take that some people in longtermism bring up sometimes, based on predictions that most new humans will be born in Africa.
TBH I think that half-joking take should probably be engaged with more seriously (maybe, say, pursuing more translations of EA works into Igbo or something), and I’m glad to hear it.
Somewhat related to this, I started to design an easier dialect of English, because I think English is too hard, and that (1) it would be easier to learn in stages and (2) two people who have learned the easier dialect could speak it among themselves. This would be nice in reverse, too: I married a Filipino but found it difficult to learn Tagalog, because of the lack of available Tagalog courses and the fact that my wife doesn’t understand, and cannot explain, the grammar of her own language. I wish I could learn an intentionally designed pidgin/simplified version of the language before tackling the whole thing. Hearing the language spoken in the house for several years hasn’t helped.
It would be good for EAs to learn other languages, but it’s hard. I studied Spanish in my free time for four years, but I remained terrible at it: my vocabulary is still small, and I usually can’t understand what Spanish speakers are saying. If I moved to Mexico, I’m sure I would learn better, but I have various reasons not to.