Thanks for the summary. I hope to make it through the video. I like Thorn and fully expect her to be one of EA’s higher-quality outside critics.
I’m going to briefly jot down an answer to a (rhetorical?) question of hers. (epistemic status: far left for about 7 years)
It’s a great question, and as far as I know EAs outperform any overly prioritarian standpoint theorist at facing it. I think an old Arbital article (probably Eliezer’s?) did the best job of distilling the problem and walking you through the exercise of generalizing cosmopolitanism. But maybe Soares’ version is a little more to the point, and do also see my shortform about how I think negative longtermism dominates positive longtermism. At the same time, Critch has been trying to get the alignment community to pay attention to social choice theory. So I’m feeling a little “yeah, we thought of that”, and I think the lack of enthusiasm for something like Doing EA Better’s “indigenous ways of knowing” remark is a feature, not a bug.
It’s a problem that terrifies me; I fear its intractability. But at least EAs will share the terror with me and understand where I’m coming from. Leftists (or more precisely, prioritarian standpoint theorists) tend to be extremely confident about everything: that we’d all see how right they were if we just gave them power, and so on. I don’t see any reasonable way of expecting them to be more trustworthy than us about “whose vision of the future gets listened to?”
I think this question is more centred on elitism and on EA being mostly Western, Educated, Industrialized, Rich and Democratic (WEIRD) than on the culture war between left and right.
I’m sure Thorn does do this (I haven’t watched the video in full yet), but it seems more productive to criticise the “EA vision of the future” than to ask where it comes from (and there were EA-like ideas in China, India, Ancient Greece and the Islamic world long before Bentham).
MacAskill, Ord and others seem to me to have advocated a highly pluralistic future in which humanity is able to reflect on its values. Clearly, some people don’t like what they think is the “EA vision of the future” and want their own vision to prevail instead. The question seems to imply, though, that EAs are the only ones who are excluding others’ visions of the future from their thinking. Actually, everyone is doing that; otherwise they wouldn’t have a specific vision.
Just regarding this bit: “MacAskill, Ord and others seem to me to have advocated a highly pluralistic future in which humanity is able to reflect on its values.”
I have posited, multiple times, in different EA spaces, that EAs should learn more languages: to think better, to better understand perspectives further removed from the ones they were raised in, to be healthier (trilinguals are massively protected against dementia, Alzheimer’s, etc.), and so on.
And the response I have received has been broadly “eh” or, at best, “this is interesting but I don’t know if it’s worth EAs’ time”.
I have not seen any EA “world literature” circles aimed at expanding members’ horizons to perspectives as far from their own as possible. I have not seen any EA language-learning groups. I have not seen any effort put towards using the EA community (that is so important to build!) to enable individual EAs to become better at understanding radically different perspectives, etc.
So like… Iunno, I don’t buy the “it’s not a problem that we’re mostly wealthy white guys” argument. It seems to me like a lot of EAs don’t know what they don’t know, and don’t realize the axes along which they could be not-knowing things on top of that. They don’t behave the way people who are genuinely invested in a more pluralistic vision of the future would behave. And they don’t react positively to proposals that aim to improve that.
Thanks for your reply! Firstly, there will be many EAs (particularly from the non-Anglosphere West and from non-Western countries) who do understand multiple languages. I imagine there are also many EAs who have read world literature.
When we say that EAs “mostly” have a certain demographic background, we should remember that this still means there are hundreds of EAs who don’t fit that background at all, and they shouldn’t be forgotten. Relatedly, I (somewhat ironically) think critics of EA could do with studying world history, because it would show them that EA-like ideas haven’t just popped up in the West by any means.
I also don’t think one needs to understand radically different perspectives to want a world in which those perspectives can survive and flourish into the future. There are so many worldviews out there that you have to ultimately draw a line somewhere, and many of those perspectives will just be diametrically opposed to core EA principles, so it would be odd to promote them at the community level. Should people try to expand their intellectual horizons as a personal project? Possibly!
I think you might have misunderstood my comment.
I, as someone who is at least trying to be an EA, and who can speak two languages fluently and survive in three more, would “count” as an EA who is not from the “Anglosphere West” and who has read world literature. So yes, I know I exist.
My point is that EA, as a community, should encourage that kind of thing among its members. And it really doesn’t. Yes, people can do it as a personal project, but I think EA generally puts a lot of stock in people doing what are ultimately fairly difficult things (like self-directed study of AI) without providing a consistent community with accountability that would help them achieve those things. And I think that the WEIRD / Anglosphere West / etc. demographic bias of EA is part of the reason why this seems to be the case.
Yes, it is possible to want a perspective to survive in the future without being particularly well-versed in it. I theoretically would not want Hinduism to go extinct in 50 years and can want that without knowing a whole lot about Hinduism.
That said, in order to know what will allow certain worldviews and certain populations to thrive, you need to understand them at least a little. And that matters all the more if you’re going to try to maximize the good you do for people, because those people include a LOT of people who are not from the Anglosphere West. If I genuinely thought that Hinduism was under threat of extinction and wanted to do something about it, trying to do that without learning anything about Hinduism would be really short-sighted of me.
Given that most human beings for most of history have not been WEIRD in the Henrich sense, and that a lot of currently WEIRD people are becoming less so (the increase in antidemocratic sentiment, the affordability crisis, rising inequality), it is reasonable to believe that the future people EA is so concerned with will not be particularly WEIRD. And if you want to do what is best for that population, there should be more effort put into ensuring they will be WEIRD in some fashion [1] or into ensuring that EA interventions will help non-WEIRD people a meaningful amount, in ways that they will value. Which is more than just malaria nets.
And like… I haven’t seen that conversation.
I’ve seen allusions to it. But I haven’t really seen it. Nor have I seen EA engage particularly well with the “a bunch of philosophers and computer scientists got together and determined that the most important thing you can be is a philosopher or computer scientist” critique. Nor have I seen EA engage very well with the question of lowering the barriers of entry. (I also received a fairly unhelpful response when I posited that one, which boiled down to “well, you understand all of the EA projects that you’re not involved in and create lower barriers of entry for all of them”, which again comes back to the problem that EA creates a community and then doesn’t seem to actually use it to do the things communities are good for..?)
So I think it’s kind of a copout to just say “well, you can care in this theoretical way about perspectives you don’t understand”, given that part of EA’s plan, and its success condition, is to affect those people’s lives meaningfully.
Not to mention the question of “promoting” vs “understanding”.
Should EA promote, iunno, fascism, on a community level? Obviously not.
Should EA seek to understand fascism, and authoritarianism more broadly, as a concerning potential threat that has arisen multiple times and could arise yet again with greater technological and military force in the future? Fucking definitely.
The closest thing to this is the “liberal norms” political career path, as far as I’m aware, but I think both paths should be taken concurrently (that “or” is inclusive), and yet the second is largely neglected.
Great comment, thanks for clarifying your position. To be clear, I’m not particularly concerned about the survival of most particular worldviews as long as they decline organically. I just want to ensure that there’s a marketplace in which different worldviews can compete, rather than some kind of irreversible ‘lock-in’ scenario.
I have some issues with the entire ‘WEIRD’ concept and certainly wouldn’t want humanity to lock in ‘WEIRD’ values (which are typically speciesist). Within that marketplace, I do want to promote moral circle expansion and a broadly utilitarian outlook as a whole. I wouldn’t say this is as neglected as you claim it is — MacAskill discusses the value of the future (not just whether there is a future) extensively in his recent book, and there are EA organisations devoted to moral values spreading. It’s also partly why “philosopher” is recommended as a career in some cases, too.
If we want to spread those values, I agree with you that learning about competitor philosophies, ideologies, cultures and perspectives (I personally spend a fair bit of time on this) would be important, and that lowering language barriers could be helpful.
It could also be useful to explore whether there are interventions in cultures that we’re less familiar with that could improve people’s well-being even more than the typical global health interventions that are currently recommended. Perhaps there’s something about a particular culture which, if promoted more effectively, would really improve people’s lives. But maybe not: children dying of malaria is really, really bad, and that’s not a culture-specific phenomenon.
Needless to say, none of the above applies to the vast majority of moral patients on the planet, whether they’re factory-farmed land animals, fishes or shrimps. (Though if we want to improve, say, shrimp welfare in Asia, learning local languages could help us work and recruit more effectively as well as spread values.)
Wonderful! What specific actions could we take to make that easier for you (and others like you for whom this would be a worthwhile pursuit)?
Maybe a reading group that meets every week (or month). Or an asynchronous thread in which people provide reviews of philosophical articles or world literature. Or a group of Duolingo “friends” (or of some other language-learning app of people’s choice; I have a variety of thoughts on which languages should be prioritized, but starting with something would be good, and Spanish-language EAs seem to be growing in number and organization).
Bhutan’s notion of Gross National Happiness, Denmark’s “hygge”, whatever it is that makes the voices heard by certain people with schizophrenia in Africa say nice things to them, indigenous practices of farming and sustainable hunting, and maybe the practice of “insulting the meat” (just off the top of my head) would probably be good things to make more broadly understood and to build into certain institutions. Not to mention knowledge of cultural features that need to be avoided or handled in some way (for example, overly strict beauty standards, which harm people in a variety of different cultures).
And, very importantly, it could allow you to discover new things to value, new frameworks, new ways of approaching a problem. Every language you learn comes with new intuition pumps, new frames upon which you can hang your thoughts.
Even if you think the vast majority of moral patients are non-human and our priorities should reflect that, there are ways of thinking about animals and their welfare that have been cultivated for centuries by less WEIRD populations that could prove illuminating to you. I don’t know about them, because I have my own areas of ignorance. But that’s the kind of thing that EA could benefit from aggregating somewhere.
I would be very interested in working on a project like that, of aggregating non-EA perspectives in various packages for the convenience of individual EAs who may want to learn about perspectives that are underrepresented in the community and may offer interesting insights.
There’s a half-joking take that some people in longtermism bring up sometimes, roughly to the effect that the long-term future will be disproportionately African (i.e. based on predictions that most new humans will be born in Africa).
TBH I think that half-joking take should probably be engaged with more seriously (maybe, say, by pursuing more translations of EA works into Igbo or something), and I’m glad to hear it.
Sort of related to this, I started to design an easier dialect of English, because I think English is too hard and that (1) it would be easier to learn it in stages and (2) two people who have learned the easier dialect could speak it among themselves. This would be nice in reverse; I married a Filipino but found it difficult to learn Tagalog because of the lack of available Tagalog courses and the fact that my wife doesn’t understand and cannot explain the grammar of her language. I wish I could learn an intentionally designed pidgin/simplified version of the language before tackling the whole thing. Hearing the language spoken in the house for several years hasn’t helped.
It would be good for EAs to learn other languages, but it’s hard. I studied Spanish in my free time for four years, but I remain terrible at it: my vocabulary is still small and I usually can’t understand what Spanish speakers are saying. If I moved to Mexico I’m sure I would learn better, but I have various reasons not to.
Excellent reply. I cheaply agree-voted, but am not agree-voting in a costly manner, because that would require me to back up my cosmopolitan values by learning a language.
Skeptical that language learning is actually the most pivotal part of learning about wild (to you) perspectives, but it’s not obviously wrong.
Thank you! I don’t think it’s necessarily the most pivotal [1], but it is one part that has recently begun having its barrier of entry lowered [2]. Additionally, while reading broadly [3] could also help, the reason language-learning looks so good in my eyes is the stones-to-birds ratio.
If you read very broadly and travel a lot, you may gain more “learning about wild (to you) perspectives” benefits. But if you learn a language [4], you are:
1) benefitting your brain,
2) increasing the number of people in the world you can talk to, and whose work you can learn from,
3) absorbing new ideas you may not otherwise have been able to absorb, and
4) acquiring new intuitions [5].
You can separately do things that will provide all four of those benefits (and even some of the other benefits that language learning can provide) without learning another language. But I am very bad at executive skills and at juggling 4+ different habits, so I generally don’t find the idea of, say…
doing 2 crosswords, 2 4x4x4 sudoku a day, and other brain teasers +
taking dance classes or learning a new instrument +
taking communications classes and reading books about public speaking and active listening +
engaging in comparative-translation reading +
ingratiating myself to radically different communities in order to cultivate those modes of thought [6]
...to be less onerous than learning a new language. Especially since language-learning can help with, and be done concurrently with, these alternatives [7].
Language learning is also something that can help with community bonding, which would probably be helpful to the substantial-seeming portion of EAs who are kind of lonely and depressed. It can also help you remember what it is like to suck at something, which I think a lot of people in Rationalist spaces would benefit from more broadly, since so many of them were gifted kids who now have anxiety, and becoming comfortable with failure and iteration is also good for you and your ability to do things in general.
[1] Travelling broadly will probably provide better results for most people, but it also costs a lot of money, even more if you need to hire a translator.
[2] Especially with Duolingo offering endangered languages now.
[3] Say, reading a national award-winning book from every nation in the world.
[4] Or, preferably, if you learn two, given that the greatest benefits are found in trilinguals+.
[5] I find that, personally, I am more socially conservative in Spanish and more progressive in English, which has allowed me to test ideas against my own brain in a way that most monolinguals I talk to seem to find somewhat alien and much more effortful. Conversely, in French I am not very capable, and I find that quite useful because it forces me to simplify my ideas on the grounds that I am literally unable to express the complex version.
[6] + [whatever else I haven’t thought of yet that would help obtain these benefits]
[7] Music terminology is often in French or Italian; learning languages will broaden your vocabulary for crossword puzzles; knowing another language is a gateway to communities that were previously closed to you; and you can engage in reading different translations of something more easily if you can also just read it in the original language.
Just regarding your last sentence: I disagree that it has any bearing whatsoever whether everyone else is excluding others’ visions of the future or not.
No matter whether everyone else is great or terrible, I want EA to be as good as it possibly can be, and if it fails on some metric it should be criticised and changed in that regard, no matter whether everyone else fails on the same metric too, or not, or whatever.
Thanks for your reply! I’m not saying that EA should be able to exclude others’ visions because others are doing so. I’m claiming that it’s impossible not to exclude others’ visions of the future. Let’s take the pluralistic vision of the future that appeals to MacAskill and Ord. There will be many people in the world (fascists, Islamists, evangelical Christians) who disagree with such a vision. MacAskill and Ord are thus excluding those visions of the future. Is this a bad thing? I will let the reader decide.
What are the beliefs of prioritarian standpoint theorists?
Prioritarianism is a flavor of utilitarianism that gives extra weight to benefits for the worse off: it tries to increase impact by starting with the oppressed or unprivileged.
Standpoint theory, or standpoint epistemology, is about the advantages and disadvantages people have in gaining knowledge based on their demographic membership.
Leftist culture is deeply exposed to both of these views, occasionally to the point that they become invisible, commonsensical assumptions.
My internal GPT completion / simulation of someone like Thorn assumed that her rhetorical question was gesturing toward “these EA folks seem to be underrating at least one of prioritarianism or standpoint epistemology”.