I think this question is more centred on elitism and EA being mostly Western, educated, industrialized, rich and democratic (WEIRD) than on the culture war between left and right.
I'm sure Thorn does do this (I haven't watched the video in full yet), but it seems more productive to criticise the "EA vision of the future" than to ask where it comes from (and there were EA-like ideas in China, India, Ancient Greece and the Islamic world long before Bentham).
MacAskill, Ord and others seem to me to have advocated a highly pluralistic future in which humanity is able to reflect on its values. Clearly, some people don't like what they think is the "EA vision of the future" and want their vision to prevail instead. The question seems to imply, though, that EAs are the only ones who are excluding others' visions of the future from their thinking. Actually, everyone is doing that, otherwise they wouldn't have a specific vision.
Just regarding this bit: "MacAskill, Ord and others seem to me to have advocated a highly pluralistic future in which humanity is able to reflect on its values."
I have posited, multiple times, in different EA spaces, that EAs should learn more languages in order to think better, to better understand perspectives further removed from those they were raised in, and to be healthier (trilinguals are massively protected against dementia, Alzheimer's, etc.).
And the response I have received has been broadly "eh" or at best "this is interesting but I don't know if it's worth EAs' time".
I have not seen any EA "world literature" circles based around trying to expand their horizons to perspectives as far from their own as possible. I have not seen any EA language learning groups. I have not seen any effort put towards using the EA community (that is so important to build!) in order to enable individual EAs to become better at understanding radically different perspectives, etc.
So like… I dunno, I don't buy the "it's not a problem that we're mostly wealthy white guys" argument. It seems to me like a lot of EAs don't know what they don't know, and don't even realize the axes along which they could be missing knowledge. They don't behave the way people who are genuinely invested in a more pluralistic vision of the future would behave. And they don't react positively to proposals that aim to improve that.
Thanks for your reply! Firstly, there will be many EAs (particularly from the non-Anglosphere West and non-Western countries) who do understand multiple languages. I imagine there are also many EAs who have read world literature.
When we say that EAs "mostly" have a certain demographic background, we should remember that this still means there are hundreds of EAs who don't fit that background at all, and they shouldn't be forgotten. Relatedly, I (somewhat ironically) think critics of EA could do with studying world history, because it would show them that EA-like ideas haven't just popped up in the West by any means.
I also don't think one needs to understand radically different perspectives to want a world in which those perspectives can survive and flourish into the future. There are so many worldviews out there that you have to ultimately draw a line somewhere, and many of those perspectives will just be diametrically opposed to core EA principles, so it would be odd to promote them at the community level. Should people try to expand their intellectual horizons as a personal project? Possibly!
I think you might have misunderstood my comment.
I, as someone who is at least trying to be an EA, and who can speak two languages fluently and survive in 3 more, would "count" as an EA who is not from the "Anglosphere West", and who has read world literature. So yes, I know I exist.
My point is that EA, as a community, should encourage that kind of thing among its members. And it really doesn't. Yes, people can do it as a personal project, but I think EA generally puts a lot of stock in people doing what are ultimately fairly difficult things (like self-directed study of AI) without providing a consistent community with accountability that would help them achieve those things. And I think that the WEIRD / Anglosphere West / etc. demographic bias of EA is part of the reason why this seems to be the case.
Yes, it is possible to want a perspective to survive in the future without being particularly well-versed in it. I theoretically would not want Hinduism to go extinct in 50 years and can want that without knowing a whole lot about Hinduism.
That said, in order to know what will allow certain worldviews, and certain populations, to thrive, you need to understand them at least a little. And if you're going to try to maximize the good you do for people, that includes a LOT of people who are not from the Anglosphere West. If I genuinely thought that Hinduism was under threat of extinction and wanted to do something about it, trying to do that without learning anything about Hinduism would be really short-sighted of me.
Given that most human beings for most of history have not been WEIRD in the Henrich sense, and that a lot of currently WEIRD people are becoming less so (increases in antidemocratic sentiment, the affordability crisis, rising inequality), it is reasonable to believe that the future people EA is so concerned with will not be particularly WEIRD. And if you want to do what is best for that population, there should be more effort put into ensuring they will be WEIRD in some fashion [1], or into ensuring that EA interventions will help non-WEIRD people a meaningful amount, in ways that they will value. Which is more than just malaria nets.
And like… I haven't seen that conversation.
I've seen allusions to it. But I haven't really seen it. Nor have I seen EA engage particularly well with the "a bunch of philosophers and computer scientists got together and determined that the most important thing you can be is a philosopher or computer scientist" critique. Nor have I seen EA engage very well with the question of lowering the barriers to entry (I also received a fairly unhelpful response when I raised that one, which boiled down to "well, you go understand all of the EA projects that you're not involved in and create lower barriers to entry for all of them", which again comes back to the problem that EA creates a community and then doesn't seem to actually use it to do the things communities are good for).
So I think it's kind of a copout to just say "well, you can care in this theoretical way about perspectives you don't understand", given that part of the plan of EA, and its success condition, is to affect those people's lives meaningfully.
Not to mention the question of "promoting" vs. "understanding".
Should EA promote, I dunno, fascism, on a community level? Obviously not.
Should EA seek to understand fascism, and authoritarianism more broadly, as a concerning potential threat that has arisen multiple times and could arise yet again with greater technological and military force in the future? Fucking definitely.
The closest thing to this is the "liberal norms" political career path, as far as I'm aware, but I think both paths should be taken concurrently (that "or" is inclusive), yet the second is largely neglected.
Great comment, thanks for clarifying your position. To be clear, I'm not particularly concerned about the survival of most particular worldviews as long as they decline organically. I just want to ensure that there's a marketplace in which different worldviews can compete, rather than some kind of irreversible "lock-in" scenario.
I have some issues with the entire "WEIRD" concept and certainly wouldn't want humanity to lock in "WEIRD" values (which are typically speciesist). Within that marketplace, I do want to promote moral circle expansion and a broadly utilitarian outlook as a whole. I wouldn't say this is as neglected as you claim it is: MacAskill discusses the value of the future (not just whether there is a future) extensively in his recent book, and there are EA organisations devoted to spreading moral values. It's also partly why "philosopher" is recommended as a career in some cases.
If we want to spread those values, I agree with you that learning about competitor philosophies, ideologies, cultures and perspectives (I personally spend a fair bit of time on this) would be important, and that lowering language barriers could be helpful.
It could also be useful to explore whether there are interventions in cultures that we're less familiar with that could improve people's well-being even more than the typical global health interventions that are currently recommended. Perhaps there's something about a particular culture which, if promoted more effectively, would really improve people's lives. But maybe not: children dying of malaria is really, really bad, and that's not a culture-specific phenomenon.
Needless to say, none of the above applies to the vast majority of moral patients on the planet, whether they're factory-farmed land animals, fishes or shrimps. (Though if we want to improve, say, shrimp welfare in Asia, learning local languages could help us work and recruit more effectively as well as spread values.)
Wonderful! What specific actions could we take to make that easier for you (and others like you for whom this would be a worthwhile pursuit)?
Maybe a reading group that meets every week (or month). Or an asynchronous thread in which people post reviews of philosophical articles or world literature. Or a group of Duolingo "friends" (or some other language-learning app of people's choice; I have a variety of thoughts on which languages should be prioritized, but starting with something would be good, and Spanish-language EAs seem to be growing in number and organization).
Bhutan's notion of Gross National Happiness, Denmark's "hygge", whatever it is that leads some people with schizophrenia in parts of Africa to hear voices that say nice things to them, indigenous practices of farming and sustainable hunting, and maybe the practice of "insulting the meat", just off the top of my head, would probably be good things to make more broadly understood and to build into certain institutions. Not to mention knowledge of cultural features that need to be avoided or mitigated (for example, overly strict beauty standards, which harm people in a variety of different cultures).
And, very importantly, it could allow you to discover new things to value, new frameworks, new ways of approaching a problem. Every language you learn comes with new intuition pumps, new frames upon which you can hang your thoughts.
Even if you think the vast majority of moral patients are non-human and our priorities should reflect that, there are ways of thinking about animals and their welfare that have been cultivated for centuries by less WEIRD populations that could prove illuminating to you. I don't know about them, because I have my own areas of ignorance. But that's the kind of thing that EA could benefit from aggregating somewhere.
I would be very interested in working on a project like that, of aggregating non-EA perspectives in various packages for the convenience of individual EAs who may want to learn about perspectives that are underrepresented in the community and may offer interesting insights.
There's a half-joking take that some people in longtermism bring up sometimes that roughly looks like
(i.e. predictions that most new humans will be born in Africa)
TBH I think that half-joking take should probably be engaged with more seriously (by, say, pursuing more translations of EA works into Igbo or something), and I'm glad to hear it.
Sort of related to this, I started to design an easier dialect of English, because I think English is too hard and that (1) it would be easier to learn it in stages and (2) two people who have learned the easier dialect could speak it among themselves. This would be nice in reverse; I married a Filipino but found it difficult to learn Tagalog because of the lack of available Tagalog courses and the fact that my wife doesn't understand, and cannot explain, the grammar of her language. I wish I could learn an intentionally designed pidgin/simplified version of the language before tackling the whole thing. Hearing the language spoken in the house for several years hasn't helped.
It would be good for EAs to learn other languages, but it's hard. I studied Spanish in my free time for four years, but I remained terrible at it; my vocabulary is still small and I usually can't understand what Spanish speakers are saying. If I moved to Mexico I'm sure I would learn better. But I have various reasons not to.
Excellent reply. I cheaply agree-voted, but am not agree-voting in a costly manner, because that would require me to back up my cosmopolitan values by learning a language.
I'm skeptical that language learning is actually the most pivotal part of learning about wild (to you) perspectives, but it's not obviously wrong.
Thank you! I don't think it's necessarily the most pivotal [1], but it is one part whose barrier to entry has recently begun to be lowered [2]. Additionally, while reading broadly [3] could also help, the reason language learning looks so good in my eyes is the stones-to-birds ratio.
If you read very broadly and travel a lot, you may gain more of the "learning about wild (to you) perspectives" benefits. But if you learn a language [4], you are:
1) benefitting your brain,
2) increasing the number of people in the world you can talk to, and whose work you can learn from,
3) absorbing new ideas you may not otherwise have been able to absorb, and
4) acquiring new intuitions [5].
You can separately do things that will fulfill all four of those (and even obtain some of the other benefits that language learning provides) without learning another language. But I am very bad at executive skills and at juggling 4+ different habits, so I generally don't find the idea of, say…
doing 2 crosswords, 2 4x4x4 sudoku a day, and other brain teasers +
taking dance classes or learning a new instrument +
taking communications classes and reading books about public speaking and active listening +
engaging in comparative-translation reading +
ingratiating myself to radically different communities in order to cultivate those modes of thought [6]
...to be less onerous than learning a new language. Especially since language learning can help with, and be done concurrently with, these alternatives [7].
Language learning is also something that can help with community bonding, which would probably be helpful to the substantial-seeming portion of EAs who are kind of lonely and depressed. It can also help you remember what it is like to suck at something, which I think a lot of people in Rationalist spaces would benefit from, since so many of them were gifted kids who now have anxiety. Becoming comfortable with failure and iteration is also good for you and your ability to do things in general.
[1] Travelling broadly will probably provide better results for most people, but it also costs a lot of money, even more so if you need to hire a translator.
[2] Especially with Duolingo offering endangered languages now.
[3] Say, reading a national-award-winning book from every nation in the world.
[4] Or, preferably, if you learn two, given that the greatest benefits are found in trilinguals and beyond.
[5] I find that, personally, I am more socially conservative in Spanish and more progressive in English, which has allowed me to test ideas against my own brain in a way that most monolinguals I talk to seem to find somewhat alien and much more effortful. Conversely, in French I am not very capable, and I find that quite useful, because it forces me to simplify my ideas on the grounds that I am literally unable to express the complex version.
[6] + [whatever else I haven't thought of yet that would help obtain these benefits]
[7] Music terminology is often in French or Italian, learning languages will broaden your vocabulary for crossword puzzles, knowing another language is a gateway to communities that were previously closed to you, and you can engage with different translations of something more easily if you can also just read it in the original language.
Just regarding your last sentence: I disagree that it has any bearing whatsoever whether everyone else is excluding others' visions of the future or not.
Whether everyone else is great or terrible, I want EA to be as good as it possibly can be, and if it fails on some metric it should be criticised and changed in that regard, regardless of whether everyone else fails on the same metric too.
Thanks for your reply! I'm not saying that EA should be able to exclude others' visions because others are doing so. I'm claiming that it's impossible not to exclude others' visions of the future. Let's take the pluralistic vision of the future that appeals to MacAskill and Ord. There will be many people in the world (fascists, Islamists, evangelical Christians) who disagree with such a vision. MacAskill and Ord are thus excluding those visions of the future. Is this a bad thing? I will let the reader decide.