I think most, though no doubt not all, people you'd think of as EA leaders think AI is the most important cause area to work in, and have thought that for a long time. AI is also more fun to argue about on the internet than global poverty or animal welfare, which drives discussion of it.
But having said all that, there is still plenty of EA funding for global health and development work, including from Open Philanthropy, which in fact holds a huge chunk of the EA money in the world. People do and fund animal welfare work too, including Open Phil. If you want to, you can just engage with EA work on global development and/or animal welfare, and ignore the AI stuff altogether. And even if you decide that the AI stuff is so prominent and, in your view, so wrong that you don't want to call yourself an EA, you don't have to give up on the idea of effective charity. If you want to, you can try to do the most good you can on global poverty or animal welfare while not identifying as an EA at all. Lots, likely most, of the good work in these areas will be done by organisations that don't see themselves as EA anyway. You can donate to or work for these orgs without engaging with the whole EA scene at all.
Hi David, ok, this is the most enlightening and decision-orienting answer I could get. Thanks!
Indeed, I came to the Forum through a workshop and had a completely inverted expectation: that the leaders of EA were conscious of the AI fad and were using that galvanising attention to redirect people to more pressing matters. But your comment, especially the bit "most, though no doubt not all, people you'd think of as EA leaders think AI is the most important cause area to work in", really concerns me that the movement's direction is somehow deceived and will come crashing down a few years down the road. Still, I hope what is structurally achieved by then might be "effective" enough to survive the encounter with reality.
Disclaimer: I come from theoretical and computational cosmology, so I have some insight into how over-bloated the topic is compared to its realistic prospects, not unlike holography in the 60s or everyday use of nuclear power in the 50s. Humans, how lovely we are. Second disclaimer: I now work on anthropology and cultural loss.
So with that perspective, I really need to weigh the advantages against the inconveniences. My end game is to preserve and expand cultural diversity, which is a rather unaddressed topic, so by the law of logarithmic returns that this movement professes, I have some hope of outsized returns from focusing on cultural diversity as a theme. Conversely, an overly AI-focused approach seems logarithmically inefficient, especially in a fad-dominated environment; I can cite plenty of research, plus personal experience, on the latter if anybody is interested.
My guess is that you may find it hard to find EA people in global development who are particularly interested in preserving/expanding cultural diversity. Generally, the people who work in that area want to prioritize health, income, and economic growth.