EA Worries and Criticism
Disclaimer: I want to start off by saying I think EA gets a lot of things right. I’ve been involved with EA for the better part of 2022, and I’ve watched from the sidelines for a while longer than that. I believe EA does good just by cultivating the high concentration of good-faith, high-openness people who make it up and by creating opportunities for these people to meet and exchange ideas, and I’m beyond happy to be involved. However, I don’t believe the EA movement is perfect, and I have a few ideas for how it could be improved. I’ll share them below.
I. Culture
1.) EA has a status problem, but probably not the one you think.
I’ve been pondering this for a while, and I think it relates a lot to social group theory and how we can better design systems of social incentives. As much as I recoil at the comparison of EA to a religion, I think it might be better if EA were more like one. Religions do a great job of encouraging participation and contribution towards common goals, even excluding the methods that stem from the fear of God. If we could selectively institute the good methods of encouraging participation in EA, we could be a lot more effective. One example would be to commoditize participation and status by foregrounding something like the EA Forum’s karma system. Having a strong way to signal socially is key to motivating people to do good. Isn’t it sort of weird that we give status to large-scale philanthropists but don’t bother tracking the philanthropic actions of those around us? If you donated 10% of your income to GiveWell, that’s great, and I think we should do more to reward and spotlight that behavior. Some sort of verification flair, like Twitter’s blue checkmark, would be a great way to do this. Maybe two separate rankings, one for percentage of income and one for absolute donation amount, would be the most effective signal, since it levels the playing field between high- and low-affluence altruists.
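As a rough sketch of what that dual ranking could look like (all names and figures below are hypothetical, and any real version would need some verification mechanism behind the donation data):

```python
# Hypothetical sketch of a dual donation leaderboard: one ranking by absolute
# amount donated, one by percentage of income, so that low-income donors can
# place highly too. All names and figures below are made up.

donors = [
    {"name": "A", "income": 40_000, "donated": 4_000},    # 10% of income
    {"name": "B", "income": 250_000, "donated": 15_000},  # 6% of income
    {"name": "C", "income": 90_000, "donated": 13_500},   # 15% of income
]

by_amount = sorted(donors, key=lambda d: d["donated"], reverse=True)
by_percent = sorted(donors, key=lambda d: d["donated"] / d["income"], reverse=True)

print("By absolute amount:", [d["name"] for d in by_amount])   # ['B', 'C', 'A']
print("By % of income:    ", [d["name"] for d in by_percent])  # ['C', 'A', 'B']
```

Even in this tiny example the two rankings crown different donors, which is exactly the levelling effect I’m after.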
I think EA could do a lot to elevate the people we consider most effective in our immediate social vicinity to high-status positions, rather than letting the majority of the social landscape be taken up by outside funders. This is not a dig against billionaires getting involved in EA (I think that’s great), just an appeal to refocus our attributions of status to be more level.
2.) Not enough criticism/internal auditing.
I see the irony in the juxtaposition of this criticism with the previous one, but I still think it’s valid. EAs are probably a bit too nice. The very need for a criticism contest and institutionalized “Red Teaming” arises from a lack of “dunking” culture. I’m not sure why, but there seems to be an unwillingness to call someone out if you think they are doing harm. For example, I know of more than a few AI safety researchers with concerningly short prospective timelines who seem to get along just fine with peers working on improving AI capabilities. This is absolutely puzzling to me. Where are the debates? Why do the same people who sound the alarm about AI risk not spend more of their time trying to convert their capability-pilled colleagues? Dunking culture allows social regulation without cancellation. I’d wager that it’s much friendlier (and more effective at changing minds) to josh someone via a meme than it is to say nothing and be complicit, or to be overly aggressive.
3.) The x-risk net has been cast too wide, and it’s sucking up brainpower.
Basically, too many people seem to think they are the best fit for working on existential risk, and I’m worried this has led to a brain-drain effect on other fields where real and material gains could have been made. This is especially true of the more nebulous areas like AI, where progress towards AI safety is basically unquantifiable (perhaps with the exception of interpretability efforts). X-risk is fun to talk about, and might be a great hook to get people into the movement, but I fear there has been an over-allocation towards this type of work recently, and that the opportunity cost of, say, saving lives in Africa is probably too high to justify the degree to which EA has become saturated with x-risk-focused people. This is especially true given that most x-risk domains are almost comically illegible to anyone other than domain experts, so the degree to which they dominate the general discourse implies a gross misallocation of attention. I think the solution here is to elevate and celebrate the EAs doing tangible work and making impacts that can be and are being measured, in the hope of inspiring others towards practical action rather than theoretical work.
II. Philosophy
I don’t exactly have solutions to present in this section, so I’ll try to keep it short and just explain my worries.
1.) I worry quite a bit that longtermism may fall prey to moral relativism. I’ll try to explain why below.
Taking a retrospective view of history, the moral landscape has been bumpy, to say the least. From the god-given mandate of absolutist rulers to the justification of colonialism in the name of civilizing the world, many misaligned moral frameworks have existed in the past, and judging by the rate of change, many more frameworks will exist in the future by which our actions today may be considered harmful. This means that even when we work to have a virtuous impact on the future, that impact may be virtuous only in our eyes, and not in those of future peoples. Morality as we know it is a flash in the pan compared to the myriad systems that came before it, and there is no telling how long our notions of good and bad will apply. Since our actions have an essentially equal chance of being judged good or bad by a future that is sufficiently far from our own point in time, it seems that longtermism should incorporate more of a bias towards the present than the current consensus view allows.
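Here’s a minimal sketch of that intuition (the numbers and the “moral half-life” parameter are pure assumptions for illustration, not empirical claims): if the probability that a future moral consensus still endorses one of today’s actions decays from 1 towards 0.5 (a coin flip) over time, then the expected moral value of that action, as the future will judge it, shrinks towards zero and behaves exactly like a discount factor on far-future impact.

```python
import math

def endorsement_prob(t_years, half_life=200.0):
    # Hypothetical model: probability that moral consensus t years from now
    # still judges today's action as good, decaying from 1.0 toward 0.5
    # (a coin flip) with an assumed "moral half-life" of 200 years.
    return 0.5 + 0.5 * math.exp(-math.log(2) * t_years / half_life)

def expected_moral_weight(t_years, half_life=200.0):
    # If the action is worth +1 when endorsed and -1 when condemned,
    # its expected value as judged at time t is 2p - 1, which decays
    # exactly like an exponential discount factor.
    return 2 * endorsement_prob(t_years, half_life) - 1

for t in (0, 100, 500, 2000):
    print(t, round(expected_moral_weight(t), 3))
# 0 -> 1.0, 100 -> 0.707, 500 -> 0.177, 2000 -> 0.001
```

Under these made-up numbers, an action judged 2,000 years out carries almost no expected moral weight, which is the sense in which moral drift pushes back towards a present bias.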
2.) I worry that if EA came to involve a sufficiently large share of the available actors and resources, it would become pointless without an adjustment towards a more even ratio of production to consumption.
Consider what would happen if everyone were working towards the health and wellbeing of future people, and no one dared to engage in hedonic consumption lest it doom those future people to an infinitely less flourishing world. Then consider what would happen if those future people also decided to focus their efforts entirely on the health and wellbeing of the people who come after them, and they also refused to engage in hedonic consumption. Regardless of how flourishing your present world may be, the prospect of leveraging its utility into future utility for countless future people should always win out. This continues until we fail to mitigate some x-risk, and we lose out on all of the banked utility that so many past peoples leveraged to improve our lives, and which we foolishly failed to consume. In short, if everyone becomes a completely optimized effective altruist, then no one is left to cash in on the very utility we collectively cultivate.
By some estimates, the downfall of humanity is closer than you may think, in which case the best course of action is to consume as much utility as possible before it arrives. So the more doom-pilled you are on AI, biothreats, or whatever your favorite pet x-risk is, the more you ought to consume now. Yet this is the opposite of what I see among the AI researchers most convinced of its pending ascendancy.
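A minimal toy model of the dynamic (every number here is a made-up assumption, purely to illustrate the shape of the argument): a unit of utility can be consumed now or invested for the next generation, where it grows by some factor but is wasted entirely if extinction arrives first. The higher the per-generation extinction probability, the sooner consuming beats investing; and if every generation always invests, nothing is ever consumed at all.

```python
# Toy model: one unit of utility can be consumed now (value 1) or invested for
# the next generation, where it grows by a factor g but is lost entirely if
# extinction (probability p per generation) happens first.
# Both g and the values of p are hypothetical, chosen only for illustration.

g = 1.5  # assumed per-generation growth on invested utility

for p in (0.01, 0.1, 0.3, 0.4, 0.5):
    defer_value = g * (1 - p)  # expected value of pushing the unit one generation forward
    choice = "invest for the future" if defer_value > 1 else "consume now"
    print(f"p_extinct={p:.2f}: expected value of deferring = {defer_value:.3f} -> {choice}")

# Break-even is at p = 1 - 1/g (about 0.33 here): above it, consuming now wins.
# And if every generation always defers, the chance the banked utility is ever
# consumed is (1 - p)^n, which goes to zero as n grows: extinction eventually
# arrives and the whole bank is wasted.
```

On these made-up numbers, the conclusion above falls out directly: the more probability you place on near-term doom, the less sense it makes to keep passing utility forward.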
Anyways, those are my thoughts. Thanks for reading, and don’t be afraid to dunk in the comments!