I wish this post, and others like it, had more specific details about this kind of criticism, and a more specific statement of what it is really taking issue with, because otherwise it sort of comes across as "I wish EA paid more attention to my object-level concerns", which approximately everyone believes.
If the post is just meant to represent your opinions, that's perfectly fine, but I don't really think it changed my mind on its own merits. I also just don't like the withholding of private evidence; I know there are often good reasons for it, but it means I can't give it much credence, tbqh. I know it's a quick take, but it still left me without much to actually evaluate.
In general, this discussion reminds me a bit of my response to another criticism of EA/actions by the EA community, and I think what you view as "sub-optimal" actions are instead best explained by people having:
bounded rationality and resources (e.g. you can't evaluate every cause area, and your skills might not be as useful across all possible jobs)
different values/moral ends (some people may not be consequentialists at all, or may weigh existing vs. possible people, or happiness vs. suffering, differently)
different values in terms of process (e.g. some people are happy making big bets on non-robust evidence, others much less so)
And that, with these constraints accepted, many people are not actually acting sub-optimally given their information and values (though with hindsight they may later regret their actions or admit they were wrong, of course).
Some specific counterexamples:
"Being insufficiently open-minded about which areas and interventions might warrant resources/attention, and unwarranted deference to EA canon, 80K, Open Phil, EA/rationality thought leaders, etc." - I think this might be because of resource constraints, or because people disagree about which interventions are "under-invested" vs. "correctly not much invested in". I feel like you just disagree with 80K and Open Phil on something big, and I'd have liked you to say what that is and why you disagree with them, instead of dancing around it.
"Attachment to feeling certainty, comfort and assurance about the ethical and epistemic justification of our past actions, thinking, and deference" - What decisions do you mean here? And what do you even mean by "our"? Surely the decisions you are thinking of apply to a subset of EAs, not all of EA? This is probably the point that most needed something specific.
Trying to read between the lines: I think you think AGI is going to be a super big deal soon, that its political consequences might be the most important ones, and that consequently political actions might be the most important ones to take? But Open Phil and 80K haven't been on the ball with political stuff, and now you're disillusioned? And people in cause areas that aren't AI policy should presumably stop donating and working there and pivot? I don't know, it doesn't feel accurate to me,[1] but I can't get a more accurate picture because you just didn't provide any specifics for me to tune my mental model on 🤷‍♂️
Even having said all of this, I do think working on mitigating concentration-of-power risks is a really promising direction for impact, and I wish you the best in pursuing it :)
[1] As in, I don't think you'd endorse this as a fair or accurate description of your views.
Thanks for your comment, and I understand your frustration. I'm still figuring out how to communicate the specifics of why I feel strongly that incorrectly applying the neglectedness heuristic, as a shortcut to avoid investigating whether investment in an area is warranted, has led to tons of lost potential impact. And yes, US politics is, in my opinion, a central example. But I also think there are tons of others I'm not aware of, which brings me to the broader (meta) point I wanted to emphasize in the above take.
I wanted to focus on the case for more independent thinking, and on how little cross-cause prioritization work there seems to be in the EA world, rather than trying to convince the community of my current beliefs. I did initially include object-level takes on prioritization in the comment (which I may make another quick take about soon), but decided to remove them to keep the focus on the meta issue.
My guess is that many community members implicitly assume that cross-cause prioritization work is done frequently and rigorously enough to take into account important changes in the world, and that the takeaways get communicated such that EA resources get effectively allocated. I don't think this is the case. If it is, the takeaways don't seem to be communicated widely. I don't know of a longtermist GiveWell alternative for donations. I don't see much rigorous cross-cause prioritization analysis from Open Phil, 80K, or on the EA Forum to inform how to most impactfully spend time. Also, the priorities of the community have stayed surprisingly consistent over the years, despite many large changes in AI, politics, and more.
Given how important cross-cause prioritization is, and how difficult it is to do well, I think EAs should feel empowered to regularly contribute to this discourse, so we can all understand the world better and make better decisions.
I disagree-voted because I think withholding private info should be a strong norm, and that it's not the poster's job to please the community with privileged info that could hurt them when they are already doing a service by posting. I also think it could possibly serve as an indicator of some sort (e.g., if people searched the forum for comments like this, it might point towards a trend of posters worrying about how much blowback they might get from funders/other EA orgs if actual criticism of them or backdoor convos were revealed, whether that's a warranted worry or not). I also think that leaking private convos would hurt a person, because then every time someone interacts with that person they will worry that their convo might get leaked online, and will not engage with that person. It seems mean to ask someone to do that for you just so you can have more data to judge them on; they are trying to communicate something real to you but obviously can't. I don't have any reason to doubt the poster unless they've lied before, and I have a strong trust norm unless there is a reason not to trust. But I double-liked because most of the rest of your comment was good :-)