Is it actually bad if AI, longtermism, or x-risk are dominant in EA? That seems to depend crucially on whether these cause areas are in fact the ones in which the most good can be done, and whether we should believe that depends on how strong the arguments backing these cause areas are. Assume, for example, that we can do by far the most good by focusing on AI x-risks, and that there are compelling arguments for this. Then this cause area should receive significantly more resources and should be talked about and promoted much more than other cause areas. Treating it just like any other cause area would be a big mistake: the (assumed) fact that we can do much more good in this cause area is an excellent reason to treat it differently!
To be clear, my point is not that AI, longtermism, or anything else should be dominant in EA, but that how these cause areas should be represented in EA (including whether they should be dominant) depends on the object-level discourse about their cost-effectiveness. Whether a given degree of dominance of AI, longtermism, or any other cause area is justified is therefore not obvious, and depends on difficult object-level questions. (I take this to be in tension with some points of the post and some of the comments, but not incompatible with most of the post’s points.)
I am puzzled that, at the time of writing, this comment has received as many disagreement votes as agreement votes. Shouldn’t we all agree that the EA community should allocate significantly more resources to an area, if by far the most good can be done by this allocation and there are sound public arguments for this conclusion? What are the main reasons for disagreement?
Different people in EA define ‘good’ in different ways. You can argue that some cause is better under some family of definitions, but the aim, I think, is also to help people with other definitions achieve their goals.
You say “if by far the most good can be done by this allocation and there are sound public arguments for this conclusion”, but the idea of ‘sound public arguments’ is tricky. We’re not scientists with well-tested models. You’re never going to have arguments conclusive enough to shut down other causes, even if it sometimes seems to some people here that certain arguments do.
In my view, the comment isn’t particularly responsive to the post. I take the post’s main critique to be something like this: groups present themselves as devoted to EA as a question and to helping participants find their own path in EA, but in practice they steer participants heavily toward certain approved conclusions.
That critique is not inconsistent with “EA resources should be focused on AI and longtermism,” or perhaps even “EA funding for university groups should concentrate on x-risk/AI groups that don’t present themselves as full-spectrum EA groups.”
In my view, the comment isn’t particularly responsive to the post.
Shouldn’t we expect people who think a comment isn’t responsive to its parent post to downvote it rather than disagree-vote it, if they don’t have any substantive disagreements with it?