I am puzzled that, at the time of writing, this comment has received as many disagreement votes as agreement votes. Shouldn’t we all agree that the EA community should allocate significantly more resources to an area, if by far the most good can be done by this allocation and there are sound public arguments for this conclusion? What are the main reasons for disagreement?
Different people in EA define ‘good’ in different ways. You can argue that some cause is better under some family of definitions, but the aim is, I think, also to help people with different definitions achieve their goals.
You say “if by far the most good can be done by this allocation and there are sound public arguments for this conclusion”, but the idea of ‘sound public arguments’ is tricky. We’re not scientists with very-well-tested models. You’re never going to have arguments that are conclusive enough to shut down other causes, even if it sometimes seems to some people here that they are.
In my view, the comment isn’t particularly responsive to the post. I take the post’s main critique to be something like: groups present themselves as devoted to EA as a question and to helping participants find their own path in EA, but in practice steer participants heavily toward certain approved conclusions.
That critique is not inconsistent with “EA resources should be focused on AI and longtermism,” or maybe even “EA funding for university groups should concentrate on x-risk/AI groups that don’t present themselves as full-spectrum EA groups.”
In my view, the comment isn’t particularly responsive to the post.
Shouldn’t we expect people who believe that a comment isn’t responsive to its parent post to downvote it rather than to disagree-vote it, if they don’t have any substantive disagreements with it?