(This comment focuses on meta-level issues; I left another comment with object-level disagreements.)
The EA case for Trump was heavily downvoted, with commenters arguing that e.g. “a lot of your arguments are extremely one-sided in that they ignore very obvious counterarguments and fail to make the relevant comparisons on the same issue.”
This post is effectively an EA case for Kamala, but less even-handed—e.g. because it:
Is framed not just as a case for Kamala, but as a case for action (which, I think, requires a significantly higher bar than just believing that it’d be better on net if Kamala won).
Doesn’t address the biggest concerns with another Democrat administration (some of which I lay out here).
Generally feels like it’s primarily talking to an audience who already agrees that Trump is bad, and just needs to be persuaded about how bad he is (e.g. with headings like “A second Trump term would likely be far more damaging for liberal democracy than the last”).
And yet it has been heavily upvoted. Very disappointing lack of consistency here, which suggests that the criticisms of the previous post, while framed as criticisms of the post itself, were actually about the side chosen.
This matters both on epistemic grounds and because one of the most harmful things that can be done for AI safety is to heavily politicize it. By default, we should expect that a lot more people will end up getting on the AI safety train over time; the main blocker to that is if they’re so entrenched in their positions that they fail to update even in the face of overwhelming evidence. We’re already heading towards entrenchment; efforts like this will make it worse. (My impression is that political motivations were also a significant contributor to Good Ventures decoupling itself from the rationalist community—e.g. see this comment about fringe opinion holders. It’s easy to imagine this process spiraling further.)
Generally feels like it’s primarily talking to an audience who already agrees that Trump is bad, and just needs to be persuaded about how bad he is
This is true to some extent. I did not write this thinking it would be ‘the EA case for Kamala’ in response to Hammond’s piece. I also was wary about adding length to an already too-long piece so didn’t go into detail on various counterpoints to Kamala.
Is framed it not just as a case for Kamala, but as a case for action (which, I think, requires a significantly higher bar than just believing that it’d be better on net if Kamala won).
I personally see Trump’s anti-democratic behavior and demonstrably bad values as very nearly disqualifying on their own (similar to, e.g., Scott Aaronson’s case against Trump). That’s why I focus so much on likely damage to liberal democracy. In my view these are crucial enough considerations that I would require some strong and clearly positive data points in Trump’s favor to override his obvious flaws. I am not aware of clear and strong positives on Trump’s side, only some points which seem closer to ‘maybe he would do this good thing; he hasn’t talked about it, but it seems more likely he’d do it than that Harris would’.
Except where business-as-usual decisions would affect catastrophic risk scenarios, I think they generally wash out when compared to Trump’s flaws.
Doesn’t address the biggest concerns with another Democrat administration (some of which I lay out here).
I address a good chunk of those concerns here. Agree that I could have talked about this more (though again, the piece was already very long).
And yet it has been heavily upvoted. Very disappointing lack of consistency here, which essentially demonstrates that the criticisms of the previous post, while framed as criticisms of the post itself, were actually about the side chosen.
I don’t see why this follows from the above. The claim seems to be that the only reason that post could have been downvoted and this post upvoted is bias. You’ve argued that there’s some content I didn’t address, and that it’s written for a Harris-leaning audience, but you haven’t put forward a critique of the positions the post takes. It also seems clear that, on each cause area at least, I’ve attempted to present both sides of the argument. I’m curious why you see it as inconsistent. People disagree on object-level politics – and many people on here seem to strongly disagree with you – but one side is generally right, on net. Two posts advocating for different sides of an issue shouldn’t be treated the same just because the topic is politics. Also, this post has received its fair share of criticism (e.g. Larks’ comment, which I thought was useful and led me to update the post).
one of the most harmful things that can be done for AI safety is to heavily politicize it
Agreed, I don’t want to politicize AI safety. I really hope that, should Trump be elected, he’ll have good advisors and make good decisions on AI policy. I suspect he won’t, but I really hope he does.
Here are my thoughts on why it seems fine to post this:
There’s been pro-Trump content on the Forum already but virtually no pro-Harris content AFAIK.
This post doesn’t show up on the front page because it’s politics (at least that’s my understanding; I didn’t see it there personally despite the upvotes).
We’re not spreading this publicly in any ways that non-EAs are likely to see.
This kind of post seems like a drop in the bucket. Lots of EAs identify as Democrat, many as Republican. Having debates about who to elect seems perfectly reasonable. I’m glad there’s not a ton of posts like this on the Forum; if there were, I probably wouldn’t have written it. Adding one on the margin doesn’t seem like a big deal to me.
This post talks about AI safety fairly little, and what content there is appears mainly in the appendix.
By default, we should expect that a lot more people will end up getting on the AI safety train over time; the main blocker to that is if they’re so entrenched in their positions that they fail to update even in the face of overwhelming evidence. We’re already heading towards entrenchment; efforts like this will make it worse.
Not sure I fully understand this point, but I’ll attempt to answer. Again, I do not think this post or any of my other efforts are contributing meaningfully to politicizing/polarizing AI safety or “entrenching” positions about it, and I really hope a Trump administration will make good decisions on AI policy if he is elected (and I’ll support efforts to this end). However, this is fully compatible with believing that a Harris administration would be far better – or far less bad – in expectation for AI policy. I give several important reasons for believing this in the post, e.g.: Trump has vowed to repeal Biden’s executive order on AI on day 1; Trump generally favors non-regulation and plans to abolish various agencies (Vance favors tech non-regulation in particular); and the demographics and professions that make up the AI safety/governance movement seem to have a far better chance at getting close to and influencing a Democratic administration than a MAGA administration, for several reasons.
Very disappointing lack of consistency here, which essentially demonstrates that the criticisms of the previous post, while framed as criticisms of the post itself, were actually about the side chosen.
A few observations on that from someone who did not vote on the Trump post or this one:
This seems to rely on an assumption that the commenters on the prior post had the same motivations as one might assign to the broader voter pool. It’s certainly possible, but hardly certain.
It’s impossible to completely divorce oneself from object-level views when deciding whether a post has failed to address or acknowledge sufficiently important considerations in the opposite direction. Yet such a failure is (and, I think, has to be) a valid reason to downvote. It’s reasonable to me that a voter would find the missing issues in the Trump piece sufficiently important, and see the issues you identify for Harris as having much less significance, for a number of reasons.
Partisan political posts are disfavored for various reasons, including some your comment mentions. I think it’s fine for voters to maintain higher voting standards for such posts. Moreover, it feels easier for those posts to be net-negative because they are closer to zero-sum in nature; John’s candidate winning the election means Jane’s candidate losing. “It would be better for this post not to be on the Forum” is a plausible reason to downvote. Those factors make downvoting for strong disagreement more plausible than on non-political posts. This is especially true insofar as the voter thinks the resulting discussion will sound like ten thousand other political debates and contribute little if at all to finding truth.
Finally, there are good reasons for people to be less willing to leave object-level comments on posts like this one or the Trump one. First, arguing about politics is exhausting and usually unfruitful. Second, it risks derailing the Forum into a discussion of topics rather removed from effective altruism (e.g., were the various criminal charges against Trump and lawsuits against Musk legit? How biased is the mainstream US media?).
Setting aside the substantive issues about how accurate this post is vs. the other one, I’ll admit I’m very uncertain about how much we should avoid talking about partisan politics in AI forums, and how much doing so politicizes the debate vs. clarifies the stakes in ways that help us act more strategically.