Can you explain what you mean by “contextualizing more”? (What a curiously recursive question...)
You definitely have more popular opinions (among the EA Forum audience), and also you seem to court controversy less, i.e. a lot of your posts are about topics that aren’t controversial on the EA Forum. For example, if you were to make a pseudonymous account and write posts/comments arguing that near-term AGI is highly unlikely, I think you would definitely get a much lower karma to submission ratio, even if you put just as much effort and care into them as the posts/comments you’ve written on the forum so far. Do you think it wouldn’t turn out that way?
I’ve been downvoted on things that are clearly correct, e.g. the standard definitions of terms in machine learning (which anyone can Google), or pointing out a methodological error that the Forecasting Research Institute later acknowledged and revised their research to correct. In other cases, the claims are controversial, but they are also claims where prominent AI experts like Andrej Karpathy, Yann LeCun, or Ilya Sutskever have said exactly the same thing as I said — and, indeed, in some cases I’m literally citing them — and it would be wild to think these sorts of claims are below the quality threshold for the EA Forum. I think that should make you question whether downvotes are a reliable guide to the quality of contributions.
One-off instances of one person downvoting don’t bother me that much — that literally doesn’t matter, as long as it really is one-off — what bothers me is the pattern. It isn’t just with my posts/comments, either, it’s across the board on the forum. I see it all the time with other contributors as well. I feel uneasy dragging those people into this discussion without their permission — it’s easier to talk about myself — but this is an overall pattern.
Whether reasoning is good or bad is itself bound to be contested when the topic under debate is controversial and there is a lot of disagreement about it. Just downvoting what you judge to be bad reasoning will, statistically, amount to downvoting what you disagree with. Since downvotes discourage and, in some cases, disable (through the forum’s software) disagreement, you should ask: is that the desired outcome? Personally, I rarely (pretty much never) downvote based on what I perceive to be the reasoning quality, for exactly this reason.
When people on the EA Forum deeply engage with the substance of what I have to say, I’ve actually found a really high rate of them changing their minds (not necessarily from P to ¬P, but shifting along a spectrum and rethinking some details). It’s a very small sample size, only a few people, but of the roughly five people I’ve had a lengthy back-and-forth with over the last two months, three changed their minds in some significant way. (I’m not doing rigorous statistics here, just counting examples from memory.) And in two of the three cases, the other person’s tone started out highly confident, giving me the impression they initially thought there was basically no chance I had any good points that were going to convince them. That’s the counterbalance to everything else, because it’s really encouraging!
I put in an effort to make my tone friendly and conciliatory, and I’m aware I probably come off as a bit testy some of the time, but I’m often responding to a much harsher delivery from the other person and underreacting in order to deescalate the tension. (For example, the person who got the ML definitions wrong started out by accusing me of “bad faith” based on their misunderstanding of the definitions. There were multiple rounds of me engaging with politeness and cordiality before I started getting a bit testy. That’s just one example, but there are others — it’s frequently a similar dynamic. Disagreeing with the majority opinion of the group is a thankless job because you have to be nicer to people than they are to you, and then that still isn’t good enough and people say you should be even nicer.)
Can you explain what you mean by “contextualizing more”? (What a curiously recursive question...)
I mean it in this sense: making people think you’re not part of the outgroup and don’t have objectionable beliefs related to the ones you actually hold, in whatever way is sensible and honest.
Maybe LW is better at using the disagreement button; I find it’s pretty common there for unpopular opinions to get lots of upvotes and disagree votes. One could use the API to see if the correlations are different there.
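(For what it’s worth, a minimal sketch of that check might look like the snippet below. It assumes the forums’ GraphQL endpoints and ForumMagnum-style fields such as baseScore and an extendedScore holding the agree/disagree data; the view and field names are assumptions and would need to be verified against the live schema, e.g. via introspection, before trusting any numbers.)

```python
# Rough sketch (untested): pull recent comments via the forum's GraphQL API and
# look at how karma correlates with agree/disagree votes. The view name
# ("recentComments") and the fields (baseScore, extendedScore.agreement) are
# assumptions about the ForumMagnum schema and should be checked against the
# live schema before relying on the result.
import requests
from statistics import correlation  # Python 3.10+

ENDPOINTS = {
    "LessWrong": "https://www.lesswrong.com/graphql",
    "EA Forum": "https://forum.effectivealtruism.org/graphql",
}

QUERY = """
query RecentComments($limit: Int) {
  comments(input: {terms: {view: "recentComments", limit: $limit}}) {
    results {
      baseScore
      extendedScore
    }
  }
}
"""

def karma_vs_agreement(endpoint: str, limit: int = 500) -> float:
    resp = requests.post(endpoint, json={"query": QUERY, "variables": {"limit": limit}})
    resp.raise_for_status()
    comments = resp.json()["data"]["comments"]["results"]
    # Keep only comments that actually carry two-axis (agree/disagree) data.
    pairs = [
        (c["baseScore"], c["extendedScore"]["agreement"])
        for c in comments
        if c.get("extendedScore") and "agreement" in c["extendedScore"]
    ]
    karma, agreement = zip(*pairs)
    return correlation(karma, agreement)  # Pearson's r

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        print(f"{name}: r(karma, agreement) = {karma_vs_agreement(url):.2f}")
```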
Huh? Why would it matter whether or not I’m part of “the outgroup”...? What does that mean?
I think this is a significant reason why people downvote some, but not all, things they disagree with, especially when they come from a member of the outgroup who makes arguments EAs have refuted before and would need to re-explain. I’m not saying that’s actually you.
What is “the outgroup”?
Claude thinks possible outgroups include the following, which is similar to what I had in mind:
Based on the EA Forum’s general orientation, here are five individuals/groups whose characteristic opinions would likely face downvotes:
Effective accelerationists (e/acc) - Advocates for rapid AI development with minimal safety precautions, viewing existential risk concerns as overblown or counterproductive
TESCREAL critics (like Emile Torres, as you mentioned) - Scholars who frame longtermism/EA as ideologically dangerous, often linking it to eugenics, colonialism, or techno-utopianism
Anti-utilitarian philosophers - Strong deontologists or virtue ethicists who reject consequentialist frameworks as fundamentally misguided, particularly on issues like population ethics or AI risk trade-offs
Degrowth/anti-progress advocates - Those who argue economic/technological growth is net-negative and should be reduced, contrary to EA’s generally pro-progress orientation
Left-accelerationists and systemic change advocates - Critics who view EA as a “neoliberal” distraction from necessary revolutionary change, or who see philanthropic approaches as fundamentally illegitimate compared to state redistribution
a) I’m not sure everyone in those categories would necessarily count as an outsider to EA (e.g. Will MacAskill only assigns a 50% probability to consequentialism being correct, and he and others in EA have long emphasized pluralism about normative ethical theories; there’s been an EA system change group on Facebook since 2015, and discourse around systemic change has been happening in EA since before then).
b) Even if you do consider people in all those categories to be outsiders to EA or part of “the out-group”, us/them or in-group/out-group thinking seems like a bad idea, possibly leading to insularity, incuriosity, and overconfidence in wrong views.
c) It’s especially a bad idea not only to think in in-group/out-group terms and seek to shut down perspectives of “the out-group”, but also to cast suspicion on the in-group/out-group status of anyone in an EA context who you happen to disagree with about something, even something minor. That seems like a morally, subculturally, and epistemically bankrupt approach.
You’re shooting the messenger. I’m not advocating for downvoting posts that smell of “the outgroup”, just saying that this happens in most communities that are centered around an ideological or even methodological framework. It’s a way you can be downvoted while still being correct, especially by the LEAST thoughtful 25% of EA Forum voters.
Please read the quote from Claude more carefully. MacAskill is not an “anti-utilitarian” who thinks consequentialism is “fundamentally misguided”; he’s the moral uncertainty guy. The moral parliament usually recommends actions similar to consequentialism with side constraints in practice.
I probably won’t engage more with this conversation.
I don’t know what he meant, but my guess FWIW is this 2014 essay.
I understand the general concept of ingroup/outgroup, but what specifically does that mean in this context?
I don’t know, sorry. I admittedly tend to steer clear of community debates as they make me sad, probably shouldn’t have commented in the first place...