The people with the highest karma naturally tend to be the most active users, who are likely already the most committed EAs. This means we already have a natural source of groupthink (assuming that the more committed you are to a social group, the more likely you are to have bought into any given belief it tends to hold). So groupthinky posts would already tend to get more attention, and giving these active users greater voting power multiplies this effect.
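As a toy illustration of that multiplication (the numbers and the karma-to-weight mapping below are made up; this is not the Forum’s actual algorithm):

```python
# Made-up headcounts, karma levels, and weight function, purely to illustrate
# the amplification claim above; not the Forum's real voting rules.
consensus_voters = 70       # active users upvoting a consensus-flavoured post
dissent_voters = 30         # active users upvoting a dissenting post

def vote_weight(karma: int) -> int:
    # Hypothetical mapping from a user's karma to their vote strength.
    return 2 if karma >= 1000 else 1

# Assumption from the argument above: the most committed (highest-karma)
# users disproportionately hold the consensus view.
karma_consensus_voter = 1500
karma_dissent_voter = 300

unweighted_ratio = consensus_voters / dissent_voters
weighted_ratio = (consensus_voters * vote_weight(karma_consensus_voter)) / (
    dissent_voters * vote_weight(karma_dissent_voter)
)

print(unweighted_ratio)   # ~2.3: the attention gap from headcount alone
print(weighted_ratio)     # ~4.7: the same gap once votes are karma-weighted
```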
Can the OP give instances of groupthink?

A major argument of this post is “groupthink”.
Unfortunately, most uses of “groupthink” in arguments are disappointing.
Often authors mention the issue, but don’t offer any specific instances of groupthink, or how their solution solves it, even though it seems easy to do—they wrote up a whole idea motivated by it.
The simplest explanation for the above is that “groupthink” is a safe rhetorical thing to sprinkle onto arguments. Then, well, it becomes sort of a red flag for arguments without substance.
I guess I can immediately think of 3-5 instances or outcomes of groupthink off the top of my head[1], and if I spent more time, maybe 15 different actual instances of groupthink or related issues in total.

Most of these issues probably persist because of a streetlight effect, very low tractability/EV, political thorniness, founder effects/lock-in, or a dependency on another issue.

Many of these issues are not bottlenecked on virtue or on the ability to think about them, and it’s unclear how voting would affect them.

I think there are several possible voting reforms and ways of changing the forum. In addition to the ones vaguely mentioned in this comment (admittedly that comment feels a bit like vaporware, since the person won’t be able to get back to it), modifying voting power or karma could be useful.

I’m mentioning this because it would be good to have issues/groupthink that could be solved or addressed (or perhaps made worse) by any of these reforms.

One groupthink issue is the baked-in tendency toward low-quality or pseudo-criticism.

This both crowds out and degrades real criticism (a bigotry of low expectations). It is rewarded and self-perpetuating without having any impact, which seems like the definition of groupthink.
Often authors mention the issue, but don’t offer any specific instances of groupthink, or how their solution solves it, even though it seems easy to do—they wrote up a whole idea motivated by it.
You’ve seriously loaded the terms of engagement here. Any given belief shared widely among EAs and not among intelligent people in general is a candidate for potential groupthink, but precisely because they are shared EA beliefs, if I just listed a few of them I would expect you and most other forum users to consider them not groupthink, because things we believe are true don’t qualify.
So can you tell me what conditions you think would be sufficient to judge something as groupthink before I try to satisfy you?
Also do we agree that if groupthink turns out to be a phenomenon among EAs then the karma system would tend to accentuate it? Because if that’s true then unless you think the probability of EA groupthink is 0, this is going to be an expected downside of the karma system—so the argument should be whether the upsides outweigh the downsides, not whether the downsides exist.
So can you tell me what conditions you think would be sufficient to judge something as groupthink before I try to satisfy you?
A system or pattern or general belief that leads to a defect or plausible potential defect (even if there is some benefit to it), and even if this defect is abstract or somewhat disagreed upon.
The clearest defect would be something like “We are funding the personal projects of the first people who joined EA, and these haven’t gotten a review because all their friends shout down criticism on the forum and the forum self-selects for devotees. Last week the director was posting pictures of Bentleys on Instagram with his charity’s logo”.

The most marginal defects would be “meta” ones whose consequences are abstract. A pretty tenuous but still acceptable example (I think?) is “we are only getting very intelligent people with high conscientiousness, and this isn’t adequate”.
Because if that’s true then unless you think the probability of EA groupthink is 0, this is going to be an expected downside of the karma system—so the argument should be whether the upsides outweigh the downsides, not whether the downsides exist.
Right, you say this… but you seem a little shy about listing the downsides.
Also, it seems like you are close to implicating literally any belief?
As we both know, the probability of groupthink isn’t zero. I mentioned I can think of up to 15 instances, and gave one example.
I would expect you and most other forum users to consider them not groupthink, because things we believe are true don’t qualify.

My current read is that this is a little ideological and relies on sharp views of the world.

I’m worried that what you will end up saying is not only that EAs must examine themselves with useful, sharp criticism covering a wide range of issues, but also that every mechanism by which prior beliefs are maintained must be removed, even without any specific or likely issue.
One key pragmatic issue is that you might place a highly divergent, low valuation on the benefits of these systems. For example, there is a general worry about a kind of EA “Eternal September”, and your vision of karma reform is essentially the opposite of most solutions to that (and, well, has no real chance of taking place).

Another issue is systemic effects. Karma and voting are unlikely to be the root cause of any defects in EA (and IMO not even close). However, we might think they affect “systems of discussion” in pernicious ways, as you mention. Yet, since they aren’t the central or root cause, deleting the current karma system without a clear reason or a tight theory might produce a reaction that is the opposite of what you intend (I think this is likely), which would be blind and counterproductive.

The deepest and biggest issue of all is that many ideological views involving disruption are hollow, themselves just expressions of futile dogma, e.g. the Cultural Revolution with its little red books, or a tech startup with a narrative of disrupting the world that is simply deploying historically large amounts of investor money.

Wrote this on my phone, so there might be spelling or grammar issues, but if the comment has the dumbs, that’s on me.
Fwiw I didn’t downvote this comment, though I would guess the downvotes were based on the somewhat personal remarks/rhetoric. I’m also finding it hard to parse some of what you say.
A system or pattern or general belief that leads to a defect or plausible potential defect (even if there is some benefit to it), and even if this defect is abstract or somewhat disagreed upon.
This still leaves a lot of room for subjective interpretation, but in the interests of good faith, I’ll give what I believe is a fairly clear example from my own recent investigations: it seems that somewhere between 20% and 80% of the EA community believe that the orthogonality thesis shows that AI is extremely likely to wipe us all out. This is based on a drastic misreading of an often-cited 10-year-old paper, which is available publicly for any EA to check.
Another odd belief, albeit one which seems more muddled than mistaken, is the role of neglectedness in ‘ITN’ reasoning. What we ultimately care about is the amount of good done per resource unit, i.e., roughly, <importance>*<tractability>. Neglectedness is just a heuristic for estimating tractability absent more precise methods. Perhaps it’s a heuristic with interesting mathematical properties, but it’s not a separate factor, as it’s often presented. For example, in 80k’s new climate change profile, they cite ‘not neglected’ as one of the two main arguments against working on it. I find this quite disappointing: all it gives us is a weak a priori probabilistic inference, one totally insensitive to what the money has been spent on and to the scale of the problem, which is much less than we could learn about tractability by looking directly at the best opportunities to contribute to the field, as Founders Pledge did.
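To spell out the arithmetic behind that claim, here is a toy sketch with entirely made-up figures; the point is only that neglectedness acts as a unit conversion folded into per-dollar tractability, not as an independent factor:

```python
# Toy numbers, purely illustrative, showing how "neglectedness" folds into
# per-dollar tractability in an 80k-style three-factor decomposition.
importance = 1_000.0    # good done per 1% of the problem solved (made up)
solvability = 0.5       # % of the problem solved per 1% increase in resources (made up)
current_spend = 50e6    # dollars currently going to the problem (made up)

# Neglectedness converts "a 1% increase in resources" into "an extra dollar":
neglectedness = 100.0 / current_spend          # % increase in resources per extra $

# Standard three-factor product: good done per extra dollar.
good_per_dollar_itn = importance * solvability * neglectedness

# If tractability is instead estimated directly per dollar (e.g. by looking at
# concrete funding opportunities), neglectedness is already baked into it:
tractability_per_dollar = solvability * neglectedness   # % of problem solved per extra $
good_per_dollar_direct = importance * tractability_per_dollar

print(good_per_dollar_itn, good_per_dollar_direct)      # the same number either way
```

In other words, the neglectedness term only supplies the denominator; once you can estimate the per-dollar term directly, it carries no extra information on its own.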
Also, it seems like you are close to implicating literally any belief?
I don’t know why you conclude this. I specified ‘belief shared widely among EAs and not among intelligent people in general’. That is a very small subset of beliefs, albeit a fairly large subset of EA ones. And I do think we should be very cautious about a karma system that biases towards promoting those views.
I don’t know why you conclude this. I specified ‘belief shared widely among EAs and not among intelligent people in general’. That is a very small subset of beliefs, albeit a fairly large subset of EA ones. And I do think we should be very cautious about a karma system that biases towards promoting those views.
You are right. My mindset writing that comment was bad. I remember thinking the reply seemed general and unspecific, and I reacted harshly; this was unnecessary and wrong.
This still leaves a lot of room for subjective interpretation, but in the interests of good faith, I’ll give what I believe is a fairly clear example from my own recent investigations: it seems that somewhere between 20% and 80% of the EA community believe that the orthogonality thesis shows that AI is extremely likely to wipe us all out. This is based on a drastic misreading of an often-cited 10-year-old paper, which is available publicly for any EA to check.
I do not know the details of the orthogonality thesis and can’t speak to this very specific claim (this is not at all a refutation; I am just literally clueless and can’t comment on something I don’t understand).
To be both truthful and agreeable: it’s clear that EA beliefs about AI safety come from following the opinions of a group of experts. This is apparent just from people’s outright statements.

In reality, those experts are not the majority of people working in AI, and it’s unclear exactly how EA would update or change its mind.

Furthermore, I see things like the excerpts below, which, without further context, could be wild violations of “epistemic norms” or just of common sense.

For background, I believe this person is interviewing or speaking to researchers in AI, some of whom are world experts. Below is how they seem to represent their process and mindset when communicating with these experts.
One of my models about community-building in general is that there’s many types of people, some who will be markedly more sympathetic to AI safety arguments than others, and saying the same things that would convince an EA to someone whose values don’t align will not be fruitful. A second model is that older people who are established in their careers will have more formalized world models and will be more resistant to change. This means that changing one’s mind requires much more of a dialogue and integration of ideas into a world model than with younger people. The thing I want to say overall: I think changing minds takes more careful, individual-focused or individual-type-focused effort than would be expected initially.
I think one’s attitude as an interviewer matters a lot for outcomes. Like in therapy, which is also about changing beliefs and behaviors, I think the relationship between the two people substantially influences openness to discussion, separate from the persuasiveness of the arguments. I also suspect interviewers might have to be decently “in-group” to have these conversations with interviewees. However, I expect that that in-group-ness could take many forms: college students working under a professor in their school (I hear this works for the AltProtein space), graduate students (faculty frequently do report their research being guided by their graduate students) or colleagues. In any case, I think the following probably helped my case as an interviewer: I typically come across as noticeably friendly (also AFAB), decently-versed in AI and safety arguments, and with status markers. (Though this was not a university-associated project, I’m a postdoc at Stanford who did some AI work at UC Berkeley).
The person who wrote the above is concerned about image, PR and things like initial conditions, which is entirely justified, reasonable and prudent for any EA intervention or belief. They also come across as conscientious, intellectually modest, and highly thoughtful, altruistic and principled.

However, at the same time, at least from their writing above, their entire attitude seems to be based on conversion. Yet their conversations are not with students or laypeople (such as important public figures), but with the actual experts in AI.

So if you’re speaking with the experts in AI while adopting the attitude that they are pre-converts, and focusing on working around their beliefs, one reading of this is that you are cutting yourself off from criticism and outside thought. On this ungenerous view, the fact that you have to be so careful is a further red flag; that’s an issue in itself.

For context, in any intervention, getting expert opinion and updating on it is sort of the whole game (maybe once you’re at “GiveWell levels” and working with dozens of experts it’s different, but even then I’m not sure; EA has updated heavily on cultured meat based on almost a single expert).
My apologies: I now see that earlier today I accidentally strongly downvoted your comment. What happened was that I followed the wrong Vimium link: I wanted to select the permalink (“k”) but selected the downvote arrow instead (“o”), and then my attempt to undo this mistake resulted in an unintended strong downvote. Not sure why I didn’t notice this originally.
Ha.
Honestly, I was being an ass in the comment and I was updating based on your downvote.
Now I’m not sure anymore!
Bayes is hard.
(Note that I cancelled the upvote on this comment so it doesn’t show up in the “newsfeed”)