Thanks for the summary! I don’t know if that came up during your discussion, but I would have found concrete examples useful for judging the arguments.
“ii) This causes EA to prioritise building relationships with high-status people, such as offering them major speaking slots at EA conferences, even when they aren’t particularly rigorous thinkers”
I’d hope that bad arguments from high-status people will be pointed out and that the discussion will move forward (e.g. Steven Pinker strawmanning worries about x-risks).
“iii) It also causes EA to want to dissociate from low-status people who produce ideas worth paying attention to.”
For example, I find it unlikely that an anonymous writer with good ideas and comments won’t be read and discussed on the forum. Maybe it’s different at conferences and behind the scenes at EA orgs, though?
“iv) By acquiring resources and status EA had drawn the attention of people who were interested in these resources, instead of the mission of EA. These people would damage the epistemic norms by attempting to shift the outcomes of truth-finding processes towards outcomes that would benefit them.”
EAs seem to mostly interact with research groups (part of the institution with the best track record in truth-finding) and non-profits. I’m not worried that research groups pose a significant threat to EA’s epistemic standards; rather, I expect researchers to 1) enrich them and 2) be a good match for altruistic/ethical motivations and rigour about them. One example that comes to mind is OpenPhil convincing biorisk researchers to shift their research in the direction of existential threats.
Does anyone know of examples of, or mechanisms by which, non-profits might manipulate or have manipulated discussions? Maybe they find very consequential & self-serving arguments that are very difficult to evaluate? I believe some people think about AI Safety in this way, but my impression is that this issue has enjoyed a lot of scrutiny.
I agree; I don’t think we ignore good ideas from anonymous posts. I do think it’s true that we distance ourselves from controversial figures, which might be what OP means by low status?
I can’t say exactly what the people I was talking about meant, since I don’t want to put words in their mouth, but controversial figures were likely at least part of it.
“EAs seem to mostly interact with research groups and non-profits”—They were talking more about the kinds of people who are joining effective altruism than the groups we interact with