In the spirit of the communication style you advocate for… my immediate emotional reaction to this is “Eternal September has arrived”.
I dislike my comment being summarized as “brings up the ‘declining epistemics’ argument to defend EA orgs from criticism”. In the blunt style you want: this is somewhere between distortion and manipulation.
On my side, I wanted to express my view on the Wytham debate, and I wrote a comment doing exactly that.
I also dislike the way my comment is straw-manned by selective quotation.
In the bullet point immediately following “The discussion often almost completely misses the direct, object-level, even if just at back-of-the-envelope estimate way,” I do explicitly acknowledge the possibly large effects of higher-order factors.
In contrast, a large fraction of the attention in the discussion seems to be spent on topics which are both two steps removed from the actual thing and very open to opinion. By one step removed I mean, e.g., “how was this announced” or “how was this decided”; two steps removed is, e.g., “what will be the impact of how this was announced on the sentiment of the Twitter discussion”. While I do agree such considerations can have large effects, driving decisions by this type of reasoning in my view moves people and orgs into the sphere of pure PR, spin, and appearance.
What I object to is the combination of:
1. ignoring the object level, or discussing it in a very lazy way
2. focusing on the 2nd order, not in a systematic way but mostly based on salience and emotional pull (e.g., how will this look on Twitter)
Yes, it is easy to see where this leads in the limit. We have plenty of examples of what discourse looks like when completely taken over by these considerations, e.g. political campaigns: words have little meaning connected to physical reality, and are mostly tools in the fight for the emotional states and minds of other people.
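To make the “object-level, back-of-the-envelope” point concrete, here is a purely schematic sketch; the symbols are placeholders of my own, not anyone’s actual estimate for Wytham:

$$\text{net value per year} \approx N \cdot \Delta v - (r \cdot C + m),$$

where $N$ is the number of events hosted per year, $\Delta v$ the marginal value of running an event there rather than at the counterfactual venue, $C$ the purchase price, $r$ an annual opportunity-cost rate on that capital, and $m$ the yearly running costs. Even rough guesses for these terms anchor the debate at the object level before any second-order considerations enter.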
Also: “while those with high quality epistemics usually agree on similar things” is a distortion that makes the argument personal, about people. In reality, yes: good reasoning often converges to similar conclusions.
Also: “It’s a given that the path of catering to a smaller group of people with higher quality epistemics will have more impact than spreading the core EA messaging to a larger group of people with lower quality epistemics”
No, it’s not a given. So far, effective altruism has been about using evidence and reason to figure out how to benefit others as much as possible, and acting on that basis. Based on the thinking so far, it was decidedly not trying to be a mass movement making our core insights more appealing to the public at large.
In my view, no one has yet figured out what an “appeals to the masses, doesn’t require much thinking” version of effective altruism would have to look like to actually be good.
(edit: Also, I quite dislike the frame-manipulation move of shifting from “epistemic decline of the community” to “less intelligent or thoughtful people joining”. Imagine a randomized experiment where you take two groups of equally intelligent and thoughtful people and have each join a community with a different epistemic culture (e.g., physics vs. multi-level marketing). You will get very different results. While you seem to interpret a lot of things as being about people (are they smart? have they studied philosophy?), I think it’s often much more about norms.)
Thanks for responding; I wasn’t trying to call you out, and perhaps I shouldn’t have quoted your comment so selectively.
We seem to have opposite intuitions on this topic. My point with this post is that my visceral reaction to these arguments is that I’m being patronized. I even admit, at the end of my post, that declining epistemic quality is a legitimate concern.
In some of my other comments I’ve admitted that I could’ve phrased this whole issue better, for sure.
I suppose that, to me, current charities/NGOs are so bad, and young people feel so powerless to change things, that the core EA principles could be extremely effective if spread.
Also: “while those with high quality epistemics usually agree on similar things” is a distortion that makes the argument personal, about people. In reality, yes: good reasoning often converges to similar conclusions.
This. Aumann’s Agreement Theorem tells us that Bayesians who share a common prior and trust each other to be honest cannot agree to disagree.
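For reference, a minimal statement of the result (my paraphrase of the standard formulation; the notation is mine, not from this thread): if agents 1 and 2 share a common prior $P$ on a state space $\Omega$ and have information partitions $\Pi_1, \Pi_2$, and if at state $\omega$ the posteriors

$$q_i = P(A \mid \Pi_i(\omega)), \qquad i \in \{1, 2\},$$

are common knowledge, then $q_1 = q_2$: the agents cannot “agree to disagree” about the probability of the event $A$.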
The in-practice version of this is that a group agreeing on similar views around certain subjects isn’t automatically irrational, unless we have outside evidence of a problem or one of the theorem’s conditions fails to hold.
Aumann’s agreement theorem is pretty vacuous in practice, because the common-prior assumption never holds in important situations; e.g., everyone has different priors on AI risk.