I’ll speak to question 6, since I’m on the community health team and in particular was hired in large part to work on community epistemics. I’m only speaking to the work I’ve done rather than the whole team’s, since I’m newish to the team. (I haven’t done tons of work on this yet, and my initial experiments and forays have been pretty varied, since the epistemics space is really large.)
Tl;dr: I think this matters; in and of itself it hasn’t been the top thing on my list, though adjacent/related things have been high priority.
(Other CEA teams, including the online (forum), groups, and events teams, have all thought about this as well.)
Whether people feel “able” to disagree might itself take some disambiguation. I tried to think a bunch about (1) the intellectual challenge of having an inside view in a world with tons of information, and how to make that easier, and (2) the emotional difficulty of believing in your own ideas, not falling prey to epistemic learned helplessness, noticing your own intuitions, etc.
When I thought about working on the latter at scale, I thought about:
- Modelling thinking out loud: what it looks like when people try to figure things out and show all the messiness, that people others respect a lot have plenty of uncertainties, and trying to make figuring things out more accessible
- Talking a lot about the mental and conversational motions I think are great, including those that solicit disagreement
- Getting high status people to encourage disagreement
Before the FTX situation happened, I had been updating more towards “doing things that don’t scale” and considering things like:
- Epistemics coaching / “epistemics therapy”
- A residence at a uni group: being a person who could focus on helping people shake up their thinking, get red-teaming on their current ideas, and encouragement to think for themselves
- Asking a lot of people what helped them think better, and thinking about what social and physical contexts let people really think
  - E.g. the pros and cons of sharper and softer cultures for this, and whether EA should more explicitly think of itself as an archipelago, where there are different areas for different vibes, and your job is to figure out which one works best for you or move around as needed
I’ve definitely heard that some spaces feel like they privilege only a certain kind of thinking or set of conclusions, and that makes it hard for others to think straight, especially when access to funding / coworking spaces / etc feels contingent on it. That sucks and is hard. My team has done some thinking about this—I think the current sense is that adding more support is a better move than trying to get people to change how they run their own things, but I am definitely not super sure.
And more generally trying to give support to people like group leaders, anyone who is closer to the ground and has more leverage over the social environment. My guess is a lot of the value of “feel viscerally like you have social support for disagreeing” happens in smaller contexts like that, and I’ve been in conversations with a handful about how they support their groups to think (like, I’m obsessed with this). E.g. my guess is that getting high status people to encourage disagreement is more useful here than it is at scale (but not sure whether it’s so much more useful that it out-does scale). In general people being excited about criticism, saying when they’ve updated and highlighting their favorites seems really great.
When I thought about the problem of inside views, I was much more focused on people feeling afraid to even start thinking, and deferring too much / more than they endorsed, and trying to make figuring out what’s true easier. I suspect that kind of thing has valuable knock-on effects on “feeling like you’ll have social support to speak up”—personally when I know why I think what I think, I feel much more able to articulate it and fight for it than if I feel much more confused about the world.
Maybe the direct “social support for disagreeing” should have been more my focus, I’m not sure. It was definitely on my radar.
I thought “the forum being scary” might end up being a real epistemics problem (though I wasn’t sure it was the top of my list of such problems, and the forum team have worked hard on this).
I think it’s very possible we should have more debates at EAGs, and I’ve bid for that.
I was at high school programs tracking in part how pressure-y we were being (and I’m so appreciative to others at those programs who have a lot more experience at it than me and were amazing influences). (In practice, I think people on average are overworried about this in high school contexts rather than under, but it definitely matters.)
I also taught a class that involved talking about how to actually make people feel like disagreeing was good (one piece of feedback I got was that we’d done too much to make disagreeing feel like the thing to do, and people felt a little pressured to come up with a disagreement!)
Julia Wise has also written in part about how to get real feedback, in the context of power dynamics, and there’s a whole world of “how does funding affect epistemics” I haven’t delved into.
One thing I don’t want to lose track of is that it can feel shitty to have people disagree that one’s ideas or critiques are valuable or true, and that alone is an emotional hit, and often a social one (or one that’s tracked as social). But of course no one wants us to be in a position where, as a community, we can’t say “I don’t think your critique is any good” or “I want to hire that person less because I think their judgments of ideas have been systematically wrong.” Like, lots of criticism is bad. So it’s tricky.
Really appreciative of the agree/disagree voting system and all the people who say “Thanks so much for voicing your disagreement here” before they say why they don’t buy it. I think those things are great. (Really lovely example here and here). If I may name names, I think Rob Bensinger and Nathan Young are unusually good at this, and I appreciate them for it.
I think this is important but hard, and there are a lot of important things in community epistemics. If you have thoughts on addressing this particular thing, I’d love to hear them (noting that in my role, I might decide there are things that are higher priority—but anyone can help community epistemics, I certainly can’t do it alone)! I have a form here.
(Also, if people aren’t feeling able to disagree with community builders or anyone else, I’d really appreciate hearing about that—the form can be for that too).