Head of Video at 80,000 Hours
(Opinions here are my own by default, though I will sometimes speak in a professional capacity.)
Personal website: www.chanamessinger.com
I’d be interested in more thoughts, if you have them, on evidence or predictions one could have made ahead of time that would distinguish this model from others (like maybe a lot of what’s going on is youth and will evaporate over time; youth still has to be mediated by things like what you describe, but that’s one example).
Also, my understanding is that SBF wasn’t very insecure? Does that affect your model or is the point that the leader / norm setter doesn’t have to be?
Yeah, I’m confused about this. Seems like some amount of “collapsing social uncertainty” is very good for healthy community dynamics, and too much (like having a live ranking of where you stand) would be wildly bad. I don’t think I currently have a precise way of cutting these things. My current best guess is that the more you push to make the work descriptive, the better, and the more it becomes normative and “shape up!”-oriented, the worse, but it’s hard to know exactly what ratio of descriptive:normative you’re accomplishing via any given attempt at transparency or common knowledge creation.
I strongly resonate with this; I think this dynamic also selects for people who are open-minded in a particular way (which I broadly think is great!), so you’re going to get more of it than usual.
Thanks for writing this! I’m not sure how I’d feel if orgs I worked for went more in this direction, but I did find myself nodding along to a bunch of parts (though not all) of what you wrote.
One thing I’m curious about is whether you have thoughts on avoiding a “nitpick” culture, where every perk or line item becomes a big discussion among the leadership of an org, or the org broadly. That seems to me like a big downside of moving in this direction.
Just because, things I especially liked:
1.
We should try to be especially virtuous whenever we find ourselves setting a moral example for others
(Though sometimes/often I think the excellent thing to model for others is “yes, I am really going to do this weird / not-altruistic-looking thing because it is the right thing to do.”)
2. Bringing in services to make them convenient but then asking people to pay sounds like a bit of a boondoggle but also really clever—I don’t think I’d encountered this kind of compromise before. I’d be interested in more of this form!
I don’t know if this is right, but I take Lincoln to be (a bit implicitly, but I see it throughout the post) taking the default cultural norm as a pretty strong starting point, and aiming to vary from it when there’s a good reason (I imagine because variation from what’s normal is what sends the most salient messages), rather than thinking about what a perk is from first principles. That would explain the dishwashing and toilet cleaning.
Reminds me of C.S. Lewis’s view on modesty:
The Christian rule of chastity must not be confused with the social rule of ‘modesty’ (in one sense of that word); i.e. propriety, or decency. The social rule of propriety lays down how much of the human body should be displayed and what subjects can be referred to, and in what words, according to the customs of a given social circle. Thus, while the rule of chastity is the same for all Christians at all times, the rule of propriety changes. A girl in the Pacific islands wearing hardly any clothes and a Victorian lady completely covered in clothes might both be equally ‘modest’, proper, or decent, according to the standards of their own societies: and both, for all we could tell by their dress, might be equally chaste (or equally unchaste). Some of the language which chaste women used in Shakespeare’s time would have been used in the nineteenth century only by a woman completely abandoned.
I wonder if another way of saying what Lincoln is trying to get at is something like “it’s about the work, not you”: a message of “happy to invest in your work” can have all the same outward features as “happy to invest in you” but with different effects. I’m not sure he’d endorse this, though.
I take your disagreement with Lincoln to be something of the form: Lincoln wants and gestures at a certain vibe change that is underspecified, and Halstead is asking “is that even a consistent thing to want / does it make sense in reality?”, which feels like a really common conversational dynamic and can be frustrating for everyone involved.
Thanks for this! I feel like I have a bunch of thoughts swirling in my head as a result of reading this :)
Again a quick take: I’d be interested in more discussion (conditional on there being any board members) of what a good ratio of funders to non-funders is in different situations.
I haven’t thought hard about this yet, so this is just a quick take: I’m broadly enthused but don’t feel convinced that experts have actual reason to get engaged. Can you flesh that out more?
A dynamic I keep seeing is that it feels hard to whistleblow or report concerns or make a bid for more EA attention on things that “everyone knows”, because it feels like there’s no one to tell who doesn’t already know. It’s easy to think that surely this is priced in to everyone’s decision making. Some reasons to do it anyway:
You might be wrong about what “everyone” knows—maybe everyone in your social circle does, but not outside. I see this a lot in Bay gossip vs. London gossip—what “everyone knows” is very different in those two places
You might be wrong about what “everyone knows”—sometimes people use a vague shorthand, like “the FTX stuff” and it could mean a million different things, and either double illusion of transparency (you both think you know what the other person is talking about but don’t) or the pressure to nod along in social situations means that it seems like you’re all talking about the same thing but you’re actually not
Just because people know doesn’t mean it’s the right level of salient—people forget, are busy with other things, and so on.
Bystander effect: People might all be looking around assuming someone else has the concern covered because surely everyone knows and is taking the right amount of action on it.
In short, if you’re acting based on the belief that there’s a thing “everyone knows”, check that that’s true.
Relatedly: Everybody Knows, by Zvi Mowshowitz
[Caveat: There’s an important balance to strike here between the value of public conversation about concerns and the energy that gets put into those public community conversations. There are reasons to take action on the above non-publicly, and not every concern will make it above people’s bar for spending the time and effort to get more engagement with it. Just wanted to point to some lenses that might get missed.]
Fwiw, I think we have different perspectives here—outside of epistemics, everything on that list is there precisely because we think it’s a potential source of some of the biggest risks. It’s not always clear where risks are going to come from, so we look at a wide range of things, but we are in fact trying to be on the lookout for those big risks. Thanks for flagging that it doesn’t seem like we are; I’m not sure if this comes from miscommunication or a disagreement about where big risks come from.
Maybe another place of discrepancy is that we primarily think of ourselves as looking for where high-impact gaps are, places where someone should be doing something but no one is, and risks are a subset of that but not the entirety.
(To be clear I also agree with Julia that it’s very plausible EA should have more capacity on this)
Wow, excited about this!!
I think as an overall gloss, it’s absolutely true that we have fewer levers in the AI Safety space. There are two sets of reasons why I think it’s worth considering anyway:
Impact—in a basic kind of “high importance can balance out lower tractability” way, we don’t want to only look where the streetlight is, and it’s possible that the AI Safety space will seem to us sufficiently high impact to aim some of our energy there
Don’t want to underestimate the levers—we have fewer explicit moves to make in the broader AI Safety space (e.g. disallowing people from events), but there is a high overlap with EA, and my guess is that some set of people in a new space will appreciate people who have thought a lot about community management giving thoughts / advice / sharing models and so on.
But both of these could be insufficient for a decision to put more of our effort there, and it remains to be seen.
I’m a little worried that use of the word pivot was a mistake on my part in that it maybe implies more of a change than I expect; if so, apologies.
I think this is best understood as a combination of
Maybe this is really important, especially right now [which I guess is indeed a subset of cause prioritization]
Maybe there are unusually high leverage things to do in that space right now
Maybe the counterfactual is worse—it’s a space with a lot of new energy, new organizations, etc, and so a lot more opportunity for re-making old mistakes, not a lot of institutional knowledge, and so on.
I think this basically agrees with your point (1), but as a hypothesis, not a conclusion
In addition, there is an unusual amount of money and power flowing around this space right now, and so it might warrant extra attention
This is a small effect, but we’ve received some requests from within this space to pay more attention to it, which seems like some (small) evidence
In general I use and like this concept quite a lot, but someone else advocating it does give me the chance to float my feelings in the other direction:
I think sometimes when I want to bring up missing moods as a concept to explain to my interlocutor what I think is going wrong in our conversation, I end up feeling like I’m saying “I am demanding that you be sad because I would feel better if you were”, which I want to be careful about imposing on people. It sometimes also feels like I’m assuming we have the same values, which is something I’d want to do more upfront work to establish before calling on it.
More generally, I think it’s good to notice the costs of the place you’re taking on some tradeoff spectrum, but also good to feel good about doing the right thing, making the right call, balancing things correctly etc.
I really like the frame of “fanfiction of your idea”, I think it helpfully undermines the implicit status-thing of “I will steelman your views because your version is bad and I can state your views better than you can.”
Some more in-the-weeds thinking that also may be less precise.
It’s absolutely true that the trust we have with the community matters to doing parts of our work well (for other parts, it’s more dependent on trust with specific stakeholders), and the topic is definitely on my mind. I’ll say I don’t have any direct evidence that overall trust has gone down, though I see it as an extremely reasonable prior. I say this only because I think it’s the kind of thing that would be easy to have an information cascade about, where “everyone knows” that “everyone knows” that trust is being lost. Sometimes there are surprising results, like that the “overall affect scores [to EA as a brand] haven’t noticeably changed post FTX collapse.” There are some possibilities we’re considering for tracking this more directly.
For me, I’m trying to balance trust being important with knowing that it’s normal to get some amount of criticism, and trying not to overreact to that. For instance, in preparation for a specific project we might run, we did some temperature-checking and got takes on how people felt about our plan.
I do want people to have a sense of what we actually do, what our work looks like, and how we think and approach things so that they can make informed decisions about how to update on what we say and do. We have some ideas for conveying those things, and are thinking about how to prioritize those ideas against putting our heads down and doing good work directly.
To say a bit about the ideas raised, in light of things I’ve learned and thought about recently:
External people
Conscientious EAs are often keen to get external reviews from consultants on various parts of EA, which I really get and I think comes from an important place about not reinventing the wheel or thinking we’re oh so special. There are many situations in which that makes sense and would be helpful; I recently recommended an HR consultant to a group thinking of doing something HR-related. But my experience has been that it’s surprisingly hard to find professionals who can give advice about the set of things we’re trying to do, since they’re often mostly optimizing for one thing (like minimizing legal risk to a company, etc.) or just don’t have experience with what we’re trying to do.
In the conversations I’ve had with HR consultants, ombudspeople, and an employment lawyer, I continually have it pointed out that what the Community Health team does doesn’t fall into any of those categories (because unlike HR, we work with organizations outside of CEA and also put more emphasis on protecting confidentiality, and unlike ombudspeople, we try more to act on the information we have).
When I explain the difference, the external people I talk to extremely frequently say “oh, that sounds complicated” or “wow, that sounds hard”. So I just want to flag that getting external perspectives (something I’ve been working a bunch on recently) is harder than it might seem. There just isn’t much in the way of direct analogues of our work where there are already known best practices. A good version according to me might look more like talking to different people and trying to apply their thinking to an out-of-distribution situation, as well as being able to admit that we might be doing an unusual thing and the outside perspectives aren’t as helpful as I’d hope.
If people have suggestions for external people it would be useful to talk to, feel free to let me know!
More updates
Appreciate the point about updating the community more often—this definitely seems really plausible. We were already planning some upcoming updates, so look out for those. Just to say something that it’s easy to lose track of, it’s often much easier to talk or answer questions 1:1 or in groups than publicly. Figuring out how to talk to many audiences at once takes a lot more thought and care. For example, readers of the Forum include established community members, new community members, journalists, and people who are none of the above. While conversations here can still be important, this isn’t the only venue for productive conversation, and it’s not always the best one. I want to empower people to feel free to chat with us about their questions, thoughts or perspectives on the team, for instance at conferences. This is part of why I set up two different office hours at the most recent EAG Bay Area.
It’s also worth saying that we have multiple stakeholders in addition to the community at large [for instance when we think about things like the AI space and whether there’s more we could be doing there, or epistemics work, or work with specific organizations and people where a community health lens is especially important], and a lot of important conversations happen with those stakeholders directly (or if not, that’s a different mistake we’re making), which won’t always be outwardly visible.
The other suggestions
Don’t have strong takes or thoughts to share right now, but thanks for the suggestions!
For what it’s worth, I’m interested in talking to community members about their perspective on the team [modulo time availability]. This already happens to some extent informally (and people are welcome to pass on feedback to me (directly or by form) or to the team (including anonymously)). When I went looking for people to talk to (somewhat casually/informally) to get feedback from people who had lost trust, I ended up getting relatively few responses, even anonymously. I don’t know if that’s because people felt uncomfortable or scared or upset, or just didn’t have the time or something else. So I want to reiterate that I’m interested in this.
I wanted to say here that we’ve said for a while that we share information (that we get consent to share) about people with other organizations and parts of CEA [not saying you disagree, just wanted to clarify]. While I agree one could have concerns, overall I think this is a huge upside to our work. If it was hard to act on the information we have, we would be much more like therapists or ombudspeople, and my guess is that would hugely curtail the impact we could have. [This may not engage with the specifics you brought up, but I thought it might be good to convey my model].
Just to add to my point about there still being a board, a director and funders in a world where we spin out, I’ll note that there are other potential creative solutions to gaining independence e.g. getting diverse funding, getting funding promises for more time in advance (like an endowment), and clever legal approaches such as those an ombudsperson I interviewed said they had. We haven’t yet looked into any of those in depth.
For what it’s worth, I think on the whole we’ve been able to act quite independently of CEA, but I acknowledge that that wouldn’t be legible from the outside.
Things I’d like to try more to make conversations better
Running gather towns / town halls when there’s a big conversation happening in EA
I thought the Less Wrong gather town on FTX was good and I’m glad it happened
I’ve run a few internal-to-CEA gather towns on tricky issues so far and I’m glad I did, though they didn’t tend to be super well attended.
Offering calls / conversations when a specific conversation is getting more heated or difficult
I haven’t done this yet myself, but it’s been on my mind for a while, and I was just given this advice in a work context, which reminded me.
If a back-and-forth on the forum isn’t going well, it seems really plausible that having a face-to-face call, or offering to mediate one for other people (this one I’m more skeptical of, especially without having any experience in mediation, though I do think there can be value added here), will make that conversation go better, give space for people to be better understood, and so on.
An off-beat but related idea
Using podcasts where people can explain their thinking and decision-making, including in situations where they wouldn’t do the same thing again, since podcasts allow for longer, more natural explanations
Hi Kaleem! (For people who don’t know, I’m the interim head of the Community Health team). Thanks for caring about and raising these issues.
I’ll give the short version here and aim to respond to myself with wider ranging, more in the weeds thinking another day. Basically, we have been independently thinking about whether to spin out or be more independent, and think this post is correctly pointing out a bunch of considerations that point to and away from that being the right call, with a lot of overlap in the things we’ve been tracking. At the moment the decision doesn’t feel obvious, for basically the reasons you list. I’ll add just a few things:
[This one might be obvious, but just going to say] Spinning out of CEA or EV doesn’t by itself solve COI issues, since we’ll still have a board and an executive director, and require funding.
One thing I want to add to the coordination and knowledge-sharing points: if it’s low-cost to pass on concerns we have, we can be more free about pointing to issues we see, even if there isn’t something concrete we can yet point to.
Lastly, there’s a lot of maybe implicit focus on casework in this post, and I want to reiterate that that is one part of what we do, but not all of it, and different aspects of what we do require buy-in from different stakeholders.
THB we should strongly disapprove of working at an AI lab, even on a safety team
I like the distinction between overreacting and underreacting as being “in the world” vs. “memes”—another way of saying this is something like “object level reality” vs. “social reality”.
If the longtermism wave is real, then it was largely about social reality, at least within EA, and changed how money was spent and the things people said (as I understand it; I wasn’t really socially involved at the time).
So to the extent that this is about “what’s happening to EA” I think there’s clearly a third wave here, where people are running and getting funded to run AI specific groups, people are doing policy and advocacy in a way I’ve never seen before.
If this ends up being a flash in the pan, then maybe the way to see this is something like a “trend” or “fad”, like maybe 2022-spending was.
Which maybe brings me to something like “we might want these waves to consistently be about ‘what’s happening in EA’ vs. ‘what’s happening in the world’”, and they’re currently not.