Head of Video at 80,000 Hours
(Opinions here are my own by default, though I will sometimes speak in a professional capacity).
Personal website: www.chanamessinger.com
I really like the frame of “fanfiction of your idea”, I think it helpfully undermines the implicit status-thing of “I will steelman your views because your version is bad and I can state your views better than you can.”
Some more in-the-weeds thinking that also may be less precise.
It’s absolutely true that the trust we have with the community matters to doing parts of our work well (for other parts, it’s more dependent on trust with specific stakeholders), and the topic is definitely on my mind. I’ll say I don’t have any direct evidence that overall trust has gone down, though I see it as an extremely reasonable prior. I say this only because I think it’s the kind of thing that would be easy to have an information cascade about, where “everyone knows” that “everyone knows” that trust is being lost. Sometimes there are surprising results, like the finding that “overall affect scores [to EA as a brand] haven’t noticeably changed post FTX collapse.” There are some possibilities we’re considering for tracking this more directly.
For me, I’m trying to balance trust being important with knowing that it’s normal to get some amount of criticism, and trying not to overreact to that. For instance, in preparation for a specific project we might run, we did some temperature-checking and gathered takes on how people felt about our plan.
I do want people to have a sense of what we actually do, what our work looks like, and how we think and approach things so that they can make informed decisions about how to update on what we say and do. We have some ideas for conveying those things, and are thinking about how to prioritize those ideas against putting our heads down and doing good work directly.
To say a bit about the ideas raised, in light of things I’ve learned and thought about recently:
External people
Conscientious EAs are often keen to get external reviews from consultants on various parts of EA, which I really get and I think comes from an important place about not reinventing the wheel or thinking we’re oh so special. There are many situations in which that makes sense and would be helpful; I recently recommended an HR consultant to a group thinking of doing something HR-related. But my experience has been that it’s surprisingly hard to find professionals who can give advice about the set of things we’re trying to do, since they’re often mostly optimizing for one thing (like minimizing legal risk to a company, etc.) or just don’t have experience with what we’re trying to do.
In the conversations I’ve had with HR consultants, ombudspeople, and an employment lawyer, I continually have it pointed out that what the Community Health team does doesn’t fall into any of those categories (because unlike HR, we work with organizations outside of CEA and also put more emphasis on protecting confidentiality, and unlike ombudspeople, we try more to act on the information we have).
When I explain the difference, the external people I talk to extremely frequently say “oh, that sounds complicated” or “wow, that sounds hard”. So I just want to flag that getting external perspectives (something I’ve been working a bunch on recently) is harder than it might seem. There just isn’t much in the way of direct analogues of our work where there are already known best practices. A good version according to me might look more like talking to different people and trying to apply their thinking to an out-of-distribution situation, as well as being able to admit that we might be doing an unusual thing and the outside perspectives aren’t as helpful as I’d hope.
If people have suggestions for external people it would be useful to talk to, feel free to let me know!
More updates
Appreciate the point about updating the community more often—this definitely seems really plausible. We were already planning some upcoming updates, so look out for those. Just to say something that it’s easy to lose track of, it’s often much easier to talk or answer questions 1:1 or in groups than publicly. Figuring out how to talk to many audiences at once takes a lot more thought and care. For example, readers of the Forum include established community members, new community members, journalists, and people who are none of the above. While conversations here can still be important, this isn’t the only venue for productive conversation, and it’s not always the best one. I want to empower people to feel free to chat with us about their questions, thoughts or perspectives on the team, for instance at conferences. This is part of why I set up two different office hours at the most recent EAG Bay Area.
It’s also worth saying that we have multiple stakeholders in addition to the community at large [for instance when we think about things like the AI space and whether there’s more we could be doing there, or epistemics work, or work with specific organizations and people where a community health lens is especially important], and a lot of important conversations happen with those stakeholders directly (or if not, that’s a different mistake we’re making), which won’t always be outwardly visible.
The other suggestions
Don’t have strong takes or thoughts to share right now, but thanks for the suggestions!
For what it’s worth, I’m interested in talking to community members about their perspective on the team [modulo time availability]. This already happens to some extent informally (and people are welcome to pass on feedback to me (directly or by form) or to the team (including anonymously)). When I went looking for people to talk to (somewhat casually/informally) to get feedback from people who had lost trust, I ended up getting relatively few responses, even anonymously. I don’t know if that’s because people felt uncomfortable or scared or upset, or just didn’t have the time or something else. So I want to reiterate that I’m interested in this.
I wanted to say here that we’ve said for a while that we share information (that we get consent to share) about people with other organizations and parts of CEA [not saying you disagree, just wanted to clarify]. While I agree one could have concerns, overall I think this is a huge upside to our work. If it was hard to act on the information we have, we would be much more like therapists or ombudspeople, and my guess is that would hugely curtail the impact we could have. [This may not engage with the specifics you brought up, but I thought it might be good to convey my model].
Just to add to my point about there still being a board, a director and funders in a world where we spin out, I’ll note that there are other potential creative solutions to gaining independence e.g. getting diverse funding, getting funding promises for more time in advance (like an endowment), and clever legal approaches such as those an ombudsperson I interviewed said they had. We haven’t yet looked into any of those in depth.
For what it’s worth, I think on the whole we’ve been able to act quite independently of CEA, but I acknowledge that that wouldn’t be legible from the outside.
Things I’d like to try more to make conversations better
Running gather towns / town halls when there’s a big conversation happening in EA
I thought the Less Wrong gather town on FTX was good and I’m glad it happened
I’ve run a few internal-to-CEA gather towns on tricky issues so far and I’m glad I did, though they didn’t tend to be super well attended.
Offering calls / conversations when a specific conversation is getting more heated or difficult
I haven’t done this yet myself, but it’s been on my mind for a while, and I was just given this advice in a work context, which reminded me.
If a back and forth on the forum isn’t going well, it seems really plausible that having a face to face call, or offering to mediate one for other people (this one I’m more skeptical of, especially without having any experience in mediation, though I do think there can be value add here) will make that conversation go better, give space for people to be better understood and so on.
An off-beat but related idea
Using podcasts where people can explain their thinking and decision making, including in situations where they wouldn’t do the same thing again, since podcasts allow for longer, more natural explanations
Hi Kaleem! (For people who don’t know, I’m the interim head of the Community Health team). Thanks for caring about and raising these issues.
I’ll give the short version here and aim to respond to myself with wider ranging, more in the weeds thinking another day. Basically, we have been independently thinking about whether to spin out or be more independent, and think this post is correctly pointing out a bunch of considerations that point to and away from that being the right call, with a lot of overlap in the things we’ve been tracking. At the moment the decision doesn’t feel obvious, for basically the reasons you list. I’ll add just a few things:
[This one might be obvious, but just going to say] Spinning out of CEA or EV doesn’t by itself solve COI issues, since we’ll still have a board and an executive director, and require funding.
One thing I want to add to the coordination and knowledge sharing points: the fact that it’s low cost to pass on concerns we have means we can be more free about pointing to issues we see, even if there isn’t something concrete we can yet point to.
Lastly, there’s a lot of maybe implicit focus on casework in this post, and I want to reiterate that that is one part of what we do, but not all of it, and different aspects of what we do require buy-in from different stakeholders.
THB we should strongly disapprove of working at an AI lab, even on a safety team
THB EAs should stop working at major AI labs
Thanks for this concrete prediction! Will the public know when this switch has happened (i.e. will it be announced, rather than only being apparent through the model’s behavior)?
Thanks so much for all your vulnerability and openness here—I think all the kinds of emotionally complex things you’re talking about have real and important effects on individual and group epistemics and I’m really glad we can do some talking about them.
This is all really thoughtful and an interesting insight into others’ minds—thanks so much for sharing.
I don’t have answers, but some thoughts might be:
Are there other thoughts that depend on going to superintelligence quickly that you want to re-examine now?
Are there predictions you’d now make about GPT-5?
Do you know why you trust others more? One hypothesis I have based on what you wrote is that you have a more granular sense of what goes wrong when you mess things up, and a less detailed sense of how other people do things. No idea if that’s right, but thought I’d throw it out there?
Are there other things that if they were socially rewarded you would believe more strongly?
I think a couple of times, e.g. here and here, I got so swept up in how much I liked some of what a post was doing (with Evan Hubinger’s fraud post, for instance, I liked that he wasn’t trying to abandon or jettison consequentialism while still caring a lot about what went wrong with FTX; with Gavin’s post about moral compromise, I liked that he was pointing at purity and perfection being unrealistic goals) that I wrote strongly positive comments that I later felt, after further comments and/or conversations, didn’t track all the threads I would have wanted them to. (For the post-FTX one, I also think I got swept up in worries that we would downplay the event as a community or not respond with integrity, and so I was emotionally relieved at someone taking a strong stance.)
I don’t think this is always bad; it’s fine to comment on one aspect of a post. But if I’m doing that, I want to notice it and do it on purpose, especially when I’m writing a comment that gives a vibe of strong overall support.
Do you have a sense of why you made that particular mistake?
Here’s my current understanding (certainty has been more difficult to achieve than I’d like, in part because the time period of Owen’s involvement traverses a number of changes to the structure of CEA and related organisations; it’s possible there remain errors here, but I’ve tried to be clear about my level of certainty):
It looks like in 2014 (or possibly 2013), he joined some part of the CEA legal entity, eventually working at the Global Priorities Project (which was legally part of the CEA umbrella organisation) as Director of Research. He believes he was part-time; I couldn’t confirm this.
The Global Priorities Project was split between FHI and CEA, but he was on the CEA side.
He reports he started working primarily at FHI in 2015, but may have continued to have a part-time advisory role at CEA the umbrella organisation (I couldn’t confirm).
He started working with the main part of CEA (CEA-the-project) in summer 2017, part-time (16 hours a week). His part-time contract starting then describes him as “advisor to the CEO” and reporting to the CEO.
He is described by a CEA employee there at the time as being trusted to give input on many CEA teams. This involved things like participating in staff discussions on Slack and at some meetings, and giving more input on specific projects.
At least one of the incidents that led to the woman reporting to Julia (as described in Time) happened after he started working for CEA-the-project.
The advisor position is described by a few people who have a general but not super precise sense of how things were at the time as probably signifying that he was independent from the rest of CEA, given space to think about high level things, but not likely meant to imply peers with the CEO (I hope it’s clear from my hedging words that there’s uncertainty here).
Between 2017 and 2019, he was at a number of CEA team retreats and involved in conversations about CEA leadership.
In 2018 he was setting up the Research Scholars Program at FHI.
His last contract with CEA-the-project is from March 2019, reporting directly to the board, still described as an advisor, with even more limited time (8 hours a week).
He was, as you point out, one of the people on the hiring committee for CEA’s executive director in 2019.
He stopped working at CEA-the-project July 2019, and was appointed a trustee (of EV, then called CEA) March 2020.
From descriptions by others at CEA, from 2020-2023, both in the role of trustee and more generally in the community he had big input on strategic questions and was a trusted senior advisor type; I don’t know how much weight goes on the CEA aspect versus the more general aspect.
My own interpretation is that it seems like there were two parallel tracks:
Owen was doing work for the Global Priorities Project, then moving to FHI, then setting up the Research Scholars Program, while continuing to do his own work / research
People running or high up in CEA trusted Owen quite a bit on strategic questions, and he was given roles that let him give thoughts and strategic input on CEA activity. Starting at least in 2017, he was involved in or spearheaded at least some CEA projects and was involved in important conversations about leadership at CEA. I don’t know how much advising he did beyond that (it could have been a lot or a little), but he was in a position where his views were given a good amount of weight.
In terms of evaluation, as you mention in your original comment, I do think things in this category will be inputs to both our internal Community Health review and the external investigation occurring.
(Written in a personal capacity) Yeah, agree, and your comment made me realize that some of these are actually my experimental thoughts on something like “facilitating / moderating” sensitive conversations. I don’t know if what you’re pointing at is common knowledge, but I’d hope it is, and in my head it’s firmly in “nonexperimental”, standard and important wisdom (as contained, I believe, in some other written advice on this for EA group leaders and others who might be in this position).
From my perspective, a hard thing is how much work is done by tone and presence—I know people who can do the “talk about a Bayesian analysis of harassment” thing with non-rationalists with sensitivity, warmth, and care, and people who do the “displaying kindness, empathy and common ground” thing in a way that leaves people more tense than before. But that doesn’t mean the latter isn’t generally better advice (I think it probably is for most people)—and I hope it’s in people’s standard toolkits.
Some experimental thoughts on how to moderate / facilitate sensitive conversations
Drawn from talking to a few people who I think do this well. Written in a personal capacity.
Go meta—talk about where you’re at and what you’re grappling with. Do circling-y things. Talk about your goals for the discussion.
Be super clear about the way in which your thing is or isn’t a safe space and for what
Be super clear about what Bayesian updates people might make
Consider starting with polls to get calibrated on where people are
Go meta on what the things people are trying to protect are, or what the confusion you think is at play is
Aim first to create common knowledge
Distinguish between what’s thinkable and what’s sayable in this space and why that distinction matters
Reference relevant norms around cooperative spaces or whatever space you’ve set up here
If you didn’t set up specific norms but want to now, apologize for not doing so until that point in a “no fault up to this point but no more” way
If someone says something you wish they hadn’t:
Do many of the above
Figure out what your goals are—who are you trying to protect / make sure you have their back
If possible, strategize with the person/people who are hurt, get their feedback (though don’t precommit to doing what they say)
Have 1:1s with people where you’re able
If you want to dissociate from someone or criticize them, explain the history and your connection to them, don’t memoryhole stuff, give people context for understanding
Display courage.
Be specific about what you’re criticizing
Cheerlead and remind yourself and others of the values you’re trying to hold to
People are mad for reasonable and unreasonable reasons; you can speak with strength to the reasonable things you overlap on
Yeah, happy to clarify; to reiterate I’m very sympathetic to what Will said about how EA isn’t about making sure you have the personal life you want to have. What I’m sympathetic to in the sentence I quoted (while not necessarily agreeing with the whole article, which I didn’t intend to imply by quoting the sentence) was the healthiness of feeling a certain protectiveness over your personal life. While I intended my comment in a personal capacity (and should have said so), it also seems to me to be a matter of community health that people (including when I think particularly of women’s experiences) are able to set boundaries around how much EA takes over their lives, ends up taking away things that are nourishing or important to them, or makes unsustainable demands.
I think you’re right that the sentence doesn’t really, on its own, help us figure out what the right limits are on what’s appropriate and what’s not, and where the “this is personal” line can get drawn. I’m pretty open to a wide range of possible norms, including those that ask for people to act quite differently in their personal life than they otherwise would because it’s better for the world, and I’m quite up for figuring out the right consequentialist view on it (I personally take on sacrifices in that direction, and think it’s appropriate that many others do as well). I read Ozy as agreeing with this, given their paragraph here[1].
I do think there is some tension here. I take it as part of the Community Health team’s job to navigate that tension and figure out what overall helps the community be a place where people can flourish and do their best work. Sometimes this isn’t hard—harassment and assault and abuse of power and so on aren’t categorically out of our remit just because they sometimes happen in people’s personal lives—and sometimes it’s more complicated.
Not the least bit my intention to trivialize the discussion, and the team cares a tremendous amount about figuring these things out well.
“I of course believe that effective altruists should follow common-sense sexual ethics, which includes not hitting on people in your chain of command, avoiding any pressure into sex, and being careful about relationships with coworkers, with people much younger than you, and people you have power over. I support a norm of no cuddle piles or flirting at effective altruist retreats, meetups, events, or afterparties; effective altruism can certainly set the norms at official EA events, and it makes sense to me that these should be asexual spaces. I agree that certain people, by virtue of their position, grant the effective altruism movement more say in their romantic lives. A grantmaker, a charity founder or executive, or a community organizer should take great care to make sure they don’t have sex with anyone they might wind up with power over.)”
Yeah, broadly agree (and feel confused about how much I feel like it applies to me personally—like, I feel reasonably comfortable making personal sacrifices in how I organize my personal life for EA reasons, and I imagine others do, too, including with the frame of personal obligation. I just think it can also be quite healthy for people to have a more protective attitude towards their private life—noting that Ozy points to a lot of constraints on people’s personal behaviors that they’re on board with, and also that, depending on what Ozy means, I’m not necessarily in full agreement with the piece).
Also (though I’m not sure I fully agree): https://thingofthings.substack.com/p/on-demandingness-polyamory-and-effective
In general I use and like this concept quite a lot, but someone else advocating it does give me the chance to float my feelings the other direction:
I think sometimes when I want to invoke missing moods as a concept, to explain to my interlocutor what I think is going wrong in our conversation, I end up feeling like I’m saying “I am demanding you are required to be sad because I would feel better if you were”, which I want to be careful of imposing on people. It sometimes also feels like I’m assuming we have the same values, in a way I’d want to do more upfront work before relying on.
More generally, I think it’s good to notice the costs of the place you’re taking on some tradeoff spectrum, but also good to feel good about doing the right thing, making the right call, balancing things correctly etc.