Announcing the Meta Coordination Forum 2023
Continuing our efforts to be more transparent about the events we organize, we want to share that we’re running the Meta Coordination Forum 2023 (an invite-only event scheduled for late September in the Bay) and provide community members with the opportunity to give input via a pre-event survey.
Highlights
Event Goal: Help key people working in meta-EA make better plans over the next two years to help set EA and related communities on a better trajectory.[1]
Agenda:
Updates from subject-matter experts in key cause areas.
Discussions on significant strategic questions.
Clarification of project ownership and forming of actionable plans.
Attendees: A group of key people focused on meta/community-building work. The event is not aimed at key figures outside the meta space.
Organizing Team: The Partner Events team at CEA (Sophie Thomson, Michel Justen, and Elinor Camlin) is organizing this event, with Max Dalton and other senior figures in the meta space advising on strategy.
Community Engagement: We’d like to hear your perspectives via a survey by 11:59 PM PDT on Sunday, 17 September. The survey asks about the future of EA, the interrelation between EA and AI safety, and potential reforms in meta-EA projects.
Post-Event: A summary of responses to the pre-event survey will be made public after the event to encourage wider discussion and reflection in the community.
The event is a successor to past “Leaders Forums” and “Coordination Forums” but is also different in some important ways. For further details, please see the post below.
Why we’re running this event
Now seems like a pivotal time for EA and related communities
The FTX crisis has eroded trust within and outside the EA community and highlighted some important issues. Also, AI discourse has taken off, changing the feasibility of various policy and talent projects. This means that now is an especially important time for various programs to reconsider their strategy.
We think that more coordination and cooperation could help people doing meta work make better plans
We think it could be useful for attendees to share updates and priorities, discuss group resource allocation, and identify ways to boost or flag any concerns they have with each other’s efforts. (But we don’t expect to reach total agreement or a single unified strategy.)
What the event is and isn’t
We think it’s important to try to give an accurate sense of how important this event is. We think it’s easy for community members to overestimate its importance, but we’re also aware that it might be in our interests to downplay its importance (thus inviting less scrutiny).
The event will potentially shape the future trajectory and strategies of EA and related communities
First, some ways in which the event is fairly important:
It will bring together many important decision-makers in the meta-EA space (more on attendees below).
These people will be discussing topics that are important for EA’s future, and discussions at the event might shape their actions.
The event aims to improve plans, and we hope that this leads to a better trajectory for EA and related communities.
The event may facilitate further trust and collaboration between this set of people, possibly further entrenching their roles (though we’re also trying to be more careful about how much we limit this; see below).
The event will not foster unanimous decisions or a single grand strategy
Some ways in which the event is less important:
It is not a collective decision-making body: all attendees will make their own decisions about what they do. We expect that attendees will come in with lots of disagreements and will leave with lots of disagreements (but hopefully with better-informed and coordinated plans). This is how previous versions of this event have gone.
Relatedly, we are not hoping or expecting that we’ll agree on a single grand strategy.
We think that many attendees have spent a long time thinking about these questions. We hope that they’ll learn some new things and find opportunities to collaborate, but these will likely be second-order tweaks to their models and plans, with the main shape of their plans determined by work outside of the event.
It is only focused on the “meta”/community-building space rather than object-level decisions in any cause area.
We don’t want this event to slow down other efforts to build a better world
While this event is aimed at improving the trajectories of EA and related communities, we think it will be very far from solving all of the problems that exist, and we support others trying to do other work that could help (e.g., running projects to address a problem they see, trying to coordinate important decision-makers). Please don’t rely on this event for too much, and don’t let it slow down any efforts you’re considering to build a better world. Just because there are private discussions, don’t assume that those private discussions will solve everything!
What we’re aiming to do with the event
Our Goal: Help key people working in meta-EA make better plans over the next two years
The event aims to help key people working in meta-EA make better plans over the next two years, to help set EA and related communities on a better trajectory.
Our Strategy: Build shared context, discuss disputed strategy questions, clarify project ownership, and maintain professional relationships
We’re aiming to help key people working in meta-EA make better plans over the next two years by:
Building shared context by syncing up about EA and its associated cause areas:
Hosting expert presentations and Q&As to discuss recent developments and field needs.
Generating reports on meta-EA topics such as the growth rates of key programs and data on program cost-effectiveness.
Discussing disputed meta-EA strategy questions
We’re focusing discussions on significant yet tractable disagreements with direct implications for important decisions.
We’re using a pre-event survey and memos to identify and draw out these disagreements.
We’re not trying to reach agreement on big strategy questions. For many big strategy questions, we think that this is likely intractable, and it’s often fine/good for people to pursue a variety of complementary strategies.
Clarifying who’s owning which projects
We’re gathering a list of important projects through the pre-event survey, memos, and throughout the event.
Towards the end of the event, we’ll host a session to rank, discuss, and assign ownership for highly-ranked proposals.
We’ll share a list of unowned projects publicly so that people outside of the event know which projects might be good to pursue.
Building professional relationships with calibrated trust
While past events have emphasized trust-building, we want to take a balanced approach here. We want to help people to understand each other’s goals, thinking, strengths, and weaknesses better so that they have a better sense of when and how to trust and coordinate with each other.
We think that this will be built throughout the event, especially in 1:1s and informal discussions.
Who’s attending the event
We designed the attendee list in consultation with Claire Zabel and James Snowden, and with suggestions from our initial invitees. Our objective was to assemble a group of attendees that is roughly representative of the meta work that is going on (e.g., in terms of the split between effective giving, EA principles, existential risk community building, etc.).
43 people working in EA meta are currently planning to attend the event. These include:
Alexander Berger
Amy Labenz
Anne Schulze
Arden Koehler
Bastian Stern
Ben West
Buddy Shah
Caleb Parikh
Chana Messinger
Claire Zabel
Dewi Erwan
James Snowden
Jan Kulveit
Joey Savoie
Jonas Vollmer
Julia Wise
Kuhan Jeyapragasan
Lewis Bollard
Lincoln Quirk
Max Dalton
Max Daniel
Michelle Hutchinson
Nick Beckstead
Nicole Ross
Niel Bowerman
Oliver Habryka
Rob Gledhill
Sim Dhaliwal
Sjir Hoeijmakers
William MacAskill
Zach Robinson
Note this isn’t the full list; some people preferred not to be publicly listed.
What the attendee list is not
This is not a canonical list of “key people working in meta EA” or “EA leaders.” There are plenty of people who are not attending this event who are doing high-value work in the meta-EA space. We’ve invited people for a variety of reasons: some because they are leaders of meta EA orgs or run teams within those orgs and so have significant decision-making power or strategic influence, and others because they have expertise or work for an organization that we think it could be valuable to learn from or cooperate with.
Invitation to this event is not proof of trustworthiness. Post-FTX, we want to make it especially clear that being an important player in EA doesn’t guarantee a high degree of trustworthiness or endorsement from others. We value collaboration and learning among people, taking place in an environment of well-calibrated trust.
How You Can Help
We value the insights and perspectives of the community and understand that the event could influence decisions that impact some community members. So, we wanted to ask you to share your input through a brief survey to inform and shape the discussions at the event (deadline: 11:59 PM PDT on Sunday, 17 September).
Because we’re in the final sprint of event planning, we will prioritize responding to comments that could influence the event, and our responses to other questions will be delayed.
Updates Post-Event
After the event, we plan to publish a summary of the survey responses and a list of unowned projects publicly (edit: see footnote[2]). We will also encourage attendees to share their memos on the EA Forum, and we will think about what other updates we can share that will aid transparency and coordination.
[1] Two years is obviously a bit arbitrary, but feels like the right sort of time frame. We think that decisions made at the event will have an impact over months, but that relationship-building will have an impact over years (but especially the next couple of years, before turnover in leadership and weakening ties causes the effect to decrease).
[2] We’ve decided to call off sharing the list of owned and unowned projects.
We’ve shared an extensive summary of the Meta Coordination Forum’s pre-event survey, and the cost-benefit of sharing this additional project list didn’t look great. This is because: 1) the MCF recaps we’ve already posted received relatively little attention, suggesting limited benefit; 2) sharing this list would still require significant work from the organisers to check in with project owners about what they’re happy to share; 3) the organisers are currently very busy with another upcoming event. We wish we didn’t have to drop this, but it seems like the best call.
Thanks for sharing this—I really appreciate the transparency!
A quick question on the attendees: Are there any other (primarily) animal advocacy-focused folks within the 43 attendees or is it just Lewis? I don’t know the exact breakdown of meta EA efforts across various cause areas but I would be somewhat surprised if meta animal work was below 2% of all of meta EA spending (as is implied by your 1/43 ratio). There are several notable meta EA animal orgs doing work in this space (e.g. Animal Charity Evaluators, EA Animal Welfare Fund, Farmed Animal Funders, Focus Philanthropy and Animal Advocacy Careers) so wondering if Lewis is meant to represent them all? If so, I think that’s a pretty tough gig! Would be curious to hear more about what determined the relative cause area focuses of the attendees or if there’s some dataset that shows meta EA spending across various cause areas.
(Note: I’m aware there is some overlap between other attendees and animal work e.g. Joey and Charity Entrepreneurship, but it’s not their primary focus hence me not including them in my count above).
I think there aren’t really any attendees who are doing meta work for a single cause. Instead, it seems to be mostly people who are doing meta work for multiple areas.
(I also know of many people doing AI safety meta work who were not invited.)
Yeah I interpreted the scope of the forum as ‘meta-EA’/meta-meta rather than meta-[specific causes].
Thanks for sharing this post, this is a really positive step forward in transparency, I especially appreciated the list of attendees and description of the structure of the Forum. I like that there will be assigned owners of different projects, and I hope that outcomes of the Forum and information on initiatives that came out of it will also be shared with the community.
TL;DR:
I think that ultimately, issues pertinent to the community need to have meaningful, two-way, sustained engagement with the community. I’d like to see participation from more “regular” community members as well, who will add necessary and valuable perspectives to understand our community better and help improve it.
I believe there is a need for two different spaces—one for leaders to coordinate with each other, and one for a dialogue to be had. But I think the latter is essential to informing the agenda of the former. I’d love for the community survey to be a first step towards creating that second space. Perhaps something like what I’m describing could even be the kind of thing that’s discussed during the Forum.
To elaborate:
Often I think the terms “EA community” or “community building” can be used to mean different things. I think there’s a difference between efforts towards “EA the movement” vs. those (perhaps more internally facing) towards “EA the community”. For example, funding a new organization to target policy professionals for AI governance would be an “EA the movement” decision (even though it’s community building), while improving sexual misconduct practices would be “EA the community”.
From the post, I understand that the forum’s remit is both “movement” and “community” (emphasis mine):
and is
They are of course very intertwined, and from the events of the past year it is likely that community concerns will be an important topic of conversation.
For such issues, I think it’s essential for there to be meaningful dialogue between (regular, representative) community members and EA decision-makers and leaders, and more mechanisms by which community members can systematically raise concerns that are directly relevant to them (e.g. a place to see current issues, how much support there is for addressing those issues, potential solutions being considered & worked on, etc.)
I think that in any community, leaders will likely not always be aware of what is happening “on-the-ground”, and that there are many systemic, power-differential related reasons that they won’t always be getting the information they need. I wouldn’t be surprised if this has improved post-FTX, but I’m pretty sure we have a ways to go in this regard.
Up until now, leaders haven’t prioritized this very highly (even if they believe it’s important). I think this is because there has historically been a lack of clarity as to whose responsibility managing the community is. The community itself has grown faster than its infrastructure could keep up with.
I don’t think setting up the infrastructure to allow for this dialogue to occur is trivial or easy. It’s costly to do well. It requires investment from leaders and others who would work on these issues full time. The work is (often emotionally) hard, unrewarding, and sometimes it’s not clear if you’ve had an impact. That being said, I believe it is essential infrastructure to make this community sustainable and healthy (and allow it to grow).
A suggestion of a minimal viable way to do this might be to have a small group of randomly chosen EAs who attend part of events like this. That would probably make it easier to empathise with the community as it currently is.
I am pretty uncertain if I endorse this idea.
Yeah, dunno if this would be good but, if people are interested in exploring it further, I can recommend this report from the OECD.
Amongst other things, it gathers close to 300 representative deliberative practices to explore trends in such processes, identify different models, and analyse the trade-offs among different design choices as well as the benefits and limits of public deliberation.
It divides these processes into four different types:
Informed citizen recommendations on policy questions: These processes require more time (on average a minimum of four days, and often longer) to allow citizens adequate time and resources to develop considered and detailed collective recommendations. They are particularly useful for complex policy problems that involve many trade-offs, or where there is entrenched political deadlock on an issue.
Citizen opinion on policy questions: These processes require less time than those in the first category, though still respect the principles of representativeness and deliberation, to provide decision makers with more considered citizen opinions on a policy issue. Due to the time constraints, their results are less detailed than those of the processes designed for informed citizen recommendations.
Informed citizen evaluation of ballot measures: This process allows for a representative group of citizens to identify the pro and con arguments for both sides of a ballot issue to be distributed to voters ahead of the vote.
Permanent representative deliberative bodies: These new institutional arrangements allow for representative citizen deliberation to inform public decision making on an ongoing basis.
Yeah this looks great. Thanks so much. Exactly the kind of thing I wanted.
Hmm, this doesn’t feel true of my experience. I’m mentally running through a list of recent large-ish CEA projects, and they all involved user interviews, surveys, or both.
It’s possible that you mean something else by “meaningful dialogue”? (Or are referring to non-CEA projects?)
I suppose you could think of it as a matter of degree, right? Submitting feedback, doing interviews, etc. are a good start, but involve people having less of a say than either 1. being part of the conversation or 2. having decision-making power, e.g. through a vote. People like to feel their concerns are heard—not just in EA, but in general—and when e.g. a company says “please send in this feedback form” I’m not sure many people feel as heard as if someone (important) from that company listens to you live and publicly responds.
Thanks for sharing this, really looking forward to the results of the survey.
As this event is for “a group of key people focused on meta / community-building work”, wouldn’t it make sense to include those doing community-building work ‘on the ground’? E.g., organisers that work for national, uni, city, or profession-based community-building organisations?
Rob provides some representation as head of CEA’s Groups Team, and obviously many of the attendees were working at the coalface in the recent past, but it still feels like a missed opportunity.
I recognise I’m probably biased because I co-run the Dutch national organisation.
Seeing the other replies, it seems the specific experience of running a national EA organisation is not specifically represented, although, in our (EA Germany) case, Anne Schulze is part of the community. Bigger EA organisations like ours (>100 members, supporting 27 local/uni groups, providing fiscal sponsorship/employer of record services, having a community health contact, etc.) might bring an additional perspective.
However, I see that having a representative for each sub-group would make for a big forum and that it’s okay to have people who represent multiple perspectives.
Thanks for sharing your perspective Patrick!
Yeah, every sub-group being present would be ridiculous, but I think one or two people who have previously done the work and are now working full-time supporting people who are still doing it would be a big improvement, e.g. Naomi/Amarins for national and city groups, or Jessica/Joris for uni groups (I’m not sure who the equivalent would be for professional groups like High Impact Engineers, etc.).
Perhaps an even better solution would be to have the CBG/UGAP/Infra Fund[1] recipients elect one or two people, as Rocky suggested, or even just select random representatives through sortition.
E.g. EA Denmark or EA Philippines
Thanks for sharing James! We did invite a few people doing more on-the-ground community building in various university/national groups, and some of them (e.g. Anne Schulze) are attending (note that not all attendees are public). But I’m not sure whether we got the balance right here, maybe we should have invited more such people.
Cheers! I haven’t met Anne, does she do community building work alongside her role as a Co-director at Effektiv Spenden? Because I don’t think I’d count Effektiv Spenden as a community building organisation, and certainly not in the way I’d count EA Germany as a community building organisation.
Makes sense! I was thinking of Effektiv Spenden, but I see that that’s an ambiguous example. Another public attendee who is doing on-the-ground community building is Kuhan Jeyapragasan (and I think that there were 1-2 others who were invited but can’t make it, or aren’t public).
Maybe I’m being picky but hasn’t it been quite a while since Kuhan was involved in the day-to-day running of EA Stanford? Wasn’t it around 2021? Because I don’t think I’d count SERI or CBAI as EA community building either.
(I appreciate this is probably very irritating if you’ve got 1-2 people from the 12 non-public names who are perfect examples of people who have been doing on-the-ground EA community building in the past two years or so).
Chiming in just to second James. There are dozens of us operating large regional meta EA organizations and I don’t see anyone representative of that perspective on the public list. I think it would be extremely valuable to have at least one leader from the CBG organizations present, ideally nominated by other CBGs such that they could represent our collective “on the ground” perspective. I’m happy to write a full list of why I think this perspective is valuable and not covered by the (also very valuable) perspectives in the public attendee list, if that would be useful.
Fwiw, I would be really interested in hearing why you think the perspective of someone currently on the ground running a city or national group is not already covered, and why it would be a valuable addition.
It seems very plausible to me that the event is missing this perspective, but, I think several listed attendees have past hands-on CB experience or work fairly closely with community builders e.g. Anne, myself, Dewi, Jan, Kuhan, Max Daniel, Max Dalton, Rob (I could be wrong about some of these people).
Thanks! I’m happy to expound.
I’ve tried categorizing the public attendee list by their area of meta EA work. There are many different ways to categorize and this is just one version I put together quickly. It looks something like:
Funding
    Fundraising
    Grantmaking
Programming
    Events
    Education
    Advising
Growth and Strategy
Service providers (Not included in the public list)
Comms (Not included in the public list)
Incubation
Community Health
Field-building
High-level meta EA
    OP, CEA, EVF, LessWrong
“On the ground” meta EA (Not included in the public list)
    Regional organizations
    Professional organizations
    University groups
        Kuhan checks this last box but also has a cause-specific bent
While the people listed make critical decisions regarding resource allocation, granting, setting strategic directions, or providing critical infrastructure, their experience is fundamentally different from those who are directly involved in “on the ground” organizations. Vaidehi writes that “issues pertinent to the community need to have meaningful, two-way, sustained engagement with the community.” “On the ground” organizations likely do this more than almost any other orgs in the EA ecosystem.
I think the perspective of the wider breadth of “on the ground” community leaders is important, but I’ll speak to regional EA organizations, as that’s what I know best:
Before the FTX collapse, there was a heavy emphasis on making community building a long-term and sustainable career path. As a result, there are now dozens of people working professionally and often full-time on meta EA regional organizations (MEAROs).[1] By and large, we are a team of sorts: we’re in regular communication with each other, we have a shared and evolving sense of what MEAROs are and can be, and our strategic approaches intertwine and are mutually reinforcing. We essentially function as extended colleagues in a niche profession that feels very distinct to me from even other “on the ground” meta-EA community building (such as professional or uni groups). I don’t think anyone on the attendee list has run a MEARO, and certainly not in 2023.
There is a distinct zeitgeist among MEAROs. Consistently, I’ve been amazed how MEARO leaders seem to independently land on the same conclusions and strategic directions as our peers across the globe, “multiple discovery” if you will. This zeitgeist is not captured in larger EA discourse, from the Forum to conversations I have with non-MEARO community leaders. And this MEARO zeitgeist is evolving rapidly, such that it looks very different from even four months ago. As a result, I don’t think anyone who hasn’t been intimately involved in MEAROs in the past 3-6 months can represent our general shared perspective.
This shared perspective is born out of three main ingredients:
“On the ground” intensive feedback loops: We are engaging directly with community members at all stages of the funnel—across EA causes and professions—understanding their concerns, aspirations, and challenges in real time. This provides a richness of information on everything from how people are finding EA, to reactions to current events, to what HEAs see as their biggest needs from community builders. Think of us as carrying out unofficial and constant surveying on everything you’d want the broader EA community’s feedback on.
High-level EA org feedback: EA orgs and projects from throughout the ecosystem consistently correspond and collaborate with MEAROs in a way that provides us with a decently holistic and up-to-date understanding of where EA is and where it is headed.
MEARO-level strategy: It is our job to think about what MEAROs are and what they should be to achieve maximum impact. We arguably have the most mental bandwidth for this task of anyone in EA and, again, this is shifting dramatically as the EA community and the causes we care about rapidly change.
I think segments of #1 and #3 are captured by some of the publicly listed attendees, and I imagine the attendees have an equally good or even substantially better experience of #2, but it is the unique perspective that the combination of the three enables that I’m referencing.
At an event focused on meta coordination, it seems really important to have the perspective of those engaging constantly and deeply with “the EA masses,” immersed in regional strategy, and among the best able to shape the future of EA perception as the on-the-ground representatives of EA to thousands of people worldwide.
I talked this through with @James Herbert a bit and we discussed three possible cruxes here:
Are the people in the public attendee list doing different work from MEARO leaders?
For example, have they directly done things listed in Patrick’s comment, or advised hundreds of regular people in their geographic region?
If they have, how long is that knowledge valid?
For example, EA looks very different in September 2023 than it did in September 2022, and that changes the nature of some aspects of MEARO leadership more than others.
Does directly doing the type of work involved in operating a MEARO give you a different set of knowledge that is useful in contexts like the Meta Coordination Forum?
I hope the above gestures at why I think the answer is “yes” and believe most other MEARO leaders are likely to agree.
Yes, I totally just coined this acronym.
In addition to Rocky’s comment, there’s also the fact that only a tiny proportion of the attendees have experience with CB outside the anglosphere (Sjir and Jan are the two I know of, but I might be missing some). This seems disproportionate given that approx 40% of the 2022 survey respondents reside in non-English speaking countries.
If you’re willing to write up some of your on-the-ground perspective and it seems valuable, we’d be happy to share it with attendees!
Off the top of my head, I’m thinking things like:
What changes in community members attitudes might ‘leaders’ not be tracking?
From your engagement with ‘leaders’ and community members, what seem to be the biggest misunderstandings?
What cheap actions could leaders take that might have a really positive influence on the community?
I’ll DM you with info on how to share such a write-up, if you’re interested.
Thank you, Michel! I’m replying over DM.
To add a bit of context in terms of on-the-ground community building, I’ve been working on EA and AI safety community building at MIT and Harvard for most of the last two years (including now), though I have been more focused on AI safety field-building. I’ve also been helping out with advising for university EA groups and with workshops/retreats for uni group organizers (both EA and AI safety), and I organized beginning-of-year residencies at a few universities to support beginning-of-year EA outreach in 2021 and 2022, along with other miscellaneous EA CB projects (e.g. working with the CEA events team last year).
I do agree though that my experience is pretty different from that of regional/city/national group organizers.
Thanks Kuhan!
Thank you for all of your work organizing the event, communicating about it, and answering people’s questions. None of these seem like easy tasks!
Thank you for sharing this!
My answer to the survey’s question “Given the growing salience of AI safety, how would you like EA to evolve?”:
I think EA is in a great place to influence the direction of AI progress, and many orgs and people should be involved with this project. However, I think that many people in this forum think that the most important outcome of the EA community is its influence on this technology, and I think this is mistaken and misleading.
The alternative would be to continue supporting initiatives in this space, including AI safety-specific subcommunities, but to support a thriving EA community which is measured by the quality of thought and decision making, and the number of people actively dedicating a sizable proportion of their resources toward doing the most good they can (in contrast with measuring communities and individuals based on their deference to top-down cause prioritization).
I’m reasonably sure that the current wave of orgs and people working on AI safety is strong enough to maintain itself and grow well, and I’m worried about over-optimizing on near timelines.
(Sharing this because I’m uncertain and would be interested in thoughts/pushbacks)
I appreciate the partial transparency, but I’m disappointed with the choice to leave the names of some attendees unpublished. It is inappropriate for some people to direct these important decisions from the shadows, even if they themselves would prefer it that way.
I disagree fwiw. The benefits of transparency seem real but ultimately relatively small to me, whereas there could be strong personal reasons for some people to decline to publicise their participation.
I think it’s hard for a lot of people to imagine what those compelling reasons could be, without further specificity.
The scandals of the last year have shown us that the importance of transparency and oversight is anything but small.
It’s easy to dismiss it, but the fact is you have no idea if the people whose identities are hidden are ones you could trust. And even if they were, as long as this much money and influence are at play, they corrupt people.
My understanding is that the main reason people wouldn’t want to publicize their involvement is to minimize reputational risk (most likely because of FTX). For those doing direct work, it could hurt their ability to engage with non-EA actors. I think this is a pretty compelling reason not to publicize your involvement.
An alternative solution would be not to participate in the forum.
I’m pretty curious about any tools/events that could be built to help such events run better. To give a short but not exhaustive list:
Do people write short statements of their views on topics beforehand? Are these useful?
Is there any attempt to specifically channel people who disagree together and get them to talk? Does this work?
Do people in general change their minds? (The recent Xrisk Prediction Tournament was kind of a downer, in that many superforecasters and experts seemingly don’t.)
Would it be worth having a live voting session that people could submit to throughout the event to anonymously judge sentiment?
Has there ever been on-site mediation to try and resolve longstanding differences? I bet some of these people don’t trust one another, and that hampers information sharing.
I sort of guess the main value add is having a load of decision makers in one space for an extended period of time to develop trust. But in turn it seems surprising to me that we can’t do better than that. It would be really interesting to understand problems you folks have because I guess those apply to political and non-profit decision makers too.
Finally, a confusion:
and
These two statements read as contradictory to me? These people aren’t necessarily trustworthy but you value well-calibrated trust? Perhaps I’m meant to understand that you lot know how much to trust each other but that we shouldn’t necessarily do so? I don’t really understand what you mean here.
[Brief comment, sorry!]
Thanks for those thoughts—we’re planning to do some of those (e.g. have people write memos on important topics before the event), and I think we’ve considered doing all of those things. (Not sure if we made the right decision on how to handle each of these, and not explaining our stance on all of them because of time.)
Re trust: Sorry, that second sentence is rather confusing. What I mean is that: we’re not guaranteeing that everyone attending the event is 100% trustworthy. And I hope that the event will allow attendees to understand each other’s motivations/strengths/weaknesses/etc in more depth, so that attendees can get a better understanding of when/how to trust each other and collaborate. I think that non-attendees won’t get these benefits, and shouldn’t make big updates from the fact that someone is invited/not. I hope that’s a bit clearer.
I do very much agree with Nathan’s sentiment here.
I appreciate that the original post announcing this forum aims to manage expectations and temper potential concerns people will have about this group producing a ‘grand strategy’ for EA or similarly agreeing on solutions to all the big problems. However, there is also acknowledgment that the event is aiming to help plan the next two years and set the trajectory going forward.
These are important topics and issues (as reflected by the significant senior time involved in the event), and pretty much all of them require a lot of individual and group reasoning under uncertainty. As such I do think there is a very beneficial role for robust methods to help facilitate discussion and decision-making.
I don’t know what things you may already be planning to implement, so I’m mostly just putting a flag down to say: if you haven’t already, it’d be worth investing in such methods. So that I’m not entirely ‘talk and no suggestion’, some very basic things to introduce (if not already), at low cost and effort, could be:
A clear framework for all attendees on how uncertainty and predictions should be communicated during the event, to ensure consistency and transparency of reasoning between attendees and to help reduce misinterpretation errors, which are always a risk in such forums/events.
An external/third party (external to the attendees, but could still be EA) to provide a mediation and challenge function (similar to what Nathan suggested).
Collection of prior positions on key topics, with confidence percentages provided before the event, plus updating rounds during and at the end of the event, with short notes on what contributed to any change (partly to show the impact of the event, but also to help identify where and with whom the more intractable differences lie—which can help focus later action/discussion).
I vastly prefer a brief comment to none, thanks for your time.
It would be useful to know the date of the last such forum as it would make it easier for us to know what topics would be new since that event.
I think that the last one was in July 2022.
Yeah I can only find reference to the Virtual Coordination Forum in 2020 and the Leaders Forum in 2019.
Talks about “calibrated trust”, has no forecasters, sad. In the absence of someone to represent my exact niche interest, I guess that if we were doing transferable votes, mine would go to Habryka.
Naming nitpick: Given the title, an expression of valuing transparency, and this being in the Bay, I originally thought this was about Zuckerberg’s Meta, not Meta-EA :)
Do you plan to share the results of the community survey?
“After the event, we plan to publish a summary of the survey responses.”
Ah yes thanks! Meant to delete this.