(I was the interim director of CEA during Leaders Forum, and I’m now the executive director.)
I think that CEA has a history of pushing longtermism in somewhat underhand ways (e.g. I think that I made a mistake when I published an “EA handbook” without sufficiently consulting non-longtermist researchers, and in a way that probably over-represented AI safety and under-represented material outside of traditional EA cause areas, resulting in a product that appeared to represent EA, without accurately doing so). Given this background, I think it’s reasonable to be suspicious of CEA’s cause prioritisation.
(I’ll be writing more about this in the future, and it feels a bit odd to get into this in a comment when it’s a major-ish update to CEA’s strategy, but I think it’s better to share more rather than less.) In the future, I’d like CEA to take a more agnostic approach to cause prioritisation, trying to construct non-gameable mechanisms for making decisions about how much we talk about different causes. An example of how this might work is that we might pay an independent contractor to try to figure out who has spent more than two years full-time thinking about cause prioritisation, and then survey those people. Obviously that project would be complicated: it’s hard to figure out exactly what “cause prio” means, and it would be important to reach out through diverse networks to make sure there aren’t network biases, etc.
Anyway, given this background of pushing longtermism, I think it’s reasonable to be skeptical of CEA’s approach on this sort of thing.
When I look at the list of organizations that were surveyed, it doesn’t look like the list of organizations most involved in movement building and coordination. It looks much more like a specific subset of that type of org: those focused on longtermism or x-risk (especially AI) and based in one of the main hubs (London accounts for ~50% of respondents, and the Bay accounts for ~30%).* Those that prioritize global poverty, and to a lesser extent animal welfare, seem notably missing. It’s possible the list of organizations that didn’t respond or weren’t named looks a lot different, but if that’s the case it seems worth calling attention to and possibly trying to rectify. (E.g. did you email the survey to anyone, or was it all done in person at the Leaders Forum?)
I think you’re probably right that there are some biases here. How the invite process worked this year was that Amy Labenz, who runs the event, drew up a longlist of potential attendees (asking some external advisors for suggestions about who should be invited). Then Amy, Julia Wise, and I voted yes/no/maybe on each individual on the longlist (often adding comments). Amy made the final call about who to invite, based on those votes. I expect that all of this means the final invite list is somewhat biased by our networks, and by some background assumptions we have about individuals and orgs.
Given this, I think that it would be fair to view the attendees of the event as “some people who CEA staff think it would be useful to get together for a few days” rather than “the definitive list of EA leaders”. I think that we were also somewhat loose about what the criteria for inviting people should be, and I’d like us to be a bit clearer on that in the future (see a couple of paragraphs below). Given this, I think that calling the event “EA Leaders Forum” is probably a mistake, but others on the team think that changing the name could be confusing and have transition costs—we’re still talking about this, and haven’t reached resolution about whether we’ll keep the name for next year.
I also think CEA made some mistakes in the way we framed this post (not just the author, since it went through other readers before publication). The post kind of frames this as “EA leaders think X”, which reads as the sort of thing that lots of EAs should update on. Even though the post does try to explicitly disavow this interpretation (see the section on “What this data does and does not represent”), I think the title suggests something more like “EA leaders think these are the priorities—probably you should update towards these being the priorities”. I think the reality is more like “some people that CEA staff think it’s useful to get together for an event think X”, which is something that people should update on less.
We’re currently at a team retreat where we’re talking more about what the goals of the event should be in the future. I think that it’s possible that the event looks pretty different in future years, and we’re not yet sure how. But I think that whatever we decide, we should think more carefully about the criteria for attendees, and that will include thinking carefully about the approach to cause prioritization.
Thanks for raising these points, John! I hadn’t considered the “cash prize for criticism” idea before, but it does seem like it’s worth more consideration.
I agree that CEA could do better on the front of generating criticism from outside the organization, as well as making it easier for staff to criticize leadership. This is one of the key things we have been working to improve since I took up the Interim Executive Director role in early 2019. Back in January/February, we did a big push on this, logging around 100 hours of user interviews in a few weeks and sending surveys to dozens of community members for feedback. Since then, we’ve continued to invest in getting feedback: staff regularly talk to community members about our projects (though I think we could do more); we reach out to donors and advisors for input on how we could improve; and we have various mechanisms (including anonymous ones) for staff to raise concerns about management decisions. Together, I think these represent more than 0.1% of CEA’s staff time. None of this is to say that it’s going as well as we’d like—maybe I’d say one of CEA’s “known weaknesses” is that we could stand to do more of this.
I agree that more of this could be public and transparent also—e.g. I’m aware that our mistakes page (https://centreforeffectivealtruism.org/our-mistakes) is incomplete. We’re currently nearing the end of our search for a new CEO, and one of the things that I think they’re likely to want to do is to communicate more with the community, and solicit the community’s thoughts on future plans.
I wonder if this is also a thing that ALLFED might be interested in—I haven’t looked into this much, but the article claims that the process only requires water, CO2, and electricity, which we might have in lots of disaster scenarios. So if production of this were scaled up in the short term, that might be helpful for ALLFED’s mission.
Thanks for the writeup! I really appreciate people taking the time to share what they’ve learned. I agree that activities fairs are a really high leverage time for student groups.
My summary of this approach is “Try to get as many email addresses as possible, and anticipate that many people will unsubscribe/never engage”. I’d be interested to hear more about why this approach is recommended over others.
I think that this could well be the right approach, but it’s not totally clear to me. It could be that having slightly longer conversations with people would build more rapport, give them a better sense of the ideas, and make them a lot more likely to continue to engage, so you get more/higher-quality people lower down your funnel. My memory of going to freshers fairs was that if I had a proper conversation with someone, it did make some difference to the likelihood that I engaged later on.
I also worry a bit about the maximizing for email addresses approach coming across as unfriendly.
It does seem right to me that arguing with people isn’t worth the time.
I’d be interested in why Eli and Aaron think that the “maximize for email addresses” approach is correct long-term. I could well imagine that they’ve tried both approaches, and seen more engagement lower down the funnel with the “max for email addresses” approach.
[Speaking from my experience as a groups organizer, not on behalf of CEA]
I strong upvoted this. I think it’s great to have a reference piece on this, and particularly one which has such a good summary.
That’s right, this is intended as a feature. All comments and posts start with a weak upvote (we assume you think the thing is good, or you wouldn’t have posted it). You can strong upvote your content, which is designed as a way for you to signal-boost contributions that you think are unusually valuable. Obviously, we don’t want people to be strong-upvoting all their content, and we’ll keep an eye on that happening.
To link this to JP’s other point, you might be right that subjectivism is implausible, but it’s hard to tell how low a credence to give it.
If your credence in subjectivism + model uncertainty (+ I think also constructivism + quasi-realism + maybe others?) is sufficiently high relative to your credence in God, then this weakens your argument (although it still seems plausible to me that theistic moralities end up with a large slice of the pie).
I’m pretty uncertain about my credence in each of those views though.
Upvote for starting with praise, and splitting out separate threads.
I found the Manager Tools basics podcasts and The Effective Manager a great way to cover the basics. (But I know others have found them less helpful.)
A great piece on this from the Forum is: Ben West’s post on Deliberate Performance in People Management.
As long as you make clear how it’s relevant to figuring out how to do as much good as possible, that sort of content is welcome.
That’s right—one of the main goals of having posts sorted by karma (as well as having two sections) is to allow people to feel more comfortable posting, knowing that the best posts will rise to the top.
If you highlight the text, a hover appears above the text, and the link icon is one of the options—click on it, paste the url, and press enter.
I sleep a lot better when I’m cooler, and I’ve found this helpful: https://www.chilitechnology.com/. Others recommend https://bedjet.com/.
Link to Zvi’s sequence on LessWrong, which includes the posts you mentioned: https://www.lesswrong.com/s/HXkpm9b8o964jbQ89
Hi Richard, I think you’re right that “basic concepts” is incorrect: I agree that it’s important to discuss advanced ideas which build off each other. We’d want both of the posts you mention to be frontpage posts. I’ll suggest an edit to Aaron.
By default, we’re moving all content to either Frontpage or Community, since we’re trying to have a slightly less active moderation policy than LessWrong. We might revisit this at some point. You can still click on a user’s name to see their personal feed of posts.
Moderation notice: stickied on community.
Moderation notice: Stickied in Community to give context for people familiar with the old Forum.
I agree with your point about subjective expected value (although realized value is evidence for subjective expected value). I’m not sure I understand the point in your last paragraph?