We (the CEA Events Team) recently posted about how we cut costs for EA Global last year. That’s a big contributing factor, and part of that work involved hiring a production associate to help us cut overall costs.
It seems like you’re making a few slightly different points:
1. There are much more pressing things to discuss than this question.
2. This question will alienate people and harm the EA brand because it’s too philosophical/weird.
3. The fact that the EA Forum team chose this question given the circumstances will alienate people (kind of a mix between 1 and 2).
I’m sympathetic to 1, but disagree with 2 and 3 for the reasons I outlined in my first comment.
I disagree that we should avoid discussing topics so as to avoid putting people off this community.[1]
I think some of EA’s greatest contributions come from being willing to voice, discuss and seriously tackle questions that seemed weird or out of touch at the time (e.g. AI safety). If we couldn’t do that, and instead stayed within the Overton window, I think we would lose a lot of the value of taking EA principles seriously.
If someone finds the discussion of extinction or incredibly good/bad futures off-putting, this community likely isn’t for them. That happens a lot!
[1] Perhaps for some distasteful-to-almost-everyone topics, but this topic doesn’t seem like that at all.
Really excited to see EAGx return to Prague!
few people are thinking about how to navigate our way to a worthwhile future.
This might be true on the kinds of scales EAs are thinking about (potentially enormous value, long time horizons), but is it not the case that many people want to steer humanity in a better direction? E.g. the Left, environmentalists, libertarians, … ~all political movements?
I worry EAs think of this as some unique and obscure thing to think about, when it isn’t.
(on the other hand, people neglect small probabilities of disastrous outcomes)
It seems plausible to me that we might be approaching a “time of perils” where total x-risk is unacceptably high and will remain so as we develop powerful AI systems, but might decrease later once we can use AI systems to tackle x-risk (though that seems hard and risky in its own myriad ways).
Broadly, I think we should still prioritise avoiding catastrophes in this phase and bet on being able to steer later, though I hold that view with low confidence.
Walking around the conference halls this February at EAG Global in the Bay Area, the average age seemed to be in the mid-20s or so.
The average age of EAG Bay Area 2025 feedback survey respondents was 30, FYI (30 is the mean; the median is 29).
I don’t think this removes the thrust of your questions, which I think are good and important questions, but people do seem to consistently underestimate the average age of EA Global attendees.
In our survey data from EAG London 2021, where we tried this, we see that virtual participants had a lower likelihood to recommend (8.1 vs. 9.1) and made ~4x fewer connections than in-person attendees (2.4 vs. 10.2).
I think Lizka expressed the main case against well (as did Neel):
lots of in-person attendees or speakers who would want to interact with people who are attending virtually are too busy with the in-person conference, the organizers are split between the two sides (and largely focus on the more involved in-person side), and there’s a bit more confusion about how everything works.
I expect that this effect will be even stronger now that there are regular virtual events (i.e. fewer virtual attendees would attend hybrid events). If the main benefit comes from watching content, talks are usually posted on YouTube shortly after the event (though not livestreamed).
I haven’t visited CEELAR and I don’t know how impactful it has been, but one thing I’ve always admired about you through your work on this project is your grit and agency. When you thought it was a good idea back in 2018, you went ahead and bought the place. When you needed funding, you asked and wrote a lot about what was needed. You clearly care a lot about this project, and that really shows. I hope your successor will too.
I’m reminded of Lizka’s Invisible Impact post. It’s easy to spot flaws in projects that actually materialise, but hard or impossible to criticise projects that never materialised. I get the sense you aren’t error-averse, and you go out and try things. I think more people in the community should try things like CEELAR and be more like you in this regard. All the best :)
Thanks for this. I was about to contact my MP (Anneliese Dodds), but she seems to share my view here and has resigned as Minister for International Development and for Women and Equalities in protest (not confident that’s the best call but I respect it).
A 2017 discussion of this concept by Stefan Schubert :) He also discussed this on an 80k podcast episode.
Hi Eevee,
As you know, the EA Global team are currently running the event in Oakland, but we’ve seen this and will share some thoughts after the event (and some time off).
FYI this was briefly discussed a few years back.
This is really cool! Huge props for making this happen :)
Flagging that I didn’t catch that this was an important announcement, and I think that’s because it was posted by a single user with only initials. It’s hard to articulate exactly what’s going on, but that made me think it was one anonymous user’s reaction to an OP announcement rather than the real deal.
By contrast, the technical AIS RFP has three co-authors with full names, and I recognised them as people who work on that team. I’d guess posts with multiple full-name co-authors are more likely to be understood as important and therefore get more reach :)
Maybe too much for a Draft Amnesty week, but I’d be excited for someone / some people to think about how we’d prioritise R&D efforts if/when R&D is ~automated by very powerful narrow or general AI. “EA for the post-AGI world” or something.
I wonder if the ITN framework can offer an additional perspective to the one outlined by Dario in Machines of Loving Grace. He uses Alzheimer’s as an example of a problem he thinks could be solved soon, but is that one of the most pressing problems that becomes very tractable post-AGI? How does that trade off against e.g. increasing life expectancy by a few years for everyone? (Dario doesn’t claim Alzheimer’s is the most pressing problem, and I would also be very happy if we could win the fight against Alzheimer’s.)
I’m going to post about a great paper I read about the National Woman’s Party and 20th-century feminism that I think has relevance to the EA community :)
I found this finding in the MCF 2024 survey interesting:
The average value to an organization of their most preferred over their second most preferred candidate, in a typical hiring round, was estimated to be $50,737 (junior hire) and $455,278 (senior hire).
This survey was hard and only given to a small number of people, so we shouldn’t read too much into the specific numbers, but I think it’s still a data point against putting significant weight on replaceability concerns if you have a job offer from an org you consider impactful.
Survey respondents here (who all work at EA orgs like Open Phil, 80k, CEA, Giving What We Can) are saying that if they make someone a job offer, they would need to receive, in the typical case for junior staff, tens of thousands of dollars to be indifferent about that person taking the job instead of the next best candidate. As someone who’s been involved in several hiring rounds, this sounds plausible to me.
If you get a job offer from an org you consider impactful, I suggest not putting significant weight on the idea that the next best candidate could take the role and have just as much (or more) impact as you would, unless you have a good reason to think you’re in an atypical situation. There’s often a (very) large gap!
FYI the question posed was:
Imagine a typical hiring round for a [junior/senior] position within your organization. How much financial compensation would you expect to need to receive to make you indifferent about hiring your second most preferred applicant, rather than your most preferred applicant?
(there’s a debate to be had about how “EA org receiving X in financial compensation” compares to “value to the world in $ terms” or “value in EA-aligned donations”, but I stand by the claim above.)
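For what it’s worth, here’s a minimal sketch of the arithmetic behind that claim (Python; the dollar figures are the survey’s point estimates, and the toy assumption that the runner-up takes the role if you decline is my own illustration, not the survey’s):

```python
# Toy illustration only: if you decline the offer, assume the org hires its
# second-choice candidate, so your counterfactual value is roughly the gap
# between you and them (not zero).
GAP_BY_ROLE = {
    "junior": 50_737,   # survey estimate: value of top vs. second-choice junior hire ($)
    "senior": 455_278,  # survey estimate: value of top vs. second-choice senior hire ($)
}

def counterfactual_value(role: str) -> int:
    """Rough value of you taking the role rather than the runner-up."""
    return GAP_BY_ROLE[role]

for role in GAP_BY_ROLE:
    print(f"{role} hire: ~${counterfactual_value(role):,} over the next best candidate")
```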
Full disclosure: I work at CEA and helped build the survey, so I’m somewhat incentivised to say this work was interesting and valuable.
Already so many EAs work at Anthropic that it is shielded from scrutiny within EA
What makes you think this? Zach’s post is a clear counterexample here (though the comments are friendlier to Anthropic), and I’ve heard criticism of the RSPs (though I’m not watching closely).
Maybe you think there should be much more criticism?
Just a heads up that this was posted on April Fool’s day, but it seems like a serious post. You might want to add a quick disclaimer at the top for today :)