From my understanding, it is almost certainly not a question of money. They have the option of making admissions less competitive but increasing the ticket price and then having a competitive process for financial aid, but they do not think that that is a good option.
CEA’s concern with allowing a wider range of people into the conference seems to be that they’d take up the time of the people CEA most wants to be at an EAG.
I’ve edited the homepage to be more accurate / to match the information on our FAQ page and admissions page to say:
"EA Global is designed for people who have a solid understanding of the main concepts of effective altruism, and who are making decisions and taking significant actions based on them."

From my understanding, this new description seems fairly misleading, given the following EA Forum comments:
Data point: in the three cases I know of, undergraduates with around 30–40 hours of engagement with EA ideas were accepted to EAG London 2022.
I can second the vibe of Zach’s ‘Data point’ comment. I know/met a few (<5 but I suspect more were there based on my sampling) students at EAG SF who had only recently engaged with EA ideas and had not (yet) taken any ‘significant action’ based on them.
I know first hand of someone being accepted after only learning about EA a month prior.
If we started receiving lots of “this conference felt too big” feedback, then yes we would possibly action on that, but that hasn’t really happened yet
This directly contradicts this December 2019 EA Forum post about EAG admissions, which has the following as a reply to “Why not make [EAG] larger?” (emphasis mine):
The largest EA Global was about 1000 people in 2016, and we got feedback that it was too big and that it was easy to get lost in the shuffle. Our recent events have been between 500–650 people including speakers, volunteers, and staff.
Venues above that size tend to be significantly more expensive, or less suited to the event. We already subsidize tickets and provide financial aid to keep prices reasonable, so more attendees cost CEA more. (We know there are a variety of opinions about the tradeoffs between cost and the quality of the venue/logistics/catering, and we’ll continue to look at those tradeoffs carefully.)
We’ll continue exploring the question of how big the event should be, including ways to help people connect better even within a large event.
more than someone else they could admit
I think this part is wrong.
Eli Nathan has said the following:
We simply have a specific bar for admissions and everyone above that bar gets admitted (though previous comms have unfortunately mentioned or implied capacity limits). This is why the events have been getting larger as the community grows.
So it seems that they do not explicitly compare applicants with each other when making admissions decisions. [1]
[1] Which, unrelatedly, is very confusing. My EAG SF 2020 rejection email said
Due to the large number of applicants, many dedicated community members doing valuable work may not be accepted to EAG this year.
The email also linked to this EA Forum post from December 2019, which says
We think an application process is the best way to allocate the limited spaces to the people who can best use them.
and
Why not make it larger?
The largest EA Global was about 1000 people in 2016, and we got feedback that it was too big and that it was easy to get lost in the shuffle. Our recent events have been between 500–650 people including speakers, volunteers, and staff.
Venues above that size tend to be significantly more expensive, or less suited to the event. We already subsidize tickets and provide financial aid to keep prices reasonable, so more attendees cost CEA more
I’m not sure if Eli Nathan’s comment is implying that these statements I quoted were false at the time they were made, or if CEA has changed its mind since EAG SF 2020 about whether to limit the number of attendees.
… okay, so I just read a few more of Eli Nathan’s comments and I am now really confused. For instance, he’s said the following (emphasis mine):
In setting the bar, desired conference size is not really a factor in our decision making, though perhaps it should be (and it possibly will be if the events get much larger) — we mostly just think about what type of applicants would be a good fit for the event. We seem to receive more feedback about the types of attendees that come (or don’t come) rather than feedback about the raw size of the conference, and so we mostly action on the former. If we started receiving lots of “this conference felt too big” feedback, then yes we would possibly action on that, but that hasn’t really happened yet and I don’t expect it to in the near future.
This appears to directly contradict the December 2019 EA Forum post I linked to.
Thanks for finding this. Assuming he wrote this around the time that it was posted, he’d have been 24.
I have a similar-ish story. I became an EA (and a longtermist, though I think that word did not exist back then) as a high school junior, after debating a lot of people online about ethics and binge-reading works by Nick Bostrom, Eliezer Yudkowsky, and Brian Tomasik. At the time, being an EA felt so philosophically right and exhilaratingly consistent with my ethical intuitions. Since then, almost all of my friends have been people who consider themselves EAs.
For three years (2017, 2018 and 2019) my friends recommended I apply to EA Global. I didn’t apply in 2017 because I was underage and my parents didn’t let me go, and didn’t apply in the next two years because I didn’t feel psychologically ready for a lot of social interaction (I’m extremely introverted).
Then I excitedly applied for EAG SF 2020, and got promptly rejected. And that was extremely, extremely discouraging, and played an important role in the major depressive episode I was in for two and a half years after the rejection. (Other EA-related rejections also played a role.)
I started recovering from depression after I decided to distance myself from EA. I think that was the only correct choice for me. I still care a lot about making the future go well, but have resigned myself to the fact that the only thing I can realistically do to achieve that goal is donate to longtermist charities.
Both you and Kelsey (and, I suspect, future analogues) were successful and high-potential in ways that are highly legible to EA-types.
I’m curious, how was Scott Alexander “successful and high-potential in ways that are highly legible to EA-types” in his early 20s? I wouldn’t be surprised at all if he was, but I’m just curious because I have little idea of what he was like back then. As far as I know, he started posting on LessWrong in 2009, at the age of 24 (and started Slate Star Codex four years later). I’m not sure if that is what you are counting as “early 20s,” or if you are referring to his earlier work on LiveJournal, or perhaps on another platform that I’m not aware of. I’ve read very few (perhaps none) of his pre-2009 LJ posts, so I don’t know how notable they were.
This resonates a lot with me. I actually studied physics at a pretty good college and did very well in all my physics classes, but I was depressed for a long time (two years?) [ETA: more like two and a half] because I didn’t feel smart enough to be part of the EA community.
I’m feeling better now, though that’s unfortunately because I stopped trying so hard to fit in. I stopped trying (and failing) to get into EAGs or get hired at EA orgs, and haven’t been reading the EA Forum as much as I used to. I still… have longtermist EA values, but I have no idea of what someone like me can do to help the future go well. Even donating part of my income seems approximately useless, given how longtermism is far from funding-constrained.
But basically from this you get it being worth ~$252 to market effective altruism to a particular person and break even.
I don’t think that’s how it works. Your reasoning here is basically the same as “I value having an Internet connection at $50,000/year, so it’s worth it for me to pay that much for it.”
The flaw is that, taking the market price of a good/service as given, your willingness to pay for it only dictates whether you should get it, not how much you should pay for it. If you value talent at a certain level at $1M/career, that only means that, so long as it’s not impossible to recruit such talent for less than $1M, you should recruit it. But if you can recruit it for $100,000, whether you value it at $100,001 or $1M does not matter: you should pay $100,000, and no more. Forgoing consumer surplus has opportunity costs.
To put it more explicitly: suppose you value 1 EA with talent X at $1M. Suppose it is possible to recruit, in expectation, one such EA for $100,000. If you pay $1M/EA instead, the opportunity cost of doing so is 10 EAs for each person you recruit, so the expected value of the action is −9 EAs per recruit, and you are in no way breaking even.
Of course, the assumption I made in the previous paragraph, that both the value of an EA and the cost of recruiting one are constant, does not reflect reality: if we had a million EAs, the cost of an additional recruit would be higher and its value would be lower, if we hold other EA assets constant, and so the opportunity cost isn’t constant. But my main point, that you should pay no more than the market price for goods and services if you want to break even (taking into account time costs and everything), still stands.
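To make the arithmetic in the toy example above explicit, here is a minimal sketch in Python. The figures are the ones from the example ($1M valuation, $100k expected recruiting cost); the $1M budget is a hypothetical I’ve added purely for illustration.

```python
# Toy model of the opportunity cost described above.
# Assumptions (from the example): each recruited EA is valued at $1M,
# and one EA can be recruited for $100k in expectation.

VALUE_PER_EA = 1_000_000      # willingness to pay per recruit
MARKET_COST_PER_EA = 100_000  # expected cost of actually recruiting one
BUDGET = 1_000_000            # hypothetical recruiting budget, for illustration

recruits_at_market_price = BUDGET / MARKET_COST_PER_EA     # 10 EAs
recruits_at_your_valuation = BUDGET / VALUE_PER_EA         # 1 EA

# Each recruit you pay $1M for forgoes 10 recruits at the market price,
# so the net effect is -9 EAs per overpaid recruit.
net_effect_per_overpaid_recruit = 1 - recruits_at_market_price

print(recruits_at_market_price)          # 10.0
print(recruits_at_your_valuation)        # 1.0
print(net_effect_per_overpaid_recruit)   # -9.0
```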
I think the market price is a bit higher than that.
Someone else in this thread found a report claiming that employers spend an average of ~$6,100 to hire someone at a US university. I also found this report saying that the average cost per hire in the United States is <$5,000, and about $15k for an executive. At 1 career = 10 jobs, that’s $150,000/career for executive-level talent, or $180,000/career adjusting for inflation since the report was released.
I’m not sure how well those numbers reflect reality (the $15k/executive number looks quite low), but it seems at least fairly plausible that the market price is substantially less than $750k/career.
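For concreteness, here is the same back-of-the-envelope conversion as a sketch. The 10 jobs/career figure is the assumption used above, and the ~1.2 inflation multiplier is simply what the quoted $150k → $180k adjustment implies.

```python
# Rough cost-per-career estimate from the quoted cost-per-hire figures.
COST_PER_EXECUTIVE_HIRE = 15_000   # from the cited report
JOBS_PER_CAREER = 10               # assumption used in the comment above
INFLATION_MULTIPLIER = 1.2         # implied by the $150k -> $180k adjustment

cost_per_career = COST_PER_EXECUTIVE_HIRE * JOBS_PER_CAREER        # 150,000
cost_per_career_adjusted = cost_per_career * INFLATION_MULTIPLIER  # 180,000

print(cost_per_career, cost_per_career_adjusted)  # 150000 180000.0
```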
The mean impact from someone at a top school is worth over $750k/year, which means we should fund all interventions that produce a career change for $750k (unless they have large non-financial costs) since those have a <2 year payback period.
This line of reasoning is precisely what I’m claiming to be misguided. Giving you a gallon of water to drink allows you to live at least two additional days (compared to you having no water), which at $750k of impact/year (~$2000/day) means, by your reasoning, that EA should fund all interventions that ensure you have 1 gallon of water for <=$4000, up to the amount you need to survive.
If water happened to be that expensive, that would be a worthwhile trade. But given the current market price of water (with the time cost of acquiring it included) being willing to pay anywhere near $4000/gallon is absurd.
In general, if you value something at $x and its market price is $y, then x only matters for deciding whether you should buy the thing at all, not how much you should pay for it. If x >= y, you should pay $y; otherwise you should pay $0.
Is $750k the market price for 1 expected career change from someone at a top school, excluding compensation costs? Alternatively, is there no cheaper way to cause such a career change? IMO, this is the important question here: if there is a cheaper way, then paying $750k has an opportunity cost of >1 career changes.
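As a sketch of the decision rule I’m arguing for: willingness to pay decides whether to buy, the market price decides how much to pay. The $750k valuation comes from the quoted comment; the $180k "cheaper way" price is a hypothetical, taken from the rough estimate earlier in this thread.

```python
def amount_to_pay(value: float, market_price: float) -> float:
    """Pay the market price if the thing is worth at least that to you;
    otherwise don't buy it at all. Willingness to pay never sets the price."""
    return market_price if value >= market_price else 0.0

# Hypothetical numbers: a career change valued at $750k that can (suppose)
# be caused for $180k in expectation.
print(amount_to_pay(value=750_000, market_price=180_000))  # 180000.0 -> buy at market price
print(amount_to_pay(value=100_000, market_price=180_000))  # 0.0 -> not worth buying
```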
Bloomberg now estimates that FTX and Alameda are both essentially worth $0, and that SBF is no longer a billionaire.
His remaining estimated wealth ($991 million) seems to be based mostly on his stake in FTX.us, which AFAICT has not been affected by today’s events. [ETA: also Robinhood stock.]