Hi — I’m Alex! I run the 80k headhunting service, which provides organisations with lists of promising candidates for their open roles.
You can give me (anonymous) feedback here: admonymous.co/alex_ht
I really appreciate you writing this. Getting clear on one’s own reasoning about AI seems really valuable, but for many people, myself included, it’s too daunting to actually do.
If you think it’s relevant to your overall point, I would suggest moving the first two footnotes (clarifying what you mean by short timelines and high risk) into the main text. ‘Short timelines’ sometimes means <10 years, and ‘high risk’ sometimes means >95%.
I think you’re expressing your attitude to the general cluster of EA/rationalist views around AI risk typified by eg. Holden and Ajeya’s views (and maybe Paul Christiano’s, I don’t know) rather than a subset of those views typified by eg. Eliezer (and maybe other MIRI people and Daniel Kokotajlo, I don’t know). To me, the main text implies you’re thinking about the second kind of view, but the footnotes are about the first.
And different arguments in the post apply more strongly to different views. Eg
Fewer ‘smart people disagree’ about the numbers in your footnote than about the more extreme view.
I’m not sure that Eliezer having occasionally been overconfident but having got the general shape of things right is any evidence at all against >50% AGI in 30 years or a >15% chance of catastrophe this century (though it could be evidence against Eliezer’s very high-risk view).
The Carlsmith post you say you roughly endorse seems to have 65% on AGI in 50 years, with a 10% chance of existential catastrophe overall. So I’m not sure if that means your conclusion is:
‘I agree with this view I’ve been critically examining’
‘I’m still skeptical of 30 year timelines with >15% risk, but I roughly endorse 50 year timelines with 10% risk’
‘I’m skeptical of 10 year timelines with >50% risk, but I roughly endorse 30-50 year timelines with 5-20% risk’
Or something else
This seems like a good place to look for studies:
The research I’ve reviewed broadly supports this impression. For example:
Rieber (2004) lists “training for calibration feedback” as his first recommendation for improving calibration, and summarizes a number of studies indicating both short- and long-term improvements on calibration. In particular, decades ago, Royal Dutch Shell began to provide calibration training for their geologists, who are now (reportedly) quite well-calibrated when forecasting which sites will produce oil.
Since 2001, Hubbard Decision Research has trained over 1,000 people across a variety of industries. Analyzing the data from these participants, Doug Hubbard reports that 80% of people achieve perfect calibration (on trivia questions) after just a few hours of training. He also claims that, according to his data and at least one controlled (but not randomized) trial, this training predicts subsequent real-world forecasting success.
Are these roles visa eligible, or do candidates need a right to work in the US already? (Or can you pay contractors outside of the US?)
[A quick babble based on your premise]
What are the best bets to take to fill the galaxies with meaningful value?
How can I personally contribute to the project of filling the universe with value, given other actors’ expected work and funding on the project?
What are the best expected-value strategies for influencing highly pivotal (eg galaxy-affecting) lock-in events?
What are the tractable ways of affecting the longterm trajectory of civilisation? Of those, which are the most labour-efficient?
How can we use our life’s work to guide the galaxies to better trajectories?
Themes I notice:
Thinking in bets feels helpful epistemically, though the lack of feedback loops is annoying
The object of attention is something like ‘civilisation’, ‘our lightcone’, or ‘our local galaxies’
The key constraint isn’t money, but it’s less obvious what it is (just ‘labour’ or ‘careers’ doesn’t feel quite right)
We think most of them could reduce catastrophic biorisk by more than 1% or so on the current margin (in relative[1] terms).
Imagine all six of these projects were implemented to a high standard. How robust do you think the world would be to catastrophic biorisk? Ie. how sufficient do you think this list of projects is?
The job application for the Campus Specialist programme has been published. Apologies for the delay.
Hi Elliot, thanks for your questions.
Is this indicative of your wider plans? / Is CEA planning on keeping a narrow focus re: universities?
I’m on the Campus Specialist Manager team at CEA, which is a sub-team of the CEA Groups team, so this post does give a good overview of my plans, but it’s not necessarily indicative of CEA’s wider plans.
As well as the Campus Specialist programme, the Groups team runs a Broad University Group programme staffed by Jessica McCurdy with support from Jesse Rothman. This team provides support for all university groups regardless of ranking through general group funding and the EA Groups Resource Centre. The team is also launching UGAP (University Groups Accelerator Program) where they will be offering extra support to ~20 universities this semester. They plan to continue scaling the programme each semester.
Outside of university groups, Rob Gledhill joined the Groups team last year to work specifically on the city and national Community Building Grants programme, which was funding 10 total full-time equivalent staff (FTE) as of September (I think the number now is slightly higher).
Additionally, both university groups and city/national groups can apply to the EA Infrastructure Fund.
Besides the Groups team, CEA also has:
The Events team, which runs EAG(x)
The Online team, which runs this forum, EA.org, and EA virtual programmes
The Operations team, which enables the whole of CEA (and other organisations under the legal entity) to run smoothly
The Community Health team, which aims to reduce risks that could cause the EA community to lose out on a lot of value, and to preserve the community’s ability to grow and produce value in the future
Basically, I see two options: 1) A tiered approach whereby “Focus” universities get the majority of attention; 2) “Focus” universities get all of CEA’s attention to the exclusion of all other universities.
Across the Groups team, Focus universities currently get around half of the team’s attention, and less than half of funding from grants. We’re planning to scale up most areas of the Groups team, so it’s hard to say exactly how the balance will change. Our guiding star is figuring out how to create the most “highly-engaged EAs” per FTE of staff capacity. However, we don’t anticipate Focus universities getting all of the Groups team’s attention at the exclusion of all other universities, and it’s not the status quo trajectory.
Do you plan on head hunting for these roles?
Off the top of my head there are a few incredibly successful university groups that have flourished of their own volition (e.g. NTNU, PISE). There are likely people in these groups who would be exceptionally good at community growth if given the resources you’ve described above, but I suspect that they may not think to apply for these roles.
Some quick notes here:
We are planning to do active outreach for these roles.
I agree that someone who has independently done excellent university group organising could be a great fit for this role.
CEA supports EA NTNU via a Community Building Grant (CBG) to EA Norway.
Also, quite a few group organisers have reached out to me since posting this, which makes me think people in this category might be quite likely to apply anyway.
But I think it’s still worth encouraging people to apply, and clarifying that you don’t need to have attended a Focus university to be a Campus Specialist.
Do you plan on comparing the success of the project, against similar organisations?
There are many organisations that aim to facilitate and build communities on university campuses. There are even EA-adjacent organisations, e.g. GFI. It makes sense to me to measure the success of your project against these (especially GFI), as they essentially provide a free counterfactual regarding a change of tactics.
I ask this because I strongly suspect GFI will show stronger community building growth metrics than CEA. They provide comprehensive and beautifully designed resources for students. They are public and personable (i.e. they have dedicated speakers who will speak to any audience size, at least as far as I can tell). And they seem to have a broader global perspective (so perhaps I am a bit biased). But in general they seem to have “the full package” which CEA is currently missing.
I agree having clear benchmarks to compare our work to is important. I’m not familiar with GFI’s community building activities. It seems fairly likely to me that the Campus Specialist team at CEA has moderately different goals to GFI, such that our community growth metrics might be hard to compare directly.
To track the impact of our programmes, the Campus Specialist team looks at how many people at our Focus universities are becoming “highly-engaged EAs”—individuals that have a good understanding of EA principles, show high quality reasoning, and are taking significant actions, like career plans, based on these principles. As mentioned in the post, our current benchmark is that Campus Specialists can help at least eight people per year to become highly engaged.
One interesting component to point out is that while I think our end goal is clear—creating highly-engaged EAs—we believe we’re still pretty strongly in the ‘exploration mode’ of finding the most effective tactics to achieve this. As a result, we want to spend less of our time in the Campus Specialist Programme standardising resources, and more time encouraging innovation and comparing these innovations against the core model.
By contrast, our University Group Accelerator Programme is a bit more like GFI’s programme as it has more structured tactics and resources for group leaders to implement. Jessica, who is running the programme, has been in touch with GFI to exchange lessons learned and additional resources.
Can you expand on how much money you plan on spending on each campus?
I noticed you say “managing a multi-million dollar budget within three years of starting”. Can you explain what exactly this money is going to be spent on? Currently this appears to me (perhaps naively) to be an order of magnitude larger than the budget for the largest national organisations. How confident are you that you will follow through on this? And how confident are you that spending millions of dollars on one campus is more efficient than community building across 10 countries?
How confident are you that you will follow through on this?
This depends on what Campus Specialists do. It’s an entrepreneurial role and we’re looking for people to initiate ambitious projects. CEA would enthusiastically support a Campus Specialist in this scaling if it seemed like a good use of resources.
I’m pretty confident that if a Campus Specialist had a good use of $3mil/year in 2025 CEA would fund it.
Will a Campus Specialist have a good use of $3mil/year in 2025? Probably. One group is looking to spend about $1m/year already (with programmes that benefit both their campus and the global community, via online options).
Can you explain what exactly this money is going to be spent on?
I can’t tell you exactly what this money will be spent on, as this depends on what projects Campus Specialists identify as high priority. Some possible examples:
Prestigious fellowships or scholarships
Lots of large, high-quality retreats e.g. using an external events company to save organiser time
Renting a space for students to co-work
Running a mini-conference every week (one group has done this already—they have coworking, seminar programmes, a talk, and a social every week of term, and it seems to have been very good for engagement, with attendance regularly around 70 people). I could imagine this being even bigger if there were even more concurrent ‘tracks’.
Seed funding for students to start projects
Salaries for a team of ten
Travel expenses for speakers
Bootcamps for in-demand skills
Running an EAGx at the university
Research fellowships over the summer for students (like SERI or CERI, though they need not be in the -ERI format)
The ultimate goal across all of these programmes is to find effective ways to create “highly-engaged EAs.”
And how confident are you that spending millions of dollars on one campus is more efficient than community building across 10 countries?
I’m not sure this is the right hypothetical to be comparing—CEA is supporting community building across 10 countries*. We are also looking to support 200+ universities. I think both of those things are great.
I think the relevant comparison is something like ‘how confident are you that spending millions of dollars on one campus is more efficient than the EA community’s last (interest-weighted) dollar?’
My answer depends exactly on what the millions of dollars would be spent on, but I feel pretty confident that some Campus Specialists will find ways of spending millions of dollars on one campus per year which are more efficient (in expectation) than the EA community’s last (interest-weighted) dollar.
*I listed out the first ten countries that came to mind where I know CEA supports groups: USA, Canada, Germany, Switzerland, UK, Malaysia, Hong Kong (via partnership), Netherlands, Israel, Czech Republic. (This is not an exhaustive list.)
Thanks for this comment and the discussion it’s generated! I’m afraid I don’t have time to give as detailed a response as I would like, but here are some key considerations:
In terms of selecting focus universities, we mentioned our methodology here (which includes more than just university rankings, such as looking at alumni outcomes like number of politicians, high net worth individuals, and prize winners).
We are supporting other university groups—see my response to Elliot below for more detail on CEA’s work outside Focus universities.
You can view our two programmes as a ‘high touch’ programme and a ‘medium touch’ programme. We’re currently analysing which programme creates the most highly-engaged EAs per full-time equivalent staff member (FTE) (our org-wide metric).
In the medium term, this is the main model that will likely inform strategic decisions, such as whether to expand the focus university list.
However, we don’t think this is particularly decision-relevant for us in the short term. This is because:
At the moment, most of our Focus universities don’t have Campus Specialists.
You don’t need to have gone to a Focus university to be a Campus Specialist.
So we think qualified Campus Specialists won’t be limited by the number of opportunities available.
Thanks Vaidehi!
One set of caveats is that you might not be a good fit for this type of work (see what might make you a good fit above). For instance:
This is a role with a lot of autonomy, so if you prefer more externally set structure, this role probably isn’t a good fit for you
If you find talking to people about EA ideas difficult or uncomfortable, this may be a bad fit
You might be a good fit for doing field building, but prefer doing so with another age range (e.g. mid career, high school)
Some other things people considering this path might want to take into consideration:
If you would like to enter a non-EA career that looks for traditional markers of prestige and is extremely competitive, and you have a current opportunity that won’t come around later, then being a Campus Specialist might be less good than directly entering that career or doing more signalling (although we think that the career capital from this route is better than most people think). This might be true for some specific post-undergrad awards in policy or unusual entrepreneurial opportunities—like having a co-founder with seed funding.
If you think it’s likely we’re in a particularly pivotal moment in the next 5-10 years (for example, if you have extremely short AI timelines, with a median of <5-10 years), then you might think that the benefits of doing outreach to talented individuals might not come to fruition. (But we think that this option can be good even for people with relatively short timelines, i.e. 15-20 years.)
You might not feel compelled by the data in multiplier arguments, or you might think you’ll crowd out someone who would be better at generating multipliers compared to you.
What factors do you think would have to be in place for some other people to set up a similar but different organisation in 5 years’ time?
I imagine this is mainly about the skills and experience of the team, but I’m also interested in other things if you think they’re relevant.
This looks brilliant, and I want to strong-strong upvote!
What do you foresee as your biggest bottlenecks or obstacles in the next 5 years? Eg. finding people with a certain skillset, or just not being able to hire quickly while preserving good culture.
What if LessWrong is taken down for another reason? Eg. the organisers of this game/exercise want to imitate the situation Petrov was in, so they create some kind of false alarm.
An obvious question which I’m keen to hear people’s thoughts on—does MAD work here? Specifically, does it make sense for the EA forum users with launch codes to commit to a retaliatory attack? The obvious case for it is deterrence. The obvious counterarguments are that the Forum could go down for a reason other than a strike from LessWrong, and that once the Forum is down, it doesn’t help us to take down LW (though this type of situation might be regular enough that future credibility makes it worth it).
Though of course it would be really bad for us to have to take down LW, and we really don’t want to. And I imagine most of us trust the 100 LW users with codes not to use them :)
This is great! I’m tentatively interested in groups trying outreach slightly before the start of term. It seems like there’s a discontinuous increase in people’s opportunity cost when they arrive at university—suddenly there are loads more cool clubs and people vying for their attention. Currently, EA groups are mixed in with this crowd of stuff.
One way this could look is running a 1-2 week residential course for offer holders the summer before they start at university (a bit like SPARC or Uncommon Sense).
To see if this is something a few groups should be doing, it might be good for one group to try this and then see how many core members of the group come out of the project, compared to other things like running intro fellowships. You could roughly track how much time each project took to get a rough sense of the time-effectiveness.
This might have some of the benefits you list for outreach at the start of term, but the additional benefit of having less competition. This kind of thing also has some of the benefits of high school outreach talked about here, but avoids some of the downsides—attendees won’t be minors, and we already know their university destination. There might be a couple of extra obstacles, like advertising the course to all the offer-holders, and some kind of framing issue to make sure it didn’t feel weird, but I think these are surmountable.
I’m not sure whether ‘EA’ would necessarily be the best framing here—there are four camps that I know of (SPARC, ESPR, Uncommon Sense, and Building a Better Future) and none of them use a direct EA framing, but all seem to be intended to create really impactful people long-term. (But maybe that means it’s time to try an EA camp!)
Pretty unsure about all of this though—and I’m really keen to hear things I might be missing!
I think I’d find this really useful
I tentatively believe (ii), depending on some definitions. I’m somewhat surprised to see Ben and Darius implying it’s a really weird view, and it makes me wonder what I’m missing.
I don’t want the EA community to stop working on all non-longtermist things. But the reason is that I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community; I don’t mean indirect effects more broadly in the sense of ‘better health in poor countries’ --> ‘more economic growth’ --> ‘more innovation’.)
For example, non-longtermist interventions are often a good way to demonstrate EA ideas and successes (eg. pointing to GiveWell is really helpful as an intro to EA); non-longtermist causes are a way for people to get involved with EA and end up working on longtermist causes (eg. [name removed] incoming at GPI comes to mind as a great success story along those lines); work on non-longtermist causes has better feedback loops so it might improve the community’s skills (eg. Charity Entrepreneurship incubatees are probably highly skilled 2-5 years after the programme, though I’m not sure that actually translates to more skill-hours going towards longtermist causes).
But none of these reasons are that I think the actual intended impact of non-longtermist interventions is competitive with longtermist interventions. Eg. I think Charity Entrepreneurship is good because it’s creating a community and culture of founding impact-oriented nonprofits, not because [it’s better for shrimp/there’s less lead in paint/fewer children smoke tobacco products]. Basically, I think the only reason the near-term interventions might be good is that they might make the long-term future go better.
I’m not sure what counts as ‘astronomically’ more cost-effective, but if it means ~1000x more important/cost-effective I might agree with (ii). It’s hard to come up with a good thought experiment here to test this intuition.
One hypothetical is ‘would you rather $10,000 gets donated to the Longterm Future Fund, or $10 mil gets donated to GiveWell’s Maximum Impact Fund’. This is confusing though, because I’m not sure how important extra funding is in these areas. Another hypothetical is ‘would you rather 10 fairly smart people devote their careers to longtermist causes (eg. following 80k advice), or 10,000 fairly smart people devote their careers to neartermist causes (eg. following AAC advice)’. This is confusing because I expect 10,000 people working on effective animal advocacy to have some effect on the long-term future. Some of them might end up working on nearby long-termist things like digital sentience. They might slightly shift the culture of veganism to be more evidence-based and welfarist, which could lead to faster flow of people from veganism to EA over time. They would also do projects which EA could point to as successes, which could be helpful for getting more people into EA and eventually into longtermist causes.
If I try to imagine a version of this hypothetical without those externalities, I think I prefer the longtermist option, indicating that the 1000x difference seems plausible to me.
I wonder if the reason some people don’t hold the view I do is some combination of (1) ‘this feels weird so maybe it’s wrong’ and (2) ‘I don’t want to be unkind to people working on neartermist causes’.
I think (1) does carry some weight and we should be cautious when acting on new, weird ideas that imply strange actions. However, I’m not sure how much longtermism actually falls into this category.
The idea is not that new, and there’s been quite a lot of energy devoted to criticising the ideas. I don’t know what others in this thread think, but I haven’t found much of this criticism very convincing.
Weak longtermism (future people matter morally) is intuitive for lots of people (though not all, which is fine). I concede strong longtermism is initially very unintuitive, though.
Strong longtermism doesn’t imply we should do particularly weird things. It implies we should do things like: get prepared for pandemics, make it harder for people to create novel pathogens, reduce the risk of nuclear war, and take seriously the facts that we can’t get current AI systems to do what we want, that AI systems are quickly becoming really impressive, and that some/most kinds of trend-extrapolation or forecasts imply AGI in the next 10-120 years. Sure, strong longtermism implies we shouldn’t prioritise helping people in extreme poverty. But helping people in extreme poverty is not the default action; most people don’t spend any resources on that at all. (This is similar to Eliezer’s point above.)
I also feel the weight of (2). It makes me squirm to reconcile my tentative belief in strong longtermism with my admiration of many people who do really impressive work on non-longtermist causes and my desire to get along with those people. I really think longtermists shouldn’t make people who work on other causes feel bad. However, I think it’s possible to commit to strong longtermism without making other people feel attacked, or too unappreciated. And I don’t think these kinds of social considerations have any bearing on which cause to prioritise working on.
I feel like a big part of the edge of the EA and rationality community is that we follow arguments to their conclusions even when it’s weird, or it feels difficult, or we’re not completely sure. We make tradeoffs even when it feels really hard—like working on reducing existential risk instead of helping people in extreme poverty or animals in factory farms today.
I feel like I also need to clarify some things:
I don’t try to get everyone I talk to to work on longtermist things. I don’t think that would be good for the people I talk to, the EA community, or the longterm future.
I really value hearing arguments against longtermism. These are helpful for finding out if longtermism is wrong, figuring out the best ways to explain longtermism, and spotting potential failure modes of acting on longtermism. I sometimes think about paying someone to write a really good, clear case for why acting on strong longtermism is most likely to be a bad idea.
My all-things-considered view is a bit more moderate than this comment suggests, and I’m eager to hear Darius’, Ben’s, and others’ views on this.
Nice, thanks for these thoughts.
But there’s no way to save up labor to be used later, except in the sense that you can convert labor into capital and then back into labor (although these conversions might not be efficient, e.g., if you can’t find enough talented people to do the work you want). So the tradeoff with labor is that you have to choose what to prioritize. This question is more about traditional cause prioritization than about giving now vs. later.
Ah sorry, I think I was unclear. I meant ‘capacity-building’ in the narrow sense of ‘getting more people to work on AI’, eg. by building the EA community, rather than building civilisation’s capacity, eg. by improving institutional decision-making. Did you think I meant the second one? I think the first one is more analogous to capital, as building the EA community looks a bit more like investing (you use some of the resource to make more later).
How do you think people should do this?