I’m working as the Interim Head of Operations at the Centre for Effective Altruism (CEA), where I was previously the lead organizer for EA Global. Before working at CEA I was an Operations Assistant at Open Philanthropy, and prior to that was involved in various community building projects at EA Oxford.
Eli_Nathan
Relevant experience might include: organizing some kind of student group (EA or otherwise), volunteering at a conference, working part-time as someone’s assistant, supporting or running a project where there would have been ops-type work (like running a cake delivery business), or doing any kind of service-related job like working in a coffee shop or restaurant.
As counterexamples, things that are not relevant experience might include: working on a challenging EA research project, academic credentials, building something technical where technical skills are not really part of the job.
For strong writing I’m thinking of things like: a near complete lack of typos, incorrect word choices, or writing-related formatting issues. I’m also thinking of whether the writing flows well, i.e., if I read it aloud (or in my head) does it make sense and sound good. In certain cases tone or register might matter too, for example whether the writing is too formal/informal for the required context. In many cases I expect applicants can actually write quite well but underperform, perhaps because they’re stressed, tired, or don’t realize how high the bar will be.
For strong work tests in general: this will depend on the work test, but I’m thinking of things like whether they wrote a sufficient amount of copy for the allotted time, whether they answered all parts of the question, and whether they provided sufficient reasoning if required. There’s also naturally a quality aspect, for example if a work test is asking them to investigate conference venues I might want to see that the applicant was thinking about the right sorts of trade-offs and whether they’d explained these trade-offs clearly.
In both these cases I expect it’d be easier if I could point to examples of strong vs weak work test responses, which I can’t easily do without making them up myself.
Applications are still open for upcoming EA Global conferences in 2024!
• EA Global: London (31 May–2 June) | Application deadline is in ~6 weeks
• EA Global: Boston (1–3 November)
Apply here and find more details on our website. You can also email the team at hello@eaglobal.org if you have any questions.
The Centre for Effective Altruism (CEA) is making a recruiter hire! We’re ideally looking for someone with professional experience in relevant domains, but we’re also open to hiring for a more junior version of this role—which is why the title of the role is variable.
Recruiting is becoming more important as the team enters a new era: we’re taking on a new CEO and beginning the process of spinning out from Effective Ventures to become an independent organisation. This means that in addition to our pre-existing recruiting needs, our spin-out will create increased demand as we move various operations functions in-house (such as finance, payroll, and legal). We’re entering one of the largest hiring bursts in CEA’s history, and the person in this role will be key to ensuring we’re able to find and attract the top talent we need to be successful in our mission.
Apply here!
Applications for EA Global: Bay Area 2024 (Global Catastrophic Risks) are still open and close on January 21, at 11:59 pm PT (apply here)!
We’re excited to be hosting our first EA Global focussed on global catastrophic risks (GCRs). We’ll be welcoming up to 1000 attendees at the Oakland Marriott City Center and platforming high-quality content related to GCRs, including AI safety, biorisks, nuclear security, and more.
We have limited travel funding available. More information can be found on the event page and EA Global FAQ. If you have any questions, please email us at hello@eaglobal.org!
Thanks for the suggestion David — we’ve thought about this and might consider it for the future, but I worry it would be a fair amount of work for a low-quality product (that I expect wouldn’t get many views). However, for our recent Boston event we did take audio recordings of most talks and are planning to have many of them written up as Forum posts soon.
Hi Joie — unfortunately there won’t be an option to attend this conference virtually, though we expect talks from the conference will be available on our YouTube channel after the event. It’s also very likely that there’ll be some sort of virtual EA conference in 2024 (an EAGxVirtual or an EAG Virtual or both).
Open to it for 2025, though it looks like at least Oxford will still have exams then (exams often stretch until 1–2 weeks after the end of term). But early July might work, and we can look into what dates we can get when we start booking.
It’s possible we should move the US event to May/June, though it’s not obvious to me that that’s the best move as finals still do have some weight there — I know it’s less than the UK but I don’t know that people’s behavior is going to shift that much if an exam season is worth 50% of their grade vs 100%.
The US also has more potential attendees in general than the UK, which I think counts for something here. There are also just other downsides to shifting our conference schedule around too much, and most of our attendees aren’t students anyway.
Hi Oliver — thanks for the feedback! I agree with your general point here but want to flag a couple things on our end:
Currently we are still approving travel grants (for all conferences, from any location), albeit at a more limited capacity than in 2022.
The point here also stands somewhat for our other conferences: if we were to move the London conference to, say, October, we’d then need to move our other conferences around. This would likely push some conference into April/May/June (which is exam season in other countries too), especially as we generally like to host conferences in warmer seasons.
The main issue is that some DC-based stakeholders have expressed concern that an EAG DC would draw unwanted attention to their work, partly because EA has negative connotations in certain policy/politics crowds. We’re trying to evaluate how serious these concerns (still) are before making a decision for 2024.
Just want to clarify — it’s still possible that there is a cause-general EAG in the Americas next year (I expect slightly more than 50% likely, but this number is semi-made up).
Thanks for the questions Rocky! Will try to answer them below:
1. On whether to run an east coast EAG: I’d say cost is definitely the biggest factor here, though there are other smaller factors, such as whether a third EAG gets enough unique attendees and the general question of at what point we hit diminishing returns for number of EAGs per year. Re what city it would be hosted in, my guess is that Boston is the most likely option, followed by either NYC or DC, but I’m not sure. My rough sense is that the trade-offs aren’t quite worth it to do the event in a cheaper city because it likely wouldn’t be sufficiently cheaper, though I’m open to it and haven’t thought about this super deeply.
2. If we had a much larger budget I do think we’d at least push harder for a cause-neutral EAGx in the Bay Area (this is something we’re considering anyway, though we’d need to find a team to run the event, as well as funding for it). Though with a much larger budget the thing I’d probably do first is provide more travel grants for our events, as we currently only provide these on a fairly limited basis. I’m not sure that funding would strongly affect our proportions of cause-specific vs big-tent events at this stage, especially as I see the GCR event as a test (and as such am not that keen to run two of them in one year).
3. I’m open to content on digital sentience and s-risks at the GCR EA Global, as well as some of the other sub-topics you mention — and I do expect they would be within the scope of the event. The main question would be whether there are any specific talks or sessions within those areas we’re sufficiently excited about hosting (and whether we think there are high quality and eager speakers who would do these topics justice).
Thanks for the comment! I expect the main cause areas represented at the Bay Area event to be AI safety, biorisk, and nuclear security. I also expect there’ll be some meta-related content, including things like community building, improving decision making, and careers in policy.
We weren’t sure exactly what to call this event and were torn between this name and EA Global (X-Risk). We decided on EA Global (GCRs) because it was the majority preference of the advisors we polled, and because we felt it would more fully represent the types of ideas we expect to see at the event, as nuclear security and some types of risks from advanced AI or synthetic biology may not quite be considered existential in nature.
Hi Mahendra — we’re hoping to get a post out about this shortly, sorry for the delay here. The next EAG Bay Area will take place from Feb 2–4, 2024. We’re planning for this event to have a more x-risk/global catastrophic risk focus than our standard EAGs, though will explain this in more detail in our upcoming post.
Hi Xavier — really sorry about this. We’re aware of the issue and are working to resolve it ASAP. Basically, we had a big bug in our system that’s been spamming people with lots of fake emails. Apologies for any inconvenience caused.
Not sure — we don’t have any particular measure for impact-adjusted plan changes per $; it’s more just what I think we should, in theory, be aiming for. In practice we mostly just track connections, the extent to which people find EAGs valuable, attendance, and other easy-to-track metrics.
Yeah that’s right — I wouldn’t want to raise the prices so much that more senior folks/experts are put off (especially as they might be providing value, and as such it might feel weird to them to have to pay for that). Right now I expect we’ll have a variety of ticket prices with optional discounts for those who need them, so I’m not too worried about more senior folks getting priced out here.
I think that’s roughly right (I might put it as “impact-adjusted plan changes per $”).
Having this non-relevant experience is unlikely to harm someone’s chances of getting a junior generalist ops role, but it might not help much either, and an application might be seen as weak overall if this is all they’re putting forward.