I’m working as the Interim Head of Operations at the Centre for Effective Altruism (CEA), where I was previously the lead organizer for EA Global. Before working at CEA I was an Operations Assistant at Open Philanthropy, and prior to that was involved in various community building projects at EA Oxford.
Eli_Nathan
By agentive I sort of meant “how effectively an agent is able to execute actions in accordance with their goals and values”—which seems to be independent of their values/how aligned they are with doing the most good.
I think this is a different scenario to the agent causing harm due to negative corrigibility (though I agree with your point about how this could be taken into account with your model).
It seems possible however that you could incorporate their values/alignment into corrigibility depending on one’s meta-ethical stance.
Ah okay—I think I understand you, but this is entering areas where I become more confused and have little knowledge.
I’m also a bit lost as to what I meant by my latter point, so will think about it some more if possible.
Thanks Rossa,
I’m wondering how you see 1FTW’s position changing due to the presence of OpenPhil and a shift towards a more money-rich, talent-poor community (across certain cause areas)?
In my eyes, the comparative advantage for student groups is more about driving engagement and plan changes and less about raising funds. Of course, money still goes a long way, but I’m skeptical that group leaders should be spending their time focusing on (relatively) small donations over building communities of talented, engaged individuals.
Is your view that 1FTW will be a better outreach vehicle (than standard community building techniques) for certain demographics? It seems that 1FTW attracts similar types of people to those the GWWC pledge would, but in higher numbers due to the lower barrier to entry. However, I’m skeptical that this lower barrier is necessarily a positive thing, because it would seem that, on average, these individuals are less likely to further engage with the EA community at large.
Is this something you’re concerned about, or do you think these concerns are relatively minor?
Thanks Marek,
I remember some suggestions a while back to store the EA funds cash (not crypto) in an investment vehicle rather than in a low-interest bank account. One benefit to this would be donors feeling comfortable donating whenever they wish, rather than waiting for the last possible minute when funds are to be allocated (especially if the fund manager does not have a particular schedule). Just wondering whether there’s been any thinking on this front?
Optimizing Activities Fairs
Thanks Max! I too am not certain that this is the correct approach, and think there is a good case for longer form conversations due to the reasons you give. The rough case I’d make for the “maximizing” approach is:
1. It’s easy to scale: You can easily gather 5-10 members of your group, give them 10-15 minutes of guidance and put them on the stall. I slightly worry about group members who are newer to EA having long form on-boarding conversations with new and interested people (in EA Oxford, we’ve previously taken some time to verify that people are knowledgeable enough to have formal 1-1 conversations with newcomers).
2. Activities fairs are often noisy and as such don’t represent the best environment to engage in long form conversations.
3. Even if you do have long form conversations at the stall, they likely won’t last longer than 5-10 minutes, which I think is generally not enough time for someone to properly understand what EA is. Often, when engaging in longer conversations at activities fairs, I’ve observed people come across as somewhat skeptical of EA, but in such a way that upon further reflection I could imagine them being reasonably excited about it. As such, it may be better to optimize for driving attendance at longer form events, such as a 1-1 coffee chat or a 1-hour introductory talk.
I agree that this approach could come across as unfriendly, and that it’s important to make sure stall-runners are aware of this. Overall, I see this as a downside, but one that is probably worth it in the long run.
When (if ever) will marijuana be legal for recreational use, or effectively so, across all 50 US states?
Yeah — this seems pretty reasonable to me. I’d not thought about this explicitly before, but the rough numbers/boundaries you provide seem quite plausible!
When Planning Your Career, Start Early
Thanks Timothy!
I think this is broadly fair, and perhaps a reframing of “think more actively about your interests” would be better than just “think more actively about your career” for many readers.
That said, I think for a lot of people, what they’re immediately excited about doesn’t line up well with what might be good for their career, especially if they’re trying to do good. I worry that “keep noticing what excites you and find ways to do more of that” would lead some people down career paths with little impact, whilst also making it hard to transition to high impact roles in the future. I also suspect that many people’s passions are more flexible than they might expect, and that without careful planning, they may narrow down their options unnecessarily.
Some Scattered Thoughts on Operations
Thanks for the post — I only sort of skimmed the post and comments, and crucially I don’t think this is what your post is really about, but it seems like you have the view that we’re kinda clueless about whether factory farmed animals have good or bad lives. In reference to this, you mention in a comment: “It’s hard to be confident of any view on this, when we understand so little about consciousness, animal cognition, or morality.”
As an aside, the term “factory farmed animals” describes a somewhat weird category that includes both cows and chickens (among other animals). You could plausibly make the case that cows have net positive lives, but it seems pretty difficult to say the same for chickens.
Sure, we don’t understand everything about consciousness and morality, but given the evidence we do have with regards to animal suffering and a few other basic axioms and intuitions, it seems hard to put this at 50:50 or similar. There are a bunch of arguments in favor of factory farmed chickens having bad lives, and I’m not aware of many arguments saying that they have positive lives. I think the Holocaust case is interesting but a bit confusing because those people had (probably) happy/positive lives before the Holocaust, and could have had happy/positive lives if they had been released. If someone were to intentionally breed humans into existence in order to place them into concentration camps (and later kill them), I think most plausible ethical theories would consider this to be uncontroversially bad.
Eli from the EAG London team here: there will be plenty (hundreds) of COVID tests available at the event for any attendee who wants them. Please ask an on-site volunteer or organizer if you’d like a rapid/lateral flow test!
Eli from the EA Global team here: For anyone who has travelled to London for the conference, we will reimburse you for any extra travel or accommodation costs that arise should you be stuck in town due to contracting COVID-19 (e.g. if you have to stay in your hotel for an extra week and book new flights after catching COVID at or slightly before the event).
You can see more information in our COVID protocol here, though please feel free to reach out to hello@eaglobal.org should you have any questions or concerns — thanks!
Hi Alastair — sorry to hear you had such a rough experience! I work on the EA Global team and posts like these are super helpful to us as well as the wider EA community (helping folks manage expectations, helping people who have burnt out feel like they aren’t alone, etc.). EA Globals have a lot going on (including afterparties), and many attendees definitely feel like they’re under a lot of pressure, which can be overwhelming. Glad to hear you’re doing better now — and definitely keen to hear any feedback you or anyone else might have for the organizing team (feedback forms were sent out to all attendees)!
Hi Luke — sorry to hear about all of this! I work on the EA Global team and I can confirm that we definitely don’t want you sleeping on the bus! Please apply for more travel/accommodation funding next time if it’d be useful — it definitely won’t affect your chances, and we won’t think you’re taking advantage of us!
For folks who need it, funding is also available up-front (rather than having to wait to be reimbursed), with an option to return extra money should you have any leftover.
I really liked this post and the model you’ve introduced!
With regard to your pseudomaths, a minor suggestion: you could define your product notation as equal to how agentive our actor is. This would allow us to take into account impact that is negative (i.e., harmful actions) by multiplying the product notation by another factor that captures the sign of the action. The change in impact could then be proportional to the product of these two terms.
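As a rough sketch of what I mean (using hypothetical symbols, since I’m not sure of your exact notation):

```latex
\Delta I \;\propto\; s \cdot \prod_{i=1}^{n} a_i, \qquad s \in \{-1, +1\}
```

where the product over the \(a_i\) measures how agentive the actor is (how effectively they execute actions in accordance with their goals), and \(s\) is the sign of the action’s impact, so that highly agentive actors pursuing harmful ends produce large negative changes in impact.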