I’m a PhD student in Logic and Philosophy of Science at UC Irvine. I’ve been involved in Effective Altruism since the start of my undergrad at LSE in 2018. I’m working on evolutionary game theory with the ambition of contributing to AI safety. I’m reachable by email (neilc543@gmail.com) and by Zoom (https://calendly.com/neilsc/30min).
Neil Crawford
Thanks a lot! I’ve approved them and added you as a co-author :)
Thanks! And no problem.
We did this with 6–8 people. Having a small group like this probably helps. Only around half had completed the EA Intro Programme. In terms of progress, I think we learnt a lot, but not enough to become experts. I think we would see diminishing returns from spending more than 90 minutes on a research question.
All 3 of our meetings went well. Maybe the problem you encountered can be avoided by breaking down the question and getting groups to focus first on these sub-questions before bringing everyone together to look at the big picture. Providing autonomy to the groups works well when there’s a more experienced researcher in each group who can help the others.
I think presenting the activity as a debate could be done well, but the question should still first be broken down into sub-questions, followed by quiet group research. There could then be a short debate on each sub-question, e.g. How viable are cultured protein sources? How viable are fungi-based protein sources?
1 Question, 90 Minutes
Yay! I’m glad they were helpful for your group! Suggest away! I think I’ve given everyone with the link commenting permission so you can comment directly on the doc or contact me directly (details on my profile page).
The alignment problem...
This seems great!
Thank you so much for sharing your experience, Catherine! I found this super helpful!
I also prefer listening and speaking to reading and writing (unless there’s a diagram or maths involved). I suppose it’d be best to excel in all 4, but at least text-to-speech makes reading easier :)
Great! I’d love to hear how it goes!
Anyone can call themselves a part of the EA movement.
Don’t you think there are some minimal values that one must hold to be an Effective Altruist? E.g. Four Ideas You Already Agree With (That Mean You’re Probably on Board with Effective Altruism) · Giving What We Can. It seems to me that there are some core principles of Effective Altruism such that, if someone doesn’t hold them, it wouldn’t make sense to consider them an Effective Altruist.
To be clear, I don’t disagree that anyone can call themselves part of the EA movement. I’m more wondering whether I would/should call someone an Effective Altruist if, for example, they don’t think it’s important to help others.
We ran a reading group on The Scout Mindset (question bank included)
Poster Session on AI Safety
Thanks, Ryan. You make a good point! The idea of external interest groups hijacking academic departments doesn’t sound like a good precedent to set. At the least, I would weaken my proposal’s Point 3 by ruling out these EA hires taking part in their department’s future hiring decisions. They shouldn’t have the same privileges as other department faculty members, though they should be able to advise PhD students and set up research groups.
This seems reasonable to me, though we should factor in the risks that come with being seen to influence politics. I think it makes sense for individual EAs to get involved, as opposed to EA orgs getting involved.