I do operations / recruiting / AIS field building work at Redwood Research.
Anjay F
Thanks for writing this! I'm glad it was written: it highlights a premise in EA (“some ways of doing good are much better than others”) that a lot of people (myself included) accept without very careful consideration.
Having said that, I am not sure that I believe this more generally only because of the reasoning that you give: “well if it’s true even there [in global health], where we can measure carefully, it’s probably more true in the general case”. I think that reasoning is part of my belief, but the other part is that directly comparing the naive expected value of interventions in different cause areas also makes this seem true.
For example, under some views of comparing animal welfare to humans, it seems far more impactful to donate to cage-free hen corporate outreach campaigns, which, per dollar, affect between 9 and 120 years of chicken life, compared to AMF. Further, my impression is that considering the expected value of longtermist interventions would also suggest quite a large difference.
This is partially why I try to encourage members of my group to develop their own cause prioritization.
Thank you for sharing your experience, Andy! I am truly sorry for your loss. I thought this was a really well-written post, and I really appreciate your reference to signs and connecting the dots. Framing a career change in these terms is not often done in the EA community, but it feels more real and accurate, and therefore relatable.
Thanks for writing this and sharing your reflections! One additional demographic that EA VP might be able to do more to reach is older, mid-career and late-career professionals.
Hi Tom! Thanks for writing this post. Just curious… would you consider donating to cost-effective climate charities (e.g. ones recommended by Effective Environmentalism)? It seems like that could look better from an optics point of view and fit more with longtermism, depending on your views.
As someone who often feels overwhelmed by all there is to learn in Effective Altruism (and outside of EA), I appreciate this post!
Thanks Ines for this thoughtful answer! It makes me want to emphasize the aptitude-building approach more at my group.
This makes a lot of sense and thanks for sharing that post! It’s certainly true that my role is to help individuals and as such it’s important to recognize their individuality and other priorities.
I suppose I also believe that one can contribute to these fields in the long run by building aptitudes, as Ines’ response discusses, but maybe these problems are urgent and require direct work soon, in which case I can see what you are saying about the high levels of specialization.
Thanks Ben for sharing this!
Hi Misha. Thanks for your answer. I was wondering why you believe top EA cause areas are not capable of utilizing people with a wide range of backgrounds and preferences. It seems to me that many of the top causes require various backgrounds. For example, reducing existential risk seems to require people in academia doing research, in policy enacting insights, in the media raising concerns, in tech building solutions, etc.
This is a good option. I hadn’t really considered this. And I agree that we definitely shouldn’t try to deceive anyone.
I believe this primarily because of arguments in So Good They Can’t Ignore You by Cal Newport, which suggest that applying skills we excel at is what leads to enjoyable work, rather than passion for a specific job or cause. But I also think that community and purpose are super important for happiness, and most top EA causes seem to provide both.
Thanks for writing this. I really like the idea. One thought is that this is a great activity for local EA groups to do and maybe an organizer with a particularly nice voice can lead it. At the group I help organize at Vanderbilt, there seems to be a lot of desire for activities that focus more on the altruism and feeling behind EA.
Thanks for writing this, Ashley! I really think this is important.
An idea I had is to have a series of weekend workshops that combine the content from the readings with exercises and opportunities for discussion. Maybe this could be split into three parts (e.g., I. The EA Mindset, II. Longtermism, III. EA in the World / Putting It into Practice).
If a workshop were hosted each weekend, this might give students the ability to attend when they are available and at their own pace. It could also allow for deeper engagement by devoting a full day to thinking about these topics. There could also be additional opportunities, like an optional discussion group on each topic in the week following a workshop and a social event in the evening of the workshop day.
That’s helpful. Thanks!
Based on this Choose-a-Provider page, there seem to be a few cheaper day 2 tests (less than £10). This one costs £1.99 but is in Park Royal, about an hour away by public transport, and this one is in Battersea, London, about 45 minutes away by public transport. It seems like they get booked up fast, though, and have less support than the Randox one.
A (possibly wrong) sense I have about being an elected politician is that because you are beholden to your constituents, it may be difficult to act independently and support the policies that have the best consequences for society (as these may conflict with either your constituents’ perceptions or their immediate interests). Did you find that this was true, or were there examples of this?
Another related question regards representing future generations. I feel like a democratic process encourages short-term policies for various reasons, like constituents’ impatience, interest groups, reversibility of policies, etc. Did you find that this was true? Were longer-timeline policies, those whose effects come further in the future, generally neglected?
Re 1. That makes a lot of sense now. My intuition still leans towards trajectory change interacting with XRR, for the reason that maybe the best way to reduce x-risks that appear after 500+ years is to focus on changing the trajectory of humanity (e.g., stronger institutions, cultural shift, etc.). But I do think that your model is valuable for illustrating the intuition you mentioned: that it seems easier to create a positive future via XRR than via trajectory change that aims to increase quality.
Re 2, 3. I think that is reasonable, and maybe when I mentioned the meta-work before, it was due to my confusion between GPR and trajectory change.
Hey Alex. Really interesting post! To have a go at your last question, my intuition is that the spillover effects of GPR on increasing the probability that the future happens at all cannot be neglected. I suppose my view differs in that where you define “patient longtermist work” as GPR and distinct from XRR, I don’t see that it has to be. For example, I may believe that XRR is the more impactful cause in the long run, but also believe that I should wait a couple hundred years before putting my resources towards it. Or that we should first figure out whether we are living at the hinge of history (which I’d classify as GPR). Does that make sense?
I suppose one other observation is that working on s-risks typically falls within the scope of XRR and clearly also improves the quality of the future, but maybe this ignores your assumption of safely reaching technological maturity.
Hello! I’m here because of my interest in moral philosophy and global priorities research. If anyone is aware of one, I’d be curious to read a history of bioethics and its impact on research.