Hi there!
I currently co-lead the biosecurity grantmaking program at Effective Giving. Before that, I worked in various research roles focused on pandemic preparedness and biosecurity.
Joshua TM
Hey! Obviously, the list you got is a great place to start and I’m sure your project will be awesome.
One thing the list somewhat lacks is focused discussion of one cause area at a time, which we had for existential risks, animal welfare, and global health and development. If you want to make room for deeper dives into each of these topics, it might be a great idea to run a workshop at the beginning of the stipend where you cover a bunch of the essentials (expected value theory, neglectedness, counterfactual thinking), so you don’t have to spend whole sessions on them.
I would perhaps also recommend picking a different topic than the chapter on conscious consumerism. While I think that MacAskill has a really great point, I think there are more important topics to cover, and you risk turning off people who care deeply about conscious consumerism already.
Let me know if you have other questions :)
Thanks for the encouraging words, I really appreciate it!
Hi OP! Thanks for writing this up. A few comments on the section about Booker’s policy proposal.
1) I agree that journalists should focus more on poverty alleviation in the poorest parts of the world, such as sub-Saharan African countries. Fortunately, Future Perfect (FP) does cover global poverty reduction efforts much more than most mainstream media outlets. You are right that the piece on Booker’s proposal is part of a tendency for FP to focus more on US politics and US poverty alleviation than most EA organisations do. However, I think this approach is justified for (at least) two reasons:
a) For the foreseeable future, the US will inevitably spend far more on domestic social programs than on foreign aid. Completely neglecting the conversation about how the US should approach social expenditure would, I believe, be a huge foregone opportunity to do a lot of good. Yes, a big part of EA is figuring out which general cause areas should receive the most attention. But I believe EA is also about figuring out the best approaches within important cause areas, such as poverty in the US, and I think it is a very good thing that FP does this.
b) Part of FP’s intended audience (rightly) cares a lot about poverty in the US. Covering this issue can widen the FP audience, thus bringing much-needed attention to other important issues FP also covers, such as AI safety.
2) I personally agree with the “basic moral imperative to get as many people as possible out of poverty” as you call it. But, without getting deep into normative ethics, I think it is fair to say that several moral theories are concerned with grave injustices such as the current state of racial inequity in the United States. Closing the race-wealth gap will only be a “strange thing to focus on” if you assume, with great confidence, utilitarianism to be true.
3) Even if one assumes utilitarianism to be true, there are solid arguments for focusing on racial inequity in the US. Efforts to support people of colour specifically in the US are not just an attempt to “fixate” on an arbitrarily selected race. They focus on a group of people who have been systematically downtrodden for most of US history and who until very recently (if not still) have been discriminated against by the government in ways that have kept them from prospering. (For anyone curious about this claim, I strongly encourage you to read this essay for historical context.) I totally agree with you that “unequal racial distribution can have important secondary effects”, and this is why there is a solid case for paying attention to the race-wealth gap, even on utilitarian grounds. You argue that this “should take a backstage” to general poverty alleviation. I actually agree, and that is also how the EA movement is already acting and prioritising. But ‘taking a backstage’ does not have to (and should not) mean being completely neglected, and I for one really appreciate that FP is applying the methods and concepts of effective altruism to a wider range of issues.
Cheers! :)
Joshua, former Co-President of Yale EA.
Hi! What a comprehensive review, thanks for writing it up!
One quibble is that the OP is very dismissive of the issue of biases, discrimination, and AI.
While I don’t necessarily think that this issue should fall under the category of AI alignment that people in the EA community are normally concerned with, I also believe it is inappropriate to dismiss it completely. So, I just wanted to add a comment saying that some of us in the community are concerned about biases and AI, and I hope the EA community will begin having a healthy discussion about it.
Cheers!
Hey Aaron!
So, I think we agree, and I may have been unclear in my comment. I didn’t mean to imply that the problem of AI bias is necessarily large, neglected, or tractable enough that the EA community should be very preoccupied with it.
The reason I commented was that I read the OP’s paragraph as saying not merely ‘bias isn’t the kind of thing that the EA community should focus on’ but something much bolder, i.e. ‘bias isn’t a problem at all’.
And I quite confidently and strongly disagree with the latter claim.
-Joshua from YEA.
Wow, this is incredibly comprehensive—great work, thanks to authors!
Considering how many graphs and tables there are, I am surprised there’s no mention of subjective welcomeness conditional on race, ethnicity, and socioeconomic background.
Do you know if this data exists?
Also, were there any questions getting at why EA is or is not welcoming?
Thanks! :)
Thanks for a great post, Greg! Loved this quote:
“But it should temper our enthusiasm about how many insights we can glean by getting some data and doing something sciency to it.”
-Joshua
Thanks for listing this as one of your five topics of interest and thanks to everyone for insightful comments.
“I do basically think that EA could learn a lot of things from SJ in terms of being an inclusive movement.”
I wholeheartedly agree.
Beyond movement building & inclusivity, I’d be curious to hear about other domains where you think that EA could learn from the social justice movement/philosophy? E.g., in terms of the methodologies and academic disciplines that the respective movements tend to rely on, epistemic norms, ethical frameworks, etc.
Just logging in to say that, as someone who co-ran a large university EA group for three years (incidentally the one that Aaron founded many years prior!), I find it plausible that, in some scenarios, the decision that EA Munich made would be the all-things-considered best one.
I’ll probably write a longer comment later but for now, I’d just like to record that I strongly disagreed with the downvoting of the original post. I’d concede that the meta-level discussion on the role of electoral politics in EA isn’t straightforward, but I think that the object-level case for engaging in the US election to get Donald Trump out of office is sufficiently strong that it – at the very, very least – deserves to be heard and discussed. Particularly because I would argue that the case can be substantiated by appeals to many of the things that EAs purport to care about (reducing catastrophic risks from climate change, anthropogenic biological events, nuclear conflict; improving global health and development) and can be supported by the kinds of evidence and quantitative estimates that EAs tend to rely on.
I think this name reduces the risk of people expecting a level of certainty that we’ll never reach (and that is very commonly marketed in non-EA career advice).
Just commenting to say that, in my view, it’s really promising for your project that this concern is so front-and-center already.
I’m probably preaching to the choir, but I think that epistemic modesty is absolutely key in EA, and working hard to communicate your uncertainty – even when your audience is looking for certainty – is even better.
Best of luck!
Revisiting this just to say that, for what it’s worth, the Danish beer company Carlsberg has been very successful with its slogan of being “Probably the Best Beer in the World.”
Thanks, much appreciated! I should perhaps have indicated which of the pieces on this list have been published with peer review. Generally, most of the articles have, with the exception of the book chapters, GPI working papers, and a few other working papers.
Thanks for your comment. I wholeheartedly agree that this is generally a neglected issue in the community, which is partly why I included the brief note – although, as stated, I believe it deserves separate and longer discussions.
Thanks! You can just use my full name (this is Joshua from the Yale group).
Strongly agree with alexrjl here.
And even if you assume consequentialism to be true and set moral uncertainty aside, I believe this is the sort of thing where the empirical uncertainty is so deep, and the potential for profound harm so great, that we should seriously err on the side of not doing things that intuitively seem terribly wrong, since commonsense morality is a decent (if not perfect) starting point for determining the net consequences of actions. Not sure I’m making this point very clearly, but the general reasoning is discussed in this essay: Ethical Injunctions.
More generally I would say that – with all due respect to OP – this is an example of a risk associated with longtermist reasoning, whereby terrible things can seem alluring when astronomical stakes are involved. I think we, as a community, should be extremely careful about that.
I second most of what Alex says here. Like him, I only know about this particular essay from Torres, so I will limit my comments to that.
Notwithstanding my own objections to its tone and arguments, this essay did provoke important thoughts for me – as well as for other committed longtermists with whom I shared it – and that was why I ultimately ended up including it on the syllabus. The fact that, within 48 hours, someone put in enough effort to write a detailed forum post about the substance of the essay suggests that it can, in fact, provoke the kinds of discussions about important subjects that I was hoping to see.
Indeed, it is exactly because I think the presentation in this essay leaves something to be desired that I would love to see more community discussion of some of these critiques of longtermism, so that their strongest possible versions can be evaluated. I realise I haven’t actually specified which of the essay’s many arguments I find interesting, so I hope I will find time to do that at some point, whether in this thread or a separate post.
Thanks so much, Risto_Uuk, I really appreciate it. I agree that admissions are quite difficult, and ultimately we relied on intuition to some extent as well, but I do believe that putting the criteria in explicit terms helps structure the process a bit. Another thing that helps is having multiple people go through the list of candidates together. :)