So, I think we agree, and I may have been unclear in my comment. I didn’t mean to imply that the problem of AI bias is necessarily large, neglected, or tractable enough that the EA community should be very preoccupied with it.
The reason I commented was that I read OP’s paragraph as saying not just ‘bias isn’t the kind of thing the EA community should focus on’ but something much bolder, i.e. ‘bias isn’t a problem at all’.
And I quite confidently and strongly disagree with the latter claim.
-Joshua from YEA.
Hi! What a comprehensive review, thanks for writing it up!
One quibble is that the OP is very dismissive of the issue of biases, discrimination, and AI.
While I don’t necessarily think that this issue should fall under the category of AI alignment that people in the EA community are normally concerned with, I also believe it is inappropriate to dismiss it completely. So, I just wanted to add a comment saying that some of us in the community are concerned about bias and AI, and I hope the EA community will begin having a healthy discussion about it.
Hi OP! Thanks for writing this up. A few comments on the section about Booker’s policy proposal.
1) I agree that journalists should focus more on poverty alleviation in the poorest parts of the world, such as sub-Saharan African countries. Fortunately, Future Perfect (FP) does cover global poverty reduction efforts much more than most mainstream media outlets. Now, you are right that the piece on Booker’s proposal is part of a tendency for FP to focus more on US politics and US poverty alleviation than most EA organisations do. However, I think this approach is justified for (at least) two reasons:

a) For the foreseeable future, the US will inevitably spend a lot more on domestic social programs than on foreign aid. Completely neglecting the conversation about how the US should approach social expenditure would, I believe, be a huge foregone opportunity to do a lot of good. Yes, a big part of EA is figuring out which general cause areas should receive the most attention. But I believe that EA is also about figuring out the best approaches within different important cause areas, such as poverty in the US. I think it is a very good thing that FP is doing this.

b) Part of the intended audience for FP (rightly) cares a lot about poverty in the US. Covering this issue can be a way of widening the FP audience, thus bringing much-needed attention to other important issues also covered by FP, such as AI safety.
2) I personally agree with the “basic moral imperative to get as many people as possible out of poverty” as you call it. But, without getting deep into normative ethics, I think it is fair to say that several moral theories are concerned with grave injustices such as the current state of racial inequity in the United States. Closing the race-wealth gap will only be a “strange thing to focus on” if you assume, with great confidence, utilitarianism to be true.
3) Even if one assumes utilitarianism to be true, there are solid arguments for focusing on racial inequity in the US. Efforts to support people of colour specifically in the US are not just a “fixation” on an arbitrarily selected race. They focus on a group of people who have been systematically downtrodden for most of US history and who until very recently (if not still) have been discriminated against by the government in ways that have kept them from prospering. (For anyone curious about this claim, I strongly encourage you to read this essay for historical context.) I completely agree with you that “unequal racial distribution can have important secondary effects”, and this is why there is a solid case for paying attention to the race-wealth gap, even on utilitarian grounds. You argue that this “should take a backstage” to general poverty alleviation. I actually agree, and that is also how the EA movement is already acting and prioritising. But ‘taking a backstage’ does not have to (and should not) mean being completely neglected, and I for one really appreciate that FP is applying the methods and concepts of effective altruism to a wider range of issues.
Joshua, former Co-President of Yale EA.
Thanks for the encouraging words, I really appreciate it!
Hey! Obviously, the list you got is a great place to start and I’m sure your project will be awesome.
One thing the list somewhat lacks is focused discussion of one cause area at a time, which we had for existential risks, animal welfare, and global health and development. If you want to make room for deeper dives into each of these topics, it might be a great idea to run a workshop at the beginning of the stipend where you cover a bunch of the essentials (expected value theory, neglectedness, counterfactual thinking), so you don’t have to spend whole sessions on them.
I would perhaps also recommend picking a different topic than the chapter on conscious consumerism. While I think MacAskill has a really great point, I think there are more important topics to cover, and you risk turning off people who already care deeply about conscious consumerism.
Let me know if you have other questions :)
Thanks so much, Risto_Uuk, I really appreciate it. I agree that admissions are quite difficult, and ultimately we relied on intuition to some extent as well, but I do believe that putting the criteria in explicit terms helps structure the process a bit. Another thing that helps is having multiple people go through the list of candidates together. :)