Agree. Besides being further away, this would most probably reduce the number of EAs from LMICs who attend EA conferences. I'm from Turkey, and the limited number of people from Turkey who have gone to an EAGx only did so because there was travel funding (including my first two conferences). I'm quite confident none of them would have been able to go without funding (I know them personally). I was expecting six people from our college group to come to the next EAGx in Europe, but if there is no travel funding, most probably no one besides me will go (and I would only be able to go because I'm on an EA fellowship!)
Still, I'm not saying all EAs from LMICs should be reimbursed, or that it always makes sense to fund people who wouldn't otherwise be able to come to conferences. But i) on the margin, providing travel grants to people from countries with low EA presence may have higher bang for the buck, and ii) a very selective travel grants policy would have this consequence: effectively preventing a considerable number of EAs based in LMICs from participating in EAGs.
Berke
EA career advice tailored for people based in LMICs was urgently needed, very glad to see this!
People in countries with low EA presence can be very well-positioned to have a lot of impact even in the very short run, as the number of low-hanging fruits (really neglected, high-impact opportunities where even a single person can plausibly make a substantial difference) in most LMICs is considerably higher than in Western European and American countries. This post will probably empower a lot of people to have more impact. Thank you for writing this great post!
De-emphasizing cause neutrality would, my guess is, probably reduce the long-term impact of the movement substantially. Trying to answer the question "How to do the most good" without attempting to be neutral between causes we are passionate about and causes we don't (intuitively) care much about would bias us towards causes and paths that are interesting to us rather than particularly impactful ones. Personal fit and being passionate about what you do are absolutely important, but when we're comparing causes and careers in terms of impact (or ITN), our answer shouldn't depend on our personal interests and passions. It's when we're taking action based on those answers that we should think about personal fit and passion, as these prevent us from being miserable while pursuing impact. Cause neutrality should also nudge people against associating EA with a singular cause like AI safety or global development, or even with 80k careers. I think strong cause neutrality is a solution to the problem you describe, rather than the root of the problem.
De-emphasizing cause neutrality would increase the likelihood of EA becoming mainstream and popular, but it would also undermine our focus on impartiality and good epistemics, which were, and are, vital factors in why EA was able to identify so many high-impact problems and tackle them effectively, imho.
An absolutely terrific post, thank you very much for writing this!
I think I disagree with several arguments here, and one of the main arguments could itself be thought of as an argument for longtermism. I have to add, this post is really well-written and the arguments/intuitions are really clearly expressed! Also, the epistemic status of my last paragraph is quite speculative.
First of all, most longtermist causes and projects aim to increase the chance of survival for existing humans (misaligned AI, engineered pandemics, and nuclear war are catastrophes that could take place within this century, if you don't have a reason to completely disregard forecasters and experts) or to reduce the chance of global catastrophic events for the generation already alive. Again, biorisk and pandemics can be thought of as longtermist causes, but if more people had been working on these issues pre-2020, their work would have been impactful not only for future generations but also for the already existing people who suffered throughout COVID-19.
If I'm not misunderstanding, one of the main ideas/intuitions that forms the basis for this review is: "It is uncertain whether future people will exist, therefore we should give more weight to the idea that humanity may cease to exist, and donating to or working for longtermist causes may be less impactful compared to neartermist causes." But if we ought to give more weight to the idea that future people may not exist, isn't that an argument for working on x-risk reduction? Even if you have a person-affecting view of population ethics, since the world could be destroyed tomorrow, next week, or within this year/decade/century, the s-risks that could result from a misaligned AI or stable totalitarianism are all events that could impact people who are already alive and cause them to suffer at an astronomical level, or, if we're more optimistic, curtail humanity's potential in a way that renders the lives of already existing people more unbearable and prevents us from coordinating to reduce suffering.
Thirdly, I think it wouldn't be wrong to say "excited altruism" rather than "obligatory altruism" became more and more emphasized as EAs started focusing on scaling and community-building. Peter Singer does think we have an obligation to help those who suffer as long as it doesn't cost us too much. Most variants of utilitarianism and Kantian-ish moral views would use the word "obligation" in a non-trivial way when framing our responsibility to help those who suffer and are worse-off. Should I buy a yacht or save 100 children in Africa? Even though a lot of EAs wouldn't say "you are obligated not to buy the yacht and to donate to GiveWell", some EAs, including me, would probably agree this is a moral dilemma where we could say that billionaire kind of has an obligation to help. You may disagree with this, and I would totally understand; you may even be right, because maybe there are no moral truths! But I wouldn't say longtermism is, or can easily be, framed within a paradigm of excited altruism: the stakes are too high, and longtermism is usually targeted at existing EA audiences. People use the word "should" because the conversation usually takes place between people who already agree that we should do good. So even if you're not a moral realist and don't believe in moral obligations, you can be a longtermist.
As a final point, I do agree we don't care about humanity in the abstract; people usually care about existing people because of intuitions and sentiments. But most people, with the exception of a few cultures, didn't care about animals at all throughout humanity's history. So when it comes to whom we should care about and how we should think about that question, our hunches and intuitions usually don't work very well. We tend not to think about the welfare of insects and shrimps (I personally don't, at a sentimental level), but is there some chance we should include these beings in our moral circles and care about them? I definitely wouldn't say no. A lot of people's hunch is also that we should care most about people around us. But that is incompatible with the idea that people aren't more worthy of saving just because they are closer to us: the hunch implies a Brit should save a British person instead of 180 people from Malawi. Almost everyone (in the literal sense) acted that way until Peter Singer, because they had that hunch, but the hunch is unfortunately probably inaccurate if we want to do good. Similarly, we may have the intuition that when doing good we should think more about people who already exist, but we may have to disregard that intuition, take the uncertainty seriously and reason about it rather than simply disregarding future people's welfare because those people may not exist.
As a final-final point, coming up with a decision theory that prevents us from caring about our posterity and future people is really, really hard. Even if you are very skeptical, and don't believe Toby Ord, MacAskill, or top forecasters like Eli Lifland (who published a magnificent critique of this book), and think the probability of x-risk is very overestimated, I think arguments based on the intuition that "it's uncertain whether future people will exist" aren't a counterargument against weak longtermism, or even strong longtermism. This argument should instead lead us to think about which decision theory is best for navigating the uncertainty we face, rather than to prioritize people who already exist.
Btw, if you’re from Turkey and would like to connect with the community in Turkey, feel free to dm!
Even when you are trying to advance equity, there will be certain charities that are more cost-effective and "efficient", efficient in the sense that they'll be successful. Again, if you want to do human rights lobbying, doing that in the US would probably be more expensive compared to a relatively globally irrelevant low-income country x where there isn't much lobbying. Cost-effectiveness isn't the endpoint of EA; it's a method that enables you to choose the best intervention when you have scarce money.
On billionaire philanthropy: there are a lot of moral theories that don't share your assumptions about democracy or assume billionaires shouldn't make decisions about public goods. Most consequentialists don't assume automatically, or a priori, that billionaires should be less powerful; their stance would be based on empirical truths. Still, this part of your post also has a moral assumption built into it. Libertarian-ish moral views, prioritarianism, utilitarianism, and (not a theory, but a view) high-stakes instrumentalism are all quite popular views that we should integrate into our normative uncertainty model. You can check this blogpost on why some people aren't against billionaire philanthropy. I personally wouldn't want the state or the masses to prevent people from spending their money as they'd like, and many people from countries experiencing democratic backsliding, or with low trust in government, wouldn't agree with you either. In Turkey, for instance, it's really hard to have an abortion outside of private hospitals; universal healthcare for globally disadvantaged people means growing a state that's usually corrupt and anti-liberal. I'm not saying this is definitely wrong, but we should be less confident of our views when we're talking about this issue.
Aiming higher in our altruistic goals doesn't alleviate the requirement of having a theory of change and noticing the skulls. There are many organizations trying to do what you want to do, advance equity, but the world, and a lot of the places where these specific charities operate, is still quite unequal; they haven't been very successful, and vaccines still have patents. What will you do differently this time?
Also, I think a probabilistic standpoint is useful. For instance, when equity and health outcomes trade off: imagine a parallel universe where universal healthcare results in slightly worse outcomes and slightly worse wellbeing overall, both for the average and the well-off person. But it will be more equal: the variation in health outcomes between wealthy and poor people will decrease, even though poor people's health outcomes won't improve, and this will take place through wealthy people's loss of welfare. Do you still think effective altruists should advance equity? This is a very specific conceptualization of the good. I'm not saying equity is unimportant, but other things may be important too; that's why taking normative and empirical uncertainty seriously is really important when we're talking about these issues.
Cost-effectiveness doesn't mean only efficiency. When you're trying to do the most good, ditching cost-effectiveness is quite hard, because what will you use instead? Cost-effectiveness isn't only about efficiency or consequentialist perspectives; it's about doing the most good possible with the scarce money we have (as EAs). Don't you think it'd be better to think about how cost-effective human rights lobbying is, or will be, before taking these actions? When you're trying to decide which programs to fund from a Rawlsian framework, what will you use if not cost-effectiveness? If two programs achieve the same thing, and one of them costs 10k and the other costs 25k, you shouldn't donate money to the latter program.
Also, saving African children from malaria by distributing bednets, vaccinating Nigerian kids with certain incentives, or preventing humanity from destroying itself is not valuable only from a utilitarian point of view; the number of moral views that somehow imply "No, you shouldn't save an African kid for 4.5k, just buy a better car" probably isn't high. On the other hand, "Billionaire philanthropy isn't okay, it'd be better if the masses decided what to do" and "Universal healthcare is a moral imperative" are claims with which a lot of moral theories would disagree. So if you accept that it's quite possible for us to be mistaken about which moral theory is correct, the case for changing global discourse and setting up effective bureaucracies able to provide high-quality universal healthcare becomes quite hard to make.
A third critique is tractability. Isn't it quite hard to change global political discourse, especially in Africa, where most EAs have no connections, and to institute health as a global right and actually enforce it? This seems quite unlikely, because it would require increasing state capacity all over the global south, advancing technologies in underdeveloped countries (if we take the veil of ignorance seriously), and setting up effective and capable health bureaucracies in countries where bureaucracies tend to be home to clientelistic and kleptocratic tendencies rather than effectiveness. Again, I don't think the goals this post proposes are actually tractable. This is different from distributing bednets.
What are we actually optimizing for? Are we optimizing for improving the health outcomes of the most disadvantaged people? If I were behind a veil of ignorance, I'd like to have a more functional FDA and overall better medical innovation. When there are tradeoffs between medical innovation and extending universal healthcare, what should we do? How would we know if we're making progress on these goals? The number of states claiming that healthcare is a human right? I live in Turkey, where healthcare is universally provisioned by the state, but I can't get an appointment within three months at most hospitals, and the quality of healthcare at state hospitals is quite low (at a level where four doctors misdiagnosed me, each with a different disease, failing spectacularly).
Location: Turkey (As of September, I'll be in NL for five months)
Remote: Yes
Willing to relocate: Maybe
Skills: Interdisciplinary research and charity evaluation; I'm a volunteer analyst at SoGive. I speak Turkish (and am learning Ottoman Turkish at the moment) and am well-versed in development, meta-EA, and philosophy. Currently I'm part of a team that's looking into how GiveDirectly's actions might be impacting the broader political economy, with a focus on programs in Kenya and Rwanda.
CV: https://www.dropbox.com/s/0q2308lj1b7wdb9/CVm1.jpg?dl=0
Email: berke.celik@boun.edu.tr
Notes: Because I am a student based in Turkey, the cost of hiring me is quite low! I'm looking for part-time or project-based work. Since April I've been doing community building here in Turkey, and as a result I've thought a lot about how to do EA community building (and EA-related work in general) in an LMIC context.
Turkey, Kenya, and the Philippines have lower GDP per capita than Romania, but all of these countries have community builders (correct me if I'm wrong) who receive financial or infrastructural support from EA organizations/funds. So I'm not sure how much this has to do with the fact that Romania is poorer than Western Europe (or with cultural biases that result from this wealth discrepancy between Romania and Western Europe).
There may be some other reason why EAIF prefers not to fund any EA projects in Romania (or some other country x), but even if such a country-specific reason exists, not being transparent about it and not giving proper feedback seems problematic (and frustrating for applicants).