Doesn’t that assume EAs should value the lives of fetuses and e.g. adult humans equally?
Due to politicization, I’d expect reducing farm animal suffering/death to be much cheaper/more tractable per animal than reducing abortion is per fetus; choosing abortion as a cause area would also imperil EA’s ability to recruit smart people across the political spectrum. I’d guess that saving a fetus would need to be ~100x more important in expectation than saving a farm animal for reducing abortions to be a potential cause area; in an EA framework, what grounds are there for believing that to be true?
Note: It would also be quite costly for EA as a movement to generate a better-researched estimate of the parameters due to the risk of politicizing the movement.
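To make the implicit comparison concrete, here is a minimal sketch of how the required moral-weight ratio falls out of the relative per-unit costs. Every number below is a purely illustrative placeholder (the comment deliberately avoids researched estimates), not a claim about actual costs:

```python
# Sketch of the implicit cost-effectiveness comparison.
# All inputs are hypothetical placeholders, NOT researched estimates.

cost_per_animal_helped = 1.0        # hypothetical $ to avert one farm animal's suffering/death
cost_per_abortion_averted = 100.0   # hypothetical $ to avert one abortion

# For abortion reduction to compete as a cause area, averting one abortion
# would need to matter at least this many times more than helping one animal:
required_moral_weight_ratio = cost_per_abortion_averted / cost_per_animal_helped
print(f"Required moral-weight ratio: ~{required_moral_weight_ratio:.0f}x")
```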
Reducing global poverty and improving farming practices lack philosophically attractive problems (for a consequentialist, at least), yet EAs work heavily on them all the same.
I think this comes from an initial emphasis on short-term, easily measured interventions (promoted by the “$x saves a life” meme, the drowning child argument, etc.) among the early cluster of EA advocates. Obviously, the movement has since branched out into cause areas that trade certainty and immediate benefit for the chance of higher impact, but these tend to be clustered in “philosophically attractive” fields. It seems plausible to me that climate change has fallen between two stools: not concrete enough to appeal to the instinct for quantified altruism, but not intellectually attractive enough to compete with AI risk and other long-termist interventions.
What does it mean to be “pro-science”? In other words, what might a potential welfarist, maximizing, impartial, and non-normative movement that doesn’t meet this criterion look like?
I ask because I don’t have a clear picture of a definition that would be both informative and uncontroversial. For instance, the mainstream scientific community was largely dismissive of SIAI/MIRI for many years; would “proto-EAs” who supported them at that time be considered pro-science? I assume that excluding MIRI does indeed count as controversial, but then I don’t have a clear picture of what activities/causes being “pro-science” would exclude.
edit: Why was this downvoted?
An example of what I had in mind was focusing more on climate change when running events like Raemon’s Question Answering hackathons. My intuition says that it would be much easier to turn up insights like the OP than insights of “equal importance to EA” (however that’s defined) in e.g. technical AI safety.
The answer to your question is basically what I phrased as a hypothetical before:
participation in the EA movement as one way to bring oneself closer to God through the theological virtue of charity.
I was involved in EA at university for 2 years before coming to believe Catholicism is true, and it didn’t seem like Church dogma conflicted with my pro-EA intuitions at all, so I’ve just stayed with it. It helped that I wasn’t ever an EA for rigidly consequentialist reasons; I just wanted to help people and EA’s analytical approach was a natural fit for my existing interests (e.g. LW-style rationality).
I’m not sure my case (becoming both EA and Catholic due to LW-style reasoning) is broadly applicable; I think EA would be better served sticking to traditional recruiting channels rather than trying to extend outreach to religious people qua religious people. Moreover, I feel that it’s very very important for EA to defend the value of taking ideas seriously, which would rule out a lot of the proposed religious outreach strategies you see (such as this post from Ozy).
I downvoted the post because I didn’t learn anything from it that would be relevant to a discussion of C-GCRs (it’s possible I missed something). I agree that the questions are serious ones, and I’d be interested to see a top level post that explored them in more detail. I can’t speak for anyone else on this, and I admit I downvote things quite liberally.
Tl;dr: the moral framework of most religions is different enough from EA’s to make this reasoning nonsensical; it’s an adversarial move to try to change religions’ moral frameworks, but there’s potentially scope for religions to adopt EA tools.
Like I said in my reply to khorton, this logic seems very strange to me. Surely the veracity of the Christian conception of heaven/hell strongly implies the existence of an objective, non-consequentialist morality? At that point, it’s not clear why “effectively doing the most good” in this manner is a more moral [edit: terminal] goal than “effectively producing the most paperclips”. It’s not surprising that trying to shoehorn Christian ideas into a utilitarian framework is going to produce garbage!
I agree that this implies that EA would have to develop a distinct set of arguments in order to convince priests to hijack the resources of the Church to further the goals of the EA subculture; I also think this is an unnecessarily adversarial move that shouldn’t be under serious consideration.
That doesn’t mean that the ideas and tools of the EA community are inapplicable in principle to Catholic charity, as long as they are situated within a Catholic moral framework. I’m confident that e.g. Catholic Relief Services would rather spend money on interventions like malaria nets than on interventions like PlayPumps. However, even if the Catholic Church deferred every such decision to a team of top EAs, I don’t think the cumulative impact (under an EA framework) would be high enough to justify the cost of outreach to the Church. I’m not confident of this though; could be an interesting Fermi estimate problem.
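As a rough illustration of what that Fermi estimate might look like, here is a back-of-envelope sketch. Every input is a made-up placeholder rather than a real figure, and the structure (budget, reallocatable fraction, effectiveness multiplier, outreach cost) is just one plausible way to set the problem up:

```python
# Back-of-envelope sketch of whether outreach to the Church could pay off.
# Every input below is a hypothetical placeholder, not a real estimate.

annual_catholic_charity_budget = 1e9   # hypothetical $/year routed through Church charities
fraction_reallocatable = 0.05          # hypothetical share of spending EA advice could shift
effectiveness_multiplier = 2.0         # hypothetical improvement factor on the shifted spending
outreach_cost = 5e7                    # hypothetical $ cost of the outreach effort

extra_impact = annual_catholic_charity_budget * fraction_reallocatable * (effectiveness_multiplier - 1)
print(f"Extra impact per year ($-equivalent): {extra_impact:,.0f}")
print(f"Worth it only if this exceeds ~{outreach_cost:,.0f} (plus the politicization risk).")
```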
(I’ve been trying to make universally applicable arguments, but it feels dishonest at this point not to mention that I am in fact Catholic.)
Thank you! I’m not sure, but I assume that I accidentally highlighted part of the post while trying to fix a typo, then accidentally pressed e.g. “ctrl-v” instead of “v” (I often instinctively copy half-finished posts into the clipboard). That seems like a pretty weird accident, but I’m pretty sure it was just user error rather than anything to do with the EA forum.
This post seems to have become garbled when I tried to fix a typo; any idea how I can restore the original version?
This doesn’t seem like a great idea to me for two reasons:
1. The notion of explicitly manipulating one’s beliefs about something as central as religion for non-truthseeking reasons seems very sketchy, especially when the core premise of EA relies on an accurate understanding of highly uncertain subjects.
2. Am I correct in saying the ultimate aim of this strategy is to shift religious groups’ dogma from (what they believe to be) divinely revealed truth to [divinely revealed truth + random things EAs want]? I’m genuinely not sure if I interpreted the post correctly, but that seems like an unnecessarily adversarial move against a set of organized groups with largely benign goals.
Yeah, I don’t think I phrased my comment very clearly.
I was trying to say that, if the Christian conception of heaven/hell exists, then it is highly likely that an objective non-utilitarian morality exists. It shouldn’t be surprising that continuing to use utilitarianism within an otherwise Christian framework yields garbage results! As you say, a Christian can still be an EA, for most relevant definitions of “be an EA”.
I’m fairly confident the Church does not endorse basing moral decisions on expected value analysis; that says absolutely nothing about the compatibility of Catholicism and EA. For example, someone with an unusually analytical mindset might see participation in the EA movement as one way to bring oneself closer to God through the theological virtue of charity.
This example of a potentially impactful and neglected climate change intervention seems like good evidence that EAs should put substantially more energy towards researching other such examples. In particular, I’m concerned that the neglect of climate change has more to do with the lack of philosophically attractive problems relative to e.g. AI risk, and less to do with the marginal impact of working on the cause area.
Great answer, thank you!
Do you know of any examples of the “direct work+” strategy working, especially for EA-recommended charities? The closest thing I can think of would be the GiveDirectly UBI trial; is that the sort of thing you had in mind?
[Question] How to evaluate the impact of influencing governments vs direct work in a given cause area?
It seems like that question would interact weirdly with expectations of future income: as a college student I donate ~1% of expenses, but if I could only save one life, right now, I would probably try to take out a large, high-interest loan to donate a large sum. That depends on the availability of loans, risk aversion, expectations of future income, etc. much more than it does on my moral values.
Isn’t this essentially a reformulation of the common EA argument that the most high-impact ideas are likely to be “weird-sounding” or unintuitive? I think it’s a strong point in favor of explicit modelling, but I want to avoid double-counting evidence if they are in fact similar arguments.
Maybe the most successful recruitment books directly target people 1-2 stages away in the recruitment funnel? In the case of HPMOR/Crystal Society, that would be quantitatively minded people who enjoy LW-style rationality rather than those who are already interested in AI alignment specifically.