You can send me a message anonymously here: https://www.admonymous.co/will
WilliamKiely
How does marginal spending on animal welfare and global health influence the long-term future?
I’d guess that most of the expected impact in both cases comes from the futures in which Earth-originating intelligent life (E-OIL) avoids near-term existential catastrophe and goes on to create a vast amount of value in the universe by building a much larger economy, colonizing other solar systems and galaxies, and transforming the matter there into stuff that matters far more morally than lifeless matter (“big futures”).
For animal welfare spending, then, perhaps most of the expected impact comes from the spending reducing the amount of suffering of animals and other non-human sentient beings (e.g. future AIs) in the universe, compared to the big futures without the late-2020s animal welfare spending. Perhaps the causal pathway is that the spending affects what people think about the moral value of animal suffering, and that in turn positively affects what E-OIL does with the reachable universe in big futures (less animal suffering and a lower probability of neglecting the importance of sentient AI moral patients).
For global health spending, perhaps most of the expected impact comes from increasing the probability that E-OIL goes on to have a big future. Assuming the big futures are net positive (as I think is likely) this would be a good thing.
I think some global health spending probably has much more of an impact on this than others. For example, $100M would only put a dent in annual malaria deaths (~20,000 fewer deaths, a <5% reduction in annual deaths for one year), and it seems like that would have quite a small effect on existential risk. Whereas spending the money on reducing the probability of a severe global pandemic in the 2030s (spending which seems like it could qualify as “global health” spending) plausibly could have a much more significant effect. I don’t know how much $100M could reduce the odds of a global pandemic in the 2030s, but intuitively I’d guess that it could make enough of a difference to be much more impactful at reducing 21st-century existential risk than reducing malaria deaths would be.
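For the malaria figure, here’s a rough back-of-envelope sketch. The inputs are my own assumptions, not figures from this comment: roughly $5,000 per death averted (the ballpark often cited for GiveWell’s top malaria charities) and roughly 600,000 malaria deaths per year worldwide.

```python
# Back-of-envelope sketch with assumed inputs (not figures from this comment):
# ~$5,000 per death averted (a commonly cited ballpark for GiveWell's top
# malaria charities) and ~600,000 annual malaria deaths worldwide.
spending = 100_000_000           # $100M of marginal "global health" spending
cost_per_death_averted = 5_000   # assumed dollars per death averted
annual_malaria_deaths = 600_000  # assumed annual malaria deaths

deaths_averted = spending / cost_per_death_averted
share_of_annual_deaths = deaths_averted / annual_malaria_deaths

print(f"{deaths_averted:,.0f} deaths averted")                       # 20,000
print(f"{share_of_annual_deaths:.1%} of one year's malaria deaths")  # ~3.3%, i.e. <5%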
How would the best “global health” spending compare to the “animal welfare” spending? Could it reduce existential risk by enough to do more good than better values achieved via animal welfare spending could do?
I think it plausibly could (i.e. the global health spending plausibly could do much more good), especially in the best futures in which it turns out that AI does our moral philosophy really well such that our current values don’t get locked in, but rather we figure out fantastic moral values after e.g. a long reflection and terraform the reachable universe based on those values.
But I think that in expectation, $100M of the global health spending would only reduce existential risk by a small amount, increasing the EV of the future by a small amount (something like <0.001%), whereas intuitively an extra $100M spent on animal welfare (given the relatively small size of current spending on animal welfare) could do a lot more good, increasing the value of the big future by a larger (though still small) amount than the small increase in the probability of a big future that the global health spending would buy.
Initially I was answering about halfway toward Agree from Neutral, but after thinking this out, I’m moving further toward Agree.
This is horrifying! A friend of the author just shared this, along with a newly published Business Insider post that links to this post:
I’m curious whether you or other past participants you know who had a good experience with AISC are in a position to help fill the funding gap AISC currently has. Even if you (collectively) can’t fully fund the gap, I’d see alumni donations as a pretty strong signal that AISC is worth funding. Or, if you do donate but prefer other giving opportunities instead (whether in AIS or other cause areas), I’d find that valuable to know too.
But on the other hand, I’ve regularly met alumni who tell me how useful AISC was for them, which convinces me AISC is clearly very net positive.
Naive question, but does AISC have enough such past alumni that you could meet your current funding need by asking them for support? It seems like they’d be in the best position to evaluate the program and know that it’s worth funding.
Nevertheless, AISC is probably about ~50x cheaper than MATS
~50x is a big difference, and I notice the post says:
We commissioned Arb Research to do an impact assessment.
One preliminary result is that AISC creates one new AI safety researcher per around $12k-$30k USD of funding.

Multiplying that number (which I’m agnostic about) by 50 gives $600k-$1.5M USD. Does your ~50x still seem accurate in light of this?
I’m a big fan of OpenPhil/GiveWell popularizing longtermist-relevant facts via sponsoring popular YouTube channels like Kurzgesagt (21M subscribers). That said, I just watched two of their videos and found a mistake in one[1] and took issue with the script-writing in the other one (not sure how best to give feedback—do I need to become a Patreon supporter or something?):
Why Aliens Might Already Be On Their Way To Us

My comment:
9:40 “If we really are early, we have an incredible opportunity to mold *thousands* or *even millions* of planets according to our visions and dreams.”—Why understate this? Kurzgesagt already made a video imagining humanity colonizing the Milky Way Galaxy to create a future of “a tredecillion potential lives” (10^42 people), so why not say ‘hundreds of billions of planets’ (the number of planets in the Milky Way), ‘or even more if we colonize other galaxies before other loud/grabby aliens reach them’? This also seems inaccurate because the chance that we colonize between 1,000 and 9,999,999 planets is less than the probability that we colonize >10 million (or even >1 billion) planets.
As an aside, the reason I watched these two videos just now is that I was inspired to look them up after watching the depressing new Veritasium video Do People Understand the Scale of the Universe?, in which he shows a bunch of college students from a university with 66th-percentile average SAT scores who don’t know basic facts about the universe.
[1] The mistake I found, in the most recent video You Are The Center of The Universe (Literally), was that it said (9:10) that the diameter of the observable universe is 465,000 Milky Way galaxies side-by-side, but that’s actually the radius of the observable universe, not the diameter.
I also had a similar experience, making my first substantial donation before learning that non-employer counterfactual donation matches existed.
It was the only donation I’ve regretted, since by delaying it six months I could have doubled the amount of money I directed to the charity at no extra cost to me.
Great point, thanks for sharing!
While I assume that all long-time EAs learn that employer donation matching is a thing, we’d do well as a community to ensure that everyone learns about it before donating a substantial amount of money, and clearly that’s not the case now.
Reminds me of this insightful XKCD: https://xkcd.com/1053/
For each thing ‘everyone knows’ by the time they’re adults, every day there are, on average, 10,000 people in the US hearing about it for the first time.
Thanks for sharing about your experience.
I see 4 people said they agreed with the post and 3 disagreed, so I thought I’d share my thoughts on this. (I was the 5th person to give the post Agreement Karma, which I endorse with some nuance added below.)
I’ve considered going on a long hike before, and like you I believed the main consideration against doing so was the opportunity cost for my career and pursuit of having an altruistic impact.
It seemed to me that clearly there was something else I could do that would be better for my career and altruistic impact than, e.g., taking 6 months to go hike the Appalachian Trail, so I dismissed the possibility without considering it more seriously, as tempting as it was. (Bill Bryson’s book A Walk in the Woods tempted me when I read it in 2012.)
I still think that most young people who actually do decide to go on such a long hike could have done something else that would have been better for their career and pursuit of the most good, and I think the same would have been true of my former self had I decided to actually spend 6 months going for such a long walk.
That said, what my life experience thus far (a very lackluster career) makes obvious to me now is that deciding against going for a 6-month hike on the basis that it was almost definitely suboptimal was a mistake. After all, almost every potential path is suboptimal, whether it’s a 6-month hike, Job A, Job B, or almost any other concrete option.
A more reasonable way to think about the question is whether the long hike seems better or worse than the other options one is considering. And on that note I’d opine that there are many less-than-ideal jobs one could work for 6 months that would be worse than spending those 6 months on a long hike one is really motivated to do.
And I don’t just mean trash jobs one isn’t considering. Rather, I think going on a 6-month hike can actually often be better than the job-path one would have taken otherwise.
Reflecting on my own past, it’s not clear to me that, had younger me spent 6 months going for a long hike, that would have been worse than what I actually did. I’ve spent a lot of time in mediocre jobs and also a lot of time not working, yet without doing any intentional career-break project like a long hike. So I think going for a long hike would have been quite a reasonable decision had I chosen to do so. It very likely wouldn’t have been the optimal path, but it may well have been a good decision, better than the likely counterfactuals.
I’ll also add that I didn’t like the subtitle of the video: “A case for optimism”.
A lot of popular takes on futurism topics seem to me to focus on being optimistic or pessimistic, but whether one is optimistic or pessimistic about something doesn’t seem like the sort of thing one should argue for. It seems a little like writing the bottom line first.
Rather, people should attempt to figure out what the actual probabilities of different futures are and how we are able to influence the future to make certain futures more or less probable. From there it’s just a semantic question whether having a certain credence in a certain kind of future makes one an optimist or a pessimist.
If one sets out to argue for being an optimist or pessimist, that seems like it would just introduce a bias into one’s thinking, where once one identifies as e.g. an optimist, they’ll have trouble updating their beliefs about the probability that the future will be good or bad to various degrees. Paul Graham says Keep Your Identity Small, which seems very relevant.
I’ve been a fan of melodysheep since discovering his Symphony of Science series about 12 years ago.
Some thoughts as I watch:
- Toby Ord’s The Precipice and his 16 percent estimate of existential catastrophe (in the next century) are cited directly
- The first part of the script seems heavily-inspired by Will MacAskill’s What We Owe the Future
- In particular there is a strong focus on non-extinction, non-existentially-catastrophic civilizational collapse, just like in WWOTF
- 12:40 “But extinction in the long-term is nothing to fear. No species survives forever. Time will shape us into something new. The noble way to go extinct will be to evolve naturally to a higher species.”—This is kind of ambiguous. I’m not clear what message melodysheep is trying to get across, but it’s also vague enough that I don’t have a specific critique of it.
- 14:12 “But the best way to secure our long-term survival is to take the leap that no other lifeform has ever taken, to become a multi-planetary species.” “Once a self-sustaining civilization is established on another planet, the chances of our extinction will plummet.”—No argument is made for either of these points in the video, and since I think colonizing another planet is quite overrated in general as a strategy for reducing existential risk, I’m disappointed by that.
- As usual, melodysheep’s music and visuals are stunning, and I can’t help but feel that the weakest part of the video is the script.
- Melodysheep’s top Patreon tier is $100 per video, and includes a one-on-one hangout with him (John Boswell). Given his videos get millions of views and are on important future-oriented topics, this seems like a cost-effective way to get in touch and potentially positively influence the direction of his videos.
- I skimmed his list of $10+ Patreon supporters and didn’t see any names I recognized, so I think it may be worthwhile for some EAs/longtermists who can provide useful feedback on his scripts to become supporters or otherwise get in touch in order to do that. I’m not sure how open to feedback he is, but it seems worth trying. Anyone potentially interested?
That is, I wasn’t viscerally worried. I had the concepts. But I didn’t have the “actually” part.
For me, I don’t think having a concrete picture of the mechanism for how AI could actually kill everyone was ever necessary for viscerally believing that AI could kill everyone.
And I think this is because, ever since I was a kid, long before hearing about AI risk or EA, the long-term future that seemed most intuitive to me was a future without humans (or post-humans).
The idea that humanity would go on to live forever and colonize the galaxy and the universe and live a sci-fi future has always seemed too fantastical to me to assume as the default scenario. Sure it’s conceivable—I’ve never assumed it’s extremely unlikely—but I have always assumed that in the median scenario humanity somehow goes extinct before ever getting to make civilizations in hundreds of billions of star systems. What would make us go extinct? I don’t know. But to think otherwise would be to think that all of us today are super special (by being among the first 0.000...001% (a significant number of 0s) of humans to ever live). And that has always felt like an extraordinary thing to just assume, so my intuitive, gut, visceral belief has always been that we’ll probably go extinct somehow before achieving all that.
So when I learned about AI risk I intellectually thought, “Ah, okay, I can see how something smarter than us that doesn’t share our goals could cause our extinction; so maybe AI is the thing that will prevent us from making civilizations on hundreds of billions of stars.”
I don’t know when I first formulated a credence that AI would cause doom, but I’m pretty sure that I always viscerally felt that AI could cause human extinction ever since first hearing an argument that it could.
(The first time I heard an argument for AI risk was probably in 2015, when I read HPMOR and Superintelligence; I don’t recall knowing much at all about EY’s views on AI until Jan-Mar 2015, when I read /r/HPMOR and people mentioned AI. I think reading Superintelligence the same year I read HPMOR (both in 2015) was roughly the first time I thought about AI risk. I just looked it up: from my Goodreads I see that I finished reading HPMOR on March 4th, 2015, 10 days before HPMOR finished coming out. I read it in the span of a couple of weeks, and no doubt learned about Superintelligence via a recommendation that stemmed from my reading of HPMOR. So Superintelligence was my first exposure to AI risk arguments. I didn’t read a lot of stuff online at that time; e.g. I didn’t read anything on LW that I can recall.)
Thinking out loud about credences and PDFs for credences (is there a name for these?):
I don’t think “highly confident people bear the burden of proof” is necessarily the right way of putting my thought, but I’m trying to point at the idea that when two people disagree on X (e.g. 0.3% vs 30% credences), there’s an asymmetry: the person who is more confident (the one at 0.3% in this case) is necessarily highly confident that the person they disagree with is wrong, whereas the person who is less confident (the one at 30%) is not necessarily highly confident that the person they disagree with is wrong. So maybe this is another way of saying that “high confidence requires strong evidence”, but I think I’m saying more than that.
I’m observing that the high-confidence person needs an account of why the low-confidence person is wrong, whereas the opposite isn’t true.
Some math to help communicate my thoughts: The 0.3% credence person is necessarily at least 99% confident that a 30% credence is too high. Whereas a 30% credence is compatible with thinking there’s, say, a 50% chance that a 0.3% credence is the best credence one could have with the information available.
So a person who is 30% confident X is true may or may not think that a person with a 0.3% credence in X is likely reasonable in their belief. They may think that that person is likely correct, or they may think that they are very likely wrong. Both possibilities are coherent.
Whereas the person whose credence in X is 0.3% necessarily believes the person whose credence is 30% is >99% likely wrong.
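One way to make the “at least 99%” figure precise (this is my own gloss, under a reflection-style assumption that my credence equals my expectation of the best available credence q given the evidence) is Markov’s inequality:

```latex
% Sketch under the assumption (mine, not spelled out above) that my credence
% equals my expectation of the best available credence q, where 0 <= q <= 1.
\[
\mathbb{E}[q] = 0.003
\;\Longrightarrow\;
\Pr(q \ge 0.3) \;\le\; \frac{\mathbb{E}[q]}{0.3} \;=\; \frac{0.003}{0.3} \;=\; 0.01,
\qquad\text{so}\qquad
\Pr(q < 0.3) \;\ge\; 0.99.
\]
% By contrast, E[q] = 0.3 is compatible with Pr(q <= 0.003) being anywhere
% from 0 up to about 0.7, so no analogous constraint binds the 30% person.
```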
Maybe another good way to think about this:
If my point-estimate is X%, I can restate that by giving a PDF in which I give a weight for all possible estimates/forecasts from 0-100%.
E.g. “I’m not sure if the odds of winning this poker hand are 45% or 55% or somewhere in between; my point-credence is about 50% but I think the true odds may be a few percentage points different, though I’m quite confident that the odds are not <30% or >70%. (We could draw a PDF).”
Or “If I researched this for an hour I think I’d probably conclude that it’s very likely false, or at least <1%, but on the surface it seems plausible that I might instead discover that it’s probably true, though it’d be hard to verify for sure, so my point-credence is ~15%, but after an hour of research I’d expect (>80%) my credence to be either less than 3% or >50%.”
Is there a name for the uncertainty (PDF) about one’s credence?
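I believe this is sometimes called a “second-order probability” (or modeled as a metadistribution or hyperprior over one’s credence), and a Beta distribution is a common way to encode it. Here’s a minimal sketch of the poker example, with shape parameters I picked arbitrarily to roughly match “point-credence ~50%, quite confident the odds aren’t <30% or >70%”:

```python
# Minimal sketch (my illustration): encode uncertainty about a credence as a
# Beta PDF over the probability p. The shape parameters are arbitrary choices
# meant to roughly match the poker example above.
from scipy import stats

belief_over_credence = stats.beta(a=25, b=25)  # symmetric, centered on 0.5

point_credence = belief_over_credence.mean()            # 0.5, the point estimate
prob_outside_30_70 = (belief_over_credence.cdf(0.30)    # P(true odds < 30%)
                      + belief_over_credence.sf(0.70))  # + P(true odds > 70%)

print(f"point credence: {point_credence:.2f}")
print(f"P(true odds outside 30-70%): {prob_outside_30_70:.4f}")  # small (well under 1%)
```

The research example, where I’d expect to end up either below 3% or above 50%, would be better captured by a mixture of two Betas than by a single Beta.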
I just got notified that my December 7th test donation was matched. This is extremely unexpected to me, and leads me to believe I got my forecast wrong and that the EA community actually could have gotten ~$1M matched this year with the donation trade scheme I had in mind.
What was the date of your donation?
By “messaged” do you mean you got an email, Facebook notification, or something else?
I’m not sure. I think you are the first person I heard of saying they got matched. When I asked in the EA Facebook group for this on December 15th if anyone got matched, all three people who responded (including myself) reported that they were double-charged for their December 15th donations. Initially we assumed the second receipt was a match, but then we saw that Facebook had actually just charged us twice. I haven’t heard anything else about the match since then and just assumed I didn’t get matched.
I like the idea of operationalizing the Agree/Disagree vote as the probability that the statement is true. So “Agree” is 100%, neutral is 50%, disagree is 0%. In this case, 20% vs 40% means something concrete.