AppliedDivinityStudies
The problem (of worrying that you’re being silly and getting mugged) doesn’t arise when probabilities are merely tiny; it arises when probabilities are tiny and you’re highly uncertain about them. We have pretty good bounds in the three areas you listed, but I do not have good bounds on, say, the odds that “spending the next year of my life on AI Safety research” will prevent x-risk.
In the former cases, we have base rates and many trials. In the latter case, I’m just doing a very rough Fermi estimate. Say I have five parameters with an order of magnitude of uncertainty on each one; multiplied out, the compounded uncertainty is just really horrendous.
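To make the compounding concrete, here’s a minimal Monte Carlo sketch (toy numbers, not my actual estimate, and assuming each parameter is lognormal with a 95% interval spanning one order of magnitude):

```python
import numpy as np

rng = np.random.default_rng(0)

# Five independent parameters, each lognormal with median 1 and a 95%
# interval spanning a factor of 10 (sigma chosen so +/-1.96*sigma covers
# one order of magnitude in log10 space).
sigma = 0.5 / 1.96  # log10 units
samples = 10 ** rng.normal(0.0, sigma, size=(100_000, 5))
product = samples.prod(axis=1)

lo, hi = np.percentile(product, [2.5, 97.5])
print(f"95% interval of the product: {lo:.2f} to {hi:.2f} "
      f"(a factor of ~{hi / lo:.0f})")
# Each factor spans one order of magnitude; their product spans well
# over two.
```

Even with each input only spanning a factor of 10, the product’s 95% interval spans a factor of well over 100.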
Anyway, I mostly agree with what you’re saying, but it’s possible that you’re somewhat misunderstanding where the anxieties you’re responding to are coming from.
Hey, great post, I pretty much agree with all of this.
My caveat is: One aspect of longtermism is that the future should be big and long, because that’s how we’ll create the most moral value. But a slightly different perspective is that the future might be big and long, and so that’s where the most moral value will be, even in expectation.
The more strongly you believe that humanity is not inherently super awesome, the more important the latter view seems. It’s not “moral value” in the sense of positive utility, it’s “moral value” in the sense of lives that can potentially be affected.
For example, you write:
> I do not feel compelled by arguments that the future could be very long. I do not see how this is possible without at least soft totalitarianism, which brings its own risks of reducing the value of the future.
And I agree! But where you seem to be implying “the future will only be stable under totalitarianism, so it’s not really worth fighting for”, I would argue “the future will only be stable under totalitarianism, so it’s really important to fight totalitarianism in particular!” An overly simplistic way of thinking about this is that longtermism (at least in public popular writing) is mostly concerned with x-risk, but under your worldview, we ought to be much more concerned about s-risk. I completely agree with this conclusion, I just don’t think it goes against longtermism, but that might come down to semantics.
FWIW my completely personal and highly speculative view is that EA orgs and EA leaders tend to talk too much about x-risk and not enough about s-risk, mostly because the former is more palatable, and is currently sufficient for advocating for s-risk relevant causes anyway. Or more concretely: It’s pretty easy to imagine an asteroid hitting the planet, killing everyone, and eliminating the possibility of future humans. It’s a lot wackier, more alienating and more bizarre to imagine an AI that not only destroys humanity, but permanently enslaves it in some kind of extended intergalactic torture chamber. So (again, totally guessing), many people have decided to just talk about x-risk, but use it as a way to advocate for getting talent and funding into AI Safety, which was the real goal anyway.
On a final note, if we take flavors of your view with varying degrees of extremity, we get, in order of strength of claim:
1. X-risk is less important than s-risk.
2. We should be indifferent about x-risk; there’s too much uncertainty, both ethically and in terms of what the future will actually look like.
3. The potential for s-risk is so bad that we should invite, or even actively try to cause, x-risk, unless s-risk reduction is really tractable.
4. S-risks aside, humanity is just really net negative and we should invite x-risk no matter what.
(To be clear, I don’t think you’re making any of these claims yourself, but they’re possible paths views similar to yours might lead to.)
Some of these strike me as way too strong and unsubstantiated, but regardless of what we think at the object level, it’s not hard to think of reasons these views might be under-discussed. So I think what you’re really getting at is something like, “does EA have the ability to productively discuss info-hazards?” And the answer is that we probably wouldn’t know if it did.
If this dynamic leads you to put less “trust” in our decisions, I think that’s a good thing!
I will push back a bit on this as well. I think it’s very healthy for the community to be skeptical of Open Philanthropy’s reasoning ability, and to be vigilant about trying to point out errors.
On the other hand, I don’t think it’s great if we have a dynamic where the community is skeptical of Open Philanthropy’s intentions. Basically, there’s a big difference between “OP made a mistake because they over/underrated X” and “OP made a mistake because they were politically or PR motivated and intentionally made sub-optimal grants.”
People like to hear nice things about themselves from prominent people, and Bryan is non-EA enough to make it feel not entirely self-congratulatory.
Strongly agree on this. It’s been a pet peeve of mine to hear exactly these kinds of phrases. You’re right that it’s nearly a passive formulation, and frames things in a very low-agentiness way.
At the same time, I think we should recognize the phrasing as a symptom of some underlying feeling of powerlessness. Tabooing the phrase might help, but won’t eradicate the condition. E.g.:
- If someone says “EA should consider funding North Korean refugees”
- You or I might respond “You should write up that analysis! You should make that case!”
- But the corresponding question is: Why didn’t they feel like they could do that in the first place? Is it just because people are lazy? Or were they uncertain that their writeup would be taken seriously? Maybe they feel that EA decision making only happens through “official channels” and random EA Forum writers not employed by large EA organizations don’t actually have a say?
One really useful way to execute this would be to bring in more outside non-EA experts in relevant disciplines. So have people in development econ evaluate GiveWell (great example of this here), engage people like Glen Weyl to see how EA could better incorporate market-based thinking and mechanism design, engage hardcore anti-natalist philosophers (if you can find a credible one), engage anti-capitalist theorists skeptical of welfare and billionaire philanthropy, etc.
One specific pet project I’d love to see funded is more EA history. There are plenty of good legitimate expert historians, and we should be commissioning them to write on, for example, the history of philanthropy (Open Phil did a bit here), the causes of past civilizations’ ruin, intellectual moral history and how ideas have progressed over time, and so on. I think there’s a ton to dig into here, and I think history is generally underestimated as a perspective (you can’t just read a couple secondary sources and call it a day).
Sorry about all that, changed the title to “GiveWell-style”.
Agreed on the other title as well. I made some notes on this in the follow-up post and acknowledged that I could have picked a better title. https://forum.effectivealtruism.org/posts/xedQto46TFrSruEoN/responses-and-testimonies-on-ea-growth
Thanks for the feedback, I appreciate the note and will think more about this in the future. FWIW I typically spend a lot of time on the post, very little time on the title, even though the title is probably read by way more people. So it makes sense to re-calibrate that balance a bit.
This perspective strikes me as extremely low-agentiness.
Donors aren’t this wildly unreachable class of people, they read EA forum, they have public emails, etc. Anyone, including you, can take one of these ideas, scope it out more rigorously, and write up a report. It’s nobody’s job right now, but it could be yours.
I originally wrote this post for my personal blog and was asked to cross-post here. I stand by the ideas, but I apologize that the tone is a bit out of step with how I would normally write for this forum.
I see myself as straddling the line between the two communities. More rigorous arguments at the end, but first, my offhand impressions of what I think the median EA/XR person believes:
- Ignoring XR, economic/technological progress is an immense moral good
- Considering XR, economic progress is somewhat good, neutral at worst
- The solution to AI risk is not “put everything on hold until we make epistemic progress”
- The solution to AI risk is to develop safe AI
- In the meantime, we should be cautious of specific kinds of development, but it’s fine if someone wants to go and improve crop yields or whatever
As Bostrom wrote in 2003: “In light of the above discussion, it may seem as if a utilitarian ought to focus her efforts on accelerating technological development. The payoff from even a very slight success in this endeavor is so enormous that it dwarfs that of almost any other activity. We appear to have a utilitarian argument for the greatest possible urgency of technological development.”
“However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[8] Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.” https://www.nickbostrom.com/astronomical/waste.html
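To spell out the arithmetic behind that last sentence (a rough sketch, under Bostrom’s simplifying assumption that the value lost to a delay scales with the delay’s share of the total usable time): let $V$ be the total achievable value and $T$ the remaining time over which it can be realized, with $T \gtrsim 10^9$ years. A delay of $t$ years costs roughly $(t/T)\,V$, while a one-percentage-point reduction in existential risk gains $0.01\,V$ in expectation, so risk reduction wins whenever

$$0.01\,V > \frac{t}{T}\,V \quad\Longleftrightarrow\quad t < 0.01\,T \gtrsim 10^{7}\ \text{years}.$$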
With regards to poverty reduction, you might also like this post in favor of growth: http://reflectivedisequilibrium.blogspot.com/2018/10/flow-through-effects-of-innovation.html
The idea that “the future might not be good” comes up on the forum every so often, but this doesn’t really harm the core longtermist claims. The counter-argument is roughly:
- You still want to engage in trajectory changes (e.g. ensuring that we don’t fall to the control of a stable totalitarian state)
- Since the error bars are ginormous and we’re pretty uncertain about the value of the future, you still want to avoid extinction so that we can figure this out, rather than getting locked in by a vague sense we have today
Do you have a stronger argument for why we should want to future-proof ethics? From the perspective of a conservative Christian born hundreds of years ago, maybe today’s society is very sinful. What would compel them to adopt an attitude such that it isn’t?
Similarly, say in the future we have moral norms that tolerate behavior we currently see as reprehensible. Why would we want to adopt those norms? Should we assume that morality will make monotonic progress, just because we’re repulsed by some past moral norms? That doesn’t seem to follow. In fact, it seems plausible that morality has simply shifted. From the outside view, there’s nothing to differentiate “my morality is better than past morality” from “my morality is different than past morality, but not in any way that makes it obviously superior”.
You can imagine, for example, a future with sexual norms we would today consider reprehensible. Is there any reason I should want to adopt them?
Would be interested to see a list of accounts ranked by follower count among prize-winners divided by overall follower count.
It’s not that interesting to see that Barack Obama is #1, since he’s just the #1 Twitter account overall. But it would be super interesting to see who prize-winners follow that other people do not.
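If anyone wants to compute this, here’s a minimal sketch of the ranking I have in mind (the file name, column names, and winner flag are all hypothetical placeholders for however the follow data is actually stored):

```python
import pandas as pd

# Hypothetical input: one row per (follower, followed) pair, with
# is_winner flagging followers who are prize-winners.
follows = pd.read_csv("follows.csv")  # columns: follower, followed, is_winner

winner_followers = follows[follows["is_winner"]].groupby("followed").size()
total_followers = follows.groupby("followed").size()

# Share of each account's followers who are prize-winners: high values
# pick out accounts that winners follow but the general public does not.
ratio = (winner_followers / total_followers).dropna().sort_values(ascending=False)
print(ratio.head(20))
```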
Thanks for this analysis and dataset, super interested in this kind of work and would love to see more!
I read the title and thought this was a really silly approach, but after reading through the list I am fairly surprised how sold I am on the concept. So thanks for putting this together!
Minor nit: One concern I still have is over drilling facts into my head which won’t be true in the future. For example, instead of:
> The average meat consumption per capita in China has grown 15-fold since 1961
I would prefer:
> Average meat consumption per capita in China grew 15x in the 60 years after 1961
In general, WSJ reporting on SF crime has been quite bad. In another article they write
> Much of this lawlessness can be linked to Proposition 47, a California ballot initiative passed in 2014, under which theft of less than $950 in goods is treated as a nonviolent misdemeanor and rarely prosecuted.
Which is just not true at all. Every state has some threshold, and California’s is actually on the “tough on crime” side of the spectrum.
Shellenberger himself is an interesting guy, though not necessarily in a good way.
very speculative
Say you’re hit by a car tomorrow and die. An angel comes down; they don’t quite offer you a second chance at life, just a single day of life, with none of your current memories, as an average middle-class person in South Korea.
Do you accept? I probably would; I expect the median South Korean to have a net-positive existence.
But here’s the catch: you also have to spend a day as an average political dissident in North Korea. Would you take that trade? I definitely would not. I think the disutility of the second scenario far outweighs the utility of the first.
So what would the ratio have to be? I.e., how many good days in SK would you have to get in return to accept a single day living in NK? It’s hard to say without a better sense of the conditions in each place, but I would genuinely guess something like 10:1. In other words, putting very rough guesses on the utility of each scenario (with a quick consistency check below):
- Middle class in South Korea: 10
- Muzak and potatoes: 0
- Political dissident in North Korea: −100
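On these numbers the 10:1 guess is exactly consistent: ten days at utility 10 offset one day at utility −100, since $10 \times 10 + (-100) = 0$.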
In this view, you’re not just “saving a life”, you’re preventing a huge amount of suffering.
I’m not sure how exactly this compares to GiveWell’s evaluations, or what degree of disutility they expect to prevent with interventions. Dying is bad, getting malaria and then dying is probably really horrible.
I’m not advocating running out and donating to LiNK for all the reasons mentioned by OP, but this is the chain of reasoning I would pursue more rigorously if I wanted to seriously evaluate their efficacy.
Separating this question from my main comment to avoid confusion.
Your medium post reads: “Tyler Cowen, calling for faster technological growth for a better future, dismissed the Repugnant Conclusion as a constraint: “I say full steam ahead.””
Linking to this MR post: https://marginalrevolution.com/marginalrevolution/2018/08/preface-stubborn-attachments-book-especially-important.html
The MR post does not mention the Repugnant Conclusion, nor does it contain the words “full steam ahead”. Did you perhaps link to the wrong post? I searched the archives briefly, but was unable to find an MR post that dismisses the Repugnant Conclusion: https://marginalrevolution.com/?s=repugnant+conclusion
I agree EA is really good at funding weird things, but every in-group has something it considers weird. A better way of phrasing that might have been “fund things that might create PR risk for OpenPhil”.
See this comment from the Rethink Priorities Report on Charter Cities:
> Finally, the laboratories of governance model may add to the neocolonialist critique of charter cities. Charter cities are not only risky, they are also controversial. Charter cities are likely to be financed by rich-country investors but built in low-income countries. If rich developers enforce radically different policies in their charter cities, that opens up the charge that the rich world is using poor communities to experiment with policies that citizens of the rich world would never allow in their own communities. Whether or not this criticism is justified, it would probably resonate with many socially-minded individuals, thereby reducing the appeal of charter cities.
Note the phrasing “Whether or not this criticism is justified”. The authors aren’t worried that Charter Cities are actually neocolonialist, they’re just worried that it creates PR risk. So Charter Cities are a good example of something small donors can fund that large EA foundations cannot.
I agree that EA Funds is in a slightly weird place here since you tend to do smaller grants. Being able to refer applicants to private donors seems like a promising counter-argument to some of my criticisms as well. Though in that case, is the upshot that I should donate to EA Funds, or that I should tell EA Funds to refer weird grant applicants to me?
I think it’s still under-appreciated how much people hate billionaire-funded research into areas perceived to be weird, creepy or potentially inequality-exacerbating.
Consider some of the comments on that same article from the SlateStarCodex subreddit:
> I’ll give a longevity startup the time of day when they show me a year-old drosophila.

And:

> *slaps roof of longevity startup* this bad boy can fit so much fraud in it
Or a semi-popular reply to the tweet you shared:
> Getting funding for longevity research from ageing billionaires is the bio equivalent of taking candy from a baby. Wish this big money was going towards solving global problems, not just making rich old people live longer.
Or some headlines from a Google search for “silicon valley longevity”:
- The Guardian: Is Silicon Valley’s quest for immortality a fate worse than death?
- The Conversation: Silicon Valley’s quest for immortality – and its worrying sacrifices
I don’t know if public blowback will result in fewer scientists and engineers wanting to work at these companies, or will lead to reduced enthusiasm from investors. But it’s possible, and would be very tragic. EA has historically not been very good at PR, but making the case that longevity research benefits everyone and is not just a toy for the rich could still be very important.
A bit of a nit since this is in your appendix, but there are serious issues with this reasoning and the linked evidence. Basically, this requires the claims that:
1. San Francisco reduced sentences
2. There was subsequently more crime
1. Shellenberger at the WSJ doesn’t provide a citation for his numbers, but I’m fairly confident he’s pulling them from this SF Chronicle writeup, which is actually citing a change from 2018-2019 to 2020-2021. So right off the bat, Shellenberger is fudging the data.
Second, the aggregated data is misleading because there were pandemic-specific effects in 2020 unrelated to Boudin’s policies. If you look at the DA office’s disaggregated data, there is a drop in the filing rate in 2020, but it picks up dramatically in 2021. In fact, the 2021 rate is higher than the 2019 rate both for crime overall and for the larceny/theft category. So not only is Shellenberger’s claim misleading, it’s entirely incorrect.
You can be skeptical of the DA office’s data, but note that this is the same source used by the SF Chronicle, and thus by Shellenberger as well.
2. Despite popular anecdotes, there’s really no evidence that crime was actually up in San Francisco, or that any increase occurred as a result of Boudin’s policies.
- Actual reported shoplifting was down from 2019-2020
- Reported shoplifting in adjacent counties was down less than in California as a whole, indicating a lack of “substitution effects” where criminals go where sentences are lighter
- The store closures cited by Shellenberger can’t be pinned on increased crime under Boudin because:
A) Walgreens had already announced a plan to close 200 stores back in 2019
B) Of the 8 stores that closed in 2019 and 2020, at least half closed in 2019, making the 2020 closures unexceptional
C) The 2021 store closure rate for Walgreens is actually much lower than comparable metrics, like the closures of sister company Duane Reade in NYC over the same year, or the dramatic drop in Walgreens stock price. It is also not much higher than the historical average of 3.7 store closures per year in SF.
I have a much more extensive writeup on all of this here:
https://applieddivinitystudies.com/sf-crime-2/
Finally, the problem with the “common sense” reasoning is that it goes both ways. Yes, it seems reasonable to think that less punishment would result in more crime, but we can similarly intuit that spending time in prison and losing access to legal opportunities would result in more crime. Or that having your household’s primary provider incarcerated would lead to more crime. Etc etc. Yes, we are lacking in high quality evidence, but that doesn’t mean we can just pick which priors to put faith in.