AppliedDivinityStudies
A Red-Team Against the Impact of Small Donations
Writing about my job: Internet Blogger
How should Effective Altruists think about Leftist Ethics?
[Question] Why aren’t you freaking out about OpenAI? At what point would you start?
[Question] What is the role of public discussion for hits-based Open Philanthropy causes?
Punching Utilitarians in the Face
[Linkpost] Apply For An ACX Grant
Why Hasn’t Effective Altruism Grown Since 2015?
The Repugnant Conclusion Isn’t
Responses and Testimonies on EA Growth
Does Moral Philosophy Drive Moral Progress?
The problem (of worrying that you’re being silly and getting mugged) doesn’t arise merely when probabilities are tiny; it arises when probabilities are tiny and you’re highly uncertain about them. We have pretty good bounds in the three areas you listed, but I do not have good bounds on, say, the odds that “spending the next year of my life on AI Safety research” will prevent x-risk.
In the former cases, we have base rates and many trials. In the latter case, I’m just doing a very rough Fermi estimate. Say I have 5 parameters, each with an order of magnitude of uncertainty; multiplied out, the compounded uncertainty is just really horrendous.
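To see how badly this compounds, here’s a minimal Monte Carlo sketch. It assumes each of the 5 parameters is an independent lognormal with roughly one order of magnitude of uncertainty (modeled here as sigma = ln(10) in log space — that modeling choice is mine, not anything from the original comment):

```python
import math
import random

# A rough sketch of uncertainty compounding in a Fermi estimate.
# Assumption: 5 independent parameters, each lognormally distributed
# with about one order of magnitude of spread (sigma = ln(10)).
random.seed(0)

def fermi_sample(n_params=5, sigma=math.log(10)):
    # Multiply together one lognormal draw per parameter.
    return math.prod(random.lognormvariate(0, sigma) for _ in range(n_params))

samples = sorted(fermi_sample() for _ in range(100_000))
p5, p50, p95 = samples[5_000], samples[50_000], samples[95_000]

# The 90% interval of the product spans many orders of magnitude,
# even though each individual parameter spans only about one.
print(f"5th pct: {p5:.2e}, median: {p50:.2e}, 95th pct: {p95:.2e}")
```

Under these assumptions the 90% interval of the product ends up spanning several orders of magnitude, which is the “horrendous” compounding in question.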
Anyway, I mostly agree with what you’re saying, but it’s possible that you’re somewhat misunderstanding where the anxieties you’re responding to are coming from.
Launching 60,000,000,000 Chickens: A GiveWell-Style CEA Spreadsheet for Animal Welfare
EA Organization Updates: January 2022
Hey, great post, I pretty much agree with all of this.
My caveat is: One aspect of longtermism is that the future should be big and long, because that’s how we’ll create the most moral value. But a slightly different perspective is that the future might be big and long, and so that’s where the most moral value will be, even in expectation.
The more strongly you believe that humanity is not inherently super awesome, the more important the latter view seems. It’s not “moral value” in the sense of positive utility; it’s “moral value” in the sense of lives that can potentially be affected.
For example, you write:
I do not feel compelled by arguments that the future could be very long. I do not see how this is possible without at least soft totalitarianism, which brings its own risks of reducing the value of the future.
And I agree! But where you seem to be implying “the future will only be stable under totalitarianism, so it’s not really worth fighting for”, I would argue “the future will be stable under totalitarianism, so it’s really important to fight totalitarianism in particular!” An overly simplistic way of thinking about this is that longtermism is (at least in public popular writing) mostly concerned with x-risk, but under your worldview, we ought to be much more concerned about s-risk. I completely agree with this conclusion, I just don’t think it goes against longtermism, but that might come down to semantics.
FWIW my completely personal and highly speculative view is that EA orgs and EA leaders tend to talk too much about x-risk and not enough about s-risk, mostly because the former is more palatable, and is currently sufficient for advocating for s-risk relevant causes anyway. Or more concretely: It’s pretty easy to imagine an asteroid hitting the planet, killing everyone, and eliminating the possibility of future humans. It’s a lot wackier, more alienating and more bizarre to imagine an AI that not only destroys humanity, but permanently enslaves it in some kind of extended intergalactic torture chamber. So (again, totally guessing), many people have decided to just talk about x-risk, but use it as a way to advocate for getting talent and funding into AI Safety, which was the real goal anyway.
On a final note, if we take flavors of your view with varying degrees of extremity, we get, in order of strength of claim:
1. X-risk is less important than s-risk.
2. We should be indifferent about x-risk; there’s too much uncertainty, both ethically and in terms of what the future will actually look like.
3. The potential for s-risk is so bad that we should invite, or even actively try to cause, x-risk, unless s-risk reduction is really tractable.
4. S-risks aside, humanity is just really net negative and we should invite x-risk no matter what.
(To be clear, I don’t think you’re making any of these claims yourself, but they’re possible paths views similar to yours might lead to.)
Some of these strike me as way too strong and unsubstantiated, but regardless of what we think at the object level, it’s not hard to think of reasons these views might be under-discussed. So I think what you’re really getting at is something like: “Does EA have the ability to productively discuss info-hazards?” And the answer is that we probably wouldn’t know if it did.
If this dynamic leads you to put less “trust” in our decisions, I think that’s a good thing!
I will push back a bit on this as well. I think it’s very healthy for the community to be skeptical of Open Philanthropy’s reasoning ability, and to be vigilant about trying to point out errors.
On the other hand, I don’t think it’s great if we have a dynamic where the community is skeptical of Open Philanthropy’s intentions. Basically, there’s a big difference between “OP made a mistake because they over/underrated X” and “OP made a mistake because they were politically or PR motivated and intentionally made sub-optimal grants.”
People like to hear nice things about themselves from prominent people, and Bryan is non-EA enough to make it feel not entirely self-congratulatory.
Strongly agree on this. It’s been a pet peeve of mine to hear exactly these kinds of phrases. You’re right that it’s nearly a passive formulation, one that frames things in a very low-agency way.
At the same time, I think we should recognize the phrasing as a symptom of some underlying feeling of powerlessness. Tabooing the phrase might help, but won’t eradicate the condition. E.g.:
- If someone says “EA should consider funding North Korean refugees”
- You or I might respond “You should write up that analysis! You should make that case!”
- But the corresponding question is: Why didn’t they feel like they could do that in the first place? Is it just because people are lazy? Or were they uncertain that their writeup would be taken seriously? Maybe they feel that EA decision making only happens through “official channels” and random EA Forum writers not employed by large EA organizations don’t actually have a say?
One really useful way to execute this would be to bring in more outside non-EA experts in relevant disciplines. So: have people in development econ evaluate GiveWell (great example of this here), engage people like Glen Weyl to see how EA could better incorporate market-based thinking and mechanism design, engage hardcore anti-natalist philosophers (if you can find a credible one), engage anti-capitalist theorists skeptical of welfare and billionaire philanthropy, etc.
One specific pet project I’d love to see funded is more EA history. There are plenty of good, legitimate expert historians, and we should be commissioning them to write on, for example, the history of philanthropy (Open Phil did a bit here), the causes of past civilizations’ ruin, intellectual moral history and how ideas have progressed over time, and so on. I think there’s a ton to dig into here, and history is generally underestimated as a perspective (you can’t just read a couple of secondary sources and call it a day).
A bit of a nit since this is in your appendix, but there are serious issues with this reasoning and the linked evidence. Basically, the argument requires two claims:
1. San Francisco reduced sentences
2. There was subsequently more crime
1. Shellenberger at the WSJ writes:
He doesn’t provide a citation, but I’m fairly confident he’s pulling these numbers from this SF Chronicle writeup, which is actually citing a change from 2018-2019 to 2020-2021. So right off the bat Shellenberger is fudging the data.
Second, the aggregated data is misleading because there were specific pandemic effects in 2020 unrelated to Boudin’s policies. If you look at the DA office’s disaggregated data, there is a drop in filing rate in 2020, but it picks back up dramatically in 2021. In fact, the 2021 rate is higher than the 2019 rate both for crime overall and for the larceny/theft category. So Shellenberger’s claim is not only misleading, it’s entirely incorrect.
You can be skeptical of the DA office’s data, but note that this is the same source used by the SF Chronicle, and thus by Shellenberger as well.
2. Despite popular anecdotes, there’s really no evidence that crime was actually up in San Francisco, or that it occurred as a result of Boudin’s policies.
- Actual reported shoplifting was down from 2019 to 2020
- Reported shoplifting in adjacent counties was down less than in California as a whole, indicating a lack of “substitution effects” where criminals go where sentences are lighter
- The store closures cited by Shellenberger can’t be pinned on increased crime under Boudin because:
A) Walgreens had already announced a plan to close 200 stores back in 2019
B) Of the 8 stores that closed in 2019 and 2020, at least half closed in 2019, making the 2020 closures unexceptional
C) The 2021 store closure rate for Walgreens is actually much lower than comparable benchmarks, like the closure rate of sister company Duane Reade in NYC over the same year, or what the dramatic drop in Walgreens stock price would suggest. It is also not much higher than the historical average of 3.7 store closures per year in SF.
I have a much more extensive writeup on all of this here:
https://applieddivinitystudies.com/sf-crime-2/
Finally, the problem with the “common sense” reasoning is that it cuts both ways. Yes, it seems reasonable to think that less punishment would result in more crime, but we can similarly intuit that spending time in prison and losing access to legal opportunities would result in more crime. Or that having your household’s primary provider incarcerated would lead to more crime. And so on. Yes, we are lacking in high-quality evidence, but that doesn’t mean we can just pick which priors to put faith in.