AppliedDivinityStudies
If this dynamic leads you to put less “trust” in our decisions, I think that’s a good thing!
I will push back a bit on this as well. I think it’s very healthy for the community to be skeptical of Open Philanthropy’s reasoning ability, and to be vigilant about trying to point out errors.
On the other hand, I don’t think it’s great if we have a dynamic where the community is skeptical of Open Philanthropy’s intentions. Basically, there’s a big difference between “OP made a mistake because they over/underrated X” and “OP made a mistake because they were politically or PR motivated and intentionally made sub-optimal grants.”
In general, WSJ reporting on SF crime has been quite bad. In another article they write
Much of this lawlessness can be linked to Proposition 47, a California ballot initiative passed in 2014, under which theft of less than $950 in goods is treated as a nonviolent misdemeanor and rarely prosecuted.
Which is just not true at all. Every state has some threshold, and California’s is actually on the “tough on crime” side of the spectrum.
Shellenberger himself is an interesting guy, though not necessarily in a good way.
Thanks!
Conversely, if sentences are reduced more than in the margin, common sense suggests that crime will increase, as observed in, for instance, San Francisco.
A bit of a nit since this is in your appendix, but there are serious issues with this reasoning and the linked evidence. Basically, this requires the claims that:
1. San Francisco reduced sentences
2. There was subsequently more crime
1. Shellenberger at the WSJ writes: "the charging rate for theft by Mr. Boudin’s office declined from 62% in 2019 to 46% in 2021; for petty theft it fell from 58% to 35%."
He doesn’t provide a citation, but I’m fairly confident he’s pulling these numbers from this SF Chronicle writeup, which is actually citing a change from 2018-2019 to 2020-2021. So right off the bat Shellenberger is fudging the data.
Second, the aggregated data is misleading because there were pandemic-specific effects in 2020 unrelated to Boudin’s policies. If you look at the DA office’s disaggregated data, there is a drop in filing rate in 2020, but it picks up dramatically in 2021. In fact, the 2021 rate is higher than the 2019 rate both for crime overall and for the larceny/theft category. So not only is Shellenberger’s claim misleading, it’s entirely incorrect.
You can be skeptical of the DA office’s data, but note that this is the same source used by the SF Chronicle, and thus by Shellenberger as well.
2. Despite popular anecdotes, there’s really no evidence that crime was actually up in San Francisco, or that it occurred as a result of Boudin’s policies.
- Actual reported shoplifting was down from 2019-2020
- Reported shoplifting in adjacent counties was down less than in California as a whole, indicating a lack of “substitution effects” where criminals go where sentences are lighter
- The store closures cited by Shellenberger can’t be pinned on increased crime under Boudin because:
A) Walgreens had already announced a plan to close 200 stores back in 2019
B) Of the 8 stores that closed in 2019 and 2020, at least half closed in 2019, making the 2020 closures unexceptional
C) The 2021 store closure rate for Walgreens is actually much lower than comparable metrics, like the closures of sister company Duane Reade in NYC over the same year, or the dramatic drop in Walgreens stock price. It is also not much higher than the historical average of 3.7 store closures per year in SF.
I have a much more extensive writeup on all of this here:
https://applieddivinitystudies.com/sf-crime-2/
Finally, the problem with the “common sense” reasoning is that it goes both ways. Yes, it seems reasonable to think that less punishment would result in more crime, but we can similarly intuit that spending time in prison and losing access to legal opportunities would result in more crime. Or that having your household’s primary provider incarcerated would lead to more crime. Etc etc. Yes, we are lacking in high quality evidence, but that doesn’t mean we can just pick which priors to put faith in.
I can’t speak for everyone, but will quickly offer my own thoughts as a panelist:
1. Short and/or informally written submissions are fine. I would happily award a tweet thread if it was good enough. But I’m hesitant to say “low effort is fine”, because I’m not sure what else that implies.
2. It might sound trite, but I think the point of this contest (or at least the reason I’m excited about it) is to improve EA. So if a submission is totally illegible to EA people, it is unlikely to have that impact. On “style of argument” I’ll just point to my own backlog of very non-EA writing on mostly non-EA topics.
3. I wouldn’t hold it against a submission as a personal matter, and wouldn’t dismiss it out of hand, but it’s definitely a negative if there are substantive mistakes that could have been avoided using only public information.
The crucial complementary question is “what percentage of people on the panel are neartermists?”
FWIW, I have previously written about animal ethics, interviewed Open Phil’s neartermist co-CEO, and am personally donating to neartermist causes.
Are there any limitations on the kinds of feedback we can/should get before submitting? For example, is it okay to:
- Get feedback from an OpenPhil staff member?
- Publish on the forum, get feedback, and make edits before submitting a final draft?
- Submit an unpublished piece of writing which has previously been reviewed?
If so, should reviewers be listed in order to provide clarity on input? Or omitted to avoid the impression of an “endorsement”?
Antidepressants do actually seem to work, and I think it’s weird that people forget/neglect this. See Scott’s review here and a more recent writeup. Those are both on SSRIs; there is also Wellbutrin (see Robert Wiblin’s personal experience with it here) and at least a few other fairly promising pharmacological treatments.
I would also read the relevant Lorien Psych articles and classic SSC posts on depression treatments and anxiety treatments.

Since you asked for the meta-approach: I think the key is to stick with each thing long enough to see if it works, but also do actually move on and try other things.
Ideas are like investments: you don’t just want a well-diversified portfolio, you want to intentionally hedge against other assets. In this view, the best way to develop a scout’s mindset for yourself is to read a wide variety of writers, many of whom will be quite dogmatic. The goal shouldn’t be to only read other reasonable people, but to read totally unreasonable people across domains and synthesize their claims into something coherent.
As you correctly note, Graeber is a model thinker in a world of incoherent anarchist/marxist ramblings. I think our options are to either dismiss the perspective altogether (reasonable, but tragic) or take his factual claims with a grain of salt, while acknowledging his works as a fountain of insight.
I would happily accept the criticism if there were any anarchist/marxist thinker alive today reasoning more clearly than Graeber, but I don’t think there is.
Strongly agree on this. It’s been a pet peeve of mine to hear exactly these kinds of phrases. You’re right that it’s nearly a passive formulation, and frames things in a very low-agentiness way.
At the same time, I think we should recognize the phrasing as a symptom of some underlying feeling of powerlessness. Tabooing the phrase might help, but won’t eradicate the condition. E.g.:
- If someone says “EA should consider funding North Korean refugees”
- You or I might respond “You should write up that analysis! You should make that case!”
- But the corresponding question is: Why didn’t they feel like they could do that in the first place? Is it just because people are lazy? Or were they uncertain that their writeup would be taken seriously? Maybe they feel that EA decision making only happens through “official channels” and random EA Forum writers not employed by large EA organizations don’t actually have a say?
EA Organization Updates: March 2022
FYI Samo’s forecasts on this were pretty wrong:
https://astralcodexten.substack.com/p/ukraine-warcasting?s=r
I would add that we should be trying to increase the pool of resources. This includes broad outreach like Giving What We Can and the 80k podcast, as well as convincing EAs to be more ambitious, direct outreach to very wealthy people, and so on.
Oh man, fixed, thank you.
EA Organization Updates: February 2022
It sounds wild, but AFAIK, the cotton gin and maybe some other forms of automation actually made slavery more profitable!
From Wikipedia:
> Whitney’s gin made cotton farming more profitable, so plantation owners expanded their plantations and used more slaves to pick the cotton. Whitney never invented a machine to harvest cotton, it still had to be picked by hand. The invention has thus been identified as an inadvertent contributing factor to the outbreak of the American Civil War.
across the board the ethical trend has been an extension of rights, franchise, and dignity to widening circles of humans
I have two objections here.
1) If this is the historical backing for wanting to future-proof ethics, shouldn’t we just do the extrapolation from there directly instead of thinking about systematizing ethics? In other words, just extend rights to all humans now and be done with it.
2) The idea that the ethical trend has been a monotonic widening is a bit self-fulfilling, since we no longer consider some agents to be morally important. I.e. the moral circle has narrowed to exclude ancestors, ghosts, animal worship, etc. See Gwern’s argument here:
https://www.gwern.net/The-Narrowing-Circle
One really useful way to execute this would be to bring in more outside non-EA experts in relevant disciplines. So have people in development econ evaluate GiveWell (great example of this here), engage people like Glen Weyl to see how EA could better incorporate market-based thinking and mechanism design, engage hardcore anti-natalist philosophers (if you can find a credible one), engage anti-capitalist theorists skeptical of welfare and billionaire philanthropy, etc.
One specific pet project I’d love to see funded is more EA history. There are plenty of good legitimate expert historians, and we should be commissioning them to write for example on the history of philanthropy (Open Phil did a bit here), better understanding the causes of past civilizations’ ruin, better understanding intellectual moral history and how ideas have progressed over time, and so on. I think there’s a ton to dig into here, and think history is generally underestimated as a perspective (you can’t just read a couple secondary sources and call it a day).
I agree that it’s important to ask the meta questions about which pieces of information even have high moral value to begin with. OP gives, as an example, the moral welfare of shrimp. But who cares? EA already puts so little money and effort into this, even on the assumption that they probably are morally valuable. Even if you demonstrated that they weren’t, or forced an update in that direction, the overall amount of funding shifted would be fairly small.
You might worry that all the important questions are already so heavily scrutinized as to leave little low-hanging fruit, but I don’t think that’s true. EAs are easily nerd sniped, and there isn’t any kind of “efficient market” for prioritizing high-impact questions. There’s also a bit of intimidation here: it feels wrong to challenge someone like MacAskill or Bostrom on really critical philosophical questions. But that’s precisely where we should be focusing more attention.
For some classes of meta-ethical dilemmas, Moral Uncertainty recommends using variance voting, which requires you to know the mean and variance of each theory under consideration.
How is this applied in practice? Say I give 95% weight to Total Utilitarianism and 5% weight to Average Utilitarianism, and I’m evaluating an intervention that’s valued differently by each theory. Do I literally attempt to calculate values for variance? Or am I just reasoning abstractly about possible values?
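To make the question concrete, here’s a minimal sketch of the procedure as I currently picture it (in Python, with entirely made-up choice-worthiness numbers and option names): each theory’s scores get rescaled to mean zero and unit variance across the option set, and only then combined by credence-weighted averaging. I’m genuinely unsure whether this is what the book intends, so treat it as an illustration of my question rather than a correct implementation.

```python
import statistics

# Made-up choice-worthiness scores for three options under each theory.
# Names and numbers are purely illustrative.
options = ["fund_intervention", "fund_alternative", "do_nothing"]
scores = {
    "total_utilitarianism":   {"fund_intervention": 100.0, "fund_alternative": 40.0, "do_nothing": 0.0},
    "average_utilitarianism": {"fund_intervention": -5.0,  "fund_alternative": 10.0, "do_nothing": 0.0},
}
credences = {"total_utilitarianism": 0.95, "average_utilitarianism": 0.05}

def normalize(theory_scores):
    """Rescale one theory's scores to mean 0 and variance 1 across the option set."""
    values = list(theory_scores.values())
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)  # population standard deviation over the options
    return {opt: (v - mean) / sd for opt, v in theory_scores.items()}

normalized = {theory: normalize(s) for theory, s in scores.items()}

# Credence-weighted sum of the normalized scores, per option.
expected = {
    opt: sum(credences[theory] * normalized[theory][opt] for theory in scores)
    for opt in options
}

print(expected)
print("Recommended:", max(expected, key=expected.get))
```

Is something like this the intended procedure, or is the variance reasoning meant to stay at a more informal, abstract level?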