You know, this makes me think I understand just how academia was taken over by cancel culture.
It’s a very strong statement that academia has been taken over by cancel culture. I definitely agree that there are some very concerning elements (one of the ones I find most concerning is the University of California’s diversity statements), but academia as a whole is quite big, and you may be jumping the gun quite a bit.
I guess I can’t resist one last comment—please feel free to not reply any further.
This seems clearly true to me, but I don’t see how it explains the things that I’m puzzled by.
To put it in rough Bayesian terms: I think your priors on what other people are saying, and why, are too strong. This is making it hard to understand people who are coming from a different place, and it is producing the apparent elementary reasoning errors and anomalies you see. I wonder whether you’ve previously encountered EAs, or similar types of people, saying the kinds of things jsteinhardt is saying here and meaning them sincerely rather than performatively. I think some people, especially more online people, haven’t.
Thanks for explaining. I don’t wish to engage further here [feel free to reply though, of course], but FWIW I don’t agree that there are any reasoning errors in Jacob’s post, or any anomalies to explain. I think you are strongly focused on a part of the conversation that is of particular importance to you (something along the lines of whether people who aren’t motivated to express sympathy, or skilled at doing so, will be welcome here), while Jacob is mostly focused on other aspects.
what appears to me to be a series of anomalies that is otherwise hard to explain
What do you believe needs explaining?
This might be a minor point, but personally I think it’s better to avoid generalizing about how an entire community must be feeling. Some members of the Asian community are unaware of recent events, while others may not be particularly affected by them. Perhaps something more along the lines of “I understand many people in the Asian community are feeling hurt right now” would generally be better.
I’m curious how xccf’s comment elsewhere on this thread fits in with your position as expressed here.
Ben Hoffman’s “GiveWell and the problem of partial funding” was also posted here on the forum, with replies from OpenPhil and GiveWell staff.
I don’t have any advice to offer, but as a datapoint for you: I applaud your goal and am even sympathetic to many of your points, but even I found this post actively annoying (unlike your previous ones in this series). It feels like you’re writing a series of posts for your own benefit without actually engaging with your audience or interlocutors. I think this is fine for a personal blog, but does not fit on this forum.
There’s a Buddhists in Effective Altruism group as well.
Thanks for writing this!
It has, however, succumbed to a third — mathematical authority. Firmly grounded in Bayesian epistemology, the community is losing its ability to step away from the numbers when appropriate, and has forgotten that its favourite tools — expected value calculations, Bayes’ theorem, and mathematical models — are precisely that: tools. They are not in and of themselves a window onto truth, and they are not always applicable. Rather than respect the limits of their scope, however, EA seems to be adopting the dogma captured by the charming epithet shut up and multiply.
I wonder if this old post by GiveWell (and OpenPhil’s ED) about expected value calculations assuages your fears a bit: “Why we can’t take expected value estimates literally (even when they’re unbiased)”.
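For anyone who hasn’t read it: the core move in that post is to treat an explicit expected value estimate as noisy evidence to be combined with a prior, rather than as a number to act on directly. Here’s a minimal sketch of a normal-normal Bayesian adjustment in that spirit (the function name and all the numbers below are invented for illustration, not taken from the post):

```python
# Minimal sketch: treat a noisy cost-effectiveness estimate as evidence,
# not as a number to act on directly. Assumes a normal prior and a
# normally distributed estimate; all numbers are made up.

def bayesian_adjust(prior_mean, prior_sd, est_mean, est_sd):
    """Precision-weighted combination of a prior and a noisy estimate."""
    prior_prec = 1 / prior_sd ** 2
    est_prec = 1 / est_sd ** 2
    post_mean = (prior_mean * prior_prec + est_mean * est_prec) / (prior_prec + est_prec)
    post_sd = (prior_prec + est_prec) ** -0.5
    return post_mean, post_sd

# An explicit EV calculation claims 1000 units of good per $1M, but with
# huge error bars; our prior over interventions in general is far more modest.
mean, sd = bayesian_adjust(prior_mean=10, prior_sd=10, est_mean=1000, est_sd=500)
print(f"adjusted estimate: {mean:.1f} +/- {sd:.1f} units per $1M")
# -> roughly 10.4 +/- 10.0: the wild but uncertain estimate barely moves
# the posterior, so naively "shutting up and multiplying" on the raw
# number would badly mislead.
```

The implication the post draws out is that the shakier the estimate, the more the prior dominates, which is roughly the opposite of the “numbers over judgment” dogma being criticized above.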
Personally I think equating strong longtermism with longtermism is not really correct. Longtermism is a much weaker claim. I highly doubt most longtermists are in danger of being convinced that strong longtermism is true, although I don’t have any real data on it.
I think I would spend a substantial amount of money on prizes for people who seem to have done obviously really good things for the world. Giving $10M to Sci-Hub seems worth it. Maybe giving $5M to Daniel Ellsberg as a prize for his lifetime achievements. There are probably more people in this reference class of people who seem to me to have done heroic things but haven’t been even remotely adequately rewarded (like, it seems obvious that I would have wanted Einstein to die with at least a few million in the bank, so righting wrongs of that reference class seems valuable, though Einstein did at least get a Nobel Prize). My guess is one could spend another $100M this way.
I’m really surprised by this; I think things like the Future of Life Award are good, but if I got $1B I would definitely not think about spending potentially $100M on similar awards as an EA endeavor. Can you say more about this? Why do you think this is so valuable?
Regardless of what happens, I’ve benefited greatly from all the effort you’ve put into your public writing on the fund, Oliver.
I learned from Stories 1 and 2 - thanks for the information!
Story 3 feels like it suffers from a lack of familiarity with EA and argues against a straw version. E.g., you write (emphasis added):
As the community grew, it spread into new areas – Animal Charity Evaluators was founded in 2012, looking at animal welfare – and the community also connected to the rationalist community that was worried about AI and to academics at FHI thinking about the long-term future. Throughout all of this, expected value calculations remained the gold standard for making decisions on how to do good. The idea was to shut up and multiply. Even as effective altruism decision makers spread into areas of greater and greater uncertainty, they (as far as I can tell) have mostly continued to use the same decision-making tools (expected value calculations), without questioning whether these were the best tools.
By 2011, GiveWell had already published “Why we can’t take expected value estimates literally (even when they’re unbiased)”, arguing against, well, taking expected value estimates literally, critiquing GWWC’s work on that basis, and discussing how their solution avoided Pascal’s Mugging. There was a healthy discussion in the comments, and the cross-post on LessWrong got 100 upvotes and 250 comments.
I just voted for the GFI, AMF, and GD videos because of your comment!
Even if that’s not what edoard meant, I would be interested in hearing the answer to ‘what are things you would say if you didn’t need to be risk averse?’!
I hope ImpactMatters does well!
Meta: A big thank you to Buck for doing this and putting so much effort into it! This was very interesting and will hopefully encourage more public dissemination of knowledge and opinions.
I agree with Issa about the costs of not giving reasons. My guess is that over the long run, giving the reasons why you believe what you believe is a better strategy for avoiding convincing people of false things. Saying you believed X and now believe ~X seems likely to convince people of ~X even more strongly.