I agree with others that this post is excellent.
Wow, I’m really impressed by how much CEA has been getting done over the past quarter or so.
Reflecting on the above, I think I sound more confident about my take here than I actually am. I do lean in the direction I describe here, but I can see why some reasonable people would disagree with me that what we’ve seen from Torres is sufficient to push him out of “actively engaging with critical arguments is good” territory and into “this is a bad actor we should just avoid associating with” territory.
But I do think that in cases like this, where there’s a credible (if not ironclad) case that someone is a bad actor, it’s especially important that you provide opportunities for pushback in the form of counter-critical reading, debate partners, et cetera.
I am, in general, in favour of inviting critical-of-EA speakers, even those whose critiques I think are unfair or ill-founded (and I agree that “of course we’re in favour of critiques as long as they’re true” is not a viable policy). I think if you gave me a list of prominent EA critics I’d be in favour of most of them being invited as speakers.
But Phil Torres has repeatedly crossed lines that I think should not be crossed. It is not okay to accuse EAs of being white supremacists without credible evidence, or to repeatedly and knowingly misrepresent your opponents in order to gain rhetorical advantage. I can’t speak directly to his personal behaviour towards members of this community, but it’s my impression that there are some quite problematic patterns there, too.
The world contains bad actors. We should be careful about labelling those who disagree with us as such, mindful of how hard it is to account for our biases when doing so. But when we do have strong evidence that someone is a bad actor, ignoring it isn’t virtuous; it’s irresponsible.
(And yes, I appreciate that very similar arguments to mine could be – and are – made in contexts where I would find their use abhorrent. I think this is just an unfortunate feature of the way the world is.)
Regarding particular arguments Phil has made, I think the bar for “writing someone off” as no longer worthy of being platformed should be extremely high.
Sounds right to me. Also sounds right that, in the modern world, repeatedly accusing a movement of being tainted with white supremacy on extremely flimsy evidence, in a manner that clearly seems to be about wielding a useful rhetorical weapon rather than anything connected to truth-seeking, clears that bar.
Yeah, I think you’ve encapsulated the two key ways people think about karma, and the difference between them. There was some discussion on LessWrong about this here.
I think probably the ideal would be for everyone to vote purely based on their reaction to the post and not at all in response to its current total. That’s probably not feasible – the information about the total is there and people will react to it – but I do think that complaining that a post has the wrong total karma (which is my reading of the top-level comment) is pushing the community towards total-based voting in a way I think isn’t great.
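To illustrate the difference with a toy model (entirely my own invention – no claim that real voters behave this simply): if everyone votes on their own reaction, the total aggregates information from every voter and scales with how many people liked the post; if everyone instead nudges the total towards the score they think the post “deserves”, the total converges to the median voter’s target, whatever the turnout.

```python
import random

def reaction_based_karma(n_voters, p_up):
    """Each voter votes +1 or -1 based only on their own reaction to the post."""
    return sum(1 if random.random() < p_up else -1 for _ in range(n_voters))

def total_based_karma(targets):
    """Each voter in turn nudges the running total towards the score they
    personally feel the post deserves."""
    total = 0
    for target in targets:
        if total < target:
            total += 1
        elif total > target:
            total -= 1
    return total

random.seed(0)
# 500 voters, 70% of whom like the post: the total scales with turnout.
print(reaction_based_karma(500, 0.7))               # roughly +200
# The same 500 voters, each anchoring on a personal target of about +30:
targets = [random.gauss(30, 10) for _ in range(500)]
print(total_based_karma(targets))                   # roughly +30, regardless of turnout
```

In the second regime the final score tells you almost nothing about how many people valued the post, which is roughly my worry about total-based voting.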
I took bupropion from about February to November 2021.
After a pretty rough transition I found it to be a quite effective antidepressant, but it gave me very bad insomnia, which I needed to take sleeping pills to overcome (this kind of sucked, as it meant I had to be very careful to take all my meds at the right time of day, and couldn’t increase the dose). I’m also quite confident that it made me more anxious.
That said, I would still definitely recommend it over SSRIs or mirtazapine, both of which have very common and serious side effects that I think are worse than bupropion’s for most people.
I’m considering trying bupropion again when I move to the US in 2022, since there is a shorter-half-life version available there that is not available in the UK.
It wasn’t obvious to me, and apparently also not to others, that your statements about “pandemics” were not meant to apply to pandemics in general.
In general, when you realise you have been communicating unclearly, it’s a bad idea to blame the people you confused.
I don’t think accusations of off-topic-ness at this point are very helpful.
You have been making strong claims about “pandemics” in general, which others have responded to by pointing out examples of pandemics that don’t fit your claims. If by “pandemics” you meant “civilisation-ending pandemics” only, I think it was on you to make that clear.
The AIDS epidemic is widely considered a pandemic (pandemics are a subset of epidemics). And one of the deadliest pandemics of the 20th century, at that.
In the 19th century, cholera, a faecal-oral pathogen, caused several pandemics, killing millions of people. It doesn’t do that any more, thanks to sanitation in rich countries, but it’s certainly not impossible for non-respiratory pathogens to achieve rapid global spread.
Everyone agrees with you that respiratory viruses are the biggest concern, and you’ve provided some good resources in this thread that I appreciate. But I do think you are being undernuanced and overconfident here.
I would hope that an event inviting someone with as controversial a record as Phil Torres would at least recommend some readings responding to his claims.
More generally, I think Torres’s recent engagement with longtermism (particularly around spurious claims of white supremacy, but not only that) crosses the line from valuable criticism into toxic personal attack, and I’m sad to see EA groups inviting him as a speaker.
I don’t see where anonea2021 has made that claim. Did you mean to write “property” instead of “state” in this paragraph? (genuine question) Either way, I’m having trouble following what you want to say with this paragraph.
Yes, it seems like some wires got crossed here.
I claimed that ancaps are “clearly trying to formulate a way for a capitalist society to exist without a state”. The intended implicature was that since anarchy = the absence of a state (according to common understanding, the dictionary definition, and etymology), it was proper to call them anarchists.
anonea2021 responded with “From the perspective of every other lineage of anarchists, private property is one of the things that enforces injust hierarchies.” I was confused about this, since it didn’t seem like a direct response to my claims. I wasn’t sure whether to read it as (a) a claim that unjust hierarchies = a state (which seemed like a bad definition of “state”), or (b) a claim that anarchism wasn’t actually about the absence of a state but instead about abolishing unjust hierarchies in general (which seemed like a bad, question-begging definition of “anarchism”, given that ~everyone wants to minimise unjust hierarchies).
I tried to respond to the superposition of these two interpretations, which probably led to my phrasing being more confusing than it needed to be.
I can confirm that this is indeed the view of every other lineage of anarchists that I’m aware of. The anarchist’s goal is to minimize unjust hierarchies. And given that private property (esp. of the means of production) is seen as one of the main causes of unjust hierarchies in today’s world, it is plausible that a movement that tries to create a society which structures itself completely along the lines of private property is seen as utterly missing the point of anarchism. Thus “anarcho-”capitalism.
As before, this begs the question. Everyone wants to minimise unjust hierarchies, so that’s not a useful description of anarchism. People who disagree about which hierarchies are unjust, what interventions are effective for reducing them, and what the costs of those interventions are, will end up advocating for radically different systems of government. Some of those will end up advocating for a society without a state, and it’s useful to refer to that subset of positions as “anarchist” even if they are very different from each other.
Anarcho-capitalism is really quite different from other forms of capitalist social organisation, and its distinctive feature is the absence of a coercive state. “Anarcho-capitalism” is thus a completely appropriate name for it – indeed, it’s hard to see what other name would fit better. Also, it’s what they call themselves, and we should heavily lean towards using people’s own self-labels.
It’s fine to just say “anarcho-capitalism is radically different from other forms of anarchism, and anarchists on the left will typically deeply disagree with its tenets”. That much is clear. Putting scare-quotes around “anarcho” is bad for the discourse in multiple ways.
Firstly, I wasn’t responding to the OP; I was responding several levels deep in a conversation between two other commenters about the responsibility of individual funders to reward critical work. I think this is an important and difficult problem that doesn’t go away even in a world with more diverse funding. You brought up “diversify funding” as a solution to that problem, and I responded that it is helpful but insufficient. I didn’t say anything critical of the OP’s proposal in my response. Unless you think the only reasonable response would be to stop talking about an issue as soon as one partial solution is raised, I don’t understand your accusation of unreasonableness here at all.
Secondly, “have more diversity in funders” is not remotely a concrete proposal. It’s a vague desideratum that could be effected in many different ways, many of which are probably bad. If that is “as concrete as [you] can imagine” then we are operating under different definitions of “concrete”.
I share the surprise and dismay other commenters have expressed about the experience you report around drafting this preprint. While I disagree with many claims and framings in the paper, I agreed with others, and the paper as a whole certainly doesn’t seem beyond the usual pale of academic dissent. I’m not sure what those who advised you not to publish were thinking.
In this comment, I’d like to turn attention to the recommendations you make at the end of this Forum post (rather than the preprint itself). Since these recommendations are somewhat more radical than those you make in the preprint, I think it would be worth discussing them specifically, particularly since in many cases it’s not clear to me exactly what is being proposed.
Having written what follows, I realise it’s quite repetitive, so if in responding you wanted to pick a few points that are particularly important to you and focus on those, I wouldn’t blame you!
You claim that EA needs to...
diversify funding sources by breaking up big funding bodies
Can you explain what this would mean in practice? Most of the big funding bodies are private foundations. How would we go about breaking these up? Who even is “we” in this instance?
[diversify funding sources] by reducing each org’s reliance on EA funding and tech billionaire funding
What sorts of funding sources do you think EA orgs should be seeking, other than EA orgs and individual philanthropists (noting that EA-adjacent academic researchers already have access to the government research funding apparatus)?
produce academically credible work
Speaking as a researcher who has spent a lot of time in academia, I think how much I care about work being “academically credible” depends a lot on the field. In many cases, I think post-publication review in places like the Forum is more robust and useful than pre-publication academic review.
Many academic fields (especially in the humanities) seem to have quite bad epistemic and political cultures, and even those that don’t often have very particular ideas of what sorts of problems & approaches are suitable for peer-reviewed articles (e.g. requiring that work be “interesting” or “novel” in particular ways). And the current peer-review system is well-known to be painfully inadequate in many ways.
I don’t want to overstate this – I think there are many cases where the academic publication route is a good option, for many reasons. But I’ve read a lot of pretty bad academic papers in my time, sometimes in prestigious journals, and it’s not all that rare for a Forum report to significantly exceed the quality of the academic literature. I don’t think academic credibility per se is something we should aim for on epistemic grounds. But perhaps you had other benefits in mind?
set up whistle-blower protection
Can you elaborate on what sorts of concrete systems you think would be useful here? Whistle-blower protection is usually intra-organisational – is this what you have in mind here, or are you imagining something more pan-community?
actively fund critical work
This sounds great, but I think is probably quite hard to implement in practice in a way that seems appealing. A lot depends on the details. Can you elaborate on what sorts of concrete proposals you would endorse here?
For example, do you think OpenPhil should deliberately fund “red-team” work they disagree with, solely for the sake of community epistemics? If so, how should they go about doing that?
allow for bottom-up control over how funding is distributed
I think having ways to aggregate small-donor preferences regarding EA grantees is valuable. I don’t think it should replace large philanthropic donors with concentrated expertise. But I think I’d have a better opinion if I had a better idea of what you were advocating.
diversify academic fields represented in EA
This isn’t something you can just change by fiat. You could modify the core messages of EA to deliberately appeal to a wider variety of backgrounds, but that seems like it has a lot of important downsides. Again, I think I would need a better idea of what exactly you have in mind as interventions to really evaluate this.
make the leaders’ forum and funding decisions transparent
These seem like two different cases. I’m generally pro public reporting of grants, but I don’t really know what you have in mind for the leaders’ forum (or other similar meetings).
stop glorifying individual thought-leaders
I’m guessing for more detail on this we should refer to the section on intelligence from your earlier post? I’m torn between sympathy and scepticism here, and don’t feel like I have much to add, so let’s move on to...
stop classifying everything as info hazards
OK, but how do you handle actual serious information hazards?
I’m on record in various places (e.g. here) saying that I think secrecy has lots of really serious downsides, and I still think these downsides are frequently underrated by many EAs. I certainly think that there is substantial progress still to be made in improving how we think about and deal with these problems. But that doesn’t make the core problem go away – sometimes information really is hazardous, in a fairly direct (though rarely straightforward) way.
I agree that more diversity in funders would help with these problems. It would inevitably bring in some epistemic diversity, and make funding less dependent on maintaining a few specific interpersonal relationships. If funders were more numerous and less inclined to defer to each other’s analyses, then someone persistently failing to get funding would represent somewhat stronger evidence that the quality of their work is actually low.
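To sketch why, with a toy model of my own (illustrative only): suppose each funder rejects good work with probability $r_G$ and bad work with probability $r_B > r_G$. Then $n$ independent rejections shift the odds towards “this work is bad” by a likelihood ratio of

$$\left(\frac{r_B}{r_G}\right)^n,$$

which grows exponentially in $n$. If instead every funder defers to the same shared analysis, those $n$ rejections amount to a single observation, and the ratio stays at $r_B/r_G$ no matter how many funders say no.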
That said, I don’t think this solves the problem. Distinguishing between bad (negatively-valuable) work and work you’re biased against because you disagree with it will still be hard in many cases, and more & better thinking about concrete approaches to dealing with this still seems preferable to throwing broad intuitions at each other.
Plus, adding more funders to EA adds its own tricky balancing acts. If your funders are too aligned with each other, you’re basically in the same position you were with only one funder. If they’re too unaligned, you end up with some funders regularly funding work that other funders think is bad for the world, and the movement either splinters or becomes too broad and vague to be useful.
I hadn’t realised that your comment on LessWrong was your first public comment on the incident for 3 years. That is an update for me.
But also, I do find it quite strange to say nothing about the incident for years, then come back with a very long and personal (and to me, bitter-seeming) comment, deep in the middle of a lengthy and mostly-unrelated conversation about a completely different organisation.
Commenting on this post after it got nominated for review is, I agree, completely reasonable and expected. That said, your review isn’t very reflective – it reads more as just another chance to rehash the same grievance in great detail. I’d expect a review of a post that generated so much in-depth discussion and argument to mention and incorporate some of that discussion and argument; yours gives the impression that the post was simply ignored, a lone voice in the wilderness. If 72 comments represent deafening silence, I don’t know what noise would look like.
[Edited to soften language.]
I believe GiveWell has corrupted itself
Is it so hard to believe reasonable people can disagree with you, for reasons other than corruption or conspiracy?
What is your credence that you’re wrong about this?
Do you believe that the following representation of the incident is unfair?
Yes, at present I do.
I haven’t yet seen evidence to support the strong claims you are making about Julia Wise’s knowledge and intentions at various stages in this process. If your depiction of events is true (i.e. Wise both knowingly concealed the leak from you after realising what had happened, and explicitly lied about it somewhere), that seems very bad, but I haven’t seen evidence for that. Her own explanation of what happened seems quite plausible to me.
(Conversely, we do have evidence that MacAskill read your draft, and realised it was confidential, but didn’t tell you he’d seen it. That does seem bad to me, but much less bad than the leak itself – and Will has apologised for it pretty thoroughly.)
Your initial response to Julia’s apology seemed quite reasonable, so I was surprised to see you revert so strongly in your LessWrong comment a few months back. What new evidence did you get that hardened your views here so much?
And that since “the actual consequences were so minor and that the alternative hypothesis (that it was just a mistake) is so plausible” this doesn’t really matter?
It matters – it was a serious error and breach of Wise’s duty of confidentiality, and she has acknowledged it as such (it is now listed on CEA’s mistakes page). But I do think it is important to point out that, other than having your expectation of confidentiality breached per se, nothing bad happened to you.
One reason I think this is important is because it makes the strong “conspiracy” interpretation of these events much less plausible. You present these events as though the intent of these actions was to in some way undermine or discredit your criticisms (you’ve used the word “sabotage”) in order to protect MacAskill’s reputation. But nobody did this, and it’s not clear to me what they plausibly could have done – so what’s the motive?
What sharing the draft with MacAskill did enable was a prepared response – but that’s normal in EA and generally considered good practice when posting public criticism. Said norm is likely a big part of the reason this screw-up happened.
I suspect I disagree with the users who are downvoting this comment. The considerations Guy raises in the first half of this comment are real and important, and the strong form of the opposing view (that anyone should be “responsible for ensuring harmful and wrong ideas are not widely circulated” through anything other than counterargument) is seriously problematic and, in my view, prone to lead to some pretty dark places.
A couple of commenters here have edged closer to this strong view than I’m comfortable with, and I’m happy to see pushback against that. If strong forms of this view are more prevalent among the community than I currently think, that would for me be an update in favour of the claims made in this post/paper.
That said, I do agree that “consistently making bad arguments should eventually lead to the withdrawal of funding”, and that this problem is hard (see my other reply to Guy below).
Why is writing a sequence of snarky rhetorical questions preferable to just making counter-arguments?
I think this might just be unavoidably hard.
Like, it seems clear that funders shouldn’t fund work they strongly & rationally believe is low-quality and/or harmful. It also seems clear that reasonably free and open dissent is crucial for the epistemic health of a research community. But how exactly funders should go about managing that conflict seems very unclear – especially in domains in which infohazards are real and serious (but in which all the standard problems with secrecy still absolutely apply).
I would be interested in concrete proposals for how to do this well, given all the considerations raised in this discussion, as well as case studies of how other communities have maintained epistemic diversity and their relevance to our use case. The main example of the latter would be academia, but that has its own set of severe pathologies – especially in the humanities, which seem to be the fields one would most naturally use as control groups for existential risk studies. I think working on concrete ideas and examples of how we can do better would be more useful than arguing round and round the quality-vs-epistemic-diversity circle.