Against the Guardian’s hit piece on Manifest
Crosspost of this on my blog
The Guardian recently released the newest edition in its smear-the-rationalists-and-effective-altruists series, this time targeting the Manifest conference. The piece, titled “Sam Bankman-Fried funded a group with racist ties. FTX wants its $5m back,” is filled with bizarre factual errors, one of which was so egregious that it merited a correction. It’s the standard sort of journalistic hit piece on a group: find a bunch of members saying things that sound bad, and then sneeringly report on that as if it discredits the group.
It reports, for example, that Scott Alexander attended the conference, and links to the dishonest New York Times smear piece criticizing Scott, as well as a similar hit piece calling Robin Hanson creepy. It then smears Razib Khan on the grounds that he once wrote for magazines that are paleoconservative and anti-immigration (like around half the country). The charges against Steve Hsu are the most embarrassing—they can’t even find something bad that he did, so they just mention half-heartedly that there were protests against him. And it just continues like this—Manifest invited X person, who has said a bad thing once, or is friends with a bad person, or has written for some nefarious group.
If you haven’t seen it, I’d recommend checking out Austin’s response. I’m not going to go through and defend each of these people in detail, because I think that’s a lame waste of time. I want to make a more meta point: articles like this are embarrassing and people should be ashamed of themselves for writing them.
Most people have some problematic views. Corner people in a dark alleyway and start asking them why it’s okay to kill animals for food and not people (as I’ve done many times), and about half the time they’ll suggest it would be okay to kill mentally disabled orphans. Ask people about why one would be required to save children from a pond but not to give to effective charities, and a sizeable portion of the time, people will suggest that one wouldn’t have an obligation to wade into a pond to save drowning African children. Ask people about population ethics, and people will start rooting for a nuclear holocaust.
Many people think their worldview doesn’t commit them to anything strange or repugnant. They only have the luxury of thinking this because they haven’t thought hard about anything. Inevitably, if one thinks hard about morality—or most topics—in any detail, they’ll have to accept all sorts of very unsavory implications. In philosophy, there are all sorts of impossibility proofs, showing that we must give up on at least one of a few widely shared intuitions.
Take the accusations against Jonathan Anomaly, for instance. He was smeared for supporting what’s known as liberal eugenics—gene editing to make people smarter or to make sure they don’t get horrible diseases. Why is this supposed to be bad? Sure, it has a nasty word in the name, but what’s actually bad about it? A lot of people who think carefully about the subject will come to the same conclusions as Jonathan Anomaly, because there isn’t anything objectionable about gene editing to make people better off. If you’re a conformist who bases your opinion about so-called liberal eugenics (a terrible term for it) on the fact that it’s a scary term, you’ll find Anomaly’s position unreasonable, but if you actually think it through, it’s extremely plausible, and most philosophers even agree with it. Should philosophy conferences be disbanded because too many philosophers have offensive views?
I’ve elsewhere remarked that cancel culture is a tax on being interesting. Anyone who says a lot of things and isn’t completely beholden to social consensus will eventually say some things that sound bad. The only people safe from cancel culture are those who never have interesting thoughts, who never step outside of the Overton window, who never advance beyond irrational and immoral societal norms, for they are beholden to the norms of their society.
Lots of people seem to treat associations like a disease—if you associate with people who think bad things, they’ll infect you with the badness bug, and then you’ll become bad too (this seems to be the reasoning behind the Guardian smear piece). If I accepted this, I’d have to be a hermit in the wilderness, because I think almost everyone either thinks or does bad things—specifically, people who eat meat, I think, either hold repugnant views or do things they know to be very wrong.
The association-as-disease model is crazy! It’s valuable to associate with people who think bad things. Has Hanania said some things I regard as objectionable? Of course. Does this mean I think Hanania should be permanently shunned? No—he’s an interesting guy who I can learn a lot from.
No one has ever convincingly explained why one shouldn’t interact with bad people or invite them to their conferences (even though it’s taken as axiomatic by lots of people). Suppose the Manifest crew really invited some bad hombres. So what? Why not have bad people give talks? While maybe the bad people will bring the good people over to the dark side, maybe the good people will bring the bad people over to the light side. For this reason, I’d expect people with radical views to be depolarized by an event like Manifest, if it has any impact on one’s views.
The Guardian hit piece was written by Jason Wilson and Ali Winston. Maybe Wilson and Winston only go to conferences where no one thinks anything offensive (and perhaps where everyone’s last name starts with “Wi” and has an “o” as the second-to-last letter). But if that’s so, then they’ve only hung out with prudish bores. Anyone who thinks for themselves about issues will think some things that they wouldn’t want to utter at a liberal dinner party.
This shouldn’t be surprising. Social norms are often wrong. Just like old social norms were racist and sexist and homophobic, we should expect modern consensus views to often be similarly in error. This means that even if a person believed all and only true things, they’d end up constantly disagreeing with social norms. They’d end up thinking things that the ~Wilsons wouldn’t like—that they’d think are worthy of cancellation.
Are there any philosophers who don’t think any offensive things about ethics? I can’t think of any. Singer, one of the most influential ethicists, has been so controversial that he’s drawn protests, and supports infanticide in some cases. Should we want groups that censor people like Singer—people who diverge from mainstream groupthink?
If, as I’ve argued before, people who are interesting and write a lot will generally say controversial things, then stifling those who have controversial views will produce either people who self-censor or people who are not interesting. It will produce a world devoid of free thinkers who write a lot, a world filled with the type of Midwit who determines their beliefs by judging what things sound good rather than what is true.
The people at Manifest weren’t even disproportionately right-wing. Scott isn’t right-wing—neither were most of the attendees. But they provided enough fodder for a Guardian hitpiece because they had the unfortunate property of being interesting, of thinking for themselves. If we don’t want a society of boring conformists, we’ll have to tolerate that sometimes conferences will have people who we disagree with. The fact that in 2024, the Guardian is still churning out these misleading, low-info hitpieces in an attempt to cancel people is shameful.
Certainly the Guardian article had a lot of mistakes and issues, but I don’t at all buy that there’s nothing meaningfully different between someone like Hanania and most interesting thinkers, just because forcing consistency of philosophical views will inevitably lead to some upsetting conclusions somewhere. If I were to “corner someone in a dark alleyway” about population ethics until I caught them in a gotcha that implied they would prefer the world was destroyed, this updates me ~0 on the likelihood of this person actually going out and trying to destroy the world or causing harm to people. If I see someone consistently tweet and write in racist ways despite a lot of criticism and push-back, this shows me important things about what they value on reflection, and provides fairly strong evidence that this person will act in exclusionary and hateful ways. Trying to say that such racist comments are fine because of impossibility theorems showing everyone has to be committed to some weird views doesn’t at all engage with the empirical track record of how people who write like Hanania tend to act.
Even IF Hanania is not personally discriminatory, he is campaigning for the repeal of the single most famous piece of American legislation designed to outlaw racist discrimination.
I think posts like this exhibit the same thought terminating cancel culture behaviour that you are supposedly complaining about, in a way that is often inaccurate or uncharitable.
For example, take the mention of Scott Alexander:
Now, compare this to the actual text of the article:
Now, I get the complaint about the treatment of Robin Hanson here, and I feel that “accused of misogyny” would be more appropriate (outside of an op-ed). But with regard to Scott Alexander, there was literally no judgement call included.
When it comes to the NYT article, very few people outside this sphere know who he is. Linking to an article about him in one of the most well known newspapers in the world does not seem like a major crime! People linking to articles you don’t like is not cancel culture. Or if it is, then I guess I’m pro cancel culture, because the word has lost all meaning.
It feels like you want to retreat into a tiny, insular bubble where people can freely be horribly unpleasant to each other without receiving any criticism at all from the outside world. And I’m happy for those bubbles to exist, but I have no obligation to host your bubble or hide out there with you.
Linking to hit pieces is not cancel culture, but if your objection to some group is “look at all these bad people they associate with,” and you then link to poorly reasoned, poorly informed hit pieces, that is bad.
I think the NYT’s criticisms of Scott were basically fair even if some of the details were off, but I don’t think you can reasonably imply that someone linking to it while writing a scathing criticism of groups and views Scott is associated with is linking it just because it is in the NYT. They are obviously trying to get the reader to draw negative inferences about Scott and people and movements associated with him.
I am surprised by some of the things written here, and this line especially stood out to me:
Based on the discussions at the Yarvin afterparty (which was organised by Curtis Yarvin, not Manifest), I’d say there was a significant overrepresentation of very, very right-wing people at Manifest (that is, the right-wing tail of the political distribution was overrepresented; I’m not making a statement about more moderate right-wingers or left-wingers). This sentence felt especially surprising since you were there at the afterparty.[1] To be fair, there were also people there who weren’t right-wing at all, and when I reached out to you to ask about this, you said that you didn’t find many to say right-wing things, and that only a small percentage of Manifest attendees were invited to the afterparty.
There is a chance that people around me said a non-representatively large number of bigoted things, but I think it is more likely that your experience is explained by people avoiding more incendiary topics around an in-group-famous, non-right-wing blogger such as yourself. I am not very confident in this, though.
I asked Omnizoid for his permission to mention this.
Depends what we mean in proportion to? I guess most of them will vote democrat.
And again, to mention the Yarvin afterparty seems importantly different.
The article was obviously terrible, and I hope the listed mistakes get corrected, but I haven’t seen a request for correction on the claim that CFAR/Lightcone has $5 million of FTX money and isn’t giving it back. Is there any more information on whether this is true and, if so, what their reasoning is?
Lightcone doesn’t have $5M of FTX money! I’ve generally been very transparent about this, and you can see a breakdown of our FTX funding in e.g. this old comment of mine (and also some others that I could probably dig up).
Lightcone Infrastructure (fiscally sponsored by CFAR) has received around $4M in grants from FTX. By the time FTX collapsed, almost all of the grant funding was spent on the programs that FTX wanted to support (the relevant work was mostly on the Lightcone Offices, LessWrong, and the AI Alignment Forum). We offered FTX a settlement of a large fraction of Lightcone’s assets and cash reserves (~$700k / ~$900k, and more than what wasn’t already spent or legally committed by the time FTX collapsed), which they rejected without any kind of counteroffer. Now they have filed a formal complaint, which we’ll fight.
The article and the FTX complaint also includes an additional $1M, which was an escrow deposit that FTX covered for us. We never had any ownership over that money and it’s just sitting with the title company somewhere, and I don’t know why the FTX estate hasn’t picked it up. We have tried to put them in contact. I am sad to see they are still including it in their complaint, since as far as I can tell there is really no way in which Lightcone has or ever had that money.
Happy to try to answer any other questions people might have (though commenting on ongoing litigation is a bit messy).
Why is the escrow deposit still sitting somewhere? Some quick online research (so take it with a grain of salt) makes it sound like the escrow process usually takes 4 to 8 weeks in California—so this seems significantly long, in comparison.
Can you clarify when you received these grants and the escrow money? The complaint filed by FTX (documents here, for anyone interested) has the dates of transfers as March 3, July 8, July 13, August 18, September 20, and October 3, all in 2022—so well within the timeframe that might be subject to clawbacks, and well within the bankruptcy lookback period. (For a comparison point, EV US and EV UK paid the FTX estate an amount equal to all the funds the entities received in 2022.)
Why would you not proactively return this money or settle with the FTX estate, given the money came from FTX and could have been originally obtained in fraudulent ways? My prior is that you (Oliver Habryka) have written multiple times on the Forum about the harm EA may have caused related to FTX and wish it could have been prevented, so somehow it seems strange to me that you wouldn’t take the opportunity to return money that came from FTX, especially when it could have been obtained in harmful, unethical ways.
Did you in fact ignore FTX’s attempts to contact you in 2023, as the complaint says? And if so, why?
I also think it’s worth pointing out that in bankruptcy cases, especially regarding clawbacks, the question of whether you have a legal obligation to return the money isn’t a question of whether you currently have the $5M of FTX money sitting around or whether you’ve already allocated or used it. Demonstrating that you’ve spent the funds on legitimate charitable activities might strengthen your case, but that doesn’t guarantee protection from clawback attempts.
I am also confused (and very frustrated) by this. The key thing to understand here is that the escrow was due to be returned right around the time when FTX went bankrupt (the sale was completed on the 4th of November; FTX filed for bankruptcy November 11), so none of my contacts at FTX were there to facilitate the return of the escrow, and there was presumably enough chaos for multiple weeks that the escrow company’s attempts to reach North Dimension Inc. at their usual address and contact information were unsuccessful. After a few weeks, the escrow company asked Lightcone for advice on how to return the funds, and we gave them the contact information we had.
Yes, the rough timeline here is accurate (I didn’t double check the exact dates and am not confirming that in detail here). All the funds were received in 2022.
Well, the key problem was that by the time FTX went bankrupt and it became clear there was a lot of fraud at FTX, the money had been spent or committed in contracts, so there wasn’t much opportunity left to return the funds. Indeed, by early 2023, when the liabilities from our renovation project had cleared and everything was paid, Lightcone had completely run out of money and was financially in the red until around Q3 2023.
I did fundraise explicitly for money to return to the FTX creditors during our 2023 fundraising, from both Open Philanthropy and the Survival and Flourishing Fund, our two biggest funders. Open Philanthropy declined to give us any funds for settlement or return purposes. SFF didn’t explicitly tell us whether the money they gave us was for settlement or return purposes, but we received barely enough money from them during the 2023 grant round to cover our existing liabilities (and the settlement we offered FTX was greater than the amount I think one could conceivably say we fundraised for this purpose).
If Lightcone had been in a position to return funds proactively I likely would have done it.
Yes, or like, some of them. The central reason here was just that everyone I talked to told me to get representation by a lawyer before talking to FTX, since given that the funds had already been spent, there was a quite high chance there would be some kind of suit or more complicated settlement.
I decided I would be very thorough in my choice of lawyer due to the high stakes, and so I took a long time (multiple months, IIRC) interviewing different bankruptcy lawyers. During that time I asked every lawyer I interviewed how we should respond to the FTX communications. I think literally every lawyer said that we should wait to respond, on the basis that there was still a huge amount of uncertainty and lack of clarity about whether the FTX estate was actually in a position to settle these claims, and that until that issue was cleared up, there wouldn’t be much use in talking to them, and any information I gave them would be used against me.
I now honestly think the lawyers’ advice not to respond was kind of a mistake (and more broadly, I think the excuse of “my lawyer told me so” is a bad excuse for immoral behavior in general, though I personally don’t feel much guilt about my decision-making process here, since it was a very high-stakes situation, there was consensus among the many lawyers I talked to, and I did not have any experience whatsoever in navigating legal situations like this).
Yep, I am well aware. My current take is that bankruptcy law is kind of broken here, and indeed, there are multiple judges who upon delivering judgements against nonprofits that seemed unfair even to them (but where the bankruptcy law gave them little choice) have called for bankruptcy law to be changed to be more protective of nonprofits here.
The legal situation for nonprofits is unfortunate, but I think the potentially workable patches wouldn’t help an org in Lightcone’s shoes very much. IIRC, one state shortened its lookback period for charities after many of them got burnt in a fraud.
But all these transfers were within ~7 months. Most of us would prefer our monies go to charity rather than our creditors, so a very short lookback period would incentivize throwing tons of money at charity once a business or person realized the ship was going to sink.
Protection based on a donor’s good faith wouldn’t help. Protection up to a percentage of profits wouldn’t help given FTX claimed tons of losses on its taxes. Protection based on consistency with a well-established pattern of giving from that donor wouldn’t help.
Equitably, my general take in these situations is that the charity got some value toward its charitable mission out of the expended donation (although perhaps not the full dollar value). The victims got $0 out of the transaction. So I’d be hesitant to endorse any reforms that didn’t produce some meaningful recoveries for victims in a case like this.
(I have lots of takes here, but my guess is I shouldn’t comment. Overall, agree with you that it’s a tricky situation of the law. I disagree that there aren’t small changes that would help. For example, I think if the Religious Liberty and Charitable Donation Protection Act of 1998 could have considered foundations or corporations as part of its definition of “natural person”, that would have been a substantial improvement. But again, I sadly can’t comment here much, which I find really annoying, also in parts because I find this part of the law quite fascinating and would love to talk about it)
We may not disagree: I had specific elements of Lightcone’s situation in mind when I said “help an org in Lightcone’s shoes very much.” That situation is unfortunately not rare, given the charities that ended up with Madoff-tainted money and Petters-tainted money.
So in that context, the RLCDAP amendments to 11 USC 548 won’t help a charity with an SBF/Madoff/Petters-type problem, because they don’t protect charities where the debtor had an “actual intent to hinder, delay, or defraud” creditors under (a)(1)(A). Another reason a small fix might not help here: if Congress were to extend RLCDAP protections to corporations, it would need to decide how big the safe harbor should be. Although RLCDAP gives individuals some room to play bad-faith games, that room is usually fairly limited by the usual relationship between individuals’ incomes and assets. I don’t think it would be reasonable to protect nearly as much as FTXFF was handing out under the circumstances. Whatever formula you choose, it has to work for low-margin, high-volume companies (think grocery stores) as well as tech-like companies.
I would have to think more about the extent to which—at least where large donations are involved—strong protection should be dependent on the existence of an acceptable comprehensive audit of the company-donor. Where that isn’t the case, and the donations are fairly large, I might focus relatively more on education of the nonprofit sector about the risks and relatively less about writing them an insurance policy on the creditors’ backs.
In part, I think I’m much more accepting of charitable donations by insolvent individuals than by insolvent corporations. A decent number of individuals are insolvent; I certainly would not suggest that they should not donate to charity and instead have some sort of ethical duty to run their lives in a way that maximizes creditor recoveries. In contrast, I am more willing to assign an insolvent corporation much more rigorous duties to creditors, and so am considerably more willing to call out dissipation of assets away from creditors.
I mean, I would really love to discuss this stuff with you, but I think I can’t. Maybe in a year or so we can have a call and discuss bankruptcy law.
Yeah, I agree with that. Mainly, I think I want to signal to the audience that the situation in which orgs find themselves reflects thorny policy tradeoffs rather than a simple goof by Congress. Especially since the base rate of goofs is so high!
Are you able to say whether the other relevant defendants—CFAR and Rose Garden LLC—also made offers, or whether accepting LI’s offer would have required the estate to surrender its claims against them?
I’m obviously not going to get into the legal side, but your comment hints at various ethical or equitable arguments for why ~17.5 cents on the dollar was a fair offer for the estate’s claim. To the extent it would be a global settlement, LI’s own ability to pay the potential judgment seems of little relevance without additional relevant facts.
Given litigation, I will obviously understand and not draw adverse inferences if you decide not to answer.
I don’t think I can comment on this because it risks breaking legal privilege, though I am not confident (also, sidenote, I really really hate the fact that discussing legal strategy in the US risks breaking privilege, it makes navigating this whole situation so much worse).
As a relevant clarification: Lightcone Infrastructure is a fiscally sponsored project of CFAR. In-general FTX has directed all of its communications at CFAR, and made no distinction between CFAR and the fiscally sponsored projects within it.
Makes sense—I wouldn’t have even asked about possible details if you hadn’t mentioned the settlement offer.
The complaint sues a Lightcone Infrastructure, Inc., “a Delaware non-profit corporation with its principal place of business at 270 Telegraph Avenue, Berkeley, California, 94705.” Am I correct in thinking that, as of 10/13/22, Lightcone now possesses a separate corporate existence (which many fiscally sponsored projects do not)?
Lightcone Infrastructure Inc. has so far never done anything. It’s a nonprofit that I incorporated with the intent of being a home for future projects of mine, but doing anything with it was delayed because of the whole FTX thing. The most real thing it has done is me depositing $50 in its bank account.
I agree the article was pretty bad and unfair, and I agree with most things you say about cancel culture.
But then you lose me when you imply that racism is no different than taking one of the inevitable counterintuitive conclusions in philosophy thought experiments. (I’ve previously had a lengthy discussion on this topic in this recent comment thread.)
If I were the organizer of a conference where I wanted interesting and relevant ideas to be discussed, I’d still want there to be a bar for attendees, to avoid the problem Scott Alexander pointed out (someone else recently quoted this in this same context, so hat tip to them, though I forget who it was):
I’d be in favor of having the bar be significantly lower than many outrage-prone people are going to be comfortable with, but I don’t think it’s a great idea to have a bar that is basically “if you’re interesting, you’re good, no matter what else.”
In any case, that’s just how I would do it. There are merits to having groups with different bars.
(In the case of going for a very low one, I think it could make sense to think about the branding and whether it’s a good idea to associate forecasting in particular with a low filter.)
Basically, what I’m trying to say is I’d like to be on your side here because I agree with many things you’re saying and see where you’re coming from, but you’re making it impossible for me to side with you if you think there’s no difference between biting inevitable bullets in common EA thought experiments vs “actually being racist” or “recently having made incredibly racist comments.”
I don’t think I’m using the adjective ‘racist’ here in a sense that is watered down or used in an inflationary sort of way; I think I’m trying to be pretty careful about when I use that word. FWIW, I also think that the terminology “scientific racism” that some people are using is muddying the waters here. There’s a lot of racist pseudoscience going around, but it’s not the case that every claim about group differences is definitely pseudoscience (it would be a strange coincidence if all groups of all kinds had no statistical differences in intelligence-associated genes). However, the relevant point is that group differences don’t matter (it wouldn’t make a moral difference no matter how things shake out, because policies should be about individuals and not groups), and that a lot of people who get very obsessed with these questions are actually racist. The ones who aren’t (like Scott Alexander, or Sam Harris when he interviewed Charles Murray on a podcast) take great care to distance themselves from actual racists in what they say about the topic and what conclusions they want others to draw from discussion of it. So, I think if someone were to call Scott Alexander and Sam Harris “scientifically racist,” that seems like it’s watering down racism discourse, because I don’t think those people’s views are morally objectionable, even though many people’s views in that cluster are.
Small nitpick; this is a typo or ‘connection’ is something I’m not familiar with in this context.
Executive summary: The Guardian’s recent hit piece on the Manifest conference is filled with factual errors and unfairly smears attendees for having controversial views, but associating with people who have differing opinions is valuable and attempting to cancel them will lead to a society of boring conformists.
Key points:
The Guardian article smears Manifest attendees by cherry-picking controversial statements or associations, without engaging with their actual views.
Most people, if pressed, will express some views that sound bad out of context. Thinking deeply about topics like morality often leads to accepting unsavory implications.
Cancel culture punishes people for being interesting and saying things outside the Overton window. Only boring conformists are safe.
Associating with people who have controversial views is valuable and can lead to depolarization. Shunning them is not justified.
Social norms are often wrong, so even a perfectly rational thinker would constantly disagree with them. Stifling controversial views will lead to self-censorship and uninteresting groupthink.
The Manifest attendees weren’t even disproportionately right-wing. The Guardian is unfairly trying to cancel them for being interesting and thinking for themselves.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Well said. I love the “cancel culture is a tax on being interesting.”
That’s a pretty low bar for interesting.
And how many left-wingers were there? To the left’s credit, they are a bit more socially intelligent, so as not to mix and mingle with the crowd of tired cranks with the same tired ideas, but I assure you there are plenty of desperate cranks on the left who are “down bad”.
If you really want fresh thinkers with controversial ideas, have some actually controversial left-wing speakers whose ideas—unlike those of “controversial” right-wingers—actually are controversial and have the chance to change the world.
You kid yourself if you think the libertarian & right-wing speakers on Manifest’s guest list weren’t themselves also conformist.
EA began as an intellectual movement, and over the years it has watered this down to let in any and all contrarian rejects from other movements. And it shows. EA’s crypto-exchange poster boy couldn’t grasp probability concepts like the Kelly Criterion, and our conferences are full of cranks, the alt-right, & pseudoscience. Being contrarian should not be mixed up with being EA.
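(For readers unfamiliar with the reference: the Kelly Criterion gives the fraction of your bankroll to stake on a repeated favorable bet so as to maximize long-run growth. A minimal sketch, with a function name of my own choosing:)

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly-optimal fraction of bankroll to stake on a bet that
    pays b-to-1 on a win, with win probability p.
    Formula: f* = p - (1 - p) / b. A negative result means don't bet."""
    if not (0.0 < p < 1.0) or b <= 0.0:
        raise ValueError("need 0 < p < 1 and b > 0")
    return p - (1.0 - p) / b

# A coin that wins 60% of the time at even odds: stake 20% of bankroll.
print(round(kelly_fraction(0.6, 1.0), 6))  # -> 0.2
```

Betting more than the Kelly fraction ("over-betting") lowers long-run growth and raises the risk of ruin, which is the substance of the criticism alluded to above.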
I do think Hanania is interesting. He’s a pro immigration conservative, for instance, and constantly writes about things the right is wrong about. In particular, I found his much-maligned essay about pronouns and genocide pretty fascinating—a shockingly honest look into the unflattering bits of his own psychology.
I think a lot of the people there were prototypical gray tribe members—a bit left of center but with lots of weird heterodox views. Scott, for instance, is left of center, so is Nate Silver, so is Kelsey Piper. I also got an invite and I’m thoroughly left of center—albeit a bit heterodox—having praised Chomsky at some length, written critically about U.S. foreign policy on about a dozen different occasions, and written in support of open borders.
I don’t think that Hanania is a conformist, for instance. This shtick of “actually the non-conformists are the real conformists,” is silly.
Worth noting that Manifest wasn’t an EA conference. It just had some EA people there who wanted to go to a cool conference. Not sure what EA is supposed to do about that.
So… people who are neoliberal to centrist, with Scott (as others have pointed out on this forum) partial to race pseudoscience. Wow, what confronting fresh ideas.
Have some anarchists, socialists, communists. Have some actually brave thinkers. Be challenged. What are y’all afraid of?
Let me try to steelman this:
We want people to learn new things, so we have conferences where people can present their research. But who to invite? There are so many people, many of whom have never done any studies.
Luckily for us, we have a body of people who spend their lives researching and checking each other’s research: academia. Still, there are many academics, and there are only so many time slots you can assign before you’re filled up; ideally, we’d be representative.
So now the question becomes: why was the choice made to spend so many of the limited time slots on “scientific racists”, which is a position that’s virtually universally rejected by professional researchers, while topics like “socialism”, which has a ton of support in academia (e.g., the latest philpapers survey found that when asked about their politics, a majority of philosophers selected “socialism” and only a minority selected “capitalism” or “other”), tend to get little to no time allotted to them at these conferences?
I agree with the point you’re actually making here, namely that people invite racists but not socialists because they like racism better than socialism or the other alternative viewpoints they could have invited people to present, but I do have a nitpick:
While I’d much rather have (most, non-Stalinist) socialists than scientific racists, I’d say economists are the most relevant experts on economics, and they seem to be down on socialism, except perhaps some non-mainstream market variants. Other social scientists also have relevant expertise, and I think more of them are socialists. But insofar as philosophers express reasonably high confidence in socialism by picking it in the PhilPapers survey even when “don’t know” is an option, while among economists socialism is (I think?) quite fringe, this feels like the kind of anti-science/anti-empiricism arrogance that philosophers are often accused of, usually quite unfairly. But then, I am not a socialist.
I did try to find a survey for sociology, political science, and economics, not only today but also when I was writing my post on market socialism (I too wondered whether economists are more in favor of market socialism), but I couldn’t really find one. My guess is that the first two would be more pro-socialism and the last more anti, although it probably also differs from country to country depending on their history of academia (e.g. whether they had a red scare in academia or not).
This is probably partly because of the different things they’re researching. Economics tends to look at things that are easier to quantify, like GDP and material goods created, which capitalism is really good at, while philosophers tend to look at things that capitalism seems to be less good at, like alienation, which is harder to quantify (though proxies like depression, suicide and loneliness do seem to be increasing).
Not to mention, they might agree on the data but disagree on what to value. Rust & Schwitzgebel (2013) did a survey of philosophy professors specializing in ethics, philosophy professors not specializing in ethics, and non-philosophy professors. 60% of ethicists felt eating meat was wrong, while just under 45% of non-ethicists agreed, and only 19.6% of non-philosophers thought so. I personally think one of the strongest arguments against capitalism is the existence of factory farms. With such numbers, it seems plausible that while an average economist might think of the meat industry as a positive, the average philosopher might think of it as a negative (thinking something akin to this post).
I don’t see why we’d expect fewer factory farms under socialism, except via us being poorer in general. And “make everything worse for humans to make things better for animals” feels a bit “cartoon utilitarian supervillain,” even if I’m not sure what is wrong with it. It’s also not why socialists support socialism, even if many are also pro-animal. Put differently: even if socialism worked as intended, why would factory farming decrease?
The comment was about how factory farms are an argument against capitalism, not about why they are an argument for other economic philosophies, so one can’t conclude from it that some other specific economic philosophy (e.g., socialism) doesn’t face the same argument. It could be, for example, that factory farms are an argument against both capitalism and socialism, but not against mutualism.
There was no claim that this is why socialists support socialism, but even if there was, it doesn’t really matter for the argument. Even if we could conclude from “factory farms are an argument against capitalism” that “socialism is good for animal welfare”, why would the motivation of socialists matter? Even if socialists created better animal welfare only unintentionally, wouldn’t that still be one reason to support them? (Assuming you care about the consequences of policy more than the virtues of the participants)
Lastly, I want to talk about this claim. But less so to address you, and more so to address the forum users.
I don’t think that socialism would make us poorer, at least not in the long run. The dynamics of capitalism are very destructive (e.g. negative externalities, regulatory capture, planned obsolescence...), and the Nordic countries, with their more socialist policies, tend to do better.
Socialist firms have been shown in meta-analyses to be no less productive than capitalist firms, while being vastly more resilient (among many other beneficial attributes), so they would help the economy grow more in the long run, making us richer. This is not all there is to say; there are many more arguments and many more factors to consider, but in the end, why bother?
You could spend time and energy crafting long chains of arguments with lots of citations and data for unpopular positions (even when, as in this case, you weren’t the one who made the assertion), only to get vastly less karma/voting power than people who simply assert the popular opinion—here, the bare assertion “socialism would make us poorer,” without any sources or arguments. Which, by the way, is fine: this is an internet comment, not an academic paper. But I’ve experienced the dynamics on this forum for years. If one were to reply that it wouldn’t make us poorer, also without sources, or even with some sources, one would lose karma/voting power. Then another person would jump in and point out that the reply didn’t cover literally every aspect of the economy and avoided talking about this or that part. That too is fine; demands for rigor are good. But the forum as a whole more often than not makes isolated demands for rigor, and the original anti-socialism assertion rarely if ever faces such a demand.
Case in point: the comment you’re replying to. It didn’t even assert that socialism is better; it just posted some studies and data from which one could infer that he is pro-socialism, and that was enough to make him lose karma/voting power, while your stronger assertion without any studies/data (which, again, is fine) gets lots of karma.
(Again, while the first two points are aimed at your reply, this last point is aimed at the broader EA forum user base.)
We’re afraid of people writing hit pieces about us and then boycotting and shunning us because of who we associate with.
Well, half of you do and half of you don’t. The OP, for example, is defending the Manifest guestlist.
And for people like him who want to defend these conferences for having interesting, controversial people: why not actually invite some confronting, controversial people? Instead it’s always libertarians and neoliberals spouting the same tired old race pseudoscience.