My experience at the controversial Manifest 2024
My experience at the recently controversial conference/festival on prediction markets
Background
I recently attended the triple whammy of rationalist-adjacent events: LessOnline, Summer Camp, and Manifest 2024. For the most part I had a really great time, and had more interesting conversations than I can count. The overlap between the attendees of each event was significant, and the topics discussed were pretty similar.
The average attendee for these events is very smart, well-read, and most likely working in tech, consulting, or finance. People were extremely friendly, and in general the space initially felt like a high-trust environment approaching that of an average EAGlobal conference (which also has overlap with the rational-ish communities, especially when it comes to AI risks), even if the number of EA people there was fairly low–the events were very rationalist-coded.
Nominally, Manifest was about prediction markets. However, the organizers had selected multiple quite controversial speakers and presenters, who in turn attracted a significant number of attendees primarily interested in these controversial topics, the most prominent of which was eugenics.
This human biodiversity (HBD) or “scientific racism” curious crowd engaged in a tiring game of carefully testing the waters with new people they interacted with, trying to gauge both how receptive their conversation partner was to racially incendiary topics and to what degree they were “one of us”. The ever-changing landscape of euphemisms for I-am-kinda-racist-but-in-a-high-IQ-way seems to have converged on a stated interest in “demographics”–or in less sophisticated cases the use of edgy words like “based”, “fag”, or “retarded” is more than enough to do the trick. If someone asks you what you think of Bukele, you can already guess where he wants to steer the conversation.
The Guardian article
I
While I was drafting this post, The Guardian released a flawed article on Lightcone, the organization that owns the event venue Lighthaven, which a certain lawsuit claims was partially bought with FTX money (a claim Oliver Habryka from Lightcone denies). The article detailed some of the scientific racism special guests these past three events had.
In the past, The Guardian has released a couple of articles on EA that were a bit hit-piece-y, or tried to connect nasty things that are not really connected to EA at all to the movement, framing them as representative of it as a whole. Sometimes the things presented were relevant to other loosely EA-connected communities, or some of the people profiled had tried to interact with the EA community at some point (like the Collinses, who explicitly do not identify as EA despite what The Guardian says; the Collinses’ attempt to present their case for pro-natalism on the EA Forum was met mostly with downvotes). But a lot of the time the things presented were non-central at best, and I haven’t seen strong evidence that would suggest that the Lightcone team is guilty of any wrongdoing.
Despite this, I think the core claim of “the event platformed a lot of problematic people” holds true. I might object to some of the things in it (describing Robin Hanson as misogynistic in particular strikes me as a bit unfair, even if he has written some things in bad taste), but for the most part I agree with how it describes Manifest. What is up with all the racists?
II
The article names some people who are quite connected to eugenics or HBD, or are otherwise highly controversial. It missed quite a few people[1], including a researcher who has collaborated widely with the extreme figure Emil O. W. Kirkegaard; the personal assistant of the anti-democracy, anti-equality figure Curtis Yarvin (Yarvin himself wasn’t attending, although he did organize an afterparty at his house for Manifest attendees); and the highly controversial excommunicated rationalist Michael Vassar, who has been described as “a cult leader” involved in some people having psychotic breaks due to heavy psychedelics use[2] (according to an organiser, Vassar did not end up coming to the event, but people involved with him were present who said he might be dropping by and that he had bought a ticket). Manifest co-organiser Saul expands on the Vassar situation here.
Among the people listed as special guests for LessOnline and Manifest I would be comfortable putting a total of eight people under the eugenics/HBD label. There might be more, but I am not an expert. In addition to those eight, multiple prominent people taking part in the three events as attendees clearly fall under this umbrella. I am not tallying Scott Alexander or Steve Hsu here, although both of them seem at least sympathetic to some subset of HBD beliefs (I do get that this might be a controversial opinion to express here, and if you feel offended by it, feel free to ignore this aside).
The race science people were fairly welcoming. As long as you didn’t react to their hot takes with a strong emotional outburst, didn’t use too many leftie shibboleths, and had a modicum of social skills, you could, like, hang out. If you were fun to hang around with, you were probably invited to the Curtis Yarvin afterparty as well. The party featured almost every single person from the three events who fell under the category “vaguely racist” (the more cringy or overtly racist ones weren’t invited), along with many people who were probably there just out of sheer curiosity (these included some pretty famous people within the community, but I am not naming names). Newbies thought that the party was kind of lame, and the controversial things being said were only about half a notch worse than what was already being said after midnight during Manifest, when people didn’t have as many social guards up. Anti-trans sentiment, however, appeared to be way higher during the Yarvin party, even if the race stuff was not much worse. And wow, some people really do idolize this Yarvin dude.
Takeaways
I do not live in the Bay Area. I do not know how representative these events were of Bay Area rationalism. But I do think that these events featured a very problematic undercurrent in the rationalist community.
It is probably wise to have a stronger separation between EA and rationalism. Many people attend both rationalist meetups and EA meetups; out of all the communities that overlap with the EA community, the rationalist community has the largest intersection. I think the EA community should strive to hold itself to a higher standard, and to the degree that we can affect what goes on in the rationalist community, we should at least demand that they not platform highly controversial figures with ideas way outside the Overton window.
Yes, it is true that these events weren’t EA events per se, but they featured prominent EAs, forecasting is sometimes considered a niche EA cause area, and rationalists and AI safety people are extremely intertwined. EA will be associated with what happens at events like these. If we don’t want these things to be associated with EA, we should add some distance. Some of the good things that might come out of a strong interest in genetics can be presented in a way that does not invite controversy. A hyperbolic example, not strictly about anyone in particular: want to create healthier and smarter babies? Great! Having speakers who choose to express opinions on the Holocaust as a eugenic event during that presentation? Not so great! And now a non-insignificant portion of the audience is people who were attracted by the controversy. Even the good parts of a controversial idea are ruined if you have the wrong person talking about it.
[Edit: People have begun to object to this part of the text, since it was quite clear who I was loosely referring to here. I regret using this as an example, and I think the presence of the person holding this specific talk was way more justified and less likely to attract a bad crowd than many other controversial speakers. I do not think this speaker is an anti-semite. I’m leaving this reference in the text for posterity in a slightly edited form that I hope makes my point a little bit clearer.]
Closing words with some extra ramblings and loose thoughts about vibes
I am releasing this post under a pseudonym, because I really don’t know how much talking about this topic with my real name and face might hurt my future interactions with the rationalist community. It might turn out to have zero effect, but I dunno maybe the Manifest people and Lightcone would kind of dislike me or something.
LessWrong was where I first came across EA, and both communities have been important to me at different points in my life. In general I do identify more with the EA movement, and the vibes of the two communities feel like they have diverged quite a bit. If I had to vaguely point to a specific difference between the vibes of EAs and those of rats, I would say EAs feel more innocent, whereas rats might, with possibly a little too much generalization, feel like they’d rank higher in some dark triad traits and feature more chuunibyou tendencies sprinkled with a dash of narrative addiction.
I don’t really feel like many people in the rationalist community communicate very openly or honestly, even though non-deception is often thought to be one of their core tenets. I’m not sure how much this vibe can be explained by being exposed to the older iterations of LessWrong, where high-decouplers would discuss pick-up-artistry way beyond the bar for manipulation, where people might commit to naive utilitarianism at the expense of common sense, and where a small sub-community would obsess over scientific racism and group IQ differences (a sub-community which arguably gave rise to the modern alt-right, even though this honor might not be something they hold in high regard).
Anyways, those were some of my grievances about some of the special guests and a non-insignificant portion of the attendee base. In general I did have a good time at these events, even if some of the attendees did bum me out. I would probably go again, especially if whoever is responsible for choosing the speakers tones it down with the controversial special guests. But who knows, maybe next time half the people there will consist of Republicans and the Thielosphere[3]. Let me know what you think, but I won’t promise to reply in the comments.
[1] I assume mostly because you really do need quite a lot of evidence to make a claim about someone in the media and not get sued for libel.
[2] One source from Scott Alexander here. Linked because this sounds like a shocking claim and I am not sure how widely people gossip about this stuff. For the rest of the people mentioned I refrained from linking to them.
[3] Thiel is tied to Yarvin, who is tied to race stuff.
One aspect of the framing here that annoyed me, both in the OP and in some of the comments: the problem is not controversial beliefs, it is exclusionary beliefs. Here are some controversial beliefs that I think would pose absolutely no problem at this event or any other:
The many-worlds interpretation of quantum mechanics.
Virus gain-of-function research creates more risk than it prevents.
Nuclear energy is a necessary part of the transition away from fossil fuels.
The problem with racism and transphobia is not that people disagree about them! The problem is that these beliefs, in their content on the object level, hurt people and exclude people from the discussion.
Let’s avoid using “controversial” as a euphemism for “toxic and exclusionary”. Let’s celebrate the debate and discussion of all controversies that threaten no-one and exclude no-one. Suggesting any of that is at stake is totally unnecessary.
I think this concept of an “exclusionary belief” is incoherent. If Alice is a speaker at an event, and holds belief X, and Bob is very put off by belief X and is therefore less interested in attending, that is never just about X. It is always an interaction between Bob and X, a function of both. And for any X, there will exist a Bob. There are many anti-nuclear and green energy activists who would not attend a conference with a speaker who has advocated nuclear energy as a necessary part of the transition away from fossil fuels. There are surely researchers who do gain-of-function research, or who view it as essential to protecting against future pandemics, who would not attend a conference with a speaker advocating against gain-of-function research. I can certainly think of people in the world, on both sides of the political spectrum, who, had they been invited to Manifest, would have given me pause. The question is how we should respond when we find ourselves in Bob’s shoes. And I think we should definitely not demand that Alice be deplatformed. Asking for someone to be deplatformed because of our own feelings about them or their beliefs is controlling behavior. It is a heckler’s veto, and therefore contrary to the ideals of free expression and intellectual inquiry. Ultimately each of us is responsible for our own feelings. Each of us can weigh the features of an event that we like against the features we dislike and decide for ourselves whether it is worth our time and energy and money to attend. Either choice is fine. But nobody owes us an event with any particular speakers or ideas included or excluded, and to act as though they do is just poor, controlling behavior.
To be honest, I didn’t intend to focus primarily on what an exclusionary belief is, as much as highlight that many controversial beliefs are not exclusionary. If we want to get more precise about it, I’m saying something like: all the objectionable beliefs here are beliefs about people who are also (perhaps prospectively) participating in the discussion, and this is a key thing that distinguishes them from like 95% of controversial (in the sense of heated disagreement) beliefs, and that’s a whole lot of baby that we risk throwing out with the bathwater if we keep saying “controversial” like the controversy itself is the problem.
I think this is mostly just arguing over hypotheticals, so it’s pretty impossible to adjudicate, but I want to highlight a difference between “I’m not going to this conference because it’s a waste of time, because they are discussing ideas that are obviously (to me) wrong”, and “I’m not going to this conference because it’s supporting and strengthening people who are actively hostile towards me, on the basis of characteristics that I can’t change, and is thereby either hostile to me itself or at least indifferent to hostility towards me”.
As a light thought experiment, what if Alice’s belief X was “People called Bob are secret evil aliens who I should always try to physically attack and maim if I get the opportunity” ?
Bob would understandably be put off by this belief, and have a pretty valid reason to not attend an event if he knew someone who believed it were present. Does it seem reasonable that Bob would ask that Alice (or people who hold the attack-secret-alien-Bobs belief) not be invited as speakers? Is that a heckler’s veto, and contrary to free expression and intellectual enquiry? Is Bob’s decision not to attend just a matter of his own feelings?
If answers to the above questions are ‘no’, it suggests it’s possible for a belief to be an ‘exclusionary belief’, on your terms.
In this case, I think the “physically attack and maim” part makes it much more than just a belief. So far as I am aware, nobody thinks anyone under discussion in relation to Manifest was ever likely to physically attack anybody.
Yes I’m not saying anyone was—this is a thought experiment to see if exclusionary beliefs can be a coherent concept. We can stipulate that Alice has this sincere belief, but no history of such attacks (she’s never met a Bob), and hasn’t made any specific threats against Bob. It’s just a belief - a subjective attitude about the world. If Bob does not attend due to knowing about Alice’s belief, is that reasonable in your view?
Bob can attend or not attend for whatever reasons he wishes. I’m not trying to judge that at all. The question seems to be whether Bob can reasonably ask the organizers to deplatform or uninvite or ban Alice. In your scenario, I think the answer is “yes”, though I would frame that as being about Alice’s likely future criminal behavior, not directly about the belief that precipitates that behavior.
Thanks. Given Alice has committed no crime, and everything else about her is ‘normal’, I think organizers would need to point to her belief to justify uninviting or banning her. That would suggest that an individual’s beliefs can (in at least one case) justify restricting their participation, on the basis of how that belief concerns other (prospective) attendees.
I think you’d be a lot more successful with a hypothetical that wasn’t about whether someone would follow the law and/or conference rules.
I would also expect, for example, a conference under Chatham House Rules to reject participants who believed this kind of rule did not bind them. Even if the organizers otherwise were quite committed to free expression. Organizers can and should be willing to consider expressed beliefs even without a history of acting on them.
I also think it being about “people named Bob” messes with our intuitions, since it’s so silly, but ok.
Perhaps a better hypothetical would be if Alice believed people named Bob were not moral patients (and had a bunch of nasty views downstream from this on what law and social norms should be) but still confirmed (and organizers trusted) that she would follow the law and treat him respectfully at the conference?
The point wasn’t to motivate intuitions on the broader issue, but to demonstrate that exclusionary beliefs could be a coherent concept. I agree your version is better for motivating broader intuitions.
Does advocating the anti-Bob position in any way constitute not “treat[ing] him respectfully,” even if he is not in earshot? As a practical reality, very few people would feel psychologically safe attending a conference at which people were having anti-Bob conversations after checking the participants’ comfort level with euphemisms and/or slurs, or inviting people to an off-site anti-Bob party.
Also: While I think the Alice hypo is too related to non-speech actions, I think the anti-Bob hypo is too divorced from them in the abstract. We’d need to consider a context in which anti-Bobism and adjacent thoughts—at a minimum—had been used to deny fundamental human rights to Bobs over an extended period of time. And where (at least) a number of people preaching anti-Bobism would favor rights denial against Bob and other Bobs (e.g., prohibition/restriction on procreation, deportation) should they come into power.
Let’s say that Alice is going to advocate for her anti-Bob position even when Bob is in the discussion. And that this is a carve out from “treat Bob respectfully”.
Bonus questions: Is the answer the same in this related hypo—Charlie thinks Delana Dixon, and only her among all human beings, is not a moral patient. In other words, does it matter if the belief and advocacy are targeted at an individual person vs. a group based on an immutable characteristic?
Also, does the organizer’s assessment of Alice and Charlie’s reasons for holding their beliefs matter here? Should they give less tolerance to the extent they conclude a belief is based on bigotry, delusion, a bad breakup with a Bob or with Delana Dixon, etc?
I think the problem with making the hypo more concrete in the ways you suggest is that then whether the hypo represents reality becomes highly contestable, and we devolve into object level debates. To take one example, despite being very pro-immigration myself, I find your suggestion that deportation of non-citizens somehow violates fundamental human rights to be absolutely ridiculous. If you set up a hypothetical about Alice wanting to deport non-citizen Bobs, you won’t convince me of anything. I’m guessing a lot of the disagreement here is less about event norms and more about people in the EA community being intolerant of those they disagree with politically. One reason for choosing such an abstract hypothetical was to try to separate out the two.
Note that I didn’t actually say “deportation of non-citizens somehow violates fundamental human rights” as you assert. The reference to fundamental rights was in the past tense: “had been used to deny fundamental human rights.” Certainly slavery involves the denial of a fundamental human right. The e.g. that references deportation follows the broader term “rights denial.”
That being said, I would characterize at least severe discriminatory treatment by the government on the basis of race as denial of a fundamental human right.
In any event, I recognize the concern you identify—but using only abstract hypotheticals is going to systematically bias the hypo in favor of the scientific racists by stripping away important context. If adding certain context changes the results of the hypo, then we’re stuck with an object-level debate on which hypo better reflects reality.
I was a bit confused by this comment. I thought “controversial” commonly meant something more than just “causing disagreement”, and indeed I think that seems to be true. Looking it up, the OED defines “controversial” as “giving rise or likely to give rise to controversy or public disagreement”, and “controversy” as “prolonged public disagreement or heated discussion”. That is, a belief being “controversial” implies not just that people disagree over it, but also that there’s an element of heated, emotional conflict surrounding it.
So it seems to me like the problem might actually be controversial beliefs, and not exclusionary beliefs? For example, antinatalism, communism, anarcho-capitalism, vaccine skepticism, and flat earthism are all controversial, and could plausibly cause the sort of controversy being discussed here, while not being exclusionary per se. (There are perhaps also some exclusionary beliefs that are not that controversial and therefore accepted, e.g., some forms of credentialism, but I’m less sure about that.)
Of course I agree that there’s no good reason to exclude topics/people just because there’s disagreement around them—I just don’t think “controversial” is a good word to fence those off, since it has additional baggage. Maybe “contentious” or “tendentious” are better?
Yeah I just don’t think that what people are objecting to is that these beliefs are the subject of even heated disagreement. I’m not saying “disagreement is fine, as long as it’s not heated”, I’m saying “even heated disagreement is fine, but there’s some other distinction that makes it potentially a problem”, and while I’m not quite precise about what that other distinction is, it’s something like, is this topic directly about some of the people in the conversation, and does it implicitly or explicitly threaten the legitimacy of their presence or their contribution?
I think vaccine skepticism is an interesting example, as I do tend to think conferences shouldn’t invite vaccine skeptics. But that’s more out of a sense that vaccine skeptics in practice are grifters / dishonest (which is no coincidence, in that the genuinely curious have mostly had their curiosity satisfied). I would be very happy to see someone speak about how the new malaria vaccines aren’t effective enough to be worth it, if they had good reasons for thinking that.
I don’t think people object to these topics being heated either. I think there are probably (at least) two things going on:
There’s some underlying thing causing some disagreements to be heated/emotional, and people want to avoid that underlying thing (that could be that it involves exclusionary beliefs, but it could also be that it is harmful in other ways)
There’s a reputational risk in being associated with controversial issues, and people want to distance themselves from those for that reason
Either way, I don’t think the problem is centrally about exclusionary beliefs, and I also don’t think it’s centrally about disagreement. But anyway, it sounds like we mostly agree on the important bits.
I like what this is getting at, and I also personally disprefer many of the specific “controversial/exclusionary” speakers at Manifest being discussed (and would expect things to be better if some had not attended), but I think this proposal might need revision to really work more broadly.
First, I’m pretty sure it is common lingo to have “controversial” be used in the way it is in this article. If this were a news story in The New York Times, I’d expect it would be much more likely to use the word “controversial” than the word “exclusionary”.
If the New York Times and WSJ both had front-page stories about “Conference draws attention for controversial speakers”, I’d expect this to be more about radical right-wing or left-wing beliefs than I would the many-worlds interpretation of quantum mechanics.
Second, I’m nervous that in practice “exclusionary” is not as clean a concept as we’d like it to be. It’s arguably too low a bar in many cases, and too high in others. I understand this to be arguing that there are some beliefs that are disliked by others enough to convince them not to attend the event.
But I could imagine many ideas in this category. If there were a speaker talking about how to secure Taiwan, arguably Chinese nationalists would feel uncomfortable attending and argue that that is exclusionary. Many people are uncomfortable with basic ideas in effective altruism and might not attend conferences with prominent EAs—they might argue that EA is exclusionary.
I’m not sure if one could argue that many beliefs themselves lead to people being uncomfortable—this seems more like a function of both the belief and the culture at some moment in time.
For example, say that we did live in some world where all discussion of “the value of nuclear research” was highly coupled with hateful takes against some group or other. In that case, this might then become exclusionary, in a way that could, in many cases, subsequently make sense to draw less attention to.
All that said, personally, I agree with what the post is grappling with, I’m just nervous about the idea of trying to change terminology without thinking it through.
Yeah, and I mostly think this is a mixture of confusion and cowardice on their part, frankly. To the extent that they really believe the controversy is itself the problem, I think they’re wrong. To the extent that they’re saying “controversial” because it’s unarguably literally true and allows them to imply “bad” without having to actually say it, I think it’s an attempt to project a false neutrality, to take a side without appearing to take a side. Some react to that by saying “let our neutrality not be false”, some by “let us not project neutrality”. Either way has more respect from me.
Yeah, for sure I expect disagreement about what’s exclusionary, and when we should stand by something even though it’s exclusionary. My main point was to point out that lots of disagreements aren’t exclusionary, and choosing how we handle potentially-exclusionary discourse doesn’t need to put any of that at stake. (There’s room to disagree with this distinction, but that’s the distinction I was trying to draw.)
If I am in support of designer babies, and see this as an important issue, does that fall in under your “toxic and exclusionary” label?
Or would you perhaps want to make that taboo, due to “guilt by association”? (If so, I ask rhetorically—at what degree of separation does the guilt by association stop?)
I see that the post author explicitly chose to not put Steve Hsu under the eugenics/HBD label, which I appreciate. But the author did throw shade at Steve Hsu.
If people didn’t know the context, they might think that you only want to make opinions taboo if they are mean-spirited or inherently constitute some sort of personal attack. While in actuality, the scope of ideas you want to be taboo is wider than that. And that is for good and understandable reasons. It’s wider than that for me too (even if it is less wide than yours).
From my perspective, there is significant truth/wisdom in your comment here. But also some degree of not acknowledging genuine tradeoffs.
As an added comment: I feel unsure myself about what the right balance is for this sort of thing.
So, I downvoted this post, and wanted to explain why.
First though, I’d like to acknowledge that Manifest sure seems by far the most keen to invite “edgy” speakers out of any Lighthaven guests. Some of them seem like genuinely curious academics with an interest bound to get them into trouble (like Steve Hsu), whereas others seem like they’re being edgy for edginess’s sake, in a way that often makes me cringe (like Richard Hanania last year). Seems totally fair to discuss what’s up with that speaker choice.
However, the way you engage in that discussion gives me pause.
I’m happy to cut you some slack, because having a large community discussion about these topics in a neutral and detached way is super hard. Sometimes you just gotta get your thoughts out there, and can’t be held to everything under a microscope. And in general, that’s ok. Nonetheless, I feel kind of obliged to point out a bunch of things that make me uncomfortable about your post.
The title itself describes Manifest as controversial as though it was an objectively verifiable descriptive term (such as “green”). This gives me an immune reaction, feeling something like “Well show me the evidence and allow me to decide for myself whether it seems controversial”.
Again, this section plainly asserts that some people are “racist” without really arguing for or substantiating that claim. And what does “racist” even mean here? I’m worried that there’s a bait-and-switch going on, where this term is being used as an ambiguous combination of grave, derogatory accusation; and descriptive of a set of empirical beliefs about demographics and genetics. (Or to clarify: there’s of course absolutely such a bait-and-switch going on, in the Guardian article and lots of broader discourse, my worry is about it also leaking into EA forum discussion via your post.)
...what?
The piece was inaccurate in almost every paragraph, whether in easily verifiable factual claims or in its confused attempt to designate a section of social reality (most notably bundling accelerationism and alignment folks into one group).
It used the exact pattern you outlined: bundling a set of unrelated facts to make the recipient look bad. I don’t see why they would make this a weird Frankenstein combination of an article about SBF and an article about HBD, unless it was a deliberate attempt to cause maximum reputational damage to the recipient. (Though I have hypotheses about what’s up.)
Take for example the article describing Lighthaven as a “walled, surveilled compound”… come on, it’s a Hansel and Gretel looking old inn with a fence around it.
Another key piece of evidence is how the article decides to use Scott Alexander’s real name, even though it is largely unknown, doesn’t have any impact on the reported story, and was at the heart of a large blow-up a few years ago where New York Times decided to doxx Scott, against his strong preference to remain pseudonymous.
Judging from their tweets, the author of the article is deliberately adversarial.
Sure, I’m hammering in the point here. But given the blood, sweat, and tears my team poured into making Lighthaven great, I care a lot about being very clear that yes, this was a hit piece: a piece of writing deliberately designed to cause damage rather than to convey information.
What’s up with the use of “Republicans” here? Am I misunderstanding something, or is it being used interchangeably with “the cluster of people you want to distance yourself from with the post”? That seems… kind of intense? (I’d get this a bit more if you were from Europe originally, like myself, where being a “Republican” is sometimes seen as a kind of unbelievable American thing extremely far from most people’s political beliefs… but in an American context, the quoted section sounds crazy)
In the context of the above epistemic moves, it’s definitely uncomfortable to me that the post then engages in these pretty sweeping proposals. What exactly is the intended separation here?
Distancing from Lighthaven (because we rented our venue to a paying customer and gave them large freedom to invite the speakers they desire)?
Distancing from Manifold, the prediction market website (whose utility is completely independent of who they invite to Manifest)?
Distancing from Manifest (seems maybe more fair given your beliefs, though I personally disagree)?
Distancing from anything vaguely rationalism-adjacent...?
I think controversial is a totally fair and accurate description of the event given that it was the subject of a very critical story from a major newspaper, which then generated lots of heated commentary online.
And just as a data point, there is a much larger divide between EAs and rationalists in NYC (where I’ve been for 6+ years), and I think this has made the EA community here more welcoming to types of people that the Bay has struggled with. I’ve also heard of so many people who have really negative impressions of EA based on their experiences in the Bay which seem specifically related to elements of the rationalist community/culture.
Idk what caused this to be the case, and I’m not suggesting that rationalists should be purposefully excluded from EA spaces/events, but I think there are major risks to EA to be closely identified with the rationality community.
No, this argument is importantly invalid.
It was not a “critical story”. It was a hit piece engineered to cause reputational damage. This distinction really matters. (For people who wanted more receipts than my above comment about the adversarial intent: the journalist behind the article has now also sent a cryptic message, eerily similar to a death threat(!!), in response to discussion of the article by what appears to be a political rival of theirs. This is not neutral reporting.)
The majority of commentary I saw was complaining about the piece being a hit piece. See [1], [2], [3], [4], [5], [6] … . The piece was also community noted on twitter.
The event series LessOnline, Summer Camp, Manifest in total had 500+ guests, 500+ sessions, 70+ invited speakers, across a 10 day stretch. It was a large festival with a ton of different content.
I strongly reject the norm whereby a belligerent writer at a small news outlet can pick out a small slice of a large event, write an adversarial hit piece on it, have people complain about the piece’s journalistic integrity, get some activity as a result, and then have people claim the whole event could be “fairly and accurately” described as controversial(!)
The term is not fair and it is not accurate. Manifest was not controversial; I reject the label. Closest I think is right is “Manifest invited some controversial speakers”. Like this new article from today, for example, which says “the venue’s owners played host to a conference with controversial attendees”. That seems right, and that I encourage a conversation about!
You might want to make your point by appealing to the conference itself, but appealing to the guardian article and its effects really is not a valid argument. For the epistemic health of the community, I think it would be wise to stay way clear of the process that generated that term here.
Of course Manifest is controversial; the very active and heated debate on this post is evidence of that!
No. You pointing a finger and yelling “controversial!” doesn’t make something controversial any more than you yelling “racist” at people makes them racist.
I think if the only thing claiming controversy were the article, it might make sense to call that a fabricated/false claim by an outside journalist. But given this post, the fact that many people either disapprove of or want to avoid Manifest, and also that Austin writes about consciously deciding to invite people they thought were edgy, I think it’s actually just a reasonable description.
And there’s a disanalogy there. Racism is about someone’s beliefs and behaviors, and I can’t change someone else’s beliefs with a label. But controversy means people disagree, disapprove, etc., and someone can make someone else’s belief controversial just by disagreeing with it (or, if one disagreement isn’t enough to constitute controversy, they at least contribute to it with their disagreement).
To clarify:
1. Claiming that Manifest is controversial because of the Guardian reporting—I’ll argue against this pretty strongly
2. Claiming that Manifest is controversial because of an independent set of good-faith accounts from EA forum members—more legit, and I can see the case (though I personally disagree)
Very importantly, Garrison’s comment was arguing using 1, not 2.
To perhaps help clarify the discourse, I’ll leave a comment below where people can react to signal “I think the argument for controversy from the Guardian article is invalid; but I do think Manifest should be labeled controversial for other arguments that I think are valid”
React to this comment to convey opinions on:
“I think the argument for controversy from the Guardian article is invalid; but I do think Manifest should be labeled controversial for other arguments that I think are valid”
The definition of “controversial” is “giving rise or likely to give rise to controversy or public disagreement”. The definition of “controversy” is “prolonged public disagreement or heated discussion”. This unusually active thread is, quite clearly, an example of “prolonged public disagreement or heated discussion”.
I think the really key thing here is the bait-and-switch at play.
Insofar as “controversial” means “heated discussion of subject x”, let’s call that “x-controversial”.
Now the article generates heated discussion because of “being a hit piece”, and so is “hit-piece-controversial”. However, there’s then also heated discussion of racism on the forum, call that “racism-controversial”.
If we then unpack the argument made by Garrison above, it reads as “It is fair and accurate to label Manifest racism-controversial, because of a piece of reporting that was hit-piece controversial”—clearly an invalid argument.
Moreover, I don’t know that the forum discourse was necessarily that heated, and it seems like there could be a good-faith conversation here about an important topic (for example, the original author has been super helpful in engaging with replies, I think). So it also seems a lot of “heat” got imported from a different controversy.
Crucially, I think part of the adversarial epistemic playbook of this article, the journalist behind it, as well as your own Tweets and comments supporting it, is playing on ambiguities like this (bundling a bunch of different x-controversial and y-controversial things into one label “controversial”), and then using those as the basis to make sweeping accusations that “organisations [...] cut all ties with Manifold/Lightcone”.
That is what I’m objecting so strongly against.
What does “controversial” mean, according to you?
I think Shakeel’s cited definition with my clarification here seems good; https://forum.effectivealtruism.org/posts/MHenxzydsNgRzSMHY/my-experience-at-the-controversial-manifest-2024?commentId=rB6pq5guAWcsAJrWx
What would you suggest as an alternative title? I don’t feel very strongly about that particular choice of word and would be happy to change the title.
I considered changing the title to “My experience with racism at Manifest 2024”, but that feels like it might invite low quality discussion and would probably be bad.
“My experience at Manifest 2024”
“My experience with controversial speakers at Manifest 2024”
“My perception of HBD discourse at and around Manifest 2024”
I’d suggest link searching stories on Twitter to see what their general response is. My Twitter feed was also full of people picking the story apart, but that’s clearly more a reflection of who I follow! Many people were critical (for very good reason, mind you!), but many praised it (see for yourself). There were a ton of mistakes in the article, and I agree that the authors seemed to have a major axe to grind with the communities involved. I’m a journalist myself, and I would be deeply embarrassed to publish a story with so many errors.
I didn’t claim that the event was controversial solely because of the Guardian article — I also mentioned the ensuing conversation, which includes this heavily commented-on and voted-on post.
And whether you like it or not, The Guardian is one of the largest newspapers in the world, with half of the traffic of the NY Times!
No, I think this is again importantly wrong.
First, this was published in the Guardian US, not the Guardian.
The Guardian US does not have half the traffic of the NYTimes. It has about 15% of the traffic, as far as I can tell (source). The Guardian US has 200k Twitter followers; The Guardian has 10M Twitter followers (so 2% of the following).
Second, I scrolled through all the tweets in the link you sent showing “praise”. I see the following:
Emile Torres with 250 likes.
Timnit Gebru’s new research org retweeting, 27 likes
A professor I don’t know supporting it, 117 likes
Shakeel being “glad to see the press picking it up”, 14 likes
A confusing number of posts, maybe 10+, which retweet and get 0 likes and no engagement, and 10 that get 1-10 likes
Original tweet by the author of the article, 500 likes
Another journalist praising, 60 likes
You can of course compare this to:
Tweet from a usually EA-critical account with 161 likes, “This is just bad assignment work for whoever wrote this beat.”
Theo Jaffe critical tweet, 144 likes
Robin Hanson with 400 likes, complaining about defamation.
Byrne Hobart critical tweet, 500 likes
Multiple Kelsey tweets, with 300 likes
Habryka’s refutation, 450 likes
Quilette editor critical tweet, 100 likes
So I think this just clearly proves my point: the majority of engagement with this article on Twitter is just commenting on it being a terrible hit piece.
The tiny wave of praise came mostly from folks well known for bad faith attacks on EA, a strange trickle of no-to-low engagement retweets, 1-2 genuine professors, and, well, Shakeel.
My mistake on the Guardian US distinction, but to call it a “small newspaper” is wildly off base, and for anyone interacting with the piece on social media, the distinction is not legible.
Candidly, I think you’re taking this topic too personally to reason clearly. I think any reasonable person evaluating the online discussion surrounding Manifest would see it as “controversial.” Even if you completely excluded the Guardian article, this post, Austin’s, and the deluge of comments would be enough to show that.
It’s also no longer feeling like a productive conversation and distracts from the object level questions.
Thanks for your comment! I think most of these issues stem from the fact that I am not a very good writer or communicator, and because I tried to be funny at the same time. I hope you can cut me some slack, like you said. Rest assured I haven’t written this post as a bad-faith hit piece, but as a collection of grievances that expand upon some of the core claims the Guardian article made. I am quite a conflict-averse person, so doing this in the first place is pretty nerve-wracking, and I’m sure I made a bunch of mistakes or framed things in a sub-optimal way.
I’ll try to reply to some of your points here:
My original draft had a different title, but the release of the Guardian article and subsequent Twitter discussion among EAs and rationalists made me change the title. It felt like an appropriate adjective, and I am somewhat surprised that you don’t feel like these things could be called controversy or warrant the use of that word. I don’t feel very strongly on this, though, and am happy to change the title if you feel like it is inappropriate.
I agree that I have not spent much time actually listing specific instances that I felt were racist, and I am trusting the reader to simply believe me when I say that, depending on their level of comfort, some participants said some pretty racist things, or have backgrounds in HBD stuff (which I consider to be racist by nature, as by default this stuff does not improve the lives of minorities, but does the very opposite). I am afraid that if I list specific takes or topics I will be blowing my anonymity pretty fast.
If the event organizers wish to substantiate the claim that many people experienced racist discourse, they could make an anonymous survey for the event attendees. I can believe that one probably can go through the collection of events without noticing that a lot of the attendees seem to hold quite unsavory views, but especially for the people who took part in all three events I feel like that would require quite a bit of naivety. I would love to hear from other attendees as well.
Thank you for objecting to this and providing receipts. I will edit the post to reflect what David Mathers said, along the lines of “a core argument of the article about the event featuring HBD and otherwise problematic people remains true”. You are correct that the article does appear biased. I am not sure I would still label it a “hit piece”, but regardless, keeping that word in the text would be a distraction, and it will be changed.
Many of the values many Republicans hold are incompatible with the values of EA. In addition, there was at least one Republican working in politics present at the event, who engaged in transphobic discourse. I would rather not see more of this.
I was born outside of US and have lived outside of it for most of my life. When in the US, I have not interacted much with people who strongly identify as Republicans. I agree that this might make me biased.
I agree that this was kind of vague, and I am finding it difficult to turn this into actionable interventions. In order to do something like that properly I feel like we’d need more people dedicating way more time to think about this.
The idea here in short is:
A lot of rationalists are keen on discussing controversial ideas beyond the current Overton window.
This attracts people who are mostly drawn to controversy, and who hold controversial views.
Platforming these people affects both what the community will look like in the future and how the community is perceived by outsiders.
Due to community overlaps, both the community make-up and the perceptions will reflect on EA as well.
EAs should demand that rationalists and other overlapping communities at least not platform (for example, and not as an exhaustive list) bigots, race scientists, or otherwise highly problematic people who hold views incompatible with EA.
If this is not possible, EAs should add some distance between the communities, avoid advertising adjacent community events, and go to adjacent community events less.
I do not think that Lighthaven or the people running it have done anything wrong. I think whoever was in charge of inviting special guests for the three events shouldn’t have platformed many of the people, and someone should have vetted the special guest list.
I am open to trade, but I would like something in return, and my guess is it would have to be pretty valuable since option value and freedom of expression is quite valuable to me. I don’t see a basis on which the EA community would have any right to “demand” such a thing from rationalists like myself.
Thanks for the reply, it feels like you’re engaging in good faith and I really appreciate that!
Brief notes --
The word “controversy”: Thanks. I think the issue with some of these media things is that they feed off of themselves. Something becomes a controversy merely because everyone believes it’s controversial; even though it really might not have to be. (For a longer explanation of this phenomenon, search for “gaffe” here)
People you met: I believe you that you met people who were into HBD. I saw at least one comment in Manifest discord last year that weirded me out. I’m pro people discussing that and how to relate to that. (I’m just worried how the term “racist” easily steers this off the rails, as seen in some of the other comments on this post)
Republicans: I’ll be blunt, but I think you’re way off base here. Being a republican is equally as compatible with EA as being a Democrat. Lots of people on both sides have incompatible views. I honestly think you just haven’t met enough Republicans! (Maybe some could introduce themselves in reply to this comment? :) )
Distancing: I think some version of the “platforming” concept makes sense. I currently don’t think Lighthaven should be die-hard free speech absolutists. We’re freer than most—but there’s some limit. Yet platforming rules are really tricky to apply. To me, the trickiest part is that deplatforming is not self-correcting: by removing someone’s ability to speak, you also risk removing their ability to complain about being removed. This freaks me out.
>I’m just worried how the term “racist” easily steers this off the rails, as seen in some of the other comments on this post
Not many terms are more gerrymandered or more “powerful.” Overuse and lack of clarity are degrading its usefulness.
>(Maybe some could introduce themselves in reply to this comment? :) )
Doing so seems like a good way to get put on some EA watchlist of who shouldn’t be invited to future events, or at least put under greater scrutiny :p Maybe after the election season you’ll have better luck...
Have you had a look at things like project 2025? Because I’ll be honest, if EAs despite that think that “being a republican is equally as compatible with EA as being a Democrat” (as the agree-votes seem to indicate) then I don’t think I want to be an EA.
Maybe useful: “Latently controversial” – there’s no public controversy because people didn’t know about it, but if people had more information, there would be public controversy. I think this would perhaps be more the case with Manifest if the article hadn’t come out, but it’s still reasonable to consider Manifest to have some inherent potential “controversialness” given choice of speakers.
FWIW I found your writing in this post better and more honest and to-the-point than most of what’s on the forum.
This is an opinion of yours for which counterarguments exist.
If HBD happens to be broadly correct then having people act under that assumption likely DOES improve the lives of minorities, at least compared to the mainstream alternative world in which HBD is taboo and we try to pretend every group is perfectly equal to every other group in every possible way so it must be fixed when group differences pop up.
The main HBD response to group differences existing is to ALLOW group differences to exist.
That’s a policy which is inexpensive, noncoercive, doesn’t require extra bigotry to be imposed from outside, doesn’t undermine the success of the few high-achieving minorities in relevant fields, doesn’t set up underqualified minority representatives for failure, doesn’t promote resentment against structurally unfair treatment, doesn’t deepen existing bigotry…the way the DEI/AA approach does.
Pounding square pegs into round holes is rarely good for the pegs.
So it’s okay for you to BELIEVE that HBD doesn’t improve the lives of minorities but you shouldn’t take that belief as axiomatic—it’s something that needs arguing for.
And wouldn’t it be kinda hard to HAVE that argument if you start by banning from discussion everyone who disagrees with you?
Can you tell us what you mean by HBD? Like, give a definition? Is it just the idea that there are sometimes statistical, genetic differences between groups, such as racial groups?
HBD is mainly the idea that different groups of people are different. And should be expected to differ; Humans are Bio-Diverse.
Different groups differ along every axis—anything you can measure, you should expect measurable differences. Different skills, different abilities, different interests. HBD is understanding and accepting that as a base fact about the world and taking it into account. Your null hypothesis should never be that all groups are exactly the same unless bigotry or structural racism causes them to be different—rather it should be that different groups differ. If anything, we should be surprised and suspect bias if they don’t differ!
This does apply when the groups are “races” but also applies with groups we’d categorize as the same “race”. German-Americans are different from Italian-Americans are different from Swedish-Americans. If anyone bothered to look we’d also find those kinds of groups differ by income, by wealth and—most of all—by representation level in various professions or college majors.
Men differ from women, Red Sox fans differ from Giants fans, people from group A are different from people in group B and that is okay—viva la difference!
Group differences can be genetic or cultural or both. And yes, IQ is one of the zillion things that differs. But it doesn’t really matter why groups differ so much as that they do and that fact has implications: it means in the absence of bias we still shouldn’t expect absolute equality of outcome to be possible or even a good idea, which makes DEI efforts likely to become an unending black hole sucking up resources without improving the world.
For example, let’s consider the representation level of Asians among professional basketball players: Asians are 6% of Americans but only 0.4% of the NBA. That means a lot of Asian people who COULD be going into basketball are doing something else instead—they must have some other career they enjoy more or are better at than basketball.

Suppose we wanted to “fix” this “underrepresentation”. If we poured enough resources into it we probably could! We could bribe or shame teams into lowering their standards so as to accept more mediocre Asian players and subsidize their salaries to take the job. What does that immediately do? It validates and reinforces the stereotype that Asians are bad at basketball while creating racial resentment. Everyone rejected from a team now hates Asians for taking their spot; everyone in a team now expects their Asian players to not be very good.

Since the new players are people who wouldn’t otherwise have played basketball at all, they’re less likely to succeed at it; they’re likely to find they would have been better off going straight into law or medicine or whatever their other option was. DEI made them choose a worse career where everyone hates them, whereas HBD would have allowed them to follow a course better suited to their height and other relevant attributes. In this case, HBD makes the minority in question better off in the long run by leaving them alone.
Thank you for your explanation. One thing that stands out to me is that “human biodiversity” is a phrase that uses the language of science, yet you seem less interested in the scientific questions and much more interested in the policy questions. To continue with your example of Asians in the NBA, that seems to point at any number of purely factual scientific questions that could be explored. Are Asians generally lacking in some relevant physical characteristic, such as height, agility, or reflex speed? Are they better at something else, creating higher opportunity costs for them, and if so, what and why? Yet you seem to focus less on these scientific, factual sorts of questions, and more on how we should respond to the observed difference on a policy level. Am I correct in reading that HBD is more a policy stance than an area of science?
I think the fact that you said “ambiguous combination of grave, derogatory accusation” is a problem for your argument, because it suggests that you don’t have anything in mind that racism could mean other than a set of empirical beliefs about demographics and genetics. If this is the only actual thing that comes to mind for people, then presumably the grave/derogatory aspect is just a result of how they view those empirical beliefs about demographics and genetics.
I say this as one of the people who started HBD conversations at less.online (main one being a conversation about this paper—I didn’t do the whole fishing-for-compatibility thing that OP mentioned). Or I would be inclined to call them racist conversations, though if I was to propose an alternate meaning of “racist” where I don’t count as a racist, it would be something like: someone whose political theories find it infeasible to work with different races. White separatists would be a central example, in that they decide it’s too infeasible to work with black people and therefore want their own society. And e.g. cops who aren’t accountable to black communities would also be an example of racism.
But this would exclude some things that I think people would typically agree is racism, e.g. cops who do racial profiling but don’t conspire to protect each other when one of them abused a black person who is seeking accountability. So I wouldn’t really push this definition so hard.
In my opinion, a more productive line of inquiry is that a lot of HBD claims are junk/bullshit. From a progressive perspective, that’s problematic because there’s this giant edifice of racist lies that’s getting enabled by tolerating racism, and from the perspective of someone who is interested in understanding race, that’s problematic because HBD will leave you with lots of bad flaws in your understanding. Progressives would probably be inclined to say that this means HBD should be purged from these places, but that’s hypocritical because at least as many progressive claims about race are junk/bullshit. My view of the productive approach would be to sort out the junk from the gems.
They didn’t doxx Scott Alexander—his name is public knowledge now. He doxxed himself a couple of years back. But you’re right that it was probably deliberately adversarial.
Yeah. I am aware of the story. (I was in fact the person who made this site, together with my colleague Ben.) Updated my comment for clarity.
(For people who don’t know all the details: Scott didn’t just voluntarily doxx himself. He only did it in a kind of judo-move response to the New York Times informing him they were going to proceed with doxxing him, against his repeatedly strongly expressed wishes.)
I was at Manifest as a volunteer, and I also saw much of the same behaviour as you. If I had known scientific racism or eugenics were acceptable topics of conversation there, I wouldn’t have gone. I’m increasingly glad I decided not to organise a talk.
EA needs to recognise that even associating with scientific racists and eugenicists turns away many of the kinds of bright, kind, ambitious people the movement needs. I am exhausted at having to tell people I am an EA ‘but not one of those ones’. If the movement truly values diversity of views, we should value the people we’re turning away just as much.
Edit: David Thorstad levelled a very good criticism of this comment, which I fully endorse & agree with. I did write this strategically to be persuasive in the forum context, at the cost of expressing my stronger beliefs that scientific racism & eugenics are factually & morally wrong over and above just being reputational or strategic concerns for EA.
Hey huw—I’m very grateful that you took the time to volunteer at Manifest. I hope that you overall enjoyed your time at the festival; either way, thanks for the feedback.
I don’t love that some guests we invited may turn away bright, ambitious, and especially kind folks like yourself; I write a bit more about this here. I think the opposite is true as well, though, where left-leaning views turn away some of the most awesome up-and-coming folks. My subjective guess is that EA as a whole is far more likely to suffer from the latter failure mode.
In any case, I expect EAGs to represent more of an official EA party line with respect to who they include or exclude, and encourage you to look there if you don’t find Manifest to your tastes. One of the explicit tenets of Manifest that distinguishes it from an EAG is that we are default-open rather than default-closed; there’s no application process where we screen attendees to conform to a particular mold.
Here’s how I interpret your response:
Manifest is cool and open; EA is snooty and closed.
Manifest values free discourse; EA is stifling.
EAG and Manifest are equally controversial because EA has leftists and Manifest has rightists.
Manifest is just getting flak from the left because Manifest has some right-leaning people.
Sure some bright, ambitious, kind people turn away, but that’s just because they’re too leftist and an equally large amount of bright, ambitious, kind people would bounce off if Manifest were more leftist as well.
Turning away people is never the right thing to do unless they pose a physical threat.
Manifest faces trade-offs and these trade-offs go in equal directions.
I think this response is a false equivalence and feels dismissive of the concerns being expressed.
My issue is not that I’m leftist and don’t like right-wing opinions and just want to toe the “party line”. I am actually quite moderate, attend right-wing conferences, and share a lot of misgivings with left-wing culture + cancel culture + progressives.
My issue is that I don’t like having platformed speakers who think that trans people are mentally ill, that black professionals are easily dismissed affirmative action hires (or worse: animals). I don’t like cancel culture but I do think there needs to be some sort of “line” established of acceptable conduct and I think this goes way beyond right vs. left and into something very dark and different.
I think this comment does a really bad job of interpreting Austin in good faith. You are putting words in his mouth, rewriting the tone and substance of his comment so that it is much more contentious than what he actually expressed. Austin did not claim:
that Manifest is cooler or less snooty than EA
that EA and Manifest are equally controversial
that turning people away is never the right thing to do (barring a physical threat)
I think it is pretty poor form to read someone’s comment in such a hostile way and then attribute views to them they didn’t express.
I’d be curious to hear from @Austin some thoughts on where you think the line of acceptable conduct is? (Though I know it’s really tricky to specify, as argued here.)
Mm, for example, I think using the word “fag” in conversation is somewhat past the line; I don’t see why that kind of epithet would need to be used at Manifest, and hope that I would have spoken out against that kind of behavior if I had witnessed it. (I’m naturally not a very confrontational person, fwiw).
I don’t remember any instances or interactions throughout Manifest that I witnessed which got close to the line; it’s possible it didn’t happen in front of me, because of my status as an organizer, but I think this was reflective of the vast majority of attendee experiences. In the feedback form, two commenters described interactions that feel past the line to me (which I detail here).
❤️ I do wanna add that every interaction I had with you, Rachel, Saul, and all staff & volunteers was overwhelmingly positive, and I’d love to hang again IRL :) Were it not for the issue at hand, I would’ve also rated Manifest an 8–9 on my feedback form, you put on one hell of an event! I also appreciate your openness to feedback; there’s no way I would’ve posted publicly under my real name if I felt like I would get any grief or repercussions for it—that’s rare. (I don’t think I have much else persuasive to say on the main topic)
There have been a lot of EAGs with a lot of attendees, so I think it’s reasonable to ask for specific support for this proposition:
Which specific events and/or attendees at past EAGs have—or would reasonably be expected to—“turn away some of the most awesome up-and-coming folks”?
I have heard from many conservatives (and some grey tribe people) over the years that they feel very unwelcome at EA events (which is not very surprising, given quotes in the OP which express horror at a conference that might be 50% Republicans, though I understand that might be more of a US/non-US cultural misunderstanding).
I don’t pay that much attention to which speakers go to EAG, so I am less sure about speakers, but there have been a bunch of radical-leftist animal rights people at various conferences that have been cited to me many times as something that made very promising young people substantially less likely to attend (I don’t want to dox the relevant attendees here, but would be happy to DM you some names if you want).
“there have been a bunch of radical-leftist animal rights people at various conferences that have been cited to me many times as something that made very promising young people substantially less likely to attend (I don’t want to dox the relevant attendees here, but would be happy to DM you some names if you want).”
I’m curious about the type of behaviour rather than the names of the people.
As an example of something that I think causes people to have this reaction, DxE coordinated and tried to stage a protest at the EA Global I organized in 2015, because we served some meat at the event. DxE also staged a protest at another CFAR event that I helped organize in 2016. Their protests at the time consisted of disruptively blocking access to the food and screaming very loudly (sometimes with a megaphone) at the people trying to get food about how they are evil (everyone gets to hear this, though it’s directed at the people who eat meat) until they get escorted out by security.
Some of their other public protests involve showering the floor and furniture in pig blood: https://www.totallyveganbuzz.com/headline-posts/vegan-activists-arrested-after-storming-mcdonalds-wearing-pig-masks-and-smearing-blood-across-the-floor/
(Also, to be clear on my position, I think Wayne Hsiung, head of DxE is a pretty terrible person with a history of disruption and advocating for pretty extreme bad things in my books, and I still think it would be good for him to be invited to Manifest, especially if he would debate his positions with someone, and he commits to not staging some kind of disruptive protest)
Thanks; this is helpful. It does seem reasonable that at least certain “radical-leftist animal rights people” would create an unwelcoming environment for many moderates and conservatives (and probably others too).
I am more hesitant to deny people admission to an event based on their ideological views (as long as they are fairly well-behaved) than I am to decide not to give them a spot on the agenda or “special guest” status. For example, aggressive proselytism of uninterested and unwilling people is annoying, whether the offender is preaching religion, politics, animal rights, operating-system preference, or sports fandom. I would deny admission for a history of that kind of behavior, but I would view it as application of a viewpoint-neutral conduct rule. Even the First Amendment doesn’t broadly grant people the right to aggressively push their views on an unwilling listener.
I don’t think it’s a function of specific events or speakers or attendees at an EAG, and more of like, a general sense that interesting and talented young folks no longer cite EA as a thing they support. I feel like Bentham’s Bulldog is almost the exception that proves the rule. This is super vibes-based though, and I’m curious if others in the community agree or disagree with this take.
Two years ago, Tyler Cowen wrote
and this no longer feels true to me.
Maybe it’s because EA had more money two years ago, not because EA is too left-leaning
I do think Manifest and the Manifund team try to communicate a philosophy of extreme transparency and extreme openness to any conversational topic that people want to bring (of course barring anything that actually involves directly harassing someone). I, of all people, had a bunch of arguments with Austin over the last 1–2 years about whether some people should be more clearly deplatformed or excluded from conversations (and I think I am already at least a 95th-percentile person on this dimension).
I think this is overall admirable, but I am sad that you ended up attending without that being properly sign-posted to you.
I guess I am trying to elucidate that the paradox of intolerance applies to this kind of extreme openness/transparency. The more open Manifest is to offensive, incorrect, and harmful ideas, the less of any other kinds of ideas it will attract. I don’t think there is an effective way to signpost that openness without losing the rest of their audience; nobody but scientific racists would go to a conference that signposted ‘it’s acceptable to be scientifically racist here’.
Anyway. It’s obviously their prerogative to host such a conference if they want. But it is equally up to EA to decide where to draw the line out of their own best interests. If that line isn’t an outright intolerance of scientific racism and eugenics, I don’t think EA will be able to draw in enough new members to survive.
Huw: To what extent is this EA versus rationality? Above you keep saying “EA needs to”, but these are ultimately rationalist conferences. For example, I’m not sure what more we can do to loudly signal Vassar isn’t EA. He’s banned from literally everything and has been for coming up to 10 years. I am pretty sure that extends to multiple people listed here. I am just not sure how public those decisions are, so I will stop listing, but (shoots karma into space) I wouldn’t go near half of these people with a 60-foot-long stick.
Steelmanning Huw’s comments, I interpret “it is equally up to EA to decide where to draw the line out of their own best interests” as speaking out against certain things and being very careful to not give the impression of accepting or tolerating them. Indicia exist that could cause a reasonable person to think that Manifest was somehow related to EA—it was promoted on this Forum, Manifold has received significant funding from an “EA” coded source (i.e., FTXFF), a number of prominent EAs were reportedly in attendance, etc. So one could reach a conclusion that EA needs to distance itself more sharply and firmly from this while recognizing that the conferences are not under EA control.
Firm separation sounds great to me. I just also want to voice something like: many people in the community have firmly taken strong stances, but regardless of what we say or do we get lumped in with a load of stuff that we’ve pretty clearly said we find abhorrent and have firmly exiled from our community spaces. (I’m thinking of the case of Vassar here, but also I’m pretty sure they don’t want to be in EA spaces anyway, because they aren’t EA!)
Also, to be clear, I have no idea what people mean by describing Manifest as a “rationalist” conference. It doesn’t advertise itself much to rationalists, and I, as the person who maybe next to Eliezer Yudkowsky has the most authority to declare something “rationalist” or not, would not apply that label to that conference, and also have no particularly meaningful control about who shows up to it.
I’m surprised you say you have “no idea what people mean.” The Manifest / Summer Camp / LessOnline trio made Manifest seem closer to “project the LessWrong team is deeply involved with” than “some organization is renting out our space.”
Among the things that gave me this impression were Raemon’s post “some thoughts on LessOnline” and the less.online website, both of which integrate content about Manifest without clear differentiation.
Now that I’m looking at these with a more careful eye, I can see that they all say Manifest is independently operated with its own organizers, etc. I can understand how from the inside, it would be obvious that Manifest was run by completely different people and had (I’m now presuming) little direct LessWrong involvement. I just think it should be apparent that this is less clear from the outside, and it wouldn’t be hard for someone to be confused on this point.
Granted, I didn’t go to any of these, I’ve just seen some stuff about them online, so discount this take appropriately. But my impression is that if a friend had asked me “hey, I heard about Manifest, is that a Rationalist thing?” I think “yes” would have been a less misleading answer than “no.”
Yeah, I think this is fair. I think using the language “no idea what people mean” in place of “I think these people are wrong and are capable of figuring out that they are wrong” (which is closer to what I meant) is a bad rhetorical move, and I shouldn’t have used it.
Sure, we definitely collaborated a bunch! But a core component of our contract and arrangement with Manifest, which they repeatedly emphasized, is that they find it very costly for us to limit their attendance (and this is of course a very reasonable request as a paying client of Lighthaven).
I mean, I don’t really see the paradox in this case. I am totally OK attending a conference with people who I strongly disagree with or who I think are generally being harmful for the world in a bunch of different ways (and I think this is also true for the vast majority of attendees at Manifest).
I feel like the solution to this paradox is to just have a conference for people who are fine with attending an event with that disagreement present, which Manifest is IMO pretty clearly signaling it is trying to be. People don’t have to attend, but where is the paradox?
The paradox IMO only applies if for some reason the different groups actually start competing or interfering with each other, but I had no run-ins with any of these people, and nobody seemed to do any kind of bullying or threaten violence or anything else that seems like it would actually pose a fundamental conflict here.
I again think it’s valuable and important to sign-post that this is the kind of event where people with strong disagreements and drastically different beliefs and moral perspectives will attend, but I feel like overall Manifund and Manifest have been pretty consistent in that messaging (though still not perfect, and I expect future years to have less of an issue here, since people can see how the previous years were).
The paradox is that openness to these kinds of speakers makes the conference much less attractive and acceptable to the large swathe of people interested in forecasting but not interested in engaging with racists. The conference does not need to literally bar this swathe from attending to effectively dissuade them from doing so. Consider how a similar selection of blatant homophobes would affect LGBT+ folks’ decision to attend.
(edit: some cool voting patterns happening huh)
If you cancel speakers from attending a future Manifest, won’t that also make the conference less attractive and acceptable to a large swathe of people interested in forecasting?
Consider the relative sizes of the groups, and their respective intellectual honesty and calibre. Manifest can be intellectually open, rigorous, and not deliberately platform racists—it really is possible. And to be clear, I’m not saying ban people who agree with XYZ speaker with racist ties—I’m saying don’t seek to deliberately invite those speakers. Manifest has already heard from them, do they really need annual updates?
It seems like you are referring to Richard Hanania—who has been invited twice. I suspect that he was invited because Hanania has been an outspoken advocate of prediction markets. I find it highly doubtful that Hanania has, on net, pushed more people away from Manifest (and prediction markets) than been a draw to them attending.
It’s not just a matter of a speaker’s net effect on attendance/interest. Alex Jones would probably draw lots of new people to a Manifest conference, but are they the types of people you want to be there? Who you choose to platform, especially at a small, young conference, will have a large effect on the makeup and culture of the related communities.
Additionally, given how toxic these views are in the wider culture, any association between them and prediction markets is likely to be bad for the long-term health of the prediction community.
In the left-wing EA culture. Most of these people have been widely published in magazines, featured on major TV programs, etc.
Fwiw, I think this is precisely why you don’t want to invite people solely on popularity. Jones is popular and charismatic but epistemically not someone I want to benefit.
The views of the race science guys on race or Hanson’s edgelording about rape strike me as far more tolerated in (West Coast) EA culture than I would guess they would be in mainstream US conservatism. (Despite EAs no doubt being anti-conservative in many other ways.)
There are other examples, and I do not share your doubt.
I am strongly in favor of having more forecasting conferences! I think having a more orthodox and professional forecasting conference could be great and I would love to host it. I agree there is somewhat of a limited resource in terms of conference-bandwidth here, but I think on the margin there is just space for multiple events with different priorities here.
I claim Manifest would do better by its own lights (i.e. openness to many ideas) if they were more accommodating to people who find racism distasteful than to those who find it acceptable. But also, as one of very few conferences on a niche topic, Manifest holds some responsibility to the current and future forecasting community. On a moral and intellectual basis, barring explicit racists seems much more reasonable than cultivating a home for racists. There is no necessary connection between forecasting and racism—it is a relationship contingent upon particular histories and internet groups. Manifest can decide to continue that relationship or disrupt it.
I feel sympathy for the “cultivating a home for racists” comparison, but like, my sense is just that Manifest invited anyone who wanted to come with any reasonably large following of any kind. I don’t think they were trying in any way to “cultivate a home for racists”.
I feel hesitant to put more complicated reputational burdens on conference organizers. It is already an enormously thankless job, and while I agree there is conference fatigue and so that means there are some commons to be allocated here, I think on the margin it’s more productive to encourage people to run their own conferences instead of putting more constraints on existing organizers.
I think one way you can read this situation is: racists are looking for an “intellectual home” in some sense, and since they don’t find one in most of the mainstream, they look for places that they can parasitically occupy and use for their own ends. The warning here is: the forecasting community need not only to avoid cultivating a home for racists, but also to proactively defend against racists cultivating a home for themselves. And if the forecasting community can’t build walls against this kind of parasitism, then the rationalist community needs to protect themselves from the forecasting community. And if they can’t do that, then EA needs to protect itself from the rationalist community.
The core of much of this thinking is that racists (and fascists, the alt-right generally) don’t play fair in the marketplace of ideas, and they will manipulate and exploit your welcome if you extend them one. I’m not sure how well I can defend this idea (might write a top-level comment about it if I can feel confident enough about it), but I think that’s often what people are getting at with these kinds of concerns.
Yeah, to be clear, I think this is a real dynamic (as Scott Alexander has I think cogently written about here [1]). I think in as much as this is the concern, I am pretty into thinking about the dynamics here, and strongly agree that defenses for this kind of stuff are important.
I also think similar things are true about people on the far left and a bunch of other social clusters with a history of trying to establish themselves in places with attack surface like this.
I think a reasonable thing would definitely be to see whether any specific subculture is growing at a very disproportionate rate in terms of attendance for events like Manifest, as well as to think about good ways of defending against this kind of takeover. My model of Manifest is probably not doing enough modeling about this kind of hostile subculture growth, though my guess is they’ll learn quickly as it becomes a more apparent problem.
I understand what you’re getting at, but would flag that all these categories are pretty coarse.
The “forecasting community” sounds similar to the “finance community”. In finance, there are tons of subcommunities. Chicago economists are nothing like wolf-of-Wall-Street salesmen.
Similarly, I don’t see there being a coherent “forecasting” community now. There’s a bunch of very different clusters of people.
Arguably this conference was more about the “Manifold community”, which is large and diverse in a similar way to the “Reddit community”.
Yeah, this is well put. Not sure I endorse, but equally, I don’t want forecasting to be a home for racists.
I’m not 100% sure I endorse either, to be fair. I’ve heard this story from many people and I think it’s a useful story to have in mind, but I don’t feel like I’ve seen enough concrete evidence and thought enough about alternative explanations to really vouch that it’s right.
Yes ‘cultivate’ is too strong—but the rate of speakers of this kind is way above what one would expect just from the happenstance crossover of interests. Like my guess here is that some subset of the organisers has significant interest in those communities and proactively seeks to add speakers from them. There are speakers/panellists whose connection to any of the Manifest topics are tenuous, and there are other fields with tenuous connections which are not drawn on much by Manifest—e.g. formal risk analysis, actuarial studies, safety engineering, geopolitics, statistics. All that to say there appears to be at least some predilection for edgelordism, above and beyond any coinciding of interests.
As far as I can tell, this isn’t true. My model of Austin, Saul, and Rachel is that they did indeed invite tons of people from different fields, and it happened to be that these people had developed an interest in prediction markets and wanted to come.
I guess I don’t super have a feeling of edgelordism, though I do see a pretty extreme commitment to openness. To be clear, I am not like “these people aren’t at all edgy for the sake of edgy”, but there are people for which I get that vibe much more. It feels much more like a deep commitment to something that happens to give rise to an intense openness to stuff here.
“You can do better at displaying openness by explicitly denying access to certain gerrymandered ideas” is certainly a take. Not necessarily a wrong one, but a fragile knife to walk, to mix metaphors.
One might distinguish de jure openness (“We let everyone in!”) from de facto openness (“We attract X subgroups, and repel Y!”). The homogeneity and narrowness of the recent conference might suggest the former approach has not been successful at achieving intellectual openness.
The homogeneity and narrowness of the conference might also suggest various limits and pipeline problems like narrowing from general population to “the demographics of blogging” to “the demographics of EA in general” to “the demographics of forecasting” to “the demographic willing and able to attend a conference about these things.”
Perhaps the tradeoff is worth it, to clearly and loudly dismiss a noticeable and direly hated minority. My prediction is doing so will not actually improve openness, but it could be an interesting experiment and at least it won’t generate stupid Grauniad hit pieces (a competing fluff piece, maybe). Anti-Manifest when and where?
Have you considered that the reason you don’t see a paradox here is because you are not one of the minorities targeted by the abhorrent views you and your organisation seek to platform?
I have considered it, though most of my staff (which I of course recognize is a biased sample, but it’s what I have) and a large fraction of my friends who I talked to about this are in a bunch of the obviously targeted demographics (most relevantly, many of them are Jewish), and I don’t think they feel differently. Indeed, there was literally a Shabbat service at the event.
I am pretty confident that is not the reason for my belief, and if I were Jewish or Black, which seem like the obvious demographics, I would not believe something different here, though it’s of course hard to know with such a substantial counterfactual.
As a side note: I would like to flag that a common meme in the HBD crowd seemed to be that Ashkenazim are the best and the most intelligent race, and that Jewish people were overrepresented among them.
Please only answer if you are comfortable.
When you say “scientific racists and eugenicists”, how often would you say you heard things like “some races are worse than others and don’t deserve respect” or “poor people shouldn’t have kids”, as opposed to “there are slight differences between racial groups” or “people should be able to select their children for intelligence”?
Because both sets of statements are technically racist and eugenicist but I think there is a pretty large gap between them. What exactly did you hear?
I feel like listing specific examples is pretty difficult without compromising anonymity, but at least I heard a range of takes between those more mellow examples you gave and a few times even beyond what your more incendiary examples were. There are incentives to leave more shocking views implicit.
To be fair, it is pretty difficult to tell to what degree some of the more extreme views were views the people actually held, and to what degree they were just attempts to be shocking, edgy, and contrarian (or funny). They might work as status signals as well—“I can say this outrageous thing out loud and nothing is going to happen”. If push came to shove, I doubt many of the people saying these things would say they actually subscribe to what those statements imply (I could be wrong about this, though).
It’s pretty damning of an event in my view if people are saying things beyond “some races are worse than others and don’t deserve respect.” (Or indeed, if they are literally saying just that.)
Many people in the rationalist community who are not themselves bigoted seem to really hate the idea that HBD people are covering up bad intentions with a veneer of mere interest in scientific questions about the genetics of intelligence, because they pattern-match it to accusations of “dog-whistling” on Twitter and correctly note that such accusations are epistemically dodgy, since they are so hard to disprove even in cases where they are false. (And also, the rationalists themselves, I think, often are interested in scientific racist ideas simply because they want to know whether scary taboo things are true.) But these rationalists should, in my view, remember that:
A) It IS possible for people to “hide their power level”, so to speak (https://knowyourmeme.com/memes/hide-your-power-level), and people on the far right (amongst others) do do that. (Unsurprisingly, as they have strong incentives to do so.) Part of the reason this sometimes works is that most people understand that accusations that someone is doing this are sometimes made frivolously, because they are so hard to disprove.
B) There are people who hate Black people (and in the context of US HBD it usually is about Black people, even if literal Nazis care more about antisemitism), and enjoy participating in groups that are hostile to them. (These people can easily be Jewish or Asian so “but they’re not actually a white supremacist” is not much of a defense here.)
C) For extremely obvious reasons, scientific racism is extremely attractive to people who genuinely hate Black people.
D) Scientific racism is extremely unpopular in the wider world of people who don’t hate Black people.
Together, A)–D) make it, I suspect, very easy to attract the kind of people who say things more extreme than “some races are worse than others and don’t deserve respect” if you signal openness to HBD/scientific racism by attracting speakers associated with it. They also mean that some (in my view, probably most, though I can’t prove that) scientists who believe in scientific racism but claim a lack of personal prejudice are simply lying about it, and actually are hostile to Black people.
Yeah, I agree it is pretty damning.
Thank you for clarifying. I would regard Nathan’s first pair of examples as racist and eugenic, but importantly not his second pair. My experience at Summer Camp and Manifest was that I did not hear anything like the first pair, or anything more extreme. (I did not attend LessOnline or the Curtis Yarvin party, so I cannot speak to what happened there.)

I think I understand why you did not include many concrete examples, but the accusation of racism without concrete examples mostly comes off as name-calling to me. The “HBD” label also comes off to me as name-calling, as I only ever hear it used by people attacking it, and they don’t ever seem to say much more than “racist” in their own definitions of it. I haven’t really seen people say “yes, I believe in HBD and here is what I mean by that”, but maybe I’m just not reading the right people. If you could point me at such a person, that might be useful.

But now it seems you are claiming to have heard significantly more extreme things than I did, and I’m curious why that is.
I’m sorry that happened. Ooof.
I’m kind of confused by this. I went to LessOnline and Manifest and feel like I hardly heard any racist opinions. It’s possible that such people don’t talk to me, or that I don’t consider racist some opinions that the poster does, but I dunno, I just didn’t hear much of that kind of edginess. It was probably slightly less edgy than I expected.
I have some sympathy with the poster. I didn’t like that Hanania was given top billing last year, and I pushed in the Discord for that to change (and wrote this). I have literally taken flak for not being harsh enough there, but I stand by what I said—that status is something to be careful about when doling out, and that Hanania didn’t deserve it. Not that he never would, but that he didn’t at the time.
To me it feels like those people who generate new ideas are pretty scattershot about it. Hanson has some great ideas and some pretty bad ones. But I think if he never felt comfortable saying a bad idea he might not say some really good ones too.
The question then is whether it is ethical to have events that involve people with bad ideas, and whether there are ways to minimise harms. I think yes to both. To me, the prediction market space is an unusually good option here—it can be free-speechy but try to give status to those who have good forecasting track records rather than just the edgiest people (which sometimes happens in heterodox spaces and I find very tiring). If I think Manifest is on the wrong side of this, I hope I’ll stop going.
But I do think that sometimes trying to see the truth may involve engaging with uncomfortable or bad ideas. People who come up with good ideas may come up with bad ones. Trying to invite truth seeking individuals may also leave a net broad enough for scoundrels.
I don’t think this is for everyone but I would like there to be a free speech space that isn’t the dissident right. Caplan is right to say that IQ realists are a scary bunch. I sometimes think the same is true of free thinkers. But I think they also sometimes give me ideas that are really valuable. I don’t recommend everyone go to such events, nor that such events be labelled for everyone, but I think it is probably good that they exist.
And the Guardian article was deeply lazy, and that kind of behaviour should be taxed, regardless of what it is about.
Agree, and my experience was also free of racism, although I only went to one session (my debate with Brian Chau) and otherwise had free-mingling conversations. It’s possible the racist people just didn’t gravitate to me.
I would never have debated Brian Chau for a podcast or video, because I don’t think it’s worth it / don’t want to platform his org and its views more broadly, but Manifest was a great space where people who are sympathetic to his views are actually open to hearing PauseAI’s case in response. I think conferences like that, with a strong emphasis on free speech and free exchange, are valuable.
I think it would be phenomenally shortsighted for EA to prioritize its relationship with rationalists over its relationship with EA-sympathetic folks who are put off by scientific racists, given that the latter include many of the policymakers, academics, and professional people most capable of actualizing EA ideas. Most of these people aren’t going to risk working/being associated with EA if EA is broadly seen as racist. Figuring out how to create a healthy (and publicly recognized) distance between EAs and rationalists seems much easier said than done, though.
I think you’re underestimating how important the Rationalist influence has been for putting EA on the right track. The clearest example is that it likely would have taken us much longer to start paying attention to AI without the rationalist influence. A large number of movements have been rendered ineffective when they’ve allowed their epistemics to be corrupted by politics.
A more minor point, but perhaps worth noting — rationalists also made the forum software we’re all using to have this discussion.
Exactly. EA is a political project, not a truth-seeking one. If EA is clear about that, it can better make the political alliances that are useful for its aims.
I think any other source of EA principles you can find will say the same thing.
I don’t think EA should be a political project at all. The value in EA is to be an intellectual space where weird ideas about how to improve the world can be explored. That is where it has excelled in the past and has the potential to excel even more in the future. When it comes time to do politics, that should be entirely outside the EA brand/umbrella. That should be done under cause-specific brands and umbrellas that can incorporate both the relevant components of EA and non-EAs who share the relevant policy goals.
I don’t want EA to be a political project over a truth-seeking one. What helps us know what politics we should enact?
Even in this question you put the political action as an end goal and the truth-seeking as only an instrumental one. This means truth-seeking is (and, in my view, really should be) secondary, and should sometimes give way to other priorities.
Huh, it’s a bit surprising to me that people disagree so strongly with this comment, which seems to be (uncharitably but not totally inaccurately) paraphrasing the parent, which has much more agreement.
(Maybe most people are taking it literally, rather than interpreting it as a snipe?)
I don’t agree with @Barry Cotter’s comment or think that it’s an accurate interpretation of my comment (but didn’t downvote).
I think EA is both a truth-seeking project and a good-doing project. These goals could theoretically be in tension, and I can envision hard cases where EAs would have to choose between them. Importantly, I don’t think that’s going on here, for much the same reasons as were articulated by @Ben Millwood in his thoughtful comment. In general, I don’t think the rationalists have a monopoly on truth-seeking, nor do I think their recent practices are conducive to it.
More speculatively, my sense is that epistemic norms within EA may—at least in some ways—now be better than those within rationalism for the following reason: I worry that some rationalists have been so alienated by wokeness (which many see as anathema to the project of truth-seeking) that they have leaned pretty hard into being controversial/edgy, as evidenced by them, e.g., platforming speakers who endorse scientific racism. Doing this has major epistemic downsides—for instance, a much broader swath of the population isn’t going to bother engaging with you if you do this—and I have seen limited evidence that rationalists take these downsides sufficiently seriously.
Your comment seems to be pretty straightforwardly advocating for optimizing for very traditional political considerations (appearance of respectability, relationships with particular interest groups, etc) by very traditional political means (disassociating with unfavorables). The more central this is to how “EA” operates, the more fair it is to call it a political project.
I agree that many rationalists have been alienated by wokeness/etc. I disagree that much of what’s being discussed today is well-explained by a reactionary leaning-in to edginess, and think that the explanation offered—that various people were invited on the basis of their engagement with concepts central to Manifest, or for specific panels not related to their less popular views—is sufficient to explain their presence.
With that said, I think Austin is not enormously representative of the rationalist community, and it’s pretty off-target to chalk this up as an epistemic win for the EA cultural scene over the rationalist cultural scene. Observe that it is here, on the EA forum, that a substantial fraction of commenters are calling for conference organizers to avoid inviting people for reasons that explicitly trade off against truth-seeking considerations. Notably, there are people who I wouldn’t have invited, if I were running this kind of event, specifically because I think they either have very bad epistemics or are habitual liars, such that it would be an epistemic disservice to other attendees to give those people any additional prominence.
I think that if relevant swathes of the population avoid engaging with e.g. prediction markets on the basis of the people invited to Manifest, this will be substantially an own-goal, where people with 2nd-order concerns (such as anticipated reputational risk) signal boost this and cause the very problem they’re worried about. (This is a contingent, empirical prediction, though unfortunately one that’s hard to test.) Separately, if someone avoided attending Manifest because they anticipated unpleasantness stemming from the presence of these attendees, they either had wildly miscalibrated expectations about what Manifest would be like, or (frankly) they might benefit from asking themselves what is different about attending Manifest vs. attending any other similarly large social event (nearly all of which have invited people with similarly unpalatable views), and whether they endorse letting the mere physical presence of people they could choose to pretend don’t exist stop them from going.
I have mostly observed people who don’t see the controversial speakers as a problem claim that excluding them would go against truth-seeking principles. People who’d prefer to not have them platformed at an event somewhat connected to EA don’t seem to think this is a trade off.
Anecdotally, a major reason I created this post was that the number of very edgy people was significantly higher than the baseline for non-EA large events. I can’t think of another event I have attended where people would’ve felt comfortable saying the stuff that was being said. I didn’t particularly seek out these types of interactions either.
The fact is that we have multiple people who would have been a positive contribution to the event, multiple people who have had similar experiences, and at least one person who said they would not have come or volunteered had they known that race science is a topic that would continue to come up (and I myself was on the fence about whether I’d come again, but I probably would, especially if some actions are taken to make things more comfortable for everyone). To be fair, at least one person has said that they did not see anything like this happening during the events, so it is unclear how many people were actually left upset by these things (Austin’s feedback form suggests not many).
Optimizing for X means optimizing against not-X. (Well, at the Pareto frontier, which we aren’t at, but it’s usually true for humans anyway.) You will generate two different lists of people for two different values of X. Ergo, there is a trade-off.
Note that these two sentences are saying very different things. The first one is about the percentage of attendees that have certain views, and I am pretty confident that it is false (except in a trivial sense, where people at non-EA events might have different “edgy” views). If you think that the percentage of the general population that holds views at least as backwards as “typical racism” is less than whatever it was at Manifest (where I would bet very large amounts of money the median attendee was much more egalitarian than average for their reference class)...
The second one is about what was said at the event, and so far I haven’t seen anyone describe an explicit instance of racism or bigotry by an attendee (invited speaker or not). There were no sessions about “race science”, so I am left at something of a loss to explain how that is a subject that could continue to come up, unless someone happened to accidentally wander into multiple ongoing conversations about the subject. Absent affirmative confirmation of such an event, my current belief is that much more innocuous things are being lumped in under a much more disparaging label.
>so alienated by wokeness (which many see as anathema to the project of truth-seeking)
Would you be willing to express any degree of agreement or disagreement?
Or, perhaps, a brief comment on whether certain epistemic approaches could be definitionally incompatible? That is, that what “woke” call truth-seeking is so different from what “rationalists” call truth-seeking (not taking a position here on which one is more correct, mind you) as to be totally separate domains of thought, EA’s version is somewhere in between, and that tension/confusion is contributing to these issues.
Hi, last organizer here, wanted to give my take.
Overall, I’m sympathetic to the point this post is making.
This is tricky because I think I could defend the choice to have any of the individual controversial speakers. Some of them, e.g. Simone and Malcolm Collins, simply do not hold racist views. Sure, they can be edgy and inflammatory — they act this way on the internet strategically as far as I can tell, and it’s not my style. But they’re not scientific racists. Embryo selection has nothing to do with race or reproductive coercion and oppression. Plus they are particularly generous, friendly, and engaging in person, which means they are particularly value-adding as attendees. Others, e.g. Brian Chau — I don’t like his style or his opinions about basically anything (though I admit I’ve hardly engaged with his stuff). I’ve seen him write about race and gender in a way I perceive to be unnecessarily inflammatory, and like, mean? And I think he’s wrong and doing a lot of harm with the AI stuff. But he came to do a debate with Holly Elmore about acceleration vs. pause. It was a very popular session, and I heard from an AI safety friend I respect a lot that he learned a lot about Brian’s views, which was useful!!
That said, in the end, the concentration of the edgy people was weirdly high, in a way that seems to have skewed your experience significantly. I’m sorry. As Saul and Austin have indicated in their comments, this was a thing we were concerned about, and though we took some action to correct it, perhaps we didn’t totally succeed.
I do not see this as a matter of banning certain ideas or people from Manifest. Openness and free speech are really important to me, and as Nathan said, it’s good to provide a space for this that isn’t the “dissident right.” Forecasting is a good candidate. Last year, Hanania came and remarked afterwards that his “mind had been opened” after talking to some trans women at the event. People meeting with others they strongly disagree with in person can be enormously valuable! There are lots of people on the guest list who I disagree with strongly about a variety of things — for example, I think Eliezer’s takes on baby and animal suffering are wrong in a super morally important way, but I’m still happy he came.
Instead, I think this is an issue of emphasis and balance. As Ozzie noted in his comment, there’s an unintentional spiraling effect: being open to a couple of edgy people early on means future invited edgy people feel like it’s more an event for them and are more likely to want to come, and that attracts more edgy attendees, etc. (and probably puts off the opposite kinds of people, but of course that’s less visible to us). So without trying to elevate their more extreme ideas or their styles, we end up doing so via some early light momentum and continued chillness. At times I thought “maybe we shouldn’t have so many of these people on our website, that might send the wrong message about what we’re about” — not everyone we gave a free ticket to was listed, and listing fewer could have prevented this from spiraling. But that also seemed potentially dishonest, like we were trying to hide that the controversial people were invited? So, idk.
I personally quite dislike contrarianism for its own sake. I prefer not to hang out with people who use language like “fag” and “retard”, and would not like to cultivate that vibe at events I run. My impression based on the Manifest feedback is that overwhelmingly, people were kind, activities were wholesome, and conversations were spectacular. But a couple responses, and now this post, have made me think there was a bit more edgelordism than would have been ideal. If Manifest happens again next year, I’d like to nudge it away from this.
You can see Saul’s and Austin’s comments about this as well, which are more detailed than mine, and the details of which I almost entirely agree with.
(Tbh I might not respond to replies here. For one, I find this kind of thing pretty stressful and aversive and have already spent too much time and energy on it. For two, I’m really pregnant and could have a baby to deal with any day now.)
meta
above all, thank you for writing this up. i recognize how difficult writing something like this might be, and i endorse the reflex to click “publish” anyway.
“I am releasing this post under a pseudonym, because I really don’t know how much talking about this topic with my real name and face might hurt my future interactions with the rationalist community. It might turn out to have zero effect, but I dunno maybe the Manifest people and Lightcone would kind of dislike me or something.”
i’m not sure exactly what you mean by “dislike;” i do think we disagree, but i definitely don’t think you’re an evil person or something. i would love to have you back at future manifests!
i was pretty disappointed at the quality of the journalism in the guardian hitpiece; see @Habryka ’s thread pointing out a number of factual errors. there were definitely some nuggets of fair criticism, which made me even more disappointed in the guardian piece. but that also makes me even more glad that you wrote up what is (IMO) a much more coherent criticism.
…however, i think that “error-riddled guardian hitpiece” is a pretty low bar to clear. i do still think that there are some worrying/bad parts of this piece, which i think @jacobjacob gets at quite well in his comment. i won’t get at them here, but i encourage readers to read jacob’s comment before mine.
i think that there is a lot here that i could write a lot about, but i’m not currently planning to write out all of my thoughts here. this is particularly because, in my experience, it’s vastly more productive to have these sorts of dialogues over a video call or face-to-face. so:
if folks would like to talk to me IRL, i usually bounce between SF, LA, and Boston. contact me when you’re in any of those cities and i’ll buy you a coffee.
alternatively, we can talk over videocall, which you can do here. i’ve already had videocalls of a similar nature with e.g. @Catherine Low and @ChanaMessinger (among others), and intend to do so with @David Thorstad (among others). i’d also politely urge those with whom i’ve already talked to reply to this comment about their experience of chatting with me, so as to calibratedly {en,dis}courage other folks {to talk,from talking} with me.
@Mananio, i’m particularly eager to chat with you; if you feel comfortable doxxing yourself to me, i’d be delighted to meet with you, either in-person or over videocall.
i wrote this in just a couple hours. which, for me, is “quite quickly.” on priors, it seems likely that there are at least a couple points in here that i’ll change my mind about later on.
i write this as one of the co-leads of manifest, though this doesn’t (necessarily) reflect the opinions of @Austin or @Rachel Weinberg. you can read austin’s thoughts here. [edit june19-2024: you can read rachel’s thoughts here.]
what is manifest about? what ought manifest be about?
although manifest is nominally about prediction markets, it’s also about all the ideas that folks who like prediction markets are also into — betting, philosophy, mechanism design, writing, etc. i’d recommend readers look through our special guest list and come to their own opinion about manifest; we had about sixty such special guests, and i think some aggregation of all of them probably amounts to a much more accurate read of the intellectual vibe at manifest than any selected subset of guests.
and i want to note that some edge is fine (and good!) — but it’s fine & good as a byproduct of a good event-building process, not as a goal at which i’d like to intentionally aim.
i don’t want manifest to be a conference for edgelords, and i don’t want manifest to be known as such. if it is, i’ve failed.
…but i don’t think i’ve failed! my guess is that most people can attend manifest and never interact with someone who they consider racist. the average response on the feedback form was a 9/10, and of the negative responses, the vast majority were about long lines for the bathrooms, not about racists. this was also true of qualitative reactions i heard during the event; @Nathan Young ’s comment gets into this really well.[1]
my guess is that, on the margin, i’d have liked to have a bunch more folks at manifest who’re sorta unrelated to discussions about race. some specific people i invited and who weren’t able to make it include andy matuschak, judea pearl, jason matheny, and many others. i don’t think we hit this balance perfectly, but i also don’t think we were off-base. i’ll touch on this more in a moment, but i wanted to make one thing really clear:
separate “attended” from “invited”
manifest is not an application-based or invite-only event. you buy a ticket, and you show up.
two exceptions to that general rule:
we sometimes subsidize particular people who we particularly want to attend by giving them a free or reduced-price ticket. for instance, i did this for tracing woodgrains (one of my favorite writers), madhu sriram (founder of fractal university), and keri warr (organized a 2 hour session of wrestling in the park, and gave a talk on anthropic’s internal prediction markets). in general, i endorse subsidizing things of which i want to see more, and this is a pretty straightforward application of that general rule.
(rarely,) we ban folks. when we do, it’s because we think they are or are likely to be in violation of our rules — mostly, these are folks who we think are likely to cause our attendees physical harm. this particular subpoint has been probably the single most difficult part of event organizing, and the part that i dislike most. it’s really draining, both on time and on energy. and it’s a totally thankless task that’s only noticed if you do it poorly.
we have a high bar for banning people from the event, and we also have a pretty high bar for giving people free tickets. the vast majority (~4/5?) of the attendees at manifest fell into the category of “bought a ticket, showed up.”
again: the vast majority of attendees simply bought a ticket and showed up.
i think that nonhuman animal suffering is an atrocious blight on humanity’s moral track record. but if the person who most strongly endorsed nonhuman animal suffering bought a ticket to manifest and showed up, i would’ve let them into the event — and for context on that statement, i’ve taken the pledge and donated ~all of my pledged funds thus far to various animal welfare organizations.
and this framework extends more broadly, to folks who hold views that you might consider abhorrent: e.g. we did not give curtis yarvin a free ticket to attend manifest, but if he had bought a ticket and shown up, i would’ve let him in. (however, yarvin didn’t buy a ticket, and didn’t attend.)
…but we’re also responsible for who buys tickets.
if we invite a bunch of edgy speakers, and then a bunch of edgelords buy tickets, we can’t reasonably claim that we’re not responsible for creating an edgy vibe.
i think that, on balance, we were like ~5% too edgy or something — but the way that i’d aim to correct this is by having the makeup of speakers more accurately represent my internal set of beliefs and interests (which happens to be like ~5% less edgy), and not by intentionally cutting our average edginess. anodynity is a really bad goal to aim for. you can see in one of our notes docs on april 22 that we explicitly wanted to invite more “warm/kind/gracious” people, and this was directly to have the speaker makeup more accurately reflect our interests.
like, c’mon — we had fifty-seven speakers! look through them, and evaluate for yourself whether the 8 that this article describes are an accurate representation of our speakers overall.
a few specific corrections
this is technically true, but a bit misleading. Lightcone owns & operates the venue (Lighthaven), so by a stretched interpretation of “host,” this is true of every event that occurs at Lighthaven. but more realistically:
the LessOnline team hosted all of LessOnline, including running operations & controlling finances
the LessOnline team controlled most of the finances for Summer Camp, but the Manifest team ran most of the operations
the Manifest team controlled both the finances and the operations of Manifest proper
and, more specifically:
the LessOnline team (and not the Manifest team) had ~full authority to kick folks out of LessOnline
any of the LessOnline or Manifest teams had ~full, independent authority to kick folks out of Summer Camp
the Manifest team (and not the LessOnline team) had ~full authority to kick folks out of Manifest
i can clarify further if you’d find it helpful, but this is the gist of the split.
yarvin didn’t attend, and although you clarified that later on, it looks like many folks in the comment section were confused by your phrasing. also, the afterparty that yarvin organized was hosted at yarvin’s house (not at the manifest venue), and was unaffiliated with manifest. i’d appreciate if you made those points clear in each of the portions of text in your article in which you reference yarvin or his party. (if you’d like, you can also make it clear that, based on my current knowledge of his behavior, had yarvin bought a ticket and shown up to manifest, i would have let him in; but that’s up to you.)
michael vassar did not attend any of the events; i further clarify in this thread. i think your phrasing is worded in a way that seems to imply that he did attend, and i’d appreciate if you edited your article to reflect that.
uh, so, my guess is that you mean something like “it’s bad to invite speakers who think the holocaust is {fake, good, etc}.” i agree with this take, but the way that you’ve currently phrased this is pretty ambiguous in a way that seems quite unhelpful. to take an obviously hyperbolic example, i myself have pretty strong opinions on the holocaust: my grandparents survived torture & starvation in various death camps, and my opinions are, roughly, “the holocaust was (strongly) bad.”
i’d like to understand your wording better, and i’d encourage you to edit your original wording to reflect what you actually mean as well as the thing that you’re actually critiquing. e.g., did such a speaker come to manifest? what was the view that they actually endorsed? what norm do you think that violates? etc.
independently, i’d also like to know if any special guests explicitly endorsed the holocaust as being good or fake — i’d probably be a lot less interested in giving them a free ticket next time.
to repeat:
i think that there is a lot here that i could write a lot about. in my experience, it’s vastly more productive to have these sorts of dialogues over a video call or face-to-face. so:
if folks would like to talk to me IRL, i usually bounce between SF, LA, and Boston. contact me when you’re in any of those cities and i’ll buy you a coffee :)
alternatively, we can talk over videocall, which you can do here.
if you’re actually interested in improving community dynamics, talking to me (or the other organizers) IRL or over video call is probably the most effective way to do so; and i’d actively encourage it.
i do think there’s a bit of a selection effect, where those most hurt by a racist vibe would probably have not come (or would have left early, etc). again, if this prevented great folks who would otherwise have attended the event from coming, i think i’ve failed them, and i’d seek to do better for the next event.
Thank you for this message, Saul Munn. It feels honest, and I do notice myself feeling relieved seeing that one of the main organisers isn’t angry at me or something. I’ll make every edit and clarification that you suggested. I would like to point out that I had said that I am unsure whether Vassar himself attended the events, but I have been told that he did buy a ticket. I believe you that he did not attend, but I will leave a mention about the ticket (unless of course I have been misinformed, but I do get it if you can’t divulge who specifically has or has not bought a ticket). [Edit: Saul on Vassar attendance]
I do regret using the holocaust example. The example was loosely based on one speaker who appeared to be defending eugenics by saying that the holocaust was actually considered a dysgenic event by top nazi officials. Many other examples I could have used would have narrowed my identity down quite a bit, as many of the sessions had just a handful of people in the audience. What I was going for was “even the good parts of a controversial idea are ruined if you have the wrong person talking about it”. I edited this in the text to make it more clear. Please take a look to see if it makes more sense now!
I appreciate your invite to grab a coffee or have a video call, but I’m afraid I’m already finding writing about this extremely stressful. If you have a couple of specific questions you’d like to ask me, feel free to send me a DM, but I can’t promise an answer.
That sounds like an obviously invalid argument! Now, a) I didn’t attend that talk, b) many people are bad at making arguments, and c) I’ve long suspected that poor reasoning especially is positively correlated with racism (and this is true even after typical range restriction). So it’s certainly possible that the argument they made was literally that bad.
But I think it’s more likely that you misunderstood their argument.
Yeah I have now heard two people say that this was more a historical quibble than some sort of discussion of eugenics in relation to the holocaust. I imagine I could find the actual argument if people thought it was load bearing.
I believe Vassar did buy a ticket to Summer Camp, but it was refunded as he wasn’t allowed into that event.
True, but he was allowed to attend Manifest, even though he didn’t end up coming in the end. Saul on Vassar attendance: https://forum.effectivealtruism.org/posts/MHenxzydsNgRzSMHY/my-experience-at-the-controversial-manifest-2024?commentId=vWLJo6GQ5sFbbbxch
If it were my event, i would like it to be free speechy without giving status mainly for edginess. So attendees can do what they like, but i’d prefer guests were people with good forecasting track records or something interesting to say—people who we want to grow up to be, in some sense. I think this would remove a lot of the incentive for edgy people to come, which i’m fine with.
The post complains about “scientific racists” at the conference, with there being a minimum of eight:
We can debate whether it’s closer to eight or closer to twelve but let’s take eight as the conservative estimate. You say:
And:
So 8 out of 60 means that 13.333% of the speakers were “scientific racists”, and if we decrease that by 5 percentage points we end up with five “scientific racists”. So is this correct? Will you invite five “scientific racists” as speakers next time?
I’m a non-binary person with a disability—and Manifund inviting iconoclastic thinkers made me feel like this is a safer community.
Because more than my gender or privilege, I identify as somebody who thinks outside the box.
I love the EA and rationalist communities for their willingness to seek truth and altruism, regardless of how socially acceptable it is. I feel like I can talk to people and really get to the heart of an issue without having to self-censor due to politics.
I am happy that Manifund is doing what they think is right and true. I am happy that they do not ostracize and deplatform people if they believe in controversial things outside of the Overton window.
I hope they keep doing what they’re doing.
I’m not particularly happy to see people within this community immediately present and accept the framing that Manifest was controversial because people reacted harshly to an article explicitly aimed at smearing a community I belong to with reckless disregard for truth and bizarrely sinister framing of mundane decisions, written by people who proceeded simply by reading a guest list without even bothering to attend the event they were writing about. In that regard, Manifest is only controversial in the same sense Scott Alexander was controversial when the New York Times wrote about him.
To name something is often to make it so; to lead with the framing that Manifest was controversial is to encourage other people to see it that way, yielding to the frame of people who treat EA itself as controversial. That has an impact on everyone who attends, organizes, and puts effort into it. I recognize that your own experience was mixed and have no problem with you sharing that and exploring it, but I think it’s worth being cautious about frame-setting in the title in that way, particularly given its potential impact on early-career organizers or guests.
I was excited and honored to be invited to Manifest. It’s the first conference that went out of its way to invite me as a special guest, more-or-less the first place I spoke openly under my own name, and a place that gave me the opportunity to meet and speak with people I have read and admired for years. It was an extraordinarily valuable experience for me, one where I seized the opportunity to give a light-hearted presentation on a niche topic, chat with and learn from many of my role models, and generally enjoy meeting people in person who I have only had the chance to interact with online.
I am extremely confident that an article aimed not at attacking the conference but at presenting an even-handed, cohesive picture of the experience as a whole would read very differently to the Guardian article and would include many more stories like my own and like the descriptions provided by other attendees.
Were some guests controversial? Yes, though I am happy to defend their inclusion on the merits. Does the presence of a few controversial guests or a harsh reaction against an article aimed at creating a mess merit framing the conference as a whole as controversial? I’m not at all convinced that’s fair, accurate, or helpful.
My distaste for Manifest and a subset of its speakers preceded and is not based on the article in any way. I would guess others are similar.
I believe you, but “Ben Stewart dislikes it” is not typically the standard for declaring a conference controversial. Was there public controversy around the conference not connected to the article?
There was a controversy about whether or not Hanania should be included last year (he was), mostly within connected communities.
Sure—but a debate over one guest last year says little about the reception of the conference as a whole. The New York Times paid much more attention to the presence of an orgy and Aella than to Hanania, glossing him over in a single line.
Most conferences—not least EAG—wind up with debates like that on the margins.
My claim is that the article did not drive the objections and disagreements at hand, and is instead a contingent trigger of discussion. The voluminous and intense debate in this and other threads is indicative that stridently opposing views on this have been in place for some time. So ‘controversial’ within EA seems not driven by the article. Public controversy may indeed be driven by the article—I doubt the public had any knowledge of or interest in Manifest (and I don’t know if this article has had any traction outside of these circles). The exception may be this New York Times article, which did not focus on any controversy and had a very mild note about Hanania: “Richard Hanania, the conservative commentator, signed copies of his book on wokeness.”
So I think I implied something I didn’t intend here. I thought you were saying that the article’s slant drove the current criticism—that I strongly deny. But if you were saying that the article’s slant drove public controversy (if there is some), then I agree that my personal beliefs don’t matter much. I agree with the comments in the rest of the thread that focusing on ‘public controversy’ isn’t capturing the substance of the critique—i.e. that Manifest’s decisions are and have been bad not in terms of PR, but bad for its own epistemics, the forecasting community, EA, and basic human decency.
“Basic human decency”? Jeez, mate. I understand not wanting to engage with right-wingers personally, but treating it as a deep affront when others choose to do so is off-putting, to say the least.
My comment was in response to OP’s explicit note that the controversy around the Guardian article is what made him change the title.
Yeah, that was a bit strong, sorry—it’s late here. I’m conflating reacting to Hanania et al. vs. reacting to Manifest, which I shouldn’t do. Thanks for pointing to the note—what do you think of the ‘controversy’ being ‘in EA’ vs. ‘in public’?
I meant “public” in a broad sense of examining reactions to the conference, inclusive of “public within EA.” I agree that many disputes tend to lurk beneath the surface, but not that there was any discussion sufficient to justify the title prior to OP encouraging it. In the same way that I imagine you wouldn’t be thrilled with a label of “Ben Stewart, who works for the controversial Open Philanthropy” or “Ben Stewart, adherent to the controversial philosophy effective altruism”—even though both OpenPhil and EA have plenty of controversies that bubble up here and there—I think it’s better to raise this sort of discussion around Manifest without proactively centering controversy as its most salient feature.
Ah okay, I understand better now, thanks. There could be better examples given OP and EA have legitimate controversy, such that I wouldn’t find that phrasing objectionable, but I take your point
I don’t feel super strongly about the title, and would be happy to change it. What would you suggest as an alternative title?
The suggestion of “My experience at Manifest 2024” seems like a maximally neutral one, if information-light. “Issues with controversial guests at Manifest 2024”, perhaps, if you want to be more direct.
I’m quite leftwing by manifest standards. I’m probably extremely pro-woke even by EA standards. I had a great time at less-online/summer-camp/manifest. I honestly tried to avoid politics. Unlike many people I don’t actually like arguing. I’d prefer to collaborate and learn from other people. (Though I feel somewhat ‘responsible for’ and ‘invested in’ EA and so I find it hard not to argue about that particular topic). I mostly tried to talk to people about finance, health and prediction markets. Was honestly super fun and easy. People didn’t force me to discuss politics.
Though I must say it was probably a mistake to bring my girlfriend to manifest. I think she got freaked out. Probably wasn’t good for our relationship.
Thanks for the report; I’m glad that you had a good time! And I appreciate that you brought your girlfriend, and sorry that our event didn’t sit well with her—I think that’s a bad sign, and want to figure out how to structure Manifest so that people like her also enjoy it.
I want to be in a movement or community where people hold their heads up, say what they think is true, speak and listen freely, and bother to act on principles worth defending / to attend to aspects of reputation they actually care about, but not to worry about PR as such.
It’s somehow hard for me to read the OP and the comments below it without feeling like I should cower in fear and try to avoid social attack. I hope we don’t anyhow. (TBF, lots of the comments actively make this better, and I appreciate that!)
(Alternately put: a culture of truthseeking seems really important if we want to do actual good, and not just think we’re doing good or gain careers by being associated with the idea of do-gooding or something. I find it actively difficult to remember I wish to live by truth-seeking principles/culture while reading these threads somehow. I want a counterweight to make it easier.)
Fwiw, a big reason for posting under a throwaway is the fear of a social attack. I don’t want to make enemies out of people, especially if some of them I have only briefly interacted with. Some people are bound to have had a different experience from mine, and will be tempted to discard what I write about out of hand. Some can’t relate to my experience due to mere chance with regards to who they’ve interacted with during these events, and some might not be able to relate due to being very generous (or perhaps even naive) in their interpretation of others.
I do not think I am encouraging changing things just for PR reasons. I think moving towards becoming a more edgy or racist community is not only instrumentally bad, but also bad in absolute terms. I am not sure if what you are trying to say is “if people think [insert an HBD belief here] is true, they should feel comfortable saying so in this community”, and I hope I am not misrepresenting you by responding this way—let me know if you’d like me to delete this last paragraph and I’d be happy to do so.
It’s concerning and requires clarification that people in this forum are so quick to link racism to truthseekingness. What is the basis for this connection? Have you evaluated racist assertions and found them to be truthful?
One might argue that a lack of censorship, even of false ideas, is crucial for a community committed to truth-seeking, and that people should be free to express themselves without social consequences. However, it’s likely you would set limits. Imagine someone was inexplicably fixated on asserting that your children specifically were genetically inferior and should be deported, posting this repeatedly on Twitter/X. What benefit would the community gain by including them in events?
Really agree with this take. Ultimately, I get the impression that there seems to be a growing divide in EA between people who prioritize more truthseeking and those who prioritize better PR and kindness. And these are complex topics with difficult trade-offs that each has to navigate and establish on a personal basis.
As elsewhere, more edgy does not equal more truth-seeking. By favouring a more homogenous and more exclusionary conference, Manifest closes off important sources of ideas. And based on the intellectual calibre of the edgier speakers, they are not thereby gaining a compensatory stream of ideas. It is not a dichotomy between truth seeking and kindness—both are achievable.
I agree with that, and that our goal should be to achieve both, but reality being what it is, there are going to be times when truth-seeking and kindness confront each other, and one has to make a trade-off. Ultimately, I choose truth-seeking in case of conflict, even weighing in the negative effects it can generate. But to each his own.
I wouldn’t frame it as prioritizing truth vs kindness.
I don’t see it as a kindness to shun people based on who they hang out with, to try to control what EAs can listen to or who they can talk to, or to encourage people to avoid controversial ideas/people.
I think this hurts truth but it also hurts people.
I am not being precise with language, but what I meant was something like sometimes you know that stating some truths, or merely accepting the possibility of some things being true and being willing to explore them and publicize them no matter the consequences might have negative consequences, like being hurtful and/or offending to people, frequently for good, pragmatic and historical reasons. Prioritizing not to harm would feel like a perfectly valid, utilitarian consideration, even if I disagree with it trumping all others. In Haidt’s moral framework terms, one can prioritize Care/Harm versus Liberty/Oppression. Myself, I have a deontological, quasi-religious belief in truth and truth-seeking as an end in itself.
This is deeply antithetical to EA culture and norms.
Applying RCTs to global poverty was outside the overton window and considered very controversial early on.
Same with AI safety.
We should have a community where people are free to associate with who they want, explore out-there ideas, and pursue truth and altruism instead of politics.
Yarvin, in particular, stands out to me as someone who only has relevant notability as a famous openly anti-democratic far-right figure with explicitly far-right “establish a dictatorship” political goals. If he was not merely there as an attendee, but invited by the organisers, I think EA orgs should cut any ties with Manifold frankly.
Also, the use of slurs as described should be an absolutely easy case for expulsion apart from anything else. It doesn’t have even the slightest fig-leaf of being about taboo empirical claims. If Manifold won’t take even that seriously it’s hard to believe they don’t hold pretty hard right, bigoted opinions as an org. (A shame if so, as I like Manifold itself.) Though I guess it is possible the organisers were not aware of this behaviour.
Thanks for your comment, David! Just to clarify: Yarvin wasn’t there as an attendee or as an invitee, but both his personal assistant and many people who seemed to have close ties to him were attending the events. Many attendees also took part in the afterparty organized by Yarvin at his house. I could have been clearer about this in my text.
Just to clarify for my own benefit:
Yarvin wasn’t there as an attendee or as an invitee.
Michael Vassar wasn’t there as an attendee or as an invitee, and you have no evidence that he even attempted to purchase a ticket.
No one holding the wrong views on the historicity of the Holocaust was invited to give a talk, and you have no evidence that Austin or Oliver ever considered extending an invitation to such a person.
Of these points, there was only legitimate confusion over the Vassar one. Omitting these points would not only make the post epistemologically cleaner and more concise, but also save you the trouble of posting corrections down in the comments.
You will forgive me if this colors the conclusions I draw from your post.
I don’t know exactly how it would work, but we need to get better as a community at excluding extreme right authoritarian people from spaces associated with us. It’s bad stuff on its merits, it’s uncomfortable for many EAs who are not straight white men (not all of them, necessarily: obviously more than literally zero very right-wing people of colour are fine with this stuff, and some are in EA), and it makes me and I suspect other people nervous about publicly identifying with EA. (I don’t think we should play down what we believe to be popular, but I do think we should reject/eject people for believing stuff that is both wrong and bigoted and reputationally toxic.)
I would slightly separate “pro-eugenics” stuff from the white nationalism, because “eugenics” alas covers a wide variety of importantly different things. Certain ideas associated with “liberal eugenics” are quite mainstream in parts of analytic philosophy, whereas writing about the United States becoming too genetically Mexican is thankfully mainstream nowhere reputable. See for example: https://plato.stanford.edu/entries/eugenics/#ArguForLibeEuge One of the “liberal eugenicists” discussed there, Julian Savulescu, held a titled chair in practical ethics at Oxford. (In fairness, finding Savulescu totally beyond the pale and being outraged by this is likely also a mainstream opinion in analytic philosophy.) For this reason, I think it is much harder to have a policy of “throw out eugenicists” than it is to have one of “throw out racists”, whatever you think of the substantive merits of the “liberal eugenics” position.
As it happens, while I am very queasy about the things the liberal eugenicists say, I think there are probably (realistic, not necessarily actual) circumstances in which I would support genetic enhancement, including for “non-medical” purposes: i.e. in circumstance where I judged the risk of abuse, coercion and increased racism associated historically with this sort of thing was low enough to be outweighed by the benefits. By the standard official definitions any genetic enhancement (actually even in clearly medical contexts) is eugenics.
In my view, the fact that a) some “liberal” eugenic opinions have relatively decent mainstream clout in academic ethics, and b) substantively, it is genuinely not obvious that *every* view that can be classed as “eugenics” under any reasonable definition of the latter, taken literally, is a bad view, makes a blanket “exclude eugenicists” policy hard to get right. (Though if someone is happy to call themselves a “eugenicist”, I think that is usually a bad sign about their politics/worldview, even if they apparently only profess mild eugenic opinions.)
On the other hand, you absolutely can just boot everyone who, like Razib Khan, has written for a white nationalist website (and doesn’t seem sufficiently repentant: Hanania’s repentance is insufficient when combined with his tendency to still do things like calling black lawyers he doesn’t like (“these people”) “animals” on twitter). It’s easy: just ban them from your events, don’t hire them at your orgs. Will there be borderline cases of “white nationalist website”? Sure. But that is true for any realistic rule of the form “exclude people who think X from your movement”, and almost everyone will agree that some such rules are ok. (E.g. exclude open Hitler fans.) And it’s also true for most rules about other things too. Almost any rule about social/political stuff requires some skill at judgment to apply and admits borderline cases. (Though writing for borderline white nationalist websites seems bad also.)
None of this is to say I think that everything in EA around eugenics is all fine, by the way. I think support for even liberal eugenics is often (not always) a tell that someone has dodgy political views across the board. I suspect (though cannot prove) that people sometimes present as believing only in non-coercive stuff in this area, when they actually support coercion. Other times, they are good at presenting a general feel of “this is sensible, careful, mainstream stuff, not far-right”, whilst not explicitly ruling out support for (even) extensive coercion. I.e. this forum post, which has a lot of “pro-freedom” vibes, but read carefully seems to explicitly disavow only forced sterilization and murder as coercive interventions, whilst hinting at favoring jailing people for having children with high chance of disease/disability: https://forum.effectivealtruism.org/posts/PTCw5CJT7cE6Kx9ZR/most-people-endorse-some-form-of-eugenics (I also seem to recall the author hinting at more extreme views on twitter later, though I haven’t checked.)
“Eugenics” is the worst word. (Is there any other word in the English language where the connotations diverge so wildly from the literal denotation?) “Liberal eugenics” is effectively a scissor-statement to generate utterly unnecessary conflict between low and high decouplers. Imagine if the literal definition of “rape” didn’t actually include anything about coercion or lack of consent, and then a bunch of sex-positive philosophers described themselves as being in favor of “consensual rape” instead of picking a less inflammatory way of describing being sex-positive. That’s eugenics discourse today.
ETA: my point being that it would seem most helpful (both for clear thinking and for avoiding unnecessary conflict) for people to use more precise language when discussing technologically-aided reproductive freedom and technologically-aided reproductive coercion. The two are opposites; they are not the same just because both involve technology and goal-directedness in relation to reproduction!
So you want EA orgs to use their clout to try and push for EA-associated spaces to not allow people that you and some amount of EAs don’t like?
I don’t want this. CEA can choose who it wants to invite to EAGs (and I think manages to block out extreme right authoritarians pretty well). Other orgs can invite who they want.
I find this desire for control over other people and spaces bad. I predict it has a chilling effect on ideas. A big chunk of thinking about AI (for better or worse) came from people who at times make me, and I’d guess you, uncomfortable. Would you have endorsed not engaging with these people 10 years ago?
Also I just really don’t think there were many authoritarian right wingers at these events. Feels like the poster and I went to completely different events?
I think the key words in the text you quoted are “spaces associated with us”:
If it’s an EA space, then it isn’t really an “other” space.
If it’s a non-EA space that is somehow being coded by others as an EA space, then it’s reasonable for EA to distance itself from that space and to expect the other organization to make its non-EA nature quite clear.
Imagine there was someone with the same name as me writing vile nonsense on the internet, and others were misattributing it to me and making my life difficult. I would desire a measure of control over that situation, but it wouldn’t be to silence speech I find distasteful. It would be to protect my own valid interest in not being associated with that speech.
Hmmmm maybe. But what does distancing mean? Does it mean “saying we aren’t rationalists”? That option has always been available to EAs who aren’t. Does it mean “never booking events at lighthaven”? That seems pretty silencing.
This is pretty concerning to me (as someone who didn’t attend Manifest but very well might have under other circumstances). I knew Hanania had been at Manifest before and would perhaps be there again, but didn’t realize the event overall had this level of “race science” presence? I hope the Manifest organizers take action to change that for future events, and in general have thought some of Manifold’s recent decisions seemed rather too “edgy”/sensationalist/attention-seeking (not sure of the right word here...) for my taste.
However, this post also rubs me the wrong way a bit because it seems to conflate a bunch of things that I don’t think are appropriate.
To name some quick examples that I don’t intend to get into in detail:
I think the Guardian article was seriously flawed well beyond any issues with Manifest
I think Vassar’s group has been broadly separate from the rationalists for many years now
I don’t much agree with your characterization of rationalists vs. EAs
The other thing that really gets me about this post, though, is your conclusion:
I think conflating Republicans, the “Thielosphere” and (implicitly) these “scientific racists” is really bizarre and extreme.
My understanding is that surveys of EA and adjacent communities generally indicate that EA has a very major political skew towards liberal/progressive beliefs. [1] I consider this a serious weakness and potential failure mode for the movement—if we end up becoming just another political thing it could really curtail EA’s potential. The idea of “Republicans” being conflated with this more extreme stuff strikes me as a bad sign.
Quite frankly if someone told me that an EA Global next year was half Republicans/conservative-leaning people, I would consider that likely a major success in terms of diversification and avoiding political skew, and it would significantly increase my optimism for EA as a whole. It seems bizarre to use that sort of thing (admittedly with a far less central event than EA Global) as a “failure condition” here. (And I’m not even a Republican!)
[1] See for instance this post, which found >75% of EAs identified as center-left or left and less than 3% identified as center-right or right! I believe SSC/ACX community surveys also tend to show a strongly left-leaning readership, though with a less dramatic slant.
Thanks Davis Kingsley! I edited my post to include a mention that The Guardian article is flawed, and that Vassar has been more or less excommunicated (I had already replaced the Vassar mention with a link to Saul expanding on Vassar’s attendance).
I guess I am happy to hear that my vibes on rationalists vs. EAs don’t ring true to you—I hope you are right in this regard.
I changed the Republicans part into strikethrough, since multiple people have objected to it now, but left the Thielosphere mention as Thiel is tied to Yarvin, who is tied to race stuff. Thiel does a lot of stuff, and what Samo Burja (who, if I understand correctly, is at least partially funded by Thiel) does, for example, doesn’t appear to be very objectionable to me, but overall Thiel does seem like a character whose values are incompatible with EA.
Manifest doesn’t really register as an EA event to me, and the number of people who I might categorise as EA was maybe 25% of the attendance pool, so I am not sure how representative these surveys are of the attendance pool of the event. I have not attended many rationalist events in the Bay, so I can’t speak to that either. I suspect there is a major skew towards liberal/progressive beliefs, but with a non-insignificant reactionary minority.
Also, and pardon me if I am mistaken, I am relatively sure (80%+) that I did see you at the venue during Summer Camp or less.online? The things I am talking about were present during those two as well.
I don’t think Davis was at Summer Camp or LessOnline. I would have said hello to him, and also I can’t find anyone on the ticket list for any of the events with the name “Davis”. (Edit: Oops, I was just looking at the Manifest guest list. He sure was at LessOnline. Sad to have missed him!)
Thanks for the edits!
I indeed attended LessOnline for a day, but not Summer Camp or Manifest; while there I didn’t notice the “race science” angle you mention but I was only there for a day and spent a bunch of that time presenting classes/sessions on rationality stuff and then talking to people afterwards, so you probably have a broader sense of what was present during “the events as a whole” than I do.
Depends how you define “strong” for ACX. I think the median was 4.something on a 1 most left to 10 most right scale: https://docs.google.com/forms/d/e/1FAIpQLScHznuYU9nWqDyNvZ8fQySdWHk5rrj2IdEDMgarf3s34bSPrA/viewanalytics But yes, I’d say ACX has a long history of too much tolerance of the far-right, but most readers are not far-right themselves. (The comments section is generally more right-wing than the lurkers I think.)
I think the current political situation in the US is somewhat problematic in the context of inclusion/exclusion, because on the one hand, nearly half of Americans with a party affiliation are Republicans and that MUST include many decent people who would bring good things to the movement, but on the other hand I also do think that the mainstream Republican party, so long as its leading figure is Trump, will remain an anti-democratic menace, as demonstrated by Trump’s behavior around the last election. (Something I think Scott Alexander himself actually agrees with as far as I can tell, ironically.) For Thiel specifically, he is fairly strongly associated with Yarvin as far as I remember, who is clearly a fascist. I am therefore generally against attracting Thiel fans. There are probably some exceptions though: libertarians who admire Thiel for other reasons and are just in denial about how fash-y his views are. Tyler Cowen, who seems ok to me, is probably in that category.
I respectfully disagree with this post. I think one of the best aspects of rat/EA/adjacent spaces like Manifest is that they’re willing to have a broader Overton window than the rest of society, and that this is fragile and should be protected. To encourage Manifest to no longer host “controversial” speakers (more controversial from the perspective of mainstream media than from the perspective of EA) is to defeat the purpose of one of the key norms of rationalism: truth-seeking over prevailing social norms.
Was Vassar a speaker or just an attendee?
In addition to the cult stuff you mentioned, when the TIME article on sexual harassment in rationalist communities came out, many responses to the article claimed Vassar had been accused of multiple instances of sexual harassment or assault and banned from multiple communities. I got the impression he was no longer around, and am disturbed that he would be allowed in such a conference.
Edit: see the edit in the OP; Vassar did not actually attend, but apparently he could have if he wanted to. I would advise everyone to not let this guy attend your conferences.
neither; he did not attend manifest.
edit: see jonas’ important question below, and my response. i think they both provide pretty important context.
Would he have been allowed to attend if he wanted to? (I think you really need to have a process to filter out people like him.)
good question, jonas; thanks for asking it!
he bought a ticket to summer camp; we refunded it ~immediately and uninvited him from the event.
after that, and before (or possibly during?) manifest, i made the decision to, conditional on his having bought a ticket, allow him to participate in the event. a few clarifications:
i made this decision, not the others on the manifest team; i bear sole responsibility/deserve blame-in-expectation if it was wrong.[1]
i made this decision under quite a lot of stress/pressure, after agonizing about it for a couple days, and with way less information than i would’ve liked; i basically didn’t know who vassar was prior to summer camp, and had like an hour or two in total to do research/talk to people/learn about his behavior/etc.
i’m quite unsure if this is a decision i reflectively endorse, and if you have information that might sway my decision about his attending future events in either direction, i’d love to hear it — especially now that i actually have the time/attention to look into it, rather than being amidst a 600-person event i’m running. feel free to reach out to me privately, if you’d prefer.
ultimately, vassar did not buy a ticket to manifest.
or, like, conditional on my decision having been a mistake, the team bears responsibility for setting up systems such that i was enabled to make this decision. but i’d disagree with that (i think that our systems for deciding who to uninvite were pretty sound, generally), and i think i deserve all of the blame to the extent that there is blame deserved.
I’m confused how come he was first uninvited, and then you later decided to allow him to participate? Did he buy a summer camp ticket, get uninvited, and then you decided to make up your mind for what would happen if he also were to buy a Manifest ticket?
He was 86’ed from Summer Camp; later on, staff had a discussion of what they would do if, hypothetically, he bought a ticket to Manifest. Saul had final say and decided Vassar would not be banned. In the end, the hypothetical was never tested, as Vassar did not attempt to purchase a Manifest ticket.
I read Saul’s comment to be discussing two different events. 1 event he was uninvited to, the other he would have been able to attend if he would have so wished.
can you say more about how you approached this decision and what seemed like the key considerations for you? I’m interested in whether you were primarily approaching it as something about his views, or about his interpersonal behaviour and alleged abuse, and given your decision to allow him, whether it was “I don’t think the allegations of misbehaviour are credible enough to act on”, or “even if the allegations were true, they wouldn’t constitute reason to exclude him”, or some third thing.
I appreciate this is probably a stressful request, and I don’t necessarily think I’m entitled to the answers, but it’s something I think about a lot so I’m really interested in hearing how people are approaching it.
Note that at this point we only have indirect word that he bought a ticket. Also note that anyone can buy a ticket, and if his ticket was cancelled by Manifold (which is probably the thing you want), we would not hear about that directly. Of course, information can emerge that he actually did attend.
Here is Saul on Vassar attendance: https://forum.effectivealtruism.org/posts/MHenxzydsNgRzSMHY/my-experience-at-the-controversial-manifest-2024?commentId=vWLJo6GQ5sFbbbxch
Tl;dr: Vassar bought a ticket to Summer Camp, got uninvited to it, then the decision was reversed to allow him to participate in Manifest, and he ultimately didn’t end up participating. Editing the main post to reflect this.
Out of hundreds of guests, there were a handful of people who might hold views you disagree with, who were allowed to come to a conference which was not about their controversial views?
As someone said on Twitter, “They allowed Republicans to attend? The horror!”
It is my strong belief that each of us has something to learn even from these people with controversial views.
The Guardian article cited Jonathan Anomaly’s “liberal, non-coercive eugenics”. Is that so bad that we can never discuss it? We have to shun such people and destroy their careers so that the silencing effect is so large that no one dares to ever bring it up again?
That’s pure authoritarian censorship. I truly believe you guys should all be ashamed of yourselves. This is not how we build the best future.
The wisest among us know to reserve judgment and engage intellectually even with ideas we don’t believe in. Have some humility—you might not be right about everything!
I think EA is getting worse precisely because it is more normie and not accepting of true intellectual diversity.
Lastly, you might not know this, but there’s broad agreement among the smartest people that HBD is true. I’ve been in the room before. The data is very obvious if you undertake research yourself—it is just taboo. Many decide that the world is better off hiding this knowledge, but I think there is a middle ground.
Just to clarify, I would be extremely surprised if any organizer would have asked anyone to give a presentation on any holocaust stuff, that would seem extremely non-characteristic of everyone involved (I mean, maybe in a way that would allow someone else to challenge it live, but not in a way that would be an encouraged one-sided presentation).
I also don’t remember anything about this on the schedule, but all three events were unconferences where anyone could add anything they wanted to the schedule, and there were over 200 sessions, so I can’t rule that out (but like, anyone could buy a ticket and add anything they wanted to the schedule).
Thanks Habryka, I changed the wording a bit to make it clearer that I was not talking about someone literally holding a presentation on the holocaust.
Even though I used this hyperbolic example to illustrate that it does matter whom you choose to present your ideas, it does have some basis in what happened at the conference, as one of the presenters spent some time during a Q&A clarifying that the holocaust was actually considered a dysgenic event by top nazi officials (and thus appealing to the crimes of the nazis against Jewish people is not a valid counter-argument against eugenics). Apart from this, the example is not really based on anyone specific in particular, and is used for illustrative purposes only.
I was at that session. My memory is that the presenter was very clear that the nazis killed other groups for eugenic reasons, and the jews for dysgenic reasons, both of which are generally regarded as part of the holocaust. The distinction was a bit of history nerding, not an attempt to minimize the nazis crimes, or to deny that the nazis were eugenicists.
If that is the case then the post seems shockingly disingenuous, even within the category of ‘denounce people for tolerating controversial people’ posts. It really seems like the OP was trying to let readers assume that the speakers’ strong opinions in question were pro-holocaust or pro-holocaust denial, especially given the post was also calling them racist. If those strong opinions actually included opposition to the genocide … well, what would the OP prefer? Speakers with mixed and equivocal views on the holocaust?
I was attempting to use a hyperbolic example that is loosely based on reality to illustrate that even the good parts of a controversial idea can be poisoned by the wrong speaker. Please take a look at the main text and see if it reads better to you now.
For what it is worth, I do feel like the dysgenics comment was in extremely bad taste, and was clearly used as a defence of eugenics. Doing a bit of history nerding in this context was a monumentally bad move.
The person doing the talk most definitely isn’t pro-holocaust or a holocaust denier, and if this is what people feel like I’ve tried to say then I have failed to make my point.
I think many people are overestimating the reputational risks here.
Firstly, cancel culture is past its peak. Secondly, for better or worse, the Overton window is larger than it was previously (I expect this process to continue further). Thirdly, many of the folk who play the ‘guilt by association’ game already hate us and already have enough ammunition that we aren’t going to change their minds. Fourthly, the folk who play that game most strongly mostly wouldn’t make good community members anyway. Fifthly, the more you bend in relation to reputational attacks or to ward them off, the more that people see you as a juicy target.
For that reason, I don’t think we should prioritise worries about reputational risks nearly as much as you think (in fact, posts like this seem to cause more reputational risks than they potentially solve by implicitly accepting the frame that EA and LightCone and Manifold shouldn’t be regarded as separate entities, but all mashed together).
I strongly believe that we should allow each community to pursue its own path. Effective Altruism cares primarily about impact, rationalism primarily about strong epistemics and Manifold about accurate prediction markets. This will naturally lead to divergent preferences about who is acceptable to platform; and I’d much rather embrace the divergence than engage in in-fighting over which community gets to set the norms.
Even though there is some overlap between the communities (myself included), I really think we should push back against conflating them. We should also push to further distinguish Lightcone from Lightcone venue hirers. Collapsing these associations gives an unfair and inaccurate idea of responsibility for particular decisions.
[Minor edits—I posted this after seeing the main post and one of the comments about similar experiences, and figured that this experience was much more common. Since then I haven’t seen many other similar experiences listed, but there have been some on the other side. I’d be curious to get a better read on how many people wound up feeling uncomfortable at the event. That’s something I don’t want to see, and it’s something the organizers don’t want to see.]
Thanks for bringing this up, and I feel really bad about a lot of this.
My impression is that some decent forecasters / financial traders online are more interested in some of this than I’d like or expect. There’s definitely some edgy vibes in the territory. I think the conference played heavily to the interests of some of the community that enjoys forecasting (much more than the community of professional forecasters or forecasting researchers), whatever these interests might be.
I attended for most of Manifest, but I was very focused on the forecasting parts, and (thankfully) think I missed almost all of what you’re describing. There was definitely a decent crowd of professionals there focused on forecasting / EA (and among these, I’d expect much less of the above) but perhaps one would have had to know who they were.
If there are future Manifest events, I’d encourage changes here.
Personally, I’m really hoping that we later get separate, serious forecasting/epistemics events that are much more targeted at professionals / researchers, and are really just about that work. I think some assume that Manifest is this, but I don’t think that’s what it’s trying to be. It’s arguably more of a festival for forecasting enthusiasts—which is also a good thing to have, just a different thing.
Thinking about this more—at first I was confused on why all of these people (Curtis Yarvin? Michael Vassar?) were interested in forecasting. Some of the individuals listed were people I’ve never known to be interested in the area.
I think one issue might have been that this was just an unusual event in that it rejected very few people for controversial beliefs, and was (generally) well put together. I suspect that this could have attracted people with controversial beliefs, even though that was likely not really the main intention.
So, “not excluding controversial people” quickly becomes, partially, “a gathering for controversial people.”
I think Scott Alexander wrote about this sort of pattern with online communities.
This is the Scott Alexander summation of it that sticks in my memory most: “if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.”
And lots of great people who are interested and talented at forecasting don’t go, because they don’t want to be “balancing out”.
I imagine a bit of investigation/survey work would be really good here. It’s a clear empirical question, and I’d hope that if it is true, data would be very convincing to the relevant decision-makers.
Personally I’d hope and expect that it’s true (I’m not very excited about some of these people either, and it would be great to have the conference have better attendees), but expect that the Manifold team isn’t yet convinced.
I think that I’m a bit nervous that claims like “lots of great people who are interested and talented at forecasting don’t go” might actually be, or be treated as, vague points in favor of a side, rather than a real empirical belief.
I’d suspect people on the other side would assume that this is meant more as a value statement.
But given the specific people in this conversation, I think it’s possible and perhaps preferable for it to be more of an empirical statement—and if that is true, that would be great because it could be studied and the corresponding point would stand.
Here’s one data point; I was consistently in the top 25 on metaculus for a couple years. I would never attend a conference where a “scientific racist” gave a talk.
Yeah, and while I don’t think that people should be banned from attending based on most controversial views (I am open to a discussion on Yarvin), I think that special guest status should be handed out much more carefully. But I think that’s better argued internally rather than using the kind of blackmail that so many communities seem to use on one another these days.
Regardless of the tone here, I think EA is much less pushy than most communities in this regard.
I’m surprised at all the negative votes I was getting above. I felt like I was trying to understand the problem, not recommend solutions.
If it is the case that “not excluding controversial people” led to it becoming unintentionally popular with some crowd, I imagine there are various ways this could be handled. Like, have discussions with some of these people first, and try to get to some agreement people are all happy with.
Oh yeah, I bet that not excluding controversial people had this effect. I think several people really liked it because they felt able to be themselves. But I imagine that this led to some of what we see discussed here.
It’s important to consider adverse selection. People who get hounded out of everywhere else are inexplicably* invited to a forecasting conference—of course they come! They have nowhere else to go!
* inexplicably, in the sense that a forecasting conference is inviting people specialized in demographics and genetics—it’s a little related, but not that related.
“forecasting is sometimes considered to be a niche EA cause area”
Given that Open Phil have given many millions to metaculus, and there are a bunch of posts on the forum I would say it may even be more than a niche EA cause area.
It is an OP grantmaking program now, afaik https://forum.effectivealtruism.org/posts/ziSEnEg4j8nFvhcni/new-open-philanthropy-grantmaking-program-forecasting
I don’t know about this, Open Phil have given billions to GiveWell charities and GHD programmes. A couple of million to a forecasting platform seems niche in comparison.
From here it’s around 30 million to forecasting orgs, so a bit more than a couple of million. Your point that it’s orders of magnitude less than other causes probably still stands though.
https://www.openphilanthropy.org/grants/?q=&focus-area%5B%5D=forecasting
I note that there has been a fair bit of promotion related to Manifest on the Forum. While people can put on whatever events they wish under their own banner—we have no veto power over that—Maniano’s post makes me skeptical that advertising Manifest on the Forum is appropriate. While I don’t think allowing an event to be advertised can fairly imply endorsement, a reasonable observer could interpret such allowance as implying some degree of tolerance or acceptability. While I’m cognizant of the downsides of a centralized authority deciding what events can and cannot be promoted here, I think the need to maintain sufficient distance between EA and this sort of event outweighs those downsides.
Can I also nudge people to be more vocal when they perceive there to be a problem? I find it’s extremely common that when a problem is unfolding nobody says anything.
Even the post above is posted anonymously. To me, I see this as being part of a wider trend where people don’t feel comfortable expressing their viewpoint openly, which I think is not super healthy.
I can’t speak for the original poster, but the Forum is on the public internet. I can’t blame someone in the OP’s shoes for not wanting their name anywhere near a discussion of “scientific racism” where potential employers, neighbors, and others might come across it—even if their post is critical of the concept.
I think saying “I am against scientific racism” is within the Overton window, and it would be extraordinarily unlikely to be “cancelled” as a result of that. This level of risk aversion is straightforwardly deleterious for our community and wider society.
The person who sees the post after Googling the commenter’s name is still potentially left with the impression of the commenter as part of a community that tolerates “scientific racism.” That imposes costs that some of us, especially those with non-EA professional lives, would prefer not to bear.
I made a small poll to try and figure out what consensus and disagreeing views are. You can add your own statements.
I’ll post it on both articles; please only vote once. Likewise, please take the results with a pinch of salt because I can’t gate it to one vote per user.
Results here: https://viewpoints.xyz/polls/ea-and-manifest/results
(If you want more people to answer this poll consider upvoting it)
A bad pattern that can happen in posts like this is that we all feel attacked. Maniano feels attacked and so uses a pseudonym. I feel attacked because I have to argue carefully on tough topics and so I feel stressed. Some people feel that racism is going on unacknowledged. That sounds stressful too.
So a couple of things.
I imagine there is more agreement than it feels like. Both @habryka and I have acknowledged having conversations about disinviting people. Whatever the vibe is, we both think that some people are beyond the pale, and this probably includes many of the people other readers have in mind. @saul has seemed open to discussing ways to improve.
We don’t need to settle this today. This issue has bubbled under the surface for a while. I’d have preferred that we dealt with it in small dollops, but I don’t always get what I want. We don’t have to fix/work out the EA-rationalist relationship in one post.
What do you mean by this?
I’m reading it as saying that EAs shouldn’t attend rationalist meetups?
Discouraging group members from hanging out with groups you disagree with is very culty, leads to impoverished epistemics, and should not be associated with EA.
Thank you for writing this, and I’m really sorry you experienced it. I’m really disappointed that Manifold have learnt absolutely nothing after inviting Hanania last year, and that Lightcone are fine having such horrible people at their events. These organisations are, knowingly or not, actively making their communities unwelcoming to people of colour and other underrepresented groups. I personally think CEA and other major organisations like Open Philanthropy ought to cut all ties with Manifold/Lightcone.
Events hosted at Lighthaven are not generally “our events”. I am not going to ban people I disagree with from hosting events at Lighthaven, that would be both a dumb economic decision and a dumb moral decision.
As I said before, I would be surprised if we never end up hosting events for scaling lab teams at Lighthaven. If they pay us a fair price (and maybe somewhat more as a tax), I will rent to them.
You are of course entitled to that view, but I find it morally wrong!
As far as I can tell this is standard practice for practically all hotels and event venues I have ever heard about. Indeed, it would be kind of weird for an event venue to ask detailed questions about who I am inviting to an event (and this has never happened before), and very rare and weird for them to block someone from running an event because of an attendee (not unprecedented, but very weird).
What does “cut all ties” mean, concretely?
Cease funding, stop giving them booths at EAG, stop inviting Habryka et al to EA leadership events like Meta Coordination Forum.
I feel like you are basing this reaction on something more than Manifest events and less.online. Is this correct? I feel like Habryka or the Lightcone team haven’t really done anything that registers as clearly wrong to me in this case. What you are suggesting sounds extreme based on what information I have.
(Side note regarding the karma system and this comment being at like −30: I think the right response to this comment is “upvote, because it concretely and clearly answers the question, and then if you also disagree, use the ‘disagree’ react button”. It’s not a bad contribution to the discourse, which is what I think the downvote button should be used for)
(Edit: the comment is at +26 now. Karma here really is a rollercoaster, lol.)
Thanks. Do you think you should be invited to EA leadership events?
now that I no longer work at CEA, no
I’m not aware of any clear ties between Manifold and OP although I could have missed them.
To give some agreement, I too was sad that Hanania was given special guest billing last year. Not being racist isn’t particularly hard and Hanania seems to me to do it egregiously. Also it led to several people I actually wanted to hear from pulling out (as was their right).
I agree almost entirely. I would only change “knowingly or not” to “knowingly”; there’s no argument from ignorance at this point.
For those not on Twitter, note that the above commenter seems to be pursuing an active campaign against Manifold and Lightcone. See these two twitter threads [1], [2].
Quotes include being “glad to see” the Guardian running their piece and being “delighted” with the part on Brian Chau, “a big chunk of the rationalist community is just straight up racist”, and “the entire ‘truth seeking’ approach is often a thinly veiled disguise for indulging in racism.”
“Pursuing an active campaign” is kind of a weird way to frame someone writing a few tweets and comments about their opinion on something
He doesn’t present it as an opinion. He doesn’t even present it as an argument. It would be a stretch even to say he presents his opinions as fact. He sees a word that represents his ideological enemies, and he sees a word that causes visceral reactions of disgust in bystanders, and he resolves to use those two words in the same sentence as often as possible, so that the connotations of one will bleed into the other. There’s a reason this kind of attack is called a “smear.”
In this thread, he takes questions of the actual harms as settled, and concludes that taking the most extreme and divisive action possible is the only possible option. MOST NOTABLY, he says nothing about Austin, who was the one making the decision to invite Hanania because he wanted Hanania at the event. Instead he goes after Oliver, who is multiple levels removed from the decision, and only provides “solutions” focused on harming Oliver’s career and personal reputation.
I didn’t mention Habryka in any of my tweets. I mentioned him in this forum comment because he is the only person in this situation who I know is involved in EA “leadership”.
It also just occurred to me that Shakeel’s first tweet about the article was, I think(?), the first time it appeared on Twitter. It was made before the authors themselves retweeted it, and before any of the “hit piece” pushback had appeared.
Someone sent the article to me, I thought it was interesting and tweeted about it. I live in the UK so maybe I saw it before others woke up?
I agree that framing is a bit intense, but noting that:
He mentions “I’ve long expressed my disgust at how Lightcone/Manifold indulge abhorrent ideas and people, both while I was at CEA and after.”
The opinions are sometimes really just wrappers around imperative claims (“I think that… you ought to X”)
He also appears to support the journalistic methodology of the Guardian piece. That piece, of course, is not expressing opinions; it is adversarially designed to cause reputational damage.
I’ve seen Shakeel’s comments in other places before, and I used to think that he (or well, you, if you’re reading this) just had opinions different than mine. And if he would’ve said “This piece lacks journalistic integrity and is clearly a hit piece, but some of the underlying concerns are valid”, I think it would’ve been more fair to describe as “expressing an opinion”.
Now, however, seeing the combination of data points, I update that he is not just “analysing”, but rather quite actively saying these things with a political intent that can be described as a “campaign”.
You’ve misleadingly quoted me here. I said I was delighted to see The Guardian pick up my reporting on Brian Chau, not that I was delighted with the piece overall. I’m surprised that someone committed to truth-seeking would mislead forum users like this.
In the follow-up tweet you say: “Glad to see the press picking [this story] up (though wish they made the rationalist/EA distinction clearer!)”
So far as I’ve found, you’ve made no comments indicating that you disagree with the problematic methodology of the piece, and two comments saying you were “delighted” and “glad” with parts of it. I think my quote is representative. I’ve updated my comment for clarity.
Nonetheless: how would you prefer to be quoted?
EDIT: Shakeel posted a comment pointing to a tweet of his “mistakes” in the post, and I was wrong to claim there were no comments.
It seems you didn’t look very hard! https://x.com/shakeelhashim/status/1802493753841594711?s=46
Ah! I was wrong to claim you made “no” such comments. I’ve edited my above comment.
Now, I of course notice how you only mention “lots of mistakes” after Jeffrey objects, and after it’s become clear that there is a big outpouring of hit piece criticism, and only little support.
Why were you glad about it before then?
Did you:
...not think it was a hit piece? (I think you’re a smart guy, and even a journalist yourself, so I’m kind of incredulous about you not picking up on the patterns here)
...or were you okay with the-amount-of-hit-piece-you-thought-it-was? (this is of course what I’m worried about, and why I am pursuing this so vigorously. I think this article crossed several very important epistemic red lines, and I will fight for those lines to remain intact, and will be very vocal about confronting journalists close to the community who don’t seem to respect them)
...or something else? (reality might of course be more complicated than my neatly packaged options above, so do feel free to explain)
To sort of steelman a defence here, shouldn’t we be glad that Shakeel is publicly expressing the views he actually holds? To my understanding, he doesn’t like how rationalists behave in this area and has said so, both on Twitter and on the Forum.
Perhaps you might have preferred he did it differently, but it seems like he could have done it much worse and given it’s a thing he actually believes, it seems better that he said it than that he didn’t.
(I’m not sure I fully endorse this, nor do I endorse shakeel’s position in general, but like I’m glad he’s said it on the forum)
It’s indeed helpful that Shakeel expressed those views, because now it’s clear where he’s at, and it will make it easier to relate to him as a journalist in future.
I can see that EA could optimise for being respectable with some controversial ideas rather than fully truth seeking. But that seems like a hard equilibrium, similarly difficult to manifest in fact:
Manifest is trying to be truthseeking without being full of racists
EA might try to be a mix of truth seeking and respectable, without becoming milquetoast.
If that were our aim, what would our north star be, and how could we know we weren’t ending up incapable of discussing difficult things? Already it seems that a number of issues are ‘not the sort we should be truthseeking about’ in the opinion of some. How do we avoid that becoming too broad a set?
Likewise, the norms that lead to this seem pretty bad—guilt by association, inability to consider ideas on their merits—and by giving in to these norms we might strengthen them. EA has the opportunity to push back against the idea that if you engage with a few bad people you must be bad. I imagine overall this idea causes a lot of damage and is driven by fear of cancellation. I’m not sure we should endorse acting in that way.
It is plausible this is the best way to be, but I’d ask, when do we stop?
Minor note: “based” is a part of current gen Z parlance and “fag” is a part of current queer gen Z parlance.
Was Cremieux considered to be one of the HBD/eugenics people?
[Twitter also kept pushing him into my feed even though I didn’t follow him...]
If you look at the schedule of events (https://schedule.manifest.is/Manifest?view=text), Jonathan Anomaly is the only person who is very clearly HBD (the Collinses allegedly, and then there’s Stephen Hsu, who’s HBD-adjacent). From the surface, it doesn’t look like the HBD participants got any extra-special visibility? (Though they just might be especially conspicuous in how unfiltered they are in bringing the topics out.) Gene Smith is there and has a post on gene editing for intelligence but AFAIK has never been associated with the HBD crowd (and doesn’t get into the controversial social aspects) [also, gene editing, unlike eugenics, can help empower the “have nots”].
“First they came for the high decouplers...”
Man, yesterday this was at +20 karma and now it’s at −20. There seems to be a massive diurnal effect in how the votes on the forum swing.
I think both of those karma values are kind of extreme, and so find myself flipping my vote around. But I wish I could leave an anchor vote like “if the vote diverges from value X, change my vote to point it back toward X”.
I think the diurnal effect is real and is based on there being a lot of people in both the UK and the SF Bay Area that have opposite and geographically correlated views on this topic.
Tl;dr: 8 conservatives went to a conference and were friendly. Some also lived nearby, and some also exist but didn’t go.