I’m seeing commenters criticise this piece for factual errors (e.g. misidentifying Manifund as a prediction market) and I think The Guardian should issue corrections for those errors.
But I’m much more concerned about the racist ties:
One, Jonathan Anomaly, published a paper in 2018 entitled Defending Eugenics, which called for a “non-coercive” or “liberal eugenics” to “increase the prevalence of traits that promote individual and social welfare”. The publication triggered an open letter of protest by Australian academics to the journal that published the paper, and protests at the University of Pennsylvania when he commenced working there in 2019. (Anomaly now works at a private institution in Quito, Ecuador, and claims on his website that US universities have been “ideologically captured”.)
Another, Razib Khan, saw his contract as a New York Times opinion writer abruptly withdrawn just one day after his appointment had been announced, following a Gawker report that highlighted his contributions to outlets including the paleoconservative Taki’s Magazine and anti-immigrant website VDare.
The Michigan State University professor Stephen Hsu, another billed guest, resigned as vice-president of research there in 2020 after protests by the MSU Graduate Employees Union and the MSU student association accusing Hsu of promoting scientific racism.
Brian Chau, executive director of the “effective accelerationist” non-profit Alliance for the Future (AFF), was another billed guest. A report last month catalogued Chau’s long history of racist and sexist online commentary, including false claims about George Floyd, and the claim that the US is a “Black supremacist” country. “Effective accelerationists” argue that human problems are best solved by unrestricted technological development.
I’m unfamiliar with these people but I don’t like what I’m learning. Khan wrote a letter to “a white nationalist website” about “the threat of the United States becoming ‘more genetically and culturally Mexican’.” Chau believes “the narrative” around George Floyd’s death “was every bit as fake as an AI-generated video”. And Richard Hanania, also invited to Manifest, believes “we need more policing, incarceration and surveillance of black people.”
The response by a Manifest representative is also concerning:
“We were aware that some of these folks have expressed views considered controversial.”
He went on: “Some of these folks we’re bringing in because of their past experience with prediction markets (eg [Richard] Hanania has used them extensively and partnered with many prediction market platforms). Others we’re bringing in for their particular expertise (eg Brian Chau is participating in a debate on AI safety, related to his work at Alliance for the Future).”
I think “high decouplers” believe there is negligible risk in platforming intolerant people, because rationalists can isolate those people’s relevant thoughts (e.g. on prediction markets) from their dangerous thoughts (e.g. that there should be more incarceration of Black people).
But the poor quality of reasoning demonstrated in somebody’s more controversial thinking should reflect poorly on their general intellectual rigour. Do you really have a lot to learn from somebody who, like Chau, thinks the US is a “Black supremacist country”, or is that person maybe just a controversialist?
And it seems disingenuous to suggest that rationalists aren’t interested in those controversial views. We know that rationalists are unusually and disturbingly pro-eugenics. And it’s hard to see it as a coincidence that this event attracted ~five racist public intellectuals.
Manifest’s representative also said:
“We did not invite them to give talks about race and IQ” and concluded: “Manifest has no specific views on eugenics or race & IQ.”
As with critiquing factual errors without engaging the central claims about racist ties, declining this opportunity to condemn scientific racism goes some way to validating the report’s argument: that rationalists have a blind spot on racism.
As an effective altruist, not a rationalist, I align instead with the Centre for Effective Altruism’s statement in response to Nick Bostrom’s email:
Effective altruism is based on the core belief that all people count equally. We unequivocally condemn Nick Bostrom’s recklessly flawed and reprehensible words. We reject this unacceptable racist language, and the callous discussion of ideas that can and have harmed Black people. It is fundamentally inconsistent with our mission of building an inclusive and welcoming community.
(I’m writing in a personal capacity)