Cheers! Here’s to being first against the wall when the basilisk comes.
Oh, no, to be clear I find the post extremely unpersuasive—I am interested in it only insofar as it seems to represent received wisdom within the EA community.
“But do you want to give up or do you want to try?”
I suppose my instinctive reaction is that if there’s very little reason to think we’ll succeed, we’d be better off allocating our resources to other causes and improving human life while it exists. But I recognise that this isn’t a universal intuition.
Thank you for the links, I will have a look :)
I recognise that the brains of any AI will have been designed by humans, but the gap in puissance between humans and the type of AGI imagined and feared by people in EA (as outlined in this blog post, for example) is so extreme that the fact of our having designed the AGI doesn’t seem hugely relevant.
Like if a colony of ants arranged its members to spell out in English “DONT HURT US WE ARE GOOD” humans would probably be like huh, wild, and for a few days or weeks there would be a lot of discussion about it, and vegans would feel vindicated, and Netflix would greenlight a ripoff of the Bachelor where the bachelor was an ant, but in general I think we would just continue as we were and not take it very seriously. Because the ants would not be communicating in a way that made us believe they were worthy of being taken seriously. And I don’t see why it would be different between us and an AGI of the type described at the link above.
Thank you, that is helpful. I still don’t see, I think, why we think an AGI would be incapable of assessing its own values and potentially altering them, if it’s intelligent enough to be an existential risk to humanity—but we’re hoping that the result of any such assessment would be “the values humans instilled in me seem optimal”? Is that it? Because then my question is which values exactly we’re attempting to instill. At the risk of being downvoted to hell I will share that the thought of a superpowerful AI that shares the value system of e.g. LessWrong is slightly terrifying to me. Relatedly(?) I studied a humanities subject :)
Thank you again!
Help me to understand AI alignment!
(It feels important to disclaim before commenting that I’m not an EA, but am very interested in EA’s goal of doing good well.)
Thank you!! This post is a breath of fresh air and makes me feel hopeful about the movement (and I’m not just saying that because of the Chumbawamba break in the middle). In particular I appreciated the confirmation that I’m not the only person who has read some of the official EA writings regarding climate change and thought (paraphrasing here) “?!!!!!!!!!”
I know this cannot have been trivial or relaxing to write. A huge thank you to the authors. I really hope that your suggestions are engaged with by the community with the respect that they deserve.
The parenthetical was a joke. I won’t do it again.
The survey question wasn’t alluding to cystic fibrosis and it’s disingenuous to pretend otherwise! You and I both know this!
I’m not going to respond further; I don’t think this conversation is productive.
Thanks for this perspective, sapphire. I’m glad you managed to get out. I honestly am not hopeful about the prospects for a wider disentangling of EA and white supremacy, but I will keep my fingers crossed.
I agree a representative survey of EAs would be useful data. In its absence, this survey is (I believe) a reasonable proxy, showing the popularity of “race realism” beliefs among rationalists:
https://mobile.twitter.com/IneffectiveAlt4/status/1613821366318338049
Does it matter that some EAs think black people are stupider than white people?
Your blanket statement about what “intelligence researchers” believe is misleading and implies a level of consensus that does not exist in the field (see for example https://www.insidehighered.com/news/2020/01/23/intelligent-argument-race). I am aware of the empirical evidence purporting to show ethnic differences in intelligence; I simply think it is low-quality and unconvincing.
Hello, I did mean to type “not incompatible”; I think we are largely in agreement.
Hello Robert, I am stepping back from this forum but as you’ve replied to me directly I will endeavour to help you understand my viewpoint. I will use italics as you seem to have a high level of belief in their ability to improve written communication.
If the only form that racism took was hatred of black people, then the evidence you present would be persuasive that EA as a movement does not condone racism.
However: racism also encompasses the belief that certain races are inferior. Belief that black people are stupider than white people, for example, is not incompatible with sending aid to Africa.
Therefore, I was relieved to see an EA institution explicitly confirm that it does not condone racism.
Hope this helps.
Thank you, this is very illuminating. You argue that:
Culture has become “feminised” (I assume this means it has started doing more of the housework)
This feminisation means that EAs are discouraged from engaging with DIFFICULT but IMPORTANT questions, such as “Are white men the smartest of them all?”
One potential solution—this, to me, was the apotheosis of your comment—is “a Scott Alexander megapost”
Spending time on this forum has clarified for me that although I support in principle many of the stated aims of the EA movement, I don’t wish to participate in the culture, which is hostile to anyone who refuses to make a fetish of rationality, while refusing to consider the ways in which such fetishisation is itself irrational. So: so long, and thanks for all the fish ✌️
This seems like a very emotionally-driven response. If you look at the situation rationally, setting aside your instinctive defensiveness, I think you’ll realise what a wild overreaction proposing a schism is. I know putting emotion to one side can be really challenging, especially when you feel threatened, but I really suggest making the effort so that you don’t embarrass yourself with clearly overblown statements like “high cognitive decouplers and low decouplers can’t live together online anymore.”
The current top-voted comments on the original post are: 1) a moderator asking people to remember community norms, 2) a commenter asking people to be kind and charitable, and 3) someone who says “The original email (written 26 years ago) was horrible, but Bostrom’s apology seems reasonable” and criticises the author of the post for attempting to stir up drama.
Hello lepidus, can I just check that I’m understanding you correctly—you believe that people within EA who support Bostrom are a persecuted minority, who must band together to discuss the topics that other people are not brave (or honest) enough to broach?
Lamentably I am unable to tell you what I think of your comments without breaking the site’s TOS, so instead I will offer that, if I WERE to respond to the substance of what you’ve written, my focus would be upon your reasoning skills and ability to parse empirical evidence :)
I’m a charitable woman, so I won’t mention your spelling—except to note that in the context of this attempt to flex your intellectual bona fides, “congitive equivlanet” is delicious. Have a lovely day.