Tell them that it is a piece of journalism designed to sensationalize an issue. Tell them that taking anything written by a journalist seriously as an accurate, balanced, or fair description of a situation is generally a mistake, that the article gives them no trustworthy information, and that if they want to ask questions about the particular policies, practices, and culture of whatever local group they want to work with, they are welcome to.
EA novel published on Amazon
Object level point:
“I don’t have a good inside view on timelines, but when EY says our probability of survival is ~0% this seems like an extraordinary claim that doesn’t seem to be very well supported or argued for, and something I intuitively want to reject outright, but don’t have the object level expertise to meaningfully do so. I don’t know the extent to which EY’s views are representative or highly influential in current AI safety efforts, and I can imagine a world where there’s too much deferring going on. It seems like some within the community have similar thoughts.”

EY’s view that doom is basically certain is fairly marginal. It is definitely part of the conversation, and he certainly is not the only person who holds it. But most people who are actively working on AI safety see the odds of survival as much higher than roughly 0% -- and I think most see P(doom) as actually much lower than 80%.
The key motivating argument for AI safety being important, even if you think that EY’s model of the world might be false (though it also might be true), is that while it is easy to come up with plausible reasons to think that P(doom) is much less than 1, it is very hard to dismiss enough of the arguments for it to get P(doom) close to zero.
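As a toy illustration of that asymmetry (the credences below are entirely made up by me, and the independence assumption is a simplification; none of this comes from the actual debate): even modest credence in each of a handful of distinct doom arguments leaves the disjunction well away from zero.

```python
# Toy numbers, entirely hypothetical: credences in four distinct doom arguments.
doom_arguments = [0.10, 0.05, 0.03, 0.02]

# Treating the arguments as independent for simplicity, the chance that
# every single one of them is wrong:
p_all_wrong = 1.0
for p in doom_arguments:
    p_all_wrong *= (1.0 - p)

# P(at least one argument holds) stays well away from zero (~0.187 here),
# even though no individual credence exceeded 10%.
print(f"P(doom) under these toy numbers: {1.0 - p_all_wrong:.3f}")
```

Driving the total near zero requires confidently dismissing every argument at once, and that is the hard part.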
I think I’m already pretty familiar with the thinking around this. What I don’t know is whether there is any way to get people who have different intuitions around these questions to converge or to switch intuitions.
So I’m pronatalist in part because I see potential people who do not exist, but who might someday exist, as the sort of people whom I can either help (by increasing their odds of someday existing and having a good life, or decreasing their odds of existing and having a bad life) or harm (by doing the opposite).
At a deep level this describes my feelings when I imagine the nearly infinite number of potential humans, when I imagine what my state was before I was conceived, and when I think about how happy I am to be alive, and how grateful I am that I got the chance to exist, when it easily could have been someone else, or when humanity easily could have failed to evolve at all.
So I very, very much intuitively feel like if I bring someone into existence who will have a good life, I just did something very nice for them. If I make it so that they don’t come into existence, I did something extremely unkind to them.
And this intuition connects to all sorts of other identities and feelings I have, decisions I make, things I wish I had or could do, etc. As best I can tell it is deeply embedded in me.
It possibly has to do with the fact that I was homeschooled, so I never got bullied in school, and that I am thirty-eight, and a couple of weeks ago I had some nasty mouth ulcers and realized that this was the most physically unpleasant thing I’ve ever gone through. What I’m saying is, I haven’t ever actually suffered, and this feeds into my intuitions about the goodness of life.
But ultimately: I am pronatalist because I care about people who do not exist, and who therefore cannot either suffer or feel happiness. I am pronatalist because I think that it is possible to do something beneficial to individuals who do not currently exist, and who might never exist. It is not because I don’t understand that they don’t exist.
I could be wrong, but I’m pretty sure that most people who adopt a sort of pure longtermist utilitarianism already understand your argument here, but have different intuitions about it.
He should recognize that his autism makes him an idiot about PR things (after I recognized the sort of errors my own mind makes in reading his apology email, I, a non-expert with an Asperger’s diagnosis, outside-diagnosed him), and before making any future public announcements he should get several people who are ‘woke’, or whatever the right word to describe them is, to read them first.
He also should introspect about the thing in his brain that made him feel like it was really, really important to be precise about what he thought about racism and eugenics in this apology, and he should recognize that sometimes it is not the time to say anything.
I mean, he made errors of judgement. Both 25 years ago, and last week. The one last week was actually a bigger error of judgement in my view, since he should have taken into account that he is currently in a position of public responsibility.
However the ‘introspection’ I want Bostrom to engage in is fundamentally different in kind from the ‘introspection’ that I think David wanted him to engage in.
Fair.
But I also want to say here aloud: Bostrom is fine. He has no need at any point in this to engage in sincere repentance, introspection or remorse. He is not a bad person, and I would be happy to associate with him. He has shown no signs of factual views that are empirically untenable, and he has shown no sign of moral views that involve not valuing the well-being of everyone in an appropriate and equal manner, no matter who they are or where they came from.
He made a mistake in terms of communication and said something offensive twenty-five years ago, which he understands was a mistake to say. But that mistake was one of judgement, not of fundamental moral character.
You do not repent for making a mistake of judgement; you apologize for being dumb and move on.
There is nothing in this that indicates that Bostrom has poor moral character or views that I find reprehensible. I do not view him as a sinner in need of repentance.
Further, expecting those who have sinned to sincerely introspect and to sincerely repent is the sort of thing that religious fanatics and other sorts of bad people ask people to do.
That is my honest view. It is my honest view that David Mears is suggesting we create a community culture that is fundamentally designed to enforce conformity and prevent truth-seeking. And just as those who think that discussion about race, genetics, and intelligence should be allowed to happen somewhere (though that place definitely should not be the EA forum) need to ask themselves ‘is what I am thinking similar in some important way to what Nazis thought?’ and ‘might allowing these conversations lead somewhere bad and unfairly exclude people?’, so those who want to demand this sort of conformist policy should ask themselves whether it is similar to the sort of thought control that has been exerted by ideologically motivated villains throughout history, and whether this sort of policy might also lead to very bad places.
Then go for it.
Come up with a detailed proposal, describe exactly how it would work, convince people to give you funding to run the experiment, and then report back and tell us how it went.
The default assumption always is that doing everything differently won’t work very well. It doesn’t matter what the precise change is. So skepticism is the correct attitude until it is proven that the idea can work.
It is a good idea though for the people who are enthused about this idea to follow their passion, and build and test concrete proposals. Go forth and try to make the world better.
Yeah, this is why earn to give needs to come back as a central career recommendation.
Here I go with something else completely unhelpful:
I think it is that the people who actually donate money (and especially the people who have seven-figure sums to donate) might be far weirder than the average person who posts and votes on the forum.
On which topic, I really, really should go back to mostly being a lurker.
Yeah, I agree, there is a good reason they exist.
I don’t think they are unreasonable either as individuals or in essays and conversations.
Further they are trying to do things to change the world in ways that we both agree would make it a better place. Possibly the movement is strongly net positive for the world.
But they also make people who are emotionally obsessed with the truth content of the things they say and believe feel excluded and unwelcome.
I mean the real divide is probably SJW vs not SJW.
But one reason people became anti-SJW is that speech restrictions are very damaging to epistemics at the individual level (how can you judge whether something is true if you aren’t allowed to hear both the arguments for and against it?). And the other reason is that the SJW model seems to strongly incentivize people to lie about their true beliefs if those beliefs are not what they are supposed to be.
Not good for community epistemics.
In a community with good epistemics, everyone should feel comfortable saying what they truly believe without fearing that they will be excluded and banished for it. And this is also what will create the context where they can actually learn that those beliefs are wrong. After all, the surface-level arguments for the socially accepted belief clearly did not convince this person, but it is possible that they will be convinced that the socially accepted belief is true if they are able to articulate their objections to intelligent people, and then hear thoughtful and considered answers to their true objections.
Simply telling them to lie, or at least to never, ever speak about what they believe, forces them to repress a part of their personhood, and it will do nothing to help them improve their actual beliefs. This is destructive, toxic, unpleasant, and unkind.
Obviously this set of arguments does not at all interact with what I think is your core argument: allowing people who openly express racist opinions to be part of a community (even if they are only expressing them elsewhere) might drive off minorities, and it definitely will bring internet mobs. The cost of that might be very large in consequentialist terms.
I just noticed that I am very confused about what precise object level thing we think we are arguing about when we argue about the Bostrom apology.
Do we think we are arguing about whether the community should distance itself now from Bostrom, despite the apology?
Do we think we are arguing about whether the original email was very offensive? Plus whether it was stupid to bring up all of that stuff about race and eugenics in an ‘apology’?
Do we think we are arguing about whether it is evil to be a person who says ‘they don’t know whether there is a genetic component to the differences in racial outcomes’ instead of saying ‘there definitely is no possible genetic component to the differences in racial outcomes’?
Do we think we are arguing about whether these genetic drivers of behavioral differences actually exist?
etc.
I can come up with more.
I’m pretty sure different people think they are arguing about different things.
What I think I’m arguing about is first: That the community should accept as a member in good standing someone who says that they honestly don’t know whether genetic differences are an important cause of black/white outcome differences in the United States.
And second: While the community probably should distance itself from someone who regularly goes around saying what was said in the original post from the 90s, there is no reason to exclude someone who wrote that once, and then realized that what they’d just written was dumb and a mistake.
Is this what you all think you are arguing about?
Aligning the Aligners: Ensuring Aligned AI acts for the common good of all mankind
I am confused.
The bad thing would be if FLI funded them. FLI did not fund them, because of things discovered during due diligence. So FLI literally did nothing wrong, and literally has nothing to apologize for.
Unless we actually are saying that talking with ‘bad people’ is automatically bad, and something you should afterwards apologize for to all your right-thinking friends, for having contaminated them with proximity to badness.
Is there a principled argument that thinking about funding a group like that, and then changing your mind is bad?
The problem with Bostrom’s apology is that it made the argument worse, rather than achieving the (presumed) goal of making the conversation around it as small as possible.
There were true things he could have said, and true impressions he could have left, that would have done that.
Opinions that are stupid are going to be clearly stupid.
So the thing is, racism is bad. Really bad. It caused Hitler. It caused slavery. It caused imperialism. Or at least it was closely connected to them.
The Holocaust and the civil rights movement convinced us all that it is really, really bad.
Now the other thing is that, because racism is bad, our society collectively decided to taboo the arguments that racists make and use, and to call them horrible.
The next point I want to make is this: as far as I know, the science about race and intelligence is entirely about figuring out causation from purely observational studies when you have only medium-sized effects.
We know from human history and from animal models that both genetic variation and cultural forces are powerful enough to create the observed differences.
So we try to figure out which one it is using these observational studies on a medium-sized effect (i.e., way smaller than smoking and lung cancer, or stomach sleeping and SIDS). Both causal forces are in principle capable of producing the observed outcomes.
You can’t do it. Our powers of causal inference are insufficient. It doesn’t work.
What you are left with is your prior about evolution, about culture, and about all sorts of other things. But there is no proof in either direction.
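As a toy sketch of why this is so (abstract Gaussian variables and made-up effect sizes, nothing drawn from any actual literature): two different causal structures can generate literally identical observable distributions, so no amount of purely observational data can distinguish them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Model A: X directly causes Y (effect size b).
b = 0.3
x_a = rng.normal(0, 1, n)
y_a = b * x_a + rng.normal(0, np.sqrt(1 - b**2), n)

# Model B: no direct effect at all; a hidden confounder U drives both.
a, c = 0.6, 0.5  # chosen so that a * c == b, matching the observed correlation
u = rng.normal(0, 1, n)
x_b = a * u + rng.normal(0, np.sqrt(1 - a**2), n)
y_b = c * u + rng.normal(0, np.sqrt(1 - c**2), n)

# The observable joint distributions are identical: same means, same
# variances, same correlation (~0.3) in both models.
print(np.corrcoef(x_a, y_a)[0, 1])
print(np.corrcoef(x_b, y_b)[0, 1])
```

Both models produce the same correlation, but in Model A intervening on X moves Y, while in Model B it does nothing; only an experiment or strong outside assumptions can tell them apart.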
So this is the epistemic situation.
But because racism is bad, society, and to a lesser extent the scientific community, has decided to say that attributing any major causal power to biology in this particular case is disproven pseudoscience.
Some people are good at noticing when the authorities around them and their social community and the people on their side are making bad arguments. These people are valuable. They notice important things. They point out when the emperor has no clothes. And they literally built the EA movement.
However, this ability to notice when someone is making a bad argument doesn’t turn off just because the argument is being made for a good reason.
This is why people who are good at thinking precisely will notice that society is saying that there is no genetic basis for racial differences in behavior with way, way more confidence than is justified by the evidence presented. And because racism is a super important topic in our society, most people who think a lot will think hard about it at some point in their life.
In other words, it is very hard to have a large community of people who are willing to seriously consider that they personally are wrong about something important, and that they can improve, without having a bunch of people who also at some point in their lives at least considered very hard whether particular racist beliefs are actually true.
This is also not an issue with lizard people or flat earthers, since the evidence for the socially endorsed view is really that good in the latter case, and the evidence for the conspiracy theory is really that bad in the former (so far as I have heard; I have in no way personally looked into the question of lizard people running the world, and I don’t think anyone I strongly trust has either, so I should be cautious about being confident in its stupidity).
This is why you’ll find lots of people in your social circles who can be accused of having racist thoughts, and not very many who can be accused of having flat earth thoughts.
Also, if a flat earther wants to hang out at an EA meeting, I think they should be welcomed.
It is constantly claimed, but never actually proven, that bad PR (in the sense of being linked to things like SBF, racism, or an Emile Torres article) leads to fewer donations for EA causes.
I am not convinced this is actually true. Does bad PR actually make twenty-something people who want to do AI safety research less likely to get a grant for career development? Does it actually hurt MIRI’s budget? Or the AI Safety Camp? Etc.
Does it actually make people decide not to support an organization that wants to hand out lots of anti-factory-farming pamphlets? Are AMF and GiveDirectly and the worm initiatives actually receiving less money because of these bad PR moments?
And if they are, how do we collectively know that?
I completely agree that group genetic differences should not be discussed here. It is a good thing that I don’t think I’ve ever encountered a discussion of it on the EA forum prior to this situation.
So we all agree: talking about this on the forum is a bad idea. Then the remaining question is what attitude we should take towards Bostrom now that this email of his from the nineties has become the topic du jour.
Possibly the position you are trying to take is that the institutions of the community should distance themselves from him, because continuing to treat him as a central intellectual voice might offend and drive out minorities, and might offend and drive away people who are very sensitive to the possibility that someone who is racist is accepted in the community.
I want to note that there are also huge negative consequences to the official community distancing itself from such an important figure over this. Notably, it will show that the community is adopting an attitude that people who honestly try to figure out the truth on controversial topics, without being concerned about what is socially acceptable, should not be here. It will be saying that we care more about PR than truth.
The sorts of people who care about arguments, and will follow them wherever they go, are and have been very central to the EA community. They are unusual people who provide extremely important benefits, and the unique value of EA as an addition to the global portfolio of ideas has probably come from the fact that it was a place where those sorts of thinkers thought about how to do good.
I’d also note: we constantly talk about the PR effect of our decisions. The forum, at least, has become obsessed with it over the past few years.
Lol, and now I’m wondering how much of that I do as someone over six feet / 185 cm.