I don’t think EA should be a political project at all. The value of EA is in being an intellectual space where weird ideas about how to improve the world can be explored. That is where it has excelled in the past and has the potential to excel even more in the future. When it comes time to do politics, that should happen entirely outside the EA brand/umbrella, under cause-specific brands and umbrellas that can incorporate both the relevant components of EA and non-EAs who share the relevant policy goals.
I was at that session. My memory is that the presenter was very clear that the Nazis killed other groups for eugenic reasons and the Jews for dysgenic reasons, both of which are generally regarded as part of the Holocaust. The distinction was a bit of history nerding, not an attempt to minimize the Nazis’ crimes or to deny that the Nazis were eugenicists.
Thanks for gathering the data. Enough of these results are just batshit crazy that it further cements my view that bioethicists, as a group, should not be given any say in the questions they purport to be experts on. I’ll point to two that I don’t think anyone else has highlighted yet. 42% apparently believe that in a perfectly just society, blindness would not be a disadvantage. Not being able to drive a car isn’t a disadvantage? Because no technology currently exists that would allow the blind to do that. And 66% think that a person’s life being worth living is somehow not a reason to bring them into existence? If not that, then what would be a reason to bring someone into existence? That may not be a sufficient reason to bring someone into existence, but if you don’t think it is a necessary one then there is something deeply wrong with your morals. I do not understand at all what would cause someone to give this answer.
The most important effect of parenting on productivity is left out here. A whole new person is created, meaning a whole additional person’s worth of productivity! And with only a two-decade-or-so delay. Not to mention a whole additional person’s worth of happiness, and a contribution to combating demographic decline. So while it may be true that in the short term additional children decrease productivity, in the long term (which is what we as EAs should care most about) each additional child massively increases productivity.
I have two points regarding point 2. Firstly, what matters is the relationship between the expected happiness and the expected suffering, not between the best happiness and the worst suffering. There is no particular reason these two comparisons should come out the same way. It may be that the worst suffering outweighs the best happiness, and also that the expected happiness outweighs the expected suffering.
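To see that these two comparisons can come apart, here is a toy calculation (all numbers are invented purely for illustration, and each side is simplified to a single outcome):

$$
\mathbb{E}[\text{suffering}] = \Pr(\text{worst case}) \times (\text{worst suffering}) = 0.001 \times (-1000) = -1
$$
$$
\mathbb{E}[\text{happiness}] = \Pr(\text{good case}) \times (\text{best happiness}) = 0.9 \times 100 = 90
$$

Here the worst suffering (−1000) outweighs the best happiness (+100) in magnitude, yet the expected happiness (90) swamps the expected suffering (−1).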
Secondly, why do you think people would skew towards the suffering dominating? My intuition is that the expected happiness will generally dominate. I’ve noticed there is a subset of EAs who seem to have an obsession with suffering, and with the related position of anti-natalism, but I do not think EAs are representative of the broader population in this regard, and I do not think this subset of EAs is epistemically justified.
I think what this comes down to for me is: If Kat Woods’ Forum username were pseudonymous, would we have taken down Ben’s post? (Or otherwise removed all references to Kat by her real name?)
If the answer to this is “yes,” then I don’t think Alice+Chloe should be deanonymized.
I do not like the incentive structure that this would create if adopted. Kat did not get to look at this particular drama and decide whether she wanted it discussed under a real or pseudonymous username. Her decision point was when she created her forum account however many years ago, at a time when she had no idea that this kind of drama would erupt. If this position becomes policy, then it incentivizes every person, at the time that they create a forum account, to choose a pseudonym rather than use their real name, to avoid having any unforeseeable future drama publicly associated with their real name. I think this would be bad. People in a community can’t build trust if they don’t know the identities of the people they are building trust with.
My understanding is that Kat and Emerson did in fact get their names on CEA’s blacklist to some extent.
Here is the bigger problem I see with your proposed solution. If an employer reviewing an application from Alice or Chloe believes their side of this, then the employer would not factor in their presence on CEA’s blacklist, since the employer, by hypothesis, thinks CEA was mistaken to put them there. If, on the other hand, an employer reviewing an application from Alice or Chloe believes Nonlinear’s side of this, then the employer may justifiably conclude that CEA erred by blacklisting Kat and Emerson, choose not to consult CEA in their hiring decisions at all, and therefore never discover that their applicant was Alice or Chloe. Either way, CEA blacklisting Alice and Chloe seems ineffective.
This word “retaliation” seems to be doing a lot of work in your thinking, so I’d like to dissect it a little. What exactly do you mean by “retaliation”? One could use retaliation to mean “any time Alice hurts Bob, and later Bob does something that hurts Alice which he would not have done but for Alice’s initial hurtful action.” If that is your definition, then yes, sure, this is obvious retaliation. So what? Lots of things that are retaliation under this definition are fine; some are even optimal. Every time a US military unit attacked a Japanese one during WWII was retaliation for Pearl Harbor under this definition, yet clearly waging war on Japan was correct. When you use the word, though, I think you mean it to carry some additional meaning: you seem to think it is necessarily bad. And that requires a narrower definition and an argument that Nonlinear’s actions satisfy it.
I’m not sure I would have used Ben as the example had I been writing it, but I think I understand why they did, and I certainly don’t blame them for it. There is no drama where everyone is on the same side, so any real-life example would antagonize some readers. Hypothetical examples are always weaker because the reader might think they are unrealistic. And Ben is in no position to complain about people sharing negative one-sided stories on the EA Forum.
I think even with just the behaviors that Nonlinear has publicly confirmed, there is cause for major concern.
Let’s look at one specific claim that you pointed to: whether there was a legal contract agreed beforehand specifying a salary. Unless I’ve missed something, I don’t believe Nonlinear has publicly commented on this. All I’m saying is don’t let your confidence exceed the strength of the evidence.
The emotion of guilt is usually what leads to accountability and behaviour change. See e.g. this video with clinical psychologist June Tangney, co-author of the book Shame and Guilt.
It is certainly one emotion that can. But your video just talks about guilt and shame, it doesn’t talk about other emotions. I would expect all emotions have the potential to change behavior under the right circumstances—otherwise, they wouldn’t serve an evolutionary purpose. I can think of instances where I’ve altered my behavior after social drama out of fear of getting hurt again, rather than guilt or shame. So when I look at someone else, I don’t need to settle on a particular explanation of why they’ve changed their behavior to accept evidence that they have.
Even if we assume that all of the allegations are true (which seems unwarranted when the evidence is hearsay from two anonymous sources), you seem to think that remorse is the only mental state that could cause people to change their behavior. Why do you think that?
Are you familiar with any concerns about Nonlinear not raised in Ben’s post? Ben seems particularly concerned that Nonlinear creates an epistemic environment where he wouldn’t know if there was more. If there is, that seems pretty central to confirming Ben’s concerns.
Thank you for sharing Minh, I think this is one of the most important updates.
If our goal is (as I think it should be) only to figure out whether we want to interact with any of these people in the future, and not to exact retribution for past wrongs against third parties, then we don’t need to know exactly what happened between Nonlinear and Alice and Chloe. That’s just as well, since we probably never will. What does seem to be the case is this: (1) everybody involved agrees that something went badly wrong in the relationships between Kat/Emerson and Alice/Chloe, though they may dramatically disagree about what; (2) Kat/Emerson have changed their behavior in a way that prevents a repeat. Your testimony is good evidence for (2). And given that, I don’t think I will update much on whether I want to interact with them in the future. So thank you for your testimony.
(Disclaimers: my past interactions with Kat have been positive but not extensive, I don’t believe I have interacted with Emerson, and I was not asked to comment by anyone involved.)
I guess my fundamental question right now is what do we mean by intelligence? Like, with humans, we have a notion of IQ because lots of very different cognitive abilities happen to be highly correlated, and this allows us to summarize them all with one number. But different cognitive abilities aren’t correlated in the same way in AI. So what do we mean when we talk about an AI being much smarter than humans? How do we know there even are significantly higher levels of intelligence to reach, since nothing much more intelligent than humans has ever existed? I’m not sure why people seem to assume that possible levels of intelligence just keep going up.
My other question, related to the first, is how do we know that more intelligence, whatever we mean by that, would be particularly useful? Some things aren’t computable. Some things aren’t solvable within the laws of physics. Some systems are chaotic. So how do we know that more intelligence would somehow translate into massively more power in domains that we care about?
I do not like this. One of the fundamental premises of EA is to be neutral about who we are helping: people here, people there, people now, people later, all get weighted the same. Specifically setting out to help only Muslims therefore seems non-EA. If Muslims want to do it, I guess they have that right, but EA shouldn’t be touching it.
Lastly, and probably most significantly, there is obviously the loss of an additional individual who would likely have been economically productive over the course of their lifetime.
From a common-sensical point of view, it’s difficult to know exactly where to “draw the line”; it seems crazy to imagine a baby dying during labour as anything other than a rich, full potential life lost, but if we extend that logic too far backwards then we might imagine any moment that we are not reproducing to be costing one “life’s worth” of DALYs.
There seems to be an obvious route of inquiry to address this quandary, which is to ask what impact a stillbirth has on the number of children a woman has during her life. I imagine some nontrivial fraction of women who have stillbirths go on to become pregnant again in relatively short order, and end up having just as many children as they would have if the pregnancy had succeeded. If, hypothetically, 90% of women who have stillbirths go on to have just as many children as they would have without the stillbirth, and 10% have one fewer child, then it seems straightforward to me that we should count a stillbirth as costing 0.1 lives. I don’t know the actual numbers on how stillbirths affect women’s later reproductive choices, but presumably somebody has studied this.
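In expected-value terms, the hypothetical above works out as follows (the 90/10 split is an assumption for illustration, not real data):

$$
\text{lives lost per stillbirth} = 0.9 \times 0 + 0.1 \times 1 = 0.1
$$

or, more generally, $\sum_i p_i \Delta_i$, where $p_i$ is the probability that a stillbirth ultimately reduces a woman’s completed family size by $\Delta_i$ children.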
I agree on food. I was careless with my qualifications, sorry about that.
I think part of the difficulty here is that “wokism” refers to a genuine cluster of ideas and practices, but one without especially clear boundaries or a single easy definition.
What I do notice is that none of the ideas you listed, at least at the level of abstraction at which you listed them, are things that anyone, woke or anti-woke or anywhere in between, will disagree with. But I’ll try to give some analysis of what I would understand to be woke in the general vicinity of these ideas. Note that I am not asserting any normative position myself, just trying to describe what I understand these words to mean.
I don’t think veganism really has much to do with wokism. Whatever you think about EA event catering, it just seems like an orthogonal issue.
I suspect everyone would prefer that EA spaces be welcoming of trans people, but there may be disagreement on what exactly that requires at a very concrete level, or how to trade it off against other values. Should we start meetings by having everyone go around and give their pronouns? Wokism might say yes; other people (including some trans people) might say no. Should we kick people out of EA spaces for using the “wrong” pronouns? Wokism might say yes; others might say no, as that is a bad tradeoff against free speech and epistemic health.
I suspect everyone thinks reports of assault and harassment should be taken seriously. Does that mean that we believe all women? Wokism might say yes; others might say no. Does that mean that people accused should be confronted with the particular accusations against them, and allowed to present evidence in response? Wokism might say no; others might say yes, since good epistemics requires it.
I’m honestly not sure what specifically you mean by “so-called ‘scientific’ racism” or “scourge”, and I’m not sure if that’s a road worth going down.
Again, I’m not asserting any position myself here, just trying to help clarify what I think people mean by “wokism”, in the hopes that the rest of you can have a productive conversation.
Synonyms might be “SJW” or “DEI”.
Thank you for clarifying. I would regard Nathan’s first pair of examples as racist and eugenic, but importantly not his second pair. My experience at Summer Camp and Manifest was that I did not hear anything like the first pair or anything more extreme. (I did not attend Less Online or the Curtis Yarvin party so I cannot speak to what happened there). I think I understand why you did not include many concrete examples, but the accusation of racism without concrete examples mostly comes off as name-calling to me. The “HBD” label also comes off to me as name-calling, as I only ever hear it used by people attacking it, and they don’t ever seem to say much more than “racist” in their own definitions of it. I haven’t really seen people say “yes, I believe in HBD and here is what I mean by that”, but maybe I’m just not reading the right people. If you could point me at such a person that might be useful. But now it seems you are claiming to have heard significantly more extreme things than I did. And I’m curious why that is.