I watched most of a YouTube video on this topic to see what it’s about. I think I agree that “coordination problems are the biggest issue facing us” is an underrated perspective. I see it as a reason for less optimism about the future.

The term “crisis” (in “metacrisis”) makes it sound like something new and acute, but we’ve had coordination problems for all of history. Though maybe their effects are getting worse because of accelerating technological progress?

In any case, in the video I watched, Schmachtenberger mentioned the saying, “If you understand a problem, you’re halfway to solving it.” (Not sure that was the exact wording, but something like that.) Unfortunately, I don’t think the saying holds here. I feel quite pessimistic about changing the dynamics behind why Earth is so unlike Yudkowsky’s “dath ilan.” Maybe I stopped the Schmachtenberger video before he got to the solution proposals (though I feel that if he had great solution proposals, he should lead with those). In my view, the catch-22 is that you need well-functioning (and sane and compassionate) groups/companies/institutions/government branches to “reform” anything, which is challenging when your problem is that groups/companies/institutions/government branches don’t work well (or aren’t sane or compassionate).
I didn’t watch the entire video by Schmachtenberger, but I got a sense that he thinks something like, “If we can change societal incentives, we can address the metacrisis.” Unfortunately, I think this is extremely hard – it’s swimming upstream, and even if we were able to change some societal incentives, they’d at best go from “vastly suboptimal” to “still pretty suboptimal.” (I think it would require god-like technology to create anything close to optimal social incentives.)
Of course, that doesn’t mean making things better is not worth trying. If I had longer AI timelines, I would probably think of this as the top priority. (Accordingly, I think it’s weird that this isn’t on the radar of more EAs, since many EAs have longer timelines than me?)
My approach is mostly taking for granted that large parts of the world are broken, so I recommend working with the groups/companies/institutions/government branches that still function, expanding existing pockets of sanity and creating new ones.
Of course, if someone had an idea for changing the way people consume news, or making a better version of social media – trying to create more of a shared reality and shared priorities about what matters in the world, improving public discourse – I’d be like, “this is very much worth trying!” But it seems challenging to compete for attention against clickbait and outrage-amplification machinery.

EA already has the cause area “improving institutional decision-making.” I think things like approval voting are cool and I like forecasting just like many EAs, but I’d probably place more of a focus on “expanding pockets of sanity” or “building new pockets of sanity from scratch.” “Improving” suggests that change is gradual. My cognitive style might be biased towards black-and-white thinking, but to me it really feels like a lot of institutions/groups/companies/government branches mostly fall into two types: “dysfunctional” and “please give us more of that.” It’s pointless to try to improve the ones with dysfunctional leadership or culture (instead, those have to be reformed, or you have to work without them). Focus on what works and create more of it.
That would be a valid reply if I had said it’s all about priors. All I said was that I think priors make up a significant implicit source of the disagreement – as suggested by some people thinking 5% risk of doom seems “high” and me thinking/reacting with “you wouldn’t be saying that if you had anything close to my priors.”
Or maybe what I mean is stronger than “priors.” “Differences in underlying worldviews” seems like the better description. Specifically, the worldview I identify more with, which I think many EAs don’t share, is something like “the Yudkowskian worldview where the world is insane, most institutions are incompetent, Inadequate Equilibria is a big deal, etc.” And that probably affects whether we anchor way below or above 50% on the risk that the culmination of accelerating technological progress won’t go well.
In general I’m skeptical of explanations of disagreement that reduce things to differing priors. That’s just not physically or predictively correct, and it feels nice because now you no longer have an epistemological duty to go and see why relevant people have differing opinions.
That’s misdescribing the scope of my point and drawing inappropriate inferences. The last time I made an object-level argument about AI misalignment risk was just 3h before your comment. (Not sure it’s particularly intelligible, but the point is, I’m trying! :) ) So, evidently, I agree that a lot of the discussion should happen at a deeper level than that of priors/general worldviews.
Quintin has lots of information, I have lots of information, so if we were both acting optimally according to differing priors, our opinions likely would have converged.
I’m a fan of Shard theory and some of the considerations behind it have already updated me towards a lower chance of doom than I had before starting to incorporate it more into my thinking. (Which I’m still in the process of doing.)
Yes to (paraphrased) “5% should plausibly still be civilization’s top priority.”

However, in another sense, 5% is indeed low!

I think a significant implicit source of disagreement over AI doom likelihoods is what sort of priors people start with.
The following will be a bit simplistic (in reality proponents of each side will probably state their position in more sophisticated ways). On one side, optimists may use a prior of “It’s rare that humans build important new technology and it doesn’t function the way it’s intended.”
On the other side, pessimists can say that it has almost never happened that the people who developed a revolutionary new technology displayed much foresight about its long-term consequences when they started using it. For instance, there were comparatively few efforts at major social media companies to address ways in which social media might change society for the worse. The same reasoning applies to the food industry and the obesity epidemic, or to online dating and its effects on single-parenthood rates.
I’m not saying revolutions in these sectors were overall negative for human happiness – just that there are what seem to be costly negative side effects where no one competent has ever been “in charge” of proactively addressing them (nor do we have good plans to address them anytime soon). So, it’s not easily apparent how we’ll suddenly get rid of all these issues and fix the underlying dynamics, apart from “AI will give us god-like power to fix everything.” The pessimists can argue that humans have never seemed particularly “in control” of technological progress. There’s this accelerating force that improves things on some metrics but makes other things worse elsewhere. (Pinker-style arguments for the world getting better seem one-sided to me – he mostly looks at trends that were already relevant hundreds of years ago, but doesn’t talk about “newer problems” that only arose as Molochian side effects of technological progress.) AI will be the culmination of all that (of the accelerating forces that have positive effects on immediately legible metrics, but negative effects on other variables due to Molochian dynamics). Unless we use it to attain a degree of control that we never had, it won’t go well.

To conclude, there’s a sense in which believing “AI doom risk is only 5%” is like believing there’s a 95% chance that AI will solve all the world’s major problems. Expressed that way, it seems like a pretty strong claim.

(The above holds especially for definitions of “AI doom” where humanity loses most of its long-term “potential.” That said, even if by “AI doom” one means something like “everyone dies,” one can argue that a likely endpoint/attractor state of not being able to fix all the world’s major problems is eventual human extinction.)

I’ve been meaning to write a longer post on these topics at some point, but may not get to it anytime soon.
That makes sense – I get why you feel like there are double standards.
I don’t agree that there necessarily are.

Regarding Bostrom’s apology, I guess you could say that it’s part of “truth-seeking” to dive into any mistakes you might have made and acknowledge everything there is to acknowledge. (Whether we call it “truth-seeking” or not, that’s certainly how apologies should be, in an ideal world.) On this point, Bostrom’s apology was clearly suboptimal. It didn’t acknowledge that there was more bad stuff to the initial email than just the racial slur.
Namely, in my view, it’s not really defensible to say “technically true” things without some qualifying context if those true things, on their own, are easily interpreted in a misleadingly negative or harmful-belief-promoting way, or even, as you say, as “racist dogwhistles.” (I think that phrase is sometimes thrown around so lightly that it seems a bit hysterical, but it does seem appropriate for the specific example of the sentence Bostrom claimed he “likes.”)
Take for example a newspaper reporting on a person with autism who committed a school shooting. Given the widespread stigma against autism, it would be inappropriate to imply that autism is linked to these types of crimes without some sort of very careful discussion that doesn’t make readers prejudiced against people on the spectrum. (I don’t actually know if there’s any such link.)
What I considered bad about Bostrom’s apology was that he didn’t say more about why his entire stance on “controversial communication” was a bad take. Given all of the above, why did I say that I found Bostrom’s apology “reasonable”?
“Reasonable” is a lower bar than “good.”
Context matters: The initial email was never intended to be seen by anyone who wasn’t in that early group of transhumanists. In a small, closed group, communication functions very differently. For instance, among EA friends, I’ve recently (after the FTX situation) made a joke about how we should run a scam to make money. The joke works because my friends have enough context to know I don’t mean it. I wouldn’t make the same joke in a group where it isn’t common knowledge that I’m joking. Similarly, while I don’t know much about the transhumanist reading list, it’s probably safe to say that “we’re all high-decouplers and care about all of humanity” was common knowledge in that group. Given that context, it’s sort of defensible to think that there’s not that much wrong with the initial email (apart from cringiness) other than the use of the racial slur. Bostrom did apologize for the latter (even viscerally, and unambiguously).
I thought there was some ambiguity in the apology about whether he was apologizing just for the racial slur, or whether he also meant the email in general when he described how he hated re-reading it. When I said that the apology was “reasonable,” I interpreted him to mean the email in general. I agree he could have made this clearer.
In any case, that’s one way to interpret “truth-seeking” – trying to get to the bottom of any mistakes that were made when apologizing. That said, I think almost all the mentions of “truth-seeking is important” in the Bostrom discussion were about something else. There was a faction of people who thought that others should be socially shunned for holding specific views on the underlying causes of group differences, and another faction that was like, “it should be okay to say ‘I don’t know’ if you actually don’t know.”
While a few people criticized Bostrom’s apology for reasons similar to the ones I mentioned above (which I obviously think is reasonable!), my impression is that the people who were most critical of it did so for the “social shunning for not completely renouncing a specific view” reason.
For what it’s worth, I agree that emphasis on truth-seeking can go too far. While I appreciated this part of EA culture in the discussion around Bostrom, I’ve several times found myself accusing individual rationalists of fetishizing “truth-seeking.” :)

So, I certainly don’t disagree with your impression that there can be biases on both sides.
There is actually nothing technically untrue about this statement? [...] Do you have any thoughts on the explanations for what seems like an inconsistent application of upholding these standards? It might not even be accurately characterized as an inconsistency; I’m likely missing something here.
“Technically not saying anything untrue” isn’t the same as “exhibiting a truth-seeking attitude.”
I’d say a truth-seeking attitude would have been more like, “Before we condemn FLI, let’s make sure we understand their perspective and can assess what really happened.” Perhaps accompanied by, “I agree we should condemn them harshly if things are roughly as the reporting currently suggests.” Similar statement, different emphasis. Shakeel’s comment did appropriate hedging, but its main content was sharing a (hedged) judgment/condemnation.
Edit: I still upvoted your comment for highlighting that Shakeel (and Jason) hedged their comments. I think that’s mostly fine! In hindsight, though, I agree with the sentiment that the community discussion was tending towards judgment a bit too quickly.
The mindset you describe seems like a big improvement over what I suspect is common among EAs (“naive trust”). However, it doesn’t sound entirely optimal to me either.

Sometimes it’s possible for people (with good people judgment)* to know specific other people well enough to confidently rule out various failure modes around prosociality, like “this person certainly isn’t a serial killer” or “this person wouldn’t turn badly abusive even if lots of things went wrong.”**
Note that the emphasis here is on “sometimes.” The question isn’t just “do people with a low corruption threshold exist?” but also “can we identify them ex ante?” Identifying is hard, but sometimes we know someone well enough to do it. This holds especially if that person also makes themselves transparent (which greatly helps with trust-building).
Forming high-trust relationships with a few specific people (or even just one person) can be really valuable, so people might want to learn how to develop that sort of confidence even if they otherwise benefit from adopting the “no idols” mindset. (Note that it might be very rare to have the privilege to form close-enough relationships to make these assessments at all.)
I think the statement “power corrupts” is true, so with power-related failure modes, the world might be more grey (e.g., maybe there’s no person of whom we can tell that they’d never end up corrupted). Even so, you can sometimes say something like “this person seems about as good as it gets for this type of leadership.” That still makes a big difference!
In fact, I think focusing on “people and their traits” gives us more leverage than focusing on “situations” or “temptations/risk factors.” This is why, in organizational contexts, I think the choice of leadership matters more than “checks and balances.” Apart from personality being (IMO) responsible for most of the variance in good/bad outcomes (no citation; that’s just my impression), I also think that “checks and balances” only control downside risk. They don’t really help much with cases where leadership subtly caps its upside potential for success at the mission in exchange for more easily legible (selfish) “benefits.” (Altruistic missions are much harder to verify than “make $ profits for shareholders,” which is a reason why EA orgs have it harder.) By contrast, good leadership seems like the only way to succeed with a mission where tracking progress is difficult and not really legible to people who aren’t deeply immersed in things.

*People who are good at “people judgment.” Some people are really bad at it, in which case they can only limit downsides.
**I say “badly abusive” because I accept that it can probably happen to even the best of people that they end up unhappy with themselves and let it out on others, to some degree. That said, I think there are people who would notice and care (because they derive their life satisfaction from being a good partner or parent) if they started to do this and who would have the strength of character to accept that they’re doing this (as opposed to going into denial about it and taking it out on the other person even more out of convoluted self-hatred). I expect that “awareness + caring + resistance to strong levels of self-deception” protect quite well against the worst relationship failure modes.
Thanks for posting this account!
It sounds like your friend ended up being attracted (or sucked in) to quite a bad memetic environment!

It seems really off to me to assign much practical relevance to questions of IQ differences between groups, because these priors get screened off once we learn new information about individuals. For instance, if we learn that someone works at a “cognitively demanding organization,” that evidence is more direct and more relevant than any prior from group averages.
(Same reasoning: if all we knew about someone were that they don’t have a university degree, then we’d be forgiven for having priors that they’re somewhat likely not to be extremely intelligent or conscientious. [But it still seems important to keep an open mind, because priors are very crude – that’s the whole point!] However, once we learn that they work at a cognitively demanding organization, the prior from the university degree no longer matters at all!)
Neo-reaction has been close to the EA and rationalist communities for a very large fraction of our history.
I think a lot of this happened before I became active in EA/rationality, but I remember feeling quite puzzled when I read about neoreaction and saw that some people active in that scene had ties to the rationalists in the Bay area. My impression was that this influence has gotten a lot weaker over time, but it sounds like your experience suggests that it’s still a big issue. I find that very unfortunate.
I agree with basically everything you say here, but I also think it’s a bit unfair to point this out in the context of Kaspar Brandner sharing a lot of links after you did the same thing first (sharing a lot of links). :)

In any case, I think: not discussing the issue >> discussing the issue >> discussing the issue with flawed claims.
(And I think we’re all in trouble as a society because, unfortunately, people disagree about what the flawed claims are and we get sucked into the discussion kind of against our will because flawed claims can feel triggering.)
I like that you make a distinction between longtermism, the idea, and other “related” views that are prominent among longtermists, but logically distinct from longtermism. I disagree with calling the other views (like transhumanism, though that’s a broad tent) “indistinguishable from eugenics.” I find that statement so wrong that I downvoted the comment even though I really liked that you pointed out the above distinction.
On transhumanism among longtermists, I like Cinera’s point about the focus on positive selection, but I also want to make a quite different point: as far as I’m aware, many longtermists don’t expect “genetics” to play a big role in the future. (People might still have views on thought experiments that involve genes; I’m just saying those views are unlikely to influence anything in practice.) Many longtermists expect mind uploading to become possible, at which point people who want to be uploaded can enter virtual worlds (and those who don’t can stay behind in biological form in protected areas). Digital minds don’t reproduce the biological way, with fusion of gametes (I mean, maybe you could program them to do that, but what would be the point?), so the whole issue around “eugenics” no longer exists or has relevance in that context. There would then be lots of new ethical issues around digital minds, explored here, for instance. I think it’s important to highlight that many (arguably most?) longtermists who think transhumanism is important in practice mostly mean mind uploading rather than anything related to genes.
So, it might be interesting to talk about attitudes around mind uploading. I think it’s very reasonable if some people are against uploading themselves. It’s a different question whether someone wants to prohibit the technology for everyone else. Let’s assume that society thinks carefully about these options and decides not to ban all forms of mind uploading for everyone. In that scenario, everything related to mind uploading becomes “transhumanism.” There’ll be a lot of questions around it. In practice, current “transhumanists” are pretty much the only people who are concerned about bad things happening to digital minds or bad dynamics among such minds (e.g., Malthusian traps) – no one else is really thinking about these scenarios or considers them important. So, there’s a sense in which you have to be a transhumanist (or at least participate in the discourse) if you think it matters what happens to digital minds. And the motivation here seems very different from the motivation behind eugenics – I see it as forecasting (the possibility of) radical societal changes and thinking ahead about which options and trajectories are good vs. bad.
That’s a cool point by Klein.
There is so much evidence at this point against race realism/HBD. There is no possibility of it “could be false” without evoking some grand conspiracy. Can we never call it pseudoscience?
If the consensus is strong enough then yes, we should call it pseudoscience.
I read the Wikipedia article you linked on the topic, and my feeling was that there’s some remaining disagreement in many places, but overall it does read as though the science supports environmental factors much more than genetic ones. I’m not 100% sure how much I should trust it, given political pressure and some yellow flags in the article, like the uncritical mention of the Southern Poverty Law Center, which has behaved awfully and at times tried to cancel people like Sam Harris or Maajid Nawaz, who are “clearly good people” in my book. (And they still have Charles Murray on their list of extremists, putting him in the same category as neo-nazis, which is awful and immoral.)

I already looked at the resources by Bob Jacobs and thought some of them seemed a bit condescending, in the sense that I’d expect people who feel confident enough to downvote or upvote claims on this topic to already be familiar with them. (E.g., some of the points he makes would also speak against studying whether mammals are smarter than fish, given that fish have more genetic diversity than all mammals together and are a bit of an “unnaturally drawn group in biology.”) That said, it’s good to highlight the point about African diversity and, e.g., Nigerians having higher education scores in some areas than Europeans (and high conscientiousness – whether it’s cultural or genetic).
Other points seem overstated to me (e.g., the criticism of the validity of IQ). I think the Wikipedia article you linked to is a better source for convincing people that genetic influences may not play much of a role.
On the topic of the discussion as a whole, the current situation is clearly very unfortunate. It seems like many people only get interested in the topic because they have the impression there’s censorship and they’re against such censorship. If we relaxed slightly about what inferences are defensible to draw from the science, then most people would lose interest, which would lower social polarization? Maybe the best message to promote is something like, “If there are genetic influences, they’re likely no larger than environmental ones, and there may not be any, and overall the question doesn’t seem to have any moral or political/practical relevance.”
I downvoted it (weakly) because my impression is that “it’s pseudoscience” is not a nuanced statement on a topic where there’s bad science all over the place on both sides. Apart from the awfully racially-biased beliefs of many early scientists/geneticists, there has been a lot of pseudoscience from far-right sources on this more recently – that’s important to mention – but there has also been pseudoscience going in the other ideological direction, e.g., Lysenkoism in Soviet Russia, and we’re currently undergoing a wave of science denial where it’s controversial in some circles to believe that there are any psychological differences whatsoever between men and women. Inheritance stuff also seems notoriously difficult to pin down, because there’s a sense in which everything is “partly environmental” (if you put babies on the moon, they all end up dead) and you cannot learn much from simple correlation studies (there could still be environmental influences in there). I think a lot of the argument against genetic influences is about pointing out these limitations of the research and then concluding that, because of the limitations, the causes must be environmental only. But that’s only half-right: if the research has all these limitations, it makes more sense to be uncertain about the causes.
When I closely followed the controversy around Sam Harris’s interview of Charles Murray, and later the conflict and subsequent discussion between Sam Harris and Ezra Klein, I noticed that the side accusing Sam Harris of pandering to pseudoscience was lying about a bunch of easily verifiable things. I’m not sure I understand the science well enough to say they’re wrong about their scientific claims (and I didn’t bother to read their work in detail), but I think it’s good practice not to trust liars. (Ezra Klein was more of a weasel in that discussion than a liar – the people I think were lying were the authors of the hit piece against Harris and Murray that Klein allowed to be published on Vox.) Given the above, it seems possible to me that genetic influences also play a role. It seems plausible on priors (it would be a coincidence if all groups were the same in all regards), we have some precedent for group differences (I think the research on Ashkenazi Jews having higher average IQ is less controversial?), and it can’t fill you with confidence in the other position when some people are so morally confused that they consider the topic politically dangerous enough to lie about (e.g., in the Sam Harris context, but I’ve also seen recent EA twitter threads go in that direction).
It seems clear that some group differences are environmental-only. However, note that even if they weren’t, this wouldn’t have any political implications. The good things that underprivileged groups often have less access to – education, health care, infrastructure, both parents involved in upbringing (though of course many single parents do an excellent job raising their kids), etc. – are beneficial in all kinds of ways for anyone; their benefits don’t have much to do with IQ increases. So, politically, nothing would change, and it would remain morally important to work towards more equality.
As I said before, it’s totally counterproductive for the goal of fighting racism to stake your case on scientific claims that could turn out to be false. (Imagine how convenient a weapon you’d be handing to racists if they could point out that the anti-racists are staking their claims on potentially flawed science and punishing anyone who expresses uncertainty.) There’s no reason to consider group averages morally relevant. It’s a huge confusion to act as though a lot depends on them morally.

I also downvoted sapphire’s comments in some places (though not this thread) because they make it seem like there’s some conspiracy in EA around this stuff, and because I don’t like their use of the term “Scientific Racism.” (I think the term is very appropriate for many scientists in the early 20th century or before, but very unfair to use towards people like Charles Murray or people who say things like Bostrom said in his apology.) Regarding the alleged conspiracy, I had to look up what “HBD” exactly means. It might be true that some contrarian types are drawn to these topics in the Bay area and via that spinoff from Slatestarcodex where people get kicks out of discussing controversial topics. But that seems not particularly representative to me (and more rationalist than EA)? In any case, I mostly talk to EAs in London and Oxford, where I’ve never seen anyone express any interest in these topics whatsoever, “EA leadership” least of all. I agree that the voting patterns maybe suggest something about EA being unusual, but to me that mostly implies stuff like “EAs/rationalists are skeptical of making confident claims where the evidence is unlikely to support such claims.”
Thanks for the reply. That makes sense! I feel like Bostrom said a bit more than you describe here to make it clear that he doesn’t hold the view that white people are superior. So, to me, while “I like this comment” seemed like an extremely unfortunate phrasing on his part, the context at least made clear that he liked how the comment was “bold and edgy” rather than liking something about alleged differences between white and black people. That said, you’re right that it’s important to make these things really clear in an apology, and he could have said more on the topic. Other people have also had negative reactions to the apology (e.g., Habiba here), so maybe I’m in a minority. I read the apology and thought it wasn’t bad. I agree it could’ve been better (e.g., he could have written something like the paragraphs I wrote on how we should be very clear that group averages don’t have moral significance).
(I’m sometimes not sure whether it’s good to make apologies really long. If I ever had to apologize for something pretty bad, I’d be tempted to write a very long statement – but that may come across as self-absorbed and overly defensive. It just seems hard to get this right and I feel like Bostrom’s apology at least hit a few aspects of what I’d expect an acceptable apology to contain.)
That’s not my reading of the statement (it says “unacceptably racist language” and then condemns the manner of discussion rather than beliefs held).
It completely fails to take Bostrom’s apology into account in any form.
Yeah, but that can be okay if you think it’s higher priority to make a public statement about the contents of the email.
I initially didn’t think such a statement was necessary, because disagreeing with the email seemed like a no-brainer, so I didn’t think anyone would have any uncertainty about the views of an organization like CEA. But apparently some (very few) people are not only defending the apology – which I’ve done myself – but arguing that the original email was ~fine(?). I don’t agree with such reactions (and Bostrom doesn’t agree either; I see him as a sincere person who wouldn’t apologize like that if he didn’t think he messed up), but they show that the public statement serves a purpose beyond just virtue-signalling: making sure there are no misunderstandings. (Note that it’s possible to condemn someone’s actions from long ago as “definitely not okay” without saying that the person is awful or evil!)
Hard to read?
Apart from racial slurs, the original email contained “I like that sentence.” I’m sure that’s explained by not being neurotypical and by enjoying being contrarian/edgy (see this comment), but I still find it jarring. I think that’s a natural reaction.
Does anyone here face actual adversity anymore?
Why is it that people have to be unreasonable in one direction or the other? In my view, you’re being just as one-sided here as “the mob” if you make it seem like no one is facing racial adversity. Lots of people face adversity for all kinds of reasons; racism is some of it, as are other problems (e.g., mental illness is a big problem that’s arguably underrated and happens to affect people from all kinds of backgrounds).
Btw., even Charles Murray now publicly says the “there’s very little, if anything, to gain from discussions about group averages” line. In my view this is the equivalent of rolling onto one’s back and begging the mob to harass someone else, while hoping the inquisition overlooks that one hasn’t wholly retracted one’s statements.
When you imagine the Venn diagram of people who talk a lot about group averages vs people who have actively sought to improve the situation of socio-economically disadvantaged groups, I’d say the intersection isn’t that large. Charles Murray happens to be in the intersection (and it’s awful and unfair how people have shunned him), but this doesn’t change that the intersection is rather small according to my perception.
If the post had just been making forum readers aware of the controversy and added some commentary along the lines of “this was really hard to read and really disappointing,” then I would’ve upvoted it. (I see it the same way.)
However, the OP also highlights a particular paragraph in the apology (about the cause of differences in group averages) and implies that Bostrom’s uncertainty about it and his statement “and I don’t have any particular interest in the question” means that he holds morally repugnant views or at least doesn’t sufficiently condemn them. I explained here why I don’t agree with this. To quote from that comment:
I think it’s bad to reinforce the idea that group averages have any normative relevance whatsoever. If we speak as though the defence against racism is empirically finding that all intelligence differences for group averages are at most environmentally-caused, then that’s a weak defence against racism! It’s “weak” because it could turn out to be false. But in reality, I don’t think there’s any possible finding that could make us think “racism is okay.” In my view, not being racist – in the sense that has moral significance for me – means that (1) you’re not more inclined to falsely reach a conclusion about people from a different ethnicity than you’d be to reach the same conclusion about (e.g.) your own ethnicity, and (2) when you consider “candidates” (in whatever context) with equal characteristics/interests/qualifications, etc., you’re not more inclined to treat some worse than others based solely on their ethnicity. If we hold this view, we get to relax about what could be found out about group averages.

That said, I do agree that there’s very little, if anything, to gain from discussions about group averages, and that the people who are eager to bring up the topic seem morally suspicious. (In this specific case of Bostrom_2023 writing the apology, it’s not like he could have chosen to avoid the topic entirely – so given the mistakes he made 26 years ago, he had to address it again.)
It changes the emphasis a bit from “written evidence” (and “expressed worldviews”) to “anything whatsoever.”
E.g., if classrooms in 2005 had had CCTV, you could find a video of my 14-year-old self deliberately mispronouncing someone else’s name to make it sound dumb and making a comment about them having “girly” hair, after someone else had already started making fun of him. I think that video would be similarly hard to watch as the original Bostrom email is hard to read.

Edit: At least on some dimensions of “hard to watch”? I understand the view that Bostrom’s comments were much worse, but I think there’s something especially jarring about an expressed lack of empathy when the person being hurt is right in front of you, as opposed to saying dumb stuff in a small/closed setting to be intellectually edgy.
To add one more person’s impression: I agree with ofer that the apology was “reasonable,” I disagree with him that your post “reads as if it was optimized to cause as much drama as possible, rather than for pro-social goals,” and I agree with Amber Dawn that the original email is somewhat worse than something I’d have expected most people to have in their past. (That doesn’t necessarily mean it deserves any punishment decades later and with the apology – non-neurotypical people can definitely make a lot of progress between, say, their early twenties and later in life, in understanding how their words affect others and how edginess isn’t the same as sophistication.)

I think this is one of those “struggles of norms” where you can’t have more than one sacred principle, and ofer’s and my position is something like “it should be okay to say ‘I don’t know what’s true’ on a topic where the truth seems unclear (but not, e.g., something like Holocaust denial).” A community that doesn’t prioritize truth-seeking will run into massive trouble, so even if there’s a sense in which kindness is ultimately more important than truth-seeking (I definitely think so!), it just doesn’t make sense as an instrumental norm to treat it as sacred (so that one essentially forces people to say things that might be false, or else they are punished).
Separately from that, I think it’s bad to reinforce the idea that group averages have any normative relevance whatsoever. If we speak as though the defence against racism is empirically finding that all intelligence differences for group averages are at most environmentally-caused, then that’s a weak defence against racism! It’s “weak” because it could turn out to be false. But in reality, I don’t think there’s any possible finding that could make us think “racism is okay.” In my view, not being racist – in the sense that has moral significance for me – means that (1) you’re not more inclined to falsely reach a conclusion about people from a different ethnicity than you’d be to reach the same conclusion about (e.g.) your own ethnicity, and (2) when you consider “candidates” (in whatever context) with equal characteristics/interests/qualifications, etc., you’re not more inclined to treat some worse than others based solely on their ethnicity. If we hold this view, we get to relax about what could be found out about group averages.
That said, I do agree that there’s very little, if anything, to gain from discussions about group averages, and that the people who are eager to bring up the topic seem morally suspicious. (In this specific case of Bostrom_2023 writing the apology, it’s not like he could have chosen to avoid the topic entirely – so given the mistakes he made 26 years ago, he had to address it again.)
It sounds like you’re advocating for the position of always following “good practices heuristics” and you’re saying “grantmakers who are on the board of another org should recuse themselves from grantmaking decisions about this other org” is one such heuristic. The first point seems uncontroversial; the second point is, in my view, open to debate.
It’s open to debate because “board membership” at most correlates with having specific conflicts of interest. What we should really be concerned about are the potentially-biasing influences themselves, like:
Does the grantmaker person have a strong financial motive to make a particular decision?
Does the grantmaker have a strong reputational motive to make a particular decision?
Does the grantmaker have a strong social motive (friendship, romance, peer pressure, etc.) to make a particular decision?
Once we learn that Claire joined EVF’s board in her role as a grantmaker, all the other ways in which “being a board member” is usually correlated with the above three potentially-biasing influences no longer apply. Learning the context in which she joined screens off these other factors. By contrast, if we knew nothing about why Claire joined the EVF board, and especially if she had joined their board before starting to work at Open Phil, then it would become hard to rule out that her board membership comes with potentially-biasing influences.
Maybe another concern is “Is the grantmaker at risk of exerting undue influence over an org?” – but that depends on what we mean by “undue.” It’s also somewhat common for funders to join boards, so it’s not like this clearly violates good practices.
Overall, I think it’s quite reasonable not to be concerned about this after thinking through the specifics. The position of “it’s hubris to think through the specifics when we must avoid anything that’s even just vaguely correlated with a conflict of interest” doesn’t seem appealing to me. It also seems like “process theater” where people signal how virtuously they adhere to “good processes” without seeming to even understand or care why these processes are there in the first place. If anything, I’d find it concerning if people reasoned about things in a rigidly-rule-driven way that’s disconnected from “why might this be bad?”
Since he’d been involved in EA for so long, I wonder if he never truly subscribed to EA principles and has simply been ‘playing the long game’.
I explained in this comment and the comment reply below it why I think it’s clear that he did believe in EA principles (except for the parts of “EA principles” that are explicitly against fraud and so on).
I’ve seen plenty of examples of SBF being a master at this dumb game we woke westerners play where we say all the right shibboleths and so everyone likes us.
That’s evidence that he’s deceptive, but note that he meant to refer to stuff like corporate responsibility, not his utilitarianism.
“Never” is too strong, okay. But I disagree with your second point. I feel like I was only speaking out against the framing that critics of EA are entitled to a lengthy reply because of EA being ambitious in its scope of caring. (This framing was explicit at least in the quoted paragraph, not necessarily in her post as a whole or her previous work.) I don’t feel like I was discouraging criticism. Basically, my point wasn’t about the act of criticizing at all, it was only about an added expectation that went with it, which I’d paraphrase as “EAs are doing something wrong unless they answer to my concerns point by point.”