Theoretical Computer Science MSc student at the University of [Redacted] in the United Kingdom.
I’m an aspiring alignment theorist; my research vibes are descriptive formal theories of intelligent systems (and their safety properties) with a bias towards constructive theories.
I think it’s important that our theories of intelligent systems remain rooted in the characteristics of real world intelligent systems; we cannot develop adequate theory from the null string as input.
𝕮𝖎𝖓𝖊𝖗𝖆
I think your consequentialist analysis is likely wrong and misguided. I think you’re overstating the effects of the harms Bostrom perpetuated?
I think a movement where our leading intellectuals felt pressured to distort their views for social acceptability is a movement that does a worse job of making the world a better place.
Bostrom’s original email was bad and he disavowed it. The actual apology he presented was fine IMO; he shouldn’t have pretended to believe that there are definitely no racial differences in intelligence.
I notice that I am surprised and confused.
I’d have expected Holden to contribute much more to AI existential safety as CEO of Open Philanthropy (career capital, comparative advantage, specialisation, etc.) than via direct work.
I don’t really know what to make of this.
That said, it sounds like you’ve given this a lot of deliberation and have a clear plan/course of action.
I’m excited about your endeavours in the project!
Yeah, I agree here. We shouldn’t discuss that topic in community venues; it doesn’t help our mission and is largely counterproductive.
I don’t really like this thing where you speak on behalf of black EAs.
I think you should let black EAs speak for themselves or not comment on it.
In my experience, there seem to be distortionary epistemic effects when someone speaks on behalf of a minority group. Often, the person speaking attributes harms, injustices, or offences to them that the relevant members of those groups may not actually endorse.
When it’s done on my behalf, I find it pretty patronising, and it’s annoying/icky?
I don’t want to speak for black EAs but it’s not clear to me that the “hurt” you mention is actually real.
Edit: leaving the original tweet below, but I no longer endorse this take.
SBF is a lot more cynical/nihilistic/immoral than I had suspected.
I think people are overrating the causal role EA ideology played here.
This doesn’t sound like someone for whom utilitarian calculus was a decisive factor.
“It’s only wrong if you lose”
“If you win it doesn’t matter what your morals were.”
“Being upright/principled and losing is bad”
“Ethical talk is just PR”
“Ethics is itself just signalling and status games”
What a bleak world view.
One thing I definitely believe, and have commented on before[1], is that median EAs (i.e., EAs without an unusual amount of influence) are over-optimising for the image of EA as a whole, which sometimes conflicts with actually trying to do effective altruism. Let the PR people and the intellectual leaders of EA handle that; people outside those roles should be focusing on saying what we sincerely believe to be true.
FWIW, I’m directly updating on this (and on the slew of aggressively bad faith criticism from detractors following this event).
I’ll stop trying to think about how we should optimise for and manage PR, and default to honesty and accurate representation (as opposed to strategic presentation of our positions to make them more appealing/easier to accept). (This is not to imply that I ever condoned lying, but I had thought it might be better to e.g. change which parts of EA messaging we highlight based on what people seem most receptive to, rather than our real cruxes: e.g. justify existential risk mitigation because 8 billion people dying is bad, instead of via less accessible longtermist arguments.)
Very strong disagree here.
Bostrom endorses positive selection for beneficial traits (via e.g. iterated embryo selection), he doesn’t support negative selection (i.e. preventing people who have less of the beneficial trait from reproducing).
I think positive selection for beneficial traits/human enhancement more generally is good.
I’ve updated the title.
Bostrom’s email did not actually cause any harm (as far as we know) at the time it was written.
I want EA to be a movement about ambitiously working towards a brighter future for humanity.
To that end, it’s a feature not a bug that some EAs are rich/powerful and that EA attracts some of those kinds of people.
I really don’t like this post.
Factually, I think it removes critical context and is sorely lacking in nuance.
Crucial context that was missing:
- It was sent 25+ years ago when Bostrom was a student
- It was sent as part of a conversation about offensive communication styles
- Bostrom apologised for it at the time, within 24 hours
- Bostrom has apologised again for the email now
Beyond the lack of nuance, this feels like it’s optimised for PR management and not honest communication or representation of your fully considered beliefs. I find that disappointing. I greatly preferred Habiba’s statement on this issue despite it largely expressing similar sentiments because it did feel like honest communication/representation of her beliefs (I’ve strongly downvoted this post and strongly upvoted that one, despite largely disagreeing with the sentiment expressed).
And I don’t really like the obsession with PR management in the community. I think it’s bad for epistemic integrity, and it’s bad for expected impact of the effective altruism community on a brighter world.
Emotionally, this made me feel disappointed and a bit bitter.
Immigration is such a tight constraint for me.
My next career steps after I’m done with my TCS Master’s are primarily bottlenecked by “what allows me to remain in the UK” and then “keeps me on track to contribute to technical AI safety research”.
What I would like to do for the next 1–2 years (“independent research”/“further upskilling to get into a top ML PhD program”) is not all that viable a path given my visa constraints.
Above all, I want to avoid wasting N more years by taking a detour through software engineering again just to get visa sponsorship.
[I’m not conscientious enough to pursue AI safety research/ML upskilling while managing a full time job.]
Might just try and see if I can pursue a TCS PhD at my current university and do TCS research that I think would be valuable for theoretical AI safety research.
The main detriment of that is I’d have to spend N more years in <city> and I was really hoping to come down to London.
Advice very, very welcome.
[Not sure who to tag.]
Eugenics was not an unrelated tangent.
Bostrom has been accused of being a eugenicist, and he has defended views that could be characterised as eugenics.
Probably the people trying to cancel him would have attempted to cancel him for eugenics.
It was very much on topic.
Bostrom could have written a better apology, but I think that may have required dishonesty about his beliefs, and I think such dishonesty would have been really bad.
Copying my LW comment:
I don’t buy this argument for a few reasons:
- SBF met Will MacAskill in 2013, and it was following that discussion that SBF decided to earn to give
- EA wasn’t a powerful or influential movement back in 2013, but quite a fringe cause
- SBF was in EA since his college days, long before his career in quantitative finance and later in crypto
- SBF didn’t latch onto EA after he acquired some measure of power, or when EA was a force to be reckoned with, but pretty early on. He was in a sense “homegrown” within EA.
The “SBF was a sociopath using EA to launder his reputation” narrative is just motivated credulity IMO. There is little evidence in its favour. It’s just a story that feels good to believe and absolves us of responsibility.
Astrid’s hypothesis is not very credible when you consider that she doesn’t seem to be aware of SBF’s history within EA. Like, what’s the angle here? There’s nothing suggesting SBF planned to enter finance as a college student before MacAskill sold him on earning to give.
Most comments on the matter I’ve actually seen were critical/condemning. It sounds like several supporters were not vocally expressing their positions?
The opinion itself may not be that controversial, but it’s very much a minority in terms of actual comments on the matter?
I am not clear on the extent to which his email actually harmed people.
I agree that he did not optimise for mitigating the harm caused, but I don’t grant that much weight, because the extent of the harm remains very ambiguous to me.
Strongly upvoted.
I endorse basically everything here.
In general, I’m very unconvinced that increasing EA bureaucracy and making funding/impact decisions more democratically driven would be net positive.
Valid!
It’s definitely valid to lower your opinion of Bostrom’s character because of this.
I was merely presenting my own opinion because I was persuaded it needed to be heard.
I guess I prioritise somewhat different things from you.
For context, I’m black (Nigerian in the UK).
I’m just going to express my honest opinions here:
The events of the last 48 hours (slightly) raised my opinion of Nick Bostrom. I was very relieved that Bostrom did not compromise his epistemic integrity by expressing more socially palatable views that are contrary to those he actually holds.
I think it would be quite tragic to compromise on honestly/accurately reporting our beliefs, when the situation calls for it, in order to fit in better. I’m very glad Bostrom did not do that.
As for the contents of the email itself: while very distasteful, it was sent in a particular context, deliberately to offend, and Bostrom regretted it and apologised for it at the time. I don’t think it’s useful/valuable to judge him on the basis of an email he sent a few decades ago as a student. The Bostrom who sent the email did not reflectively endorse its contents, and the current Bostrom does not either.
I’m not interested in a discussion on race & IQ, so I deliberately avoided addressing that.