Co-founder of Arb, an AI / forecasting / etc consultancy. Doing a technical AI PhD.
Conflicts of interest: EPSRC, Emergent Ventures, OpenPhil, Infrastructure Fund, Alvea.
It’s very unclear if she’s read any of the work besides the torrid Torres pieces.
Thread for serious AI safety researchers who aren’t longtermists
Review of the New Yorker piece. It's a model of its type, for good and ill but mostly good.

The good: The essence is correct. EA is now powerful enough that public scrutiny is fully justified. Lewis-Kraus engages with the ideas, and skips tabloid cheap shots. (The house style always involves little gossipy comments about fashion and eye colour, but here it's more about scruffy clothing than physical appearance). For instance, it's extremely easy to caricature utilitarianism. Certainly many professional philosophers do. But Lewis-Kraus chooses the neutral definition: no cavilling about hedonism, reductionism, Gradgrind, nor very much about honor. Similarly, AI risk is oddly underemphasised, and we all know how easy that is to piss on.
The hypothesis of MacAskill's bad faith is entertained and rejected. So too with Bernard Williams' quietism: looked at and put back on the shelf. "Perhaps one thought too few."

The bad: gossip and false balance. Girlfriends and buildings are named, needlessly, privacy and risk be damned. The dissident's gender is revealed for absolutely no reason. Journalists as a class have an underdeveloped sense of the risks they are exposing people to. The house style demands irrelevant detail, and apparently places style above potential impacts.

I can't help but admire the symbols he picks out of real life, even though they are the nonfiction equivalent of puns or entrail reading:

* Of xrisk research: "an Oxford building that overlooks a graveyard."
* "The room featured a series of ornately carved wooden clocks, all of which displayed contrary times; an apologetic sign read "Clocks undergoing maintenance," but it was an odd portent for a talk about the future"
* "We passed People's Park, which had become a tent city, but his eyes flicked toward the horizon."
Some risible bits:
> abandon the world view of the "benevolent capitalist" and, just as Engels worked in a mill to support Marx, to live up to its more thoroughgoing possibilities

Incredible. Engels ran a Manchester cotton mill and inherited a fifth of it; he was a benevolent capitalist!
> the chances of human extinction during the next century stand at about 1–6, or the odds of Russian roulette

That's not how odds work.

> It does, in any case, seem convenient that a group of moral philosophers and computer scientists happened to conclude that the people most likely to safeguard humanity's future are moral philosophers and computer scientists

jfc. If you worry that practitioners of a field are ignoring something, you're a crank and a trespasser. If you worry about the tail risks of your own field, you're suffering from convenient delusions of grandiosity.

The PR suspicion is funny ("Was MacAskill's gambit with me—the wild swimming in the frigid lake—merely a calculation that it was best to start things off with a showy abdication of the calculus?"). GLK didn't mention any of this in his profile of Rothberg, a businessman with incentives and a presumably similarly sized filter on his speech. But mention consequentialism and suddenly everyone assumes you're a master at acting and a 4D chess player. But he was just primed for it by the dissident so nvm.

> I could see how comforting it was, when everything seemed so awful, to take refuge on the higher plane of millenarianism.

Literally backwards. I find it much more emotionally difficult to contemplate x-risk than terrible but limited events.
But overall GLK is the real deal, as good as magazine writers get. See also him on Paige Harden and Scott Alexander.
I meant Hamish
I’m troubled that the version of the story you heard didn’t mention it was a fuckup she repeatedly apologised for.
I would love to see your estimates. As I say below, I overlooked peacekeeping and it is probably the diamond in the rough.
I am negative because of the lack of estimates, and because it really does seem relatively low in importance and tractability. (Every UN insider I've spoken to (now 4) is extremely negative about it.)
I would love to be wrong.
army of one.
What's the main benefit of these posts, do you think? Accountability, transparency to your community given funding, inspo, braggadocio?
Impressed with the infrastructure you built around the post (the anon forms and the votey comments)! Also love the randomisation ideas.
You do well at reporting the views without necessarily endorsing them in the first half—but then the policy suggestions seem to endorse every criticism. (Maybe you do agree with all of them?) But if not there’s a PR flavour to it: “we have to spend on climate cos otherwise people will be sad and not like us”. Of the four arguments in Policy section 1, none seem to depend on estimating the expected value and comparing it to the existing EA portfolio, as Ben Dixon memorably did.
(I’d have no objection if the section was titled “appropriate respect for climate work” rather than “more emphasis”, which implies a zero-sum bid for resources, optimality be damned.)
We can now sponsor US work visas.
Not wrong but not helpful imo. (Past treatments of the theme: here, here, here.)
Main problem is you’re not considering the base rate for elitism / credentialism / privilege in the reference class “philanthropy / intellectual movements / technical fields / levers of power”. I’m first-generation college (and not elite college either), and I can tell you that my EA clients care the least about this among any class of clients (corporate, academic, government, non-EA philanthropy) by far.
Similarly: cmon, EA is 20% non-straight, as opposed to like 5% in the US.
It’s also just temporary founder effects plus previous lack of resources. One of the many boons of the funding influx is that we can start lifting unprivileged students. I’ve seen this happen ten times this year. There are lots of people trying to expand into Latin America and India. It’s hard!
Boring meta comment on the reception of this post: Has anyone downvoting it read it? I don’t see how you could even skim it without noticing its patent seriousness and the modesty of its conclusion.
(Pardon the aside, Richard; I plan to comment on the substance in the next few days.)
I suspect downvoters are misunderstanding “know” and “will be”; I think Turchin meant “If we knew” and “it would [then] be reasonable” (subjunctive).
Have you seen the new GLP-1 agonists? They only got approved for obesity last year, and might actually make a dent. The next generation are apparently even better.
Making these cheaper, more available, and deliverable without needles are all worthy interventions, more tractable imo than the policy steering you described. But I don't know if they need charity; the market is vast.
I have no proof it mattered, but a few years before the big pivot to longtermism, 80k debated some leftists who emphasised the sheer scope of systemic change and measurability bias. And we moved.
I would rather gloss that article as “EA pays too much attention to one kind of criticism: vague systemic paradigmatic insinuation”.
Most philosophers will automatically be metaphilosophical optimists. I’d love to know what fraction of the dropouts are pessimists.
Lots of things for verbal types to do! Just one: it turns out that precise writing is in very short supply; I know great researchers who are way more productive with writing support.
I also encourage you not to take the tests too seriously. Nor your current dislikes. I’m a philosophy type, but I made myself technical enough for an AI PhD, slowly overcoming a heavy bias against maths. It is unlikely that you couldn’t do the same if you wanted.
I found this unusually moving and comprehensive; thank you.
(Note: the above is not an argument against working in the US, which is probably correctly rated in EA.)