Former AI safety research engineer, now AI governance researcher at OpenAI. Blog: thinkingcomplete.blogspot.com
richard_ngo
I think this pales in comparison to Trump’s willingness to silence critics (e.g. via hush money and threats).
If you believe that Trump has done a bunch of things wrong, the Democrats have done very little wrong, and the people prosecuting Trump are just following normal process in doing so, then yes these threats are worrying.
But if you believe that the charges against Trump were in fact trumped-up, e.g. because Democrats have done similarly bad things without being charged, then most of Trump’s statements look reasonable. E.g. this testimony about Biden seems pretty concerning—and given that context, saying “appoint a Special Counsel to investigate Joe Biden who hates Biden as much as Jack Smith hates me” seems totally proportional.
Also, assuming the “hush money” thing is a reference to Stormy Daniels, I think that case reflects much worse on the Democrats than it does on Trump—the “crime” involved is marginal or perhaps not even a crime at all. (tl;dr: Paying hush money is totally legal, so the actual accusation they used was “falsifying business records”. But this by itself would only be a misdemeanor, unless it was done to cover up another crime, and even the prosecution wasn’t clear on what the other crime actually was.) Even if it technically stands up, you can imagine the reaction if Clinton had been prosecuted on such flimsy grounds while Trump was president.
The Democratic party, like the GOP, is going to act in ways which help get their candidate elected. … There’s nothing illegal about [not hosting a primary] though, parties are private entities and can do whatever they want to select a candidate.
If that includes suing other candidates to get them off the ballots, then I’m happy to call that unusually undemocratic. More generally, democracy is constituted not just by a set of laws, but by a set of traditions and norms. Not hosting a primary, ousting Biden, Kamala refusing interviews, etc, all undermine democratic norms.
Now, I do think Trump undermines a lot of democratic norms too. So it’s really more of a question of who will do more damage. I think that many US institutions (including the media, various three-letter agencies, etc) push back strongly against Trump’s norm-breaking, but overlook or even enable Democrat norm-breaking—for instance, keeping Biden’s mental state secret for several years. Because of this I am roughly equally worried about both.
Scott Aaronson lays out some general concerns well here.
I don’t really see much substance here. E.g. Aaronson says “Trump’s values, such as they are, would seem to be “America First,” protectionism, vengeance, humiliation of enemies, winning at all costs, authoritarianism, the veneration of foreign autocrats, and the veneration of himself.” I think America First is a very reasonable value for an American president to have (and one which is necessary for the “American-led peaceful world order” that Scott wants). Re protectionism, seems probably bad in economic terms, but much less bad than many Democrat policies (e.g. taxing unrealized capital gains, anti-nuclear, etc). Re “vengeance, humiliation of enemies, winning at all costs, authoritarianism”: these are precisely the things I’m concerned about from the Democrats. Re “the veneration of foreign autocrats”: see my comments on Trump’s foreign policy.
I don’t think the link you provided on Reddit censorship demonstrates censorship
Sorry, I’d linked it from memory since I’ve seen a bunch of censorship examples from them, but I’d forgotten that they also post a bunch of other non-censorship stuff. Will dig out some of the specific examples I’m thinking about later.
Re Facebook, here’s Zuckerberg’s admission that the Biden administration “repeatedly pressured our teams for months” to censor covid-related content (he also mentions an FBI warning about Russian disinformation in relation to censorship of the Hunter Biden story, though the specific link is unclear).
(This comment focuses on object-level arguments about Trump vs Kamala; I left another comment focused on meta-level considerations.)
Three broad arguments for why it’s plausibly better if Trump wins than if Kamala does:
I basically see this election as a choice between a man who’s willing to subvert democracy, and a party that is willing to subvert democracy—e.g. via massively biased media coverage, lawfare against opponents, and coordinated social media censorship (I’ve seen particularly egregious examples on Reddit, but I expect that Facebook and Instagram are just as bad). RFK Jr, a lifelong Democrat (and a Kennedy to boot), has now endorsed Trump because he considers Democrat behavior too undemocratic. Heck, even Jill Stein has made this same critique. It’s reasonable to think that the risk Trump poses outweighs that, but it’s also reasonable to lean the other way, especially if you think (like I do) that the neutrality + independence of many US institutions is at a low point (e.g. see the Biden administration’s regulatory harassment of Musk on some pretty ridiculous grounds).
On foreign policy: it seems like Trump was surprisingly prescient about several major geopolitical issues (e.g. his 2016 positions that the US should be more worried about China, and that the US should push European countries to contribute much more to NATO, were heavily criticized at the time, but now are mainstream). The Abraham Accords also seem pretty significant. And I think the fact that the Ukraine war and the Gaza war both broke out under Biden not Trump should make us update in Trump’s favor (though I’m open to arguments on how much we should update).
On AI and pandemics: I don’t like his object-level policies but I do think he’ll bring in some very competent people (like Musk and Ramaswamy), and as I argued in this post I think the EA community tends to err towards favoring people who agree with our current beliefs, and should update towards prioritizing competence. (Of course there are also some very competent people on the Democrat side on these issues, but I expect them to be more beholden to the status quo. So if e.g. you think that FDA reform is important for biosecurity, that’s probably easier under Trump than Harris.)
(This comment focuses on meta-level issues; I left another comment with object-level disagreements.)
The EA case for Trump was heavily downvoted, with commenters arguing that e.g. “a lot of your arguments are extremely one-sided in that they ignore very obvious counterarguments and fail to make the relevant comparisons on the same issue.”
This post is effectively an EA case for Kamala, but less even-handed—e.g. because it:
Is framed not just as a case for Kamala, but as a case for action (which, I think, requires a significantly higher bar than just believing that it’d be better on net if Kamala won).
Doesn’t address the biggest concerns with another Democrat administration (some of which I lay out here).
Generally feels like it’s primarily talking to an audience who already agrees that Trump is bad, and just needs to be persuaded about how bad he is (e.g. with headings like “A second Trump term would likely be far more damaging for liberal democracy than the last”).
And yet it has been heavily upvoted. Very disappointing lack of consistency here, which suggests that the criticisms of the previous post, while framed as criticisms of the post itself, were actually about the side chosen.
This matters both on epistemic grounds and because one of the most harmful things that can be done for AI safety is to heavily politicize it. By default, we should expect that a lot more people will end up getting on the AI safety train over time; the main blocker to that is if they’re so entrenched in their positions that they fail to update even in the face of overwhelming evidence. We’re already heading towards entrenchment; efforts like this will make it worse. (My impression is that political motivations were also a significant contributor to Good Ventures decoupling itself from the rationalist community—e.g. see this comment about fringe opinion holders. It’s easy to imagine this process spiraling further.)
Anyone know what post Dustin was referring to? EDIT: as per a DM, probably this one.
Defining alignment research
I recently had a very interesting conversation about master morality and slave morality, inspired by the recent AstralCodexTen posts.
The position I eventually landed on was:
Empirically, it seems like the world is not improved the most by people whose primary motivation is helping others, but rather by people whose primary motivation is achieving something amazing. If this is true, that’s a strong argument against slave morality.
The defensibility of morality as the pursuit of greatness depends on how sophisticated our cultural conceptions of greatness are. Unfortunately we may be in a vicious spiral where we’re too entrenched in slave morality to admire great people, which makes it harder to become great, which gives us fewer people to admire, which… By contrast, I picture past generations as being in a constant aspirational dialogue about what counts as greatness—e.g. defining concepts like honor, Aristotelian magnanimity (“greatness of soul”), etc.
I think of master morality as a variant of virtue ethics which is particularly well-adapted to domains which have heavy positive tails—entrepreneurship, for example. However, in domains which have heavy negative tails, the pursuit of greatness can easily lead to disaster. In those domains, the appropriate variant of virtue ethics is probably more like Buddhism: searching for equanimity or “green”. In domains which have both (e.g. the world as a whole) the closest thing I’ve found is the pursuit of integrity and attunement to oneself. So maybe that’s the thing that we need a cultural shift towards understanding better.
My take is that most of the points raised here are second-order points, and actually the biggest issue in this election is how democratic the future of America will be. But having said that, it’s not clear which side is overall better on this front:
The strongest case for Trump is that the Democrat establishment is systematically deceiving the American people (e.g. via the years-long cover-up of Biden’s mental state, strong partisan bias in mainstream media, and extensive censorship campaigns), engaging in lawfare against political opponents (e.g. against Elon and Trump), and generally growing the power of unaccountable bureaucracies over all aspects of life (including bureaucracies which do a lot of harm, like the FDA, FTC, EPA etc). All of this is highly undemocratic, and implicitly coordinated via preference cascades (e.g. see how during covid the Democrats established strong party lines on masks, lockdowns, lab origin, etc, which occasionally required a 180-degree flip from their previous positions). While I think Democrat appointees are likely to be more competent on average than Republicans, I can imagine similar preference cascades leading to totally crazy AI policies.
The strongest case against Trump is how many of his cabinet members and previous close supporters from his last term turned against him—particularly Pence’s account of Trump trying to overturn the 2020 election results. I don’t trust a lot of the coverage about how authoritarian Trump is, since there’s a lot of anti-Trump bias in the media (see for instance the “very fine people” hoax), but those people were selected for being sympathetic to Trump in the first place, and should know the details, so their opposition to him updates me a lot. This is especially worrying given that AGI might provide an opportunity for a US leader to seize centralized power.
I remain in favor of people doing work on evals, and in favor of funding talented people to work on evals. The main intervention I’d like to make here is to inform how those people work on evals, so that it’s more productive. I think that should happen not on the level of grants but on the level of how they choose to conduct the research.
Twitter thread on open-source AI
Twitter thread on AI safety evals
This seems like the wrong meta-level orientation to me. A meta-level orientation that seems better to me is something like “Truth and transparency have strong global benefits, but often don’t happen enough because they’re locally aversive. So assume that sharing information is useful even when you’re not concretely sure how it’ll help, and assume by default that power structures (including boards, social networks, etc) are creating negative externalities insofar as they erect barriers to you sharing information”.
The specific tradeoff between causing drama and sharing useful information will of course be situation-dependent, but in this situation the magnitude of the issues involved feels like it should significantly outweigh concerns about “stirring up drama”, at least if you make attempts to avoid phrasing the information in particularly-provocative or careless ways.
I disagree FWIW. I think that the political activation of Silicon Valley is the sort of thing which could reshape American politics, and that Twitter is a leading indicator.
you can infer that people who don’t take AI risk seriously are somewhat likely to lack important forms of competence
This seems true, but I’d also say that the people who do take AI risk seriously also typically lack different important forms of competence. I don’t think this is coincidental; instead I’d say that there’s (usually) a tradeoff between “good at taking very abstract ideas seriously” and “good at operating in complex fast-moving environments”. The former typically requires a sort of thinking-first orientation to the world, the latter an action-first orientation to the world. It’s possible to cultivate both, but I’d say most people are naturally inclined to one or the other (or neither).
Towards more cooperative AI safety strategies
If Hassan had said that more recently or I was convinced he still thought that, then I would agree he should not be invited to Manifest.
My claim is that the Manifest organizers should have the right to invite him even if he’d said that more recently. But I appreciate you giving your perspective, since I did ask for that (just clarifying the “agree” part).
Having said that, given that there is a very clear non-genocidal reading, I do not think it is a clear example of hate speech in quite the same sense as Hanania’s animals remark
I have some object-level views about the relative badness but my main claim is more that this isn’t a productive type of analysis for a community to end up doing, partly because it’s so inherently subjective, so I support drawing lines that help us not need to do this analysis (like “organizers are allowed to invite you either way”).
Why doesn’t this imply that EA should get better at power struggles (e.g. by putting more resources into learning/practicing/analyzing corporate politics, PR, lobbying, protests, and the like)?
Of course this is all a spectrum, but I don’t believe this implication in part because I expect that impact is often heavy-tailed. You do something really well first and foremost by finding the people who are naturally inclined towards being some of the best in the world at it. If a community that was really good at power struggles tried to get much better at truth-seeking, it would probably still not do a great job at pushing the intellectual frontier, because it wouldn’t be playing to its strengths (and meanwhile it would trade off a lot of its power-seeking ability). I think the converse is true for EA.
I broadly endorse Jeff’s comment above. To put it another way, though: I think many (but not all) of the arguments from the Kolmogorov complicity essay apply whether the statements which are taboo to question are true or false. As per the quote at the top of the essay:
“A good scientist, in other words, does not merely ignore conventional wisdom, but makes a special effort to break it. Scientists go looking for trouble.”
That is: good scientists will try to break a wide range of conventional wisdom. When the conventional wisdom is true, then they will fail. But the process of trying to break the conventional wisdom may well get them in trouble either way, e.g. because people assume they’re pushing an agenda rather than “just asking questions”.
The main alternative to truth-seeking is influence-seeking. EA has had some success at influence-seeking, but as AI becomes the locus of increasingly intense power struggles, retaining that influence will become more difficult, and it will tend to accrue to those who are most skilled at power struggles.
I agree that extreme truth-seeking can be counterproductive. But in most worlds I don’t think that EA’s impact comes from arguing for highly controversial ideas; and I’m not advocating for extreme truth-seeking like, say, hosting public debates on the most controversial topics we can think of. Rather, I think its impact will come from advocating for not-super-controversial ideas, but it will be able to generate them in part because it avoided the effects I listed in my comment above.
One person I was thinking about when I wrote the post was Mehdi Hasan. According to Wikipedia:
During a sermon delivered in 2009, quoting a verse of the Quran, Hasan used the terms “cattle” and “people of no intelligence” to describe non-believers. In another sermon, he used the term “animals” to describe non-Muslims.
Mehdi has spoken several times at the Oxford Union and also in a recent public debate on antisemitism, so clearly he’s not beyond the pale for many.
I personally also think that the “from the river to the sea” chant is pretty analogous to, say, white nationalist slogans. It does seem to have a complicated history, but in the wake of the October 7 attacks its association with Hamas should I think put it beyond the pale. Nevertheless, it has been defended by Rashida Tlaib. In general I am in favor of people being able to make arguments like hers, but I suspect that if Hanania were to make an argument for why a white nationalist slogan should be interpreted positively, it would be counted as a strong point against him.
I expect that either Hasan or Tlaib, were they interested in prediction markets, would have been treated in a similar way as Hanania by the Manifest organizers.
I don’t have more examples off the top of my head because I try not to follow this type of politics too much. I would be pretty surprised if an hour of searching didn’t turn up a bunch more though.
One more point: in Scott’s blog post he talks about the “big lie” of Trump: that the election was stolen. I do worry that this is a key point of polarization, where either you fully believe that the election was stolen and the Democrats are evil, or you fully believe that Trump was trying to seize dictatorial power.
But reality is often much more complicated. My current best guess is that there wasn’t any centrally-coordinated plan to steal the election, but that the central Democrat party:
Systematically turned a blind eye to thousands of people who shouldn’t have been voting (like illegal immigrants) actually voting (in some cases because Democrat voter registration pushes deliberately didn’t track this distinction).
Blocked reasonable election integrity measures that would have prevented this (like voter ID), primarily in a cynical + self-interested way.
On priors I think this probably didn’t swing the election, but given how small the winning margins were in swing states, it wouldn’t be crazy if it did. From this perspective I think it reflects badly on Trump that he tried to do unconstitutional things to stay in power, but not nearly as badly as most Democrats think.
(Some intuitions informing this position: I think if there had been clear smoking guns of centrally-coordinated election fraud, then Trump would have won some of his legal challenges, and we’d have found out about it since then. But it does seem like a bunch of non-citizens are registered to vote in various states (e.g. here, here), and I don’t think this is a coincidence given that it’s so beneficial for Dems + Dems have so consistently blocked voter ID laws. Conversely, I do also expect that red states are being overzealous in removing people from voter rolls for things like changing their address. Basically it all seems like a shitshow, and not one which looks great for Trump, but not disqualifying either IMO, especially because in general I expect to update away from the mainstream media line over time as information they’ve suppressed comes to light.)