I think that’s too speculative a line of thinking to use for judging candidates. Sure, being intelligent about AI alignment is a data point for good judgment more generally, but so is being intelligent about automation of the workforce, about healthcare, about immigration, and so on. Why should AI alignment in particular be a litmus test for rational judgment? We may perceive a pattern of more explicitly rational people taking AI alignment seriously while patently anti-rational people dismiss it, but that’s a feature of certain elite liberal circles like those surrounding EA and the Bay Area; in the broader public sphere there are plenty of unexceptional people who are concerned about AI risk and plenty of exceptional people who aren’t.
We can tell that Yang is open to stuff written by Bostrom and Scott Alexander, which is nice, but I don’t think that’s a unique feature of rationalists; I think it’s shared by nearly everyone who isn’t afflicted by one or two particular strands of tribalism, a tribalism that seems to be more common in Berkeley or in academia than in the Beltway.
Totally agree that many data points should go into evaluating political candidates. I haven’t taken a close look at your scoring system yet, but I’m glad you’re doing that work and think more in that direction would be helpful.
For this thread, I’ve been holding the frame of “Yang might be a uniquely compelling candidate to longtermist donors (given that most of his policies seem basically okay and he’s open to x-risk arguments).”
If you read it, go by the 7th version, as linked in another comment here; that’s the most recent release.
I’m going to maintain a single link from now on, so I don’t cause this confusion anymore.