I’m not hearing any concrete plans for what the president can do
To clarify again, I’m more compelled by Yang’s openness to thinking about this sort of thing than by any specific plan of action he’s proposed. I agree with you that specific action plans from the US executive would probably be premature here.
why that position you quote is compelling to you.
It’s compelling because it’s plausibly much better than alternatives.
[Edit: it’d be very strange if we ended up preferring candidates who hadn’t thought about AI at all to candidates who had thought some about AI but didn’t have specific plans for it.]
That doesn’t seem that strange to me. It seems to mostly be a matter of timing.
Yes, eventually we’ll be in an endgame where the great powers are making substantial choices about how powerful AI systems will be deployed. And at that point I want the relevant decision makers to have sophisticated views about AI risk and astronomical stakes.
But in the decades before that final period, I probably prefer that governmental actors not really think about powerful AI at all because...
1. There’s not much that those governmental actors can usefully do at this time.
2. The more discussion of powerful AI there is in the halls of government, the more likely someone is to take action.
Given that there’s not much that can be usefully done, it’s almost a tautology that any action taken is likely to be net-negative: intervention has costs, and there’s little useful for it to accomplish.
Additionally, there are specific reasons to think that governmental action is likely to do more harm than good.
Politicization:
As Ben says above, this incurs a risk of politicizing the issue, which would prevent good discourse in the future and trap the problem in a basically tribal-political frame. (Much as global climate change, a technical problem with consequences for everyone on planet Earth, has been squashed into a frame of “liberal vs. conservative.”)
Swamping the field:
If the president of the United States openly says that AI alignment is a high priority for our generation, that makes AI alignment (or rather, things called “AI alignment”) high-status, sexy, and probably a source of funding. This incentivizes many folks either to rationalize the work they were already doing as “AI alignment” or to more genuinely try to switch into doing AI alignment work.
But the field of AI alignment is young and fragile: it doesn’t yet have standard methods or approaches, and it is unlike most technical fields in that there is possibly a lot of foundational philosophical work to be done. The field does not yet have clear standards for what kind of work is good and helpful, and for which problems are actually relevant. These standards are growing, slowly. For instance, Stuart Russell’s new textbook is a very clear step in this direction (though I don’t know whether it is any good).
If we added 100x or 1000x more people to the field of AI alignment without having slowly built that infrastructure, the field would be swamped: there would be a lot of people trying to do work in the area, using a bunch of different methods, most of which would not be attacking the core problem (that’s a crux for me). The signal-to-noise ratio would collapse. This would inhibit building a robust, legible paradigm that tracks the important part of the problem.
Elaborating: currently, the people working on AI alignment are unusually ideologically motivated (i.e., they’re EAs), and the proportion of people in the field who have deep inside-view models of what work needs to be done and why is relatively high.
If we incentivized working on AI alignment via status or funding, more of the work in the area would be motivated by the pursuit of status or funding instead of by a desire to solve the core problem. I expect that this would warp the direction of the field, such that most of the work done under the heading of “AI alignment” would be relatively useless.
(My impression is that this is exactly what happened with the field of nanotechnology: there was a relatively specific set of problems, leading up to specific technologies. The term “nanotech” became popular and sexy, and a lot of funding was available for “nanotech.” The funders couldn’t really distinguish between people trying to solve the core problems that were originally outlined and people doing other vaguely related work (see Paul Graham on “the Design Paradox”; the link to the full post is here). The people doing vaguely related work that they called nanotech got the funding and the prestige. The few people trying to solve the original problems were left out in the cold, and more importantly, the people who might have eventually been attracted to working on those problems were instead diverted to working on things called “nanotech.” And now, in 2019, we don’t have a healthy field building towards atomically precise manufacturing.
Now that I think about it, this pattern reminds me of David Chapman’s Mops, Geeks, and Sociopaths.)
We do want 100x the number of people working on the problem, eventually, but it is very important to grow the field in a way that allows the formation of good open problems and standards.
My overall crux here is point #1, above. If I thought that there were concrete helpful things that governments could do today, I might very well think that the benefits outweighed the risks that I outline above.
I think that’s too speculative a line of thinking to use for judging candidates. Sure, being intelligent about AI alignment is a data point for good judgment more generally, but so is being intelligent about automation of the workforce, about healthcare, about immigration, and so on. Why should AI alignment in particular be a litmus test for rational judgment? We may perceive a pattern of more explicitly rational people taking AI alignment seriously while patently anti-rational people dismiss it, but that’s a unique feature of some elite liberal circles, like those surrounding EA and the Bay Area; in the broader public sphere there are plenty of unexceptional people who are concerned about AI risk and plenty of exceptional people who aren’t.
We can tell that Yang is open to stuff written by Bostrom and Scott Alexander, which is nice, but I don’t think that’s a unique feature of Rational people; I think it’s shared by nearly everyone who isn’t afflicted by one or two particular strands of tribalism, strands that seem to be more common in Berkeley or in academia than in the Beltway.
Totally agree that many data points should go into evaluating political candidates. I haven’t taken a close look at your scoring system yet, but I’m glad you’re doing that work and think more effort in that direction would be helpful.
For this thread, I’ve been holding the frame of “Yang might be a uniquely compelling candidate to longtermist donors (given that most of his policies seem basically okay and he’s open to x-risk arguments).”
If you read it, go by the 7th version that I linked in another comment here; it’s the most recent release.
I’m going to keep everything updated at a single link from now on, so I don’t cause this confusion anymore.