Yes, policy can be changed for sure. I was just referring to actually changing minds in the community, as he said—“Probably the best starting point would be to get the AI community on board with such a thing. It seems impossible today that consensus could be built about such a thing, but the presidency is a large pulpit.”
There are numerous minor, subtle ways that EAs reduce AI risk. Small in comparison to a research career, but large in comparison to voting. (Voting can actually be one of them.)
Your analysis seems to rely heavily on the judgement of r/neoliberal.
Very little, actually. The only time I actually cite the group is in one thread where people discuss baby bonds.
It’s true that in many cases, we’ve arrived at similar points of view, but I have real citations in those cases.
As I say at the beginning of the “evaluations of sources” section, a group like that is more for “secondary tasks such as learning different perspectives and locating more reliable sources.” The kind of thing that usually doesn’t get mentioned at all in references; I’m just doing more work to make all of it explicit.
And frankly I don’t spend much time there anyway. I haven’t gone to their page in months.
Do you think that they are poor even by the standards of social media groups? Like, compared to r/irstudies or others?
I would have thought that actually it’s the social democracies which follow very technocratic Keynesian economics that produce better economic outcomes (idk, greater growth, less unemployment, more entrepreneurship, haha I have no idea how true any of this is tbh; I just presume).
It seems like most economists approve of stimulus in times of recession (New Keynesianism), but don’t believe in blasting stimulus all the time (which would be post-Keynesianism). I’m a little unsure of the details here and may be oversimplifying. Frankly, it’s beyond my credentials to get into the weeds of macroeconomic theory (though anyone who has such credentials is welcome to get involved...). I’m more concerned with judging specific policy choices; it’s not like any of the candidates have expressed views on macroeconomic theory, they just talk about policies.
I found research indicating that countries with more public spending have lower economic growth:
Perhaps the post-Keynesian economists have some reasons to disagree with this work, but I would need to see that they have some substantive critique or counterargument.
In any case, if 90% of the field believes something and 10% disagrees, we must still go by the majority, unless somehow we discover that the minority really has all the best arguments and evidence.
Of course, just because a policy lowers economic growth doesn’t mean that the policy is bad. Sometimes it’s okay to sacrifice growth for things like redistribution and public health. But remember that (a) we are really focusing on the long run here, where growth matters more, and (b) we also have to consider the current fiscal picture: debt in the US is quite bad right now, and higher spending would worsen the matter.
Especially now, considering that the US/globe is facing a possible recession, I would think fiscal stimulus would be even more ideal.
Candidates are going to come into office in January 2021; no one has a clue what the economy will look like at that time. Now, if a candidate says “it’s good to issue large economic stimulus packages in times of recession,” I suppose I would give them points for that, but none have recently made such a statement as far as I know. For those politicians who were around circa 2009, I could check whether they endorsed the Recovery and Reinvestment Act (and, on a related note, TARP)… now that you mention it, maybe I will add that; I’ll think about it and look into it.
I meant that it’s definitely more efficient to grow the EA movement than to grow Yang’s constituency. That’s how it seems to me, at least. It takes millions of people to nominate a candidate.
FWIW I don’t think that would be a good move. I don’t feel like fully arguing it now, but the main points: (1) sooner AGI development could well be better despite the risk, (2) such restrictions are hard to reverse for a long time after the fact, as the story of human gene editing shows, and (3) AGI research is hard to define; arguably, some people are doing it already.
create a treaty for countries to sign that bans research into AGI.
You only mean this as a possibility in the future, if there is any point where AGI is believed to be imminent, right?
Still, I think you are really overestimating the ability of the president to move the scientific community. For instance, we’ve had two presidents now who actively tried to counteract mainstream views on climate change, and they haven’t budged climate scientists at all. Of course, AI alignment is substantially more scientifically accepted and defensible than climate skepticism. But the point still stands.
What about simply growing the EA movement? That clearly seems like a more efficient way to address x-risk, and something where funding could be used more readily.
If you read it, go by the 7th version, as I linked in another comment here; it’s the most recent release.
From now on I’m going to maintain a single link that I keep updated, so I don’t cause this confusion anymore.
I think that’s too speculative a line of thinking to use for judging candidates. Sure, being intelligent about AI alignment is a data point for good judgment more generally, but so is being intelligent about automation of the workforce, and being intelligent about healthcare, and being intelligent about immigration, and so on. Why should AI alignment in particular be a litmus test for rational judgment? We may perceive a pattern where more explicitly rational people take AI alignment seriously while patently anti-rational people dismiss it, but that’s a unique feature of some elite liberal circles like those surrounding EA and the Bay Area; in the broader public sphere there are plenty of unexceptional people who are concerned about AI risk and plenty of exceptional people who aren’t.
We can tell that Yang is open to stuff written by Bostrom and Scott Alexander, which is nice, but I don’t think that’s a unique feature of Rational people; I think it’s shared by nearly everyone who isn’t afflicted by one or two particular strands of tribalism, a tribalism which seems to be more common in Berkeley or in academia than in the Beltway.
Moved comment to another spot.
I think that the most likely strongly negative outcome is that AI safety becomes attached to some standard policy tug-of-war, and people mostly learn to read it as a standard debate between Republicans and Democrats.
I don’t think this is very likely (see my other comment) but also want to push back on the idea that this is “strongly negative”.
Plenty of major policy progress has come from partisan efforts. Mobilizing a major political faction provides a lot of new support, and that support is not limited to legislative measures; it extends to small bureaucratic steps and to efforts outside the government. When you have a majority, you can establish major policy; when you have a minority, you won’t achieve that, but you still have a variety of tools at your disposal to make some progress. Even if the government doesn’t play along, philanthropy can still continue doing major work (as we see with abortion and environmentalism, for instance).
A bipartisan idea is more agreeable, but also more likely to be ignored.
Holding everything else equal, it seems wise to prefer being politically neutral, but that’s not nearly clear enough to justify refraining from making policy pushes. Do we refrain from supporting candidates who endorse any other policy stance, out of fear that they will make it into something partisan? For instance, would you say this about Yang’s stance of requiring second-person authorization for nuclear strikes?
It’s an unusual view, and perhaps reflects people not wanting their personal environments to be sucked into political drama more than it reflects shrewd political calculation.
By ‘polarized partisan issue’ do you merely mean that people have very different opinions, settle into different camps, and make rational dialogue across the gap difficult? That comes about naturally in the process of intellectual change; it has already happened with AI risk, and I’m not sure that a political push will worsen it (as the existing camps are not necessarily coextensive with the political parties).
I was referring to the possibility that, for instance, Dems and the GOP take opposing party lines on the subject and fight over it. Which definitely isn’t happening.
I don’t think that making alignment a partisan issue is a likely outcome. The president’s actions would be executive guidance for a few agencies. This sort of thing often reflects partisan ideology, but doesn’t cause it. And Yang hasn’t been pushing AI risk as a strong campaign issue, he only acknowledged it modestly. If you think that AI risk could become a partisan battle, you might want to ask yourself why automation of labor—Yang’s loudest talking point—has NOT become subject to partisan division (even though some people disagree with it).
If you are looking at presidential candidates, why restrict your analysis to AI alignment?
If you’re super focused on that issue, then it will definitely be better to spend your money on actual AI research, or on some kind of direct effort to push the government to consider the issue (if such an effort exists).
When judging among the presidential candidates, other issues matter too! And in this context, they should be weighted more by their sheer importance than by philanthropic neglectedness. So AI risk is not obviously the most important.
With some help from other people, I comprehensively reviewed the 2020 candidates here: https://t.co/kMby2RDNDx
The conclusion is that yes, Yang is one of the best candidates to support, alongside Booker, Buttigieg, and Republican primary challengers, partially due to his awareness of AI risk. But with the updates I’ve made for the 8th edition (and what I’m about to change now, given some other comments here about this issue’s lack of tractability), Buttigieg moves ahead as the best Democrat by a small margin. Of course, these judgments are pretty uncertain, so you could argue that they are wrong if you find some flaw or omission in the report. Very early on, I decided that both Yang and Buttigieg were not good candidates, but that changed as I gathered new information about them.
But it’s wrong to judge a presidential candidate merely by their point of view on any single issue, including AI alignment.
This is too ad hoc to be a reliable explanation: it divides three or four cause areas into two or three categories.
OK, it sounds like the biggest issue is not the recognition algorithm itself (which can be replicated or bought quickly) but the acquisition of databases of people’s identities (which takes time, and maybe consent earlier on). The two can definitely come together, but otherwise, consider two possibilities: (a) a city uses face recognition only for narrow cases, like comparing video footage to a known suspect, without being able to run face recognition on the general population; and (b) a city has profiles and the ability to identify all its citizens for some other purpose, but just doesn’t have the recognition algorithms (yet).
Well, I’m not trying to convince everyone that society needs a looser approach to AI. Just that this activism is dubious, unclear, plausibly harmful, etc.
This need not be ruthlessness directed right at your interlocutor; it can instead be directed towards a distant or ill-specified other.
I think it would be uncontroversial that a better approach is not to present yourself as authoritative, but instead to present a conception of general authority in EA scholarship and consensus, and to demand that it be recognized, engaged with, cited, and so on.
Ruthless content drives higher exposure and awareness in the very first place.
There seems to be an inadequate sticking rate among people who are merely exposed to EA; consider, for instance, the high school awareness project.
Also, there seems to be a shortage of new people who will gather other new people. When you present just the nice message, you get a wave of people who may follow EA in their own right but don’t go out of their way to keep pushing it further, because it was presented to them merely as part of their worldview rather than as part of their identity. (Consider whether the occasionally popular phrase “aspiring Effective Altruist” obstructs one from having a real EA identity.) How much movement growth is being driven by people who joined in the last few years, compared to the early core?
I am also thinking of how there has been more back-and-forth about the optimizer’s curse, with people saying it needs to be taken more seriously, etc.
I don’t think that the prescriptive vs. descriptive distinction really changes things; descriptive philosophizing about methodology is arguably not as good as just telling EAs what to do differently and why.
I grant that #3 on this list is the rarest of the four. The established EA groups are generally doing fine here, AFAIK. There is a perfectly good CSER writeup on methodology here: https://www.cser.ac.uk/resources/probabilities-methodologies-and-evidence-base-existential-risk-assessments-cccr2018/. However, it’s about a specific domain that they know, rather than EA stuff in general.
I’ve long preferred expressing EA as a moral obligation and support the main idea of that article.