I think it'd be better to title this "the quest for a kidney-stone free world". Keeping the words "kidney stone" till late in a long title means that people just browsing the forum front page don't see it, and so can't tell what the post is about. This is really substantial-looking work, and it'd be a shame if it got fewer clicks than it deserves.
Thanks, that is very helpful to me in clarifying your position.
I think for me, part of the issue with your posts on this (which I think are net positive, to be clear; they really push at significant weak points in ideas widely held in the community) is that you seem to be vacillating between three different ideas, in a way that conceals that one of them, taken on its own, sounds super-crazy and evil:
1) Actually, if AI development were to literally lead to human extinction, that might be fine, because it might lead to higher utility.
2) We should care about humans harming sentient, human-like AIs as much as we care about AIs harming humans.
3) In practice, the benefits to current people from AI development outweigh the risks, and the only moral views which say that we should ignore this and pause in the face of even tiny risks of extinction from AI (because there are way more potential humans in the future) in fact, when taken seriously, imply 1), which nobody believes.
1) feels extremely bad to me, basically a sort of Nazi-style view on which genocide is fine if the replacing people are superior utility generators (or, I guess, inferior but sufficiently more numerous). 1) plausibly is a consequence of classical utilitarianism (maybe even on some person-affecting versions of classical utilitarianism, I think), but I take this to be a reason to reject pure classical utilitarianism, not a reason to endorse 1). 2) and 3), on the other hand, seem reasonable to me. But the thing is that you seem at least sometimes to be taking AI moral patienthood as a reason to push on in the face of uncertainty about whether AI will literally kill everyone. And that seems more like 1) than 2) or 3). 1)-style reasoning supports the idea that AI moral patienthood is a reason for pushing on with AI development even in the face of human extinction risk, but as far as I can tell 2) and 3) don't. At the same time, though, I don't think you mean to endorse 1).
By creating certain agents in a scenario where it is (basically) guaranteed that there are some agents or other, we determine the amount of unfulfilled preferences in the future. Sensible person-affecting views still prefer agent-creating decisions that lead to fewer frustrated future preferences existing over decisions that lead to more.
EDIT: Look at it this way: we are not choosing between futures with zero subjects of welfare and futures with non-zero subjects, where person-affecting views are indeed indifferent so long as the future with subjects has net-positive utility. Rather, we are choosing between two agent-filled futures: one with human agents and another with AIs. Sensible person-affecting views prefer the future with fewer unfulfilled preferences over the one with more, when both futures contain agents. So to make a person-affecting case against AIs replacing humans, you need to take into account whether AIs replacing humans leads to more or fewer frustrated preferences existing in the future, not just whether it frustrates the preferences of currently existing agents.
It shows that merely being person-affecting doesn't let you argue that, since current human preferences are the only ones that exist now and they are against extinction, person-affecting utilitarians don't have to compare what a human-ruled future would be like to what an AI-ruled one would be like when deciding whether AIs replacing humans would be net bad from a utilitarian perspective. But maybe I was wrong to read you as denying that.
What is coherence here? Perfect coherence sounds like a very strong assumption to me, not a minimal one.
My point is that even if you believe in the asymmetry, you should still care whether humans or AIs being in charge leads to higher utility for those who do exist, even if you are indifferent between either of those outcomes and neither humans nor AIs existing in the future.
I don't think you can get from the procreation asymmetry to the view that only current, and not future, preferences matter. Even if you think that people being brought into existence and having their preferences fulfilled has no greater value than their not coming into existence, you might still want to block the existence of unfulfilled future preferences. Indeed, it seems any sane view has to accept that harms to future people, if they do exist, are bad; otherwise it would be okay to bring about unlimited future suffering, so long as the people who will suffer don't exist yet.
Not an answer to your original question, but beware taking answers to the Metaculus question as reflecting when AGI will arrive, if by "AGI" you mean AI that will rapidly transform the world, or be able to perform literally every task humans perform as well as almost all humans. If you look at the resolution criteria for the question, all it requires for the answer to be yes is that there is a model able to pass 4 specific hard benchmarks. Passing a benchmark is not the same as performing well at all aspects of an actual human office or lab job. Furthermore, none of these benchmarks actually requires being able to store memories long-term and act coherently on a time scale of weeks, two of the main things current models lack. It is a highly substantial assumption that any AI which can pass the Turing test, do well on a test of subject-matter knowledge, code like a top human over relatively small time scales, and put together a complicated model car can do every economically significant task, or succeed in carrying out plans long-term, or have enough common sense and adaptability in practice to fully replace a white-collar middle manager or a plumber.
Not that this means you shouldn't be thinking about how to optimize your career for an age where AI can do a lot of tasks currently done by humans, or even that AGI isn't imminent. But people using that particular Metaculus question to say "see, completely human-level-or-above-on-everything transformative AI is coming soon", when that doesn't really match the resolution criteria, is a pet hate of mine.
I don't know enough about moral uncertainty and the parliamentary model to say.
It's worth saying that although in EA people favour approaches to moral uncertainty that reject "just pick the theory that you think is most likely to be true, and make decisions based on it, ignoring the others", I think some philosophers actually have defended views along those lines: https://brian.weatherson.org/RRM.pdf
It's pretty crucial how much less weight you place on future people, right? If you weight their lives at, say, 1/1000 of the life of a current person, and there are in expectation going to be 1 million times more people in the future than exist currently, then most of the value of preventing extinction will still come from the fact that it allows future people to come into existence.
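To put rough numbers on it (these are just the figures assumed above, not estimates I'm defending): with a weight of $w = 10^{-3}$ on each future life, a current population of $N$, and an expected future population of $10^{6}N$, the discounted value of the future is $w \cdot 10^{6}N = 10^{3}N$, a thousand times the value of everyone alive today, so the case for preventing extinction would still be driven overwhelmingly by future people.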
"At least one of the young tech workers helping him feed foreign aid 'into the wood chipper' is also an avowed effective altruist."
Can you provide a link for this? Not that I find it implausible, just curious.
"How can we deny that this is what EA stands for?"
Because most/all leaders would disavow it, including Nick Beckstead, who I imagine wrote the founding document you mean (indeed, he's already disavowed it), and we don't personally control Elon, whether or not he considers himself EA? And also, EAs, including some quite aggressively un-PC ones like Scott Alexander and Matthew Adelstein/Bentham's Bulldog, have been pushing back strongly against the aid cuts/the America First agenda behind them?
Having said that, it definitely reduced my opinion of Will MacAskill, or at least his political judgment, that he tried to help SBF get in on Elon's Twitter purchase, since I think Elon's fascist leanings were pretty obvious even at that point. And I agree we can ask whether EA ideas influence Musk in a bad direction, whether or not EAs themselves approve of the direction he is going in.
Why would "is x conscious" always be a verbal dispute on type-A physicalism?
Lab leaders are probably trying mostly to maximize the value of their company, not the value of the world, in my view. (That doesn't mean they give zero weight to moral considerations.) Also, if the US government realizes that they are reasoning along the lines of "let's slow down development because it doesn't matter if the US beats China", it will probably find ways to stop them being lab leaders.
"The emphasis on technical solutions only benefits them"
This is blatantly question-begging, right? It is only true if looking for technical solutions doesn't lead to safe models, which is one of the main points in dispute between you and people with a higher opinion of the work-on-safety-from-inside strategy. Of course, it is true that if you don't have your own opinion already, you shouldn't trust people who work at leading labs (or want to) on the question of whether technical safety work will help, for the reasons you give. But "people have an incentive to say X" isn't actually evidence that X is false; it's just evidence you shouldn't trust them. If all people outside labs thought technical safety work was useless, that would be one thing. But I don't think that is actually true; people with relevant expertise seem divided even outside the labs. Now of course, there are subtler ways in which even people outside the labs might be incentivized to play down the risks. (Though they might also have other reasons to play them up.) But even that won't get you to "therefore technical safety is definitely useless"; it's all meta, not object-level.
There's also a subtler point: even if "do technical safety work on the inside" is unlikely to work, it might still be the better strategy if confrontational lobbying from the outside is unlikely to work too (something that I think is more true now that Trump is in power, although Musk is a bit of a wildcard in that respect).
Germany has always had laws allowing this, for the extremely obvious reason that Germany once fairly elected a fascist government that ended democracy, created a totalitarian dictatorship, started the most destructive war in history*, and committed genocide. Understandably, the designers of (West) Germany's post-war constitution wanted to stop this happening again. These laws have been used to ban neo-Nazi parties at least 4 times since 1945, so even the idea of actually using them is not a new panic response to the AfD's popularity. If the laws make Germany a flawed democracy now, then arguably it always has been. Incidentally, hardcore communist elements in Die Linke have also been surveilled by the German security services for suspected opposition to the democratic constitution, so it's not true that only right-wing extremism is restricted in Germany. (Die Linke were cleared because it was decided the Stalinists were only a small % of the party with little influence.)
In fact, of course, it is at the very least not clear the laws are bad even from a purely democracy-centric perspective, ignoring the substantive badness of Nazism. It is true, I think, that an election where you can vote for anti-democratic fascists is more democratic in itself. But it is of course also true that "fair elections except fascists are banned" is more democratic than "fascist dictatorship". If the risk of the latter is high in a completely free election, then a mildly restricted election that bans the fascists can easily be the democracy-maximizing move in the medium term. I think it is fair to say that in early-50s West Germany, a country where a decently-sized % of voters had been enthusiastic Nazis, the risk of fascist takeover at the ballot box was more than theoretical. (Though admittedly the result would probably have been an American military takeover of Germany, not a revived Nazi dictatorship, but that would also have been a very bad outcome.)
Now, maybe what you think is outrageous isn't that banning parties is allowed (or isn't just that), but that the accusation that the AfD are anti-democratic extremists is obviously false and pretextual. Two points about that.
Firstly, they haven't been banned yet! (And personally I suspect they won't be, and I'm fairly strongly inclined to think they shouldn't be, though I'd change my mind on that if Hocke or his faction captured the leadership.**) German law doesn't allow the government to just decide a party is extremist and ban them. They have to provide evidence in a court of law that the party really does count as dangerously extreme by specific standards. Now maybe that process will in fact be a total farce with terrible standards of evidence, but since it hasn't happened yet, I don't see any strong reason to think it will be right now. Of course, it is possible that the legal definitions of anti-democratic extremism are badly drafted and could be used to ban a non-fascist party in a procedurally fair way. Maybe that is true; I am not an expert on the laws. (But frankly I have some doubt that you know whether this is true either.)
Now you might say it is anti-democratic for the government to threaten the AfD with a ban if they are clearly not a fascist threat to democracy, even if there is little chance of the ban getting through court. And yeah, I agree with the conditional claim here: that would be a very bad violation of liberal and democratic norms. But I don't think it is clear that the antecedent is true. Bjorn Hocke, the AfD's leader in Thuringia, seems to have been a neo-Nazi in a very literal sense 10 or 15 years ago, and I've never seen any evidence that his views have changed. In particular, he was filmed chanting at a neo-Nazi rally in Dresden in 2010: https://www.theguardian.com/world/article/2024/aug/29/the-trial-of-bjorn-hocke-the-real-boss-of-germany-far-right I think this is sufficient evidence to show that Hocke was very probably a real Nazi in 2010, and that Nazis generally want to abolish democracy. (If you doubt The Guardian's word that it really was a Nazi rally, note that Hocke's supporters don't themselves seem to deny this. The defence of him quoted in the article is that he only went to the rally "to observe", not that it wasn't a Nazi rally.) On the other hand, Hocke doesn't currently lead the AfD, Alice Weidel does, and I think she has tried to kick Hocke out before. I haven't seen any evidence that she is anything more than a very conservative but democratic politician. So I think it might not currently be correct to class them as Nazis as a whole, and for that reason, I think a ban is probably wrong. But I think the presence of a significant Nazi faction downgrades suggesting they should be banned from outrageous to merely not correct.
*Technically you could argue the Japanese actually started it when they invaded China, I suppose.
**If you care about track records, I am a Good Judgement superforecaster, and I gave Trump a higher chance of winning the popular vote than most of the other supers did.
At least 8 years ago, though, Finland and Norway had relatively high levels of state ownership of enterprises, much higher than the US. If that's not a much higher level of real socialism, it's hard to say what is. That suggests to me that whatever the Economic Freedom Index measures, it's not how little socialism there is in a country. Nonetheless, it could be the freedom, not the socialism, that's responsible for Finland and Norway doing well, of course.
Norway is a petro-state, so arguably it doesn't really count, but Finland isn't.
"anticapitalists often think that we should have very heavy taxation or outright wealth confiscation from rich people, even if this would come at the expense of aggregate utilitarian welfare"
What's the evidence for this? I think even if it is true, it is probably misleading, in that most leftists also just reject the claims mainstream economists make about when taxing the rich will reduce aggregate welfare (not that there is one single mainstream economist view on that anyway, in all likelihood). This sounds to me more like an American centre-right caricature of how socialists think than something socialists themselves would recognize.
I think you probably need multiple kinds of skill and some level of cognitive-style diversity within a political campaign. You definitely need a lot of people with people skills, and I am sure that the first gut instincts of people with good social skills about what messaging will work are better than those of people with worse social skills. Those socially skilled people should undoubtedly be doing detailed messaging and networking for the campaign. But you also need people who are prepared to tell campaigns things they don't want to hear, even when there is severe social pressure not to, including things about what data (rather than gut instinct) actually shows about public opinion and messaging. (Yes, it is possible to overrate such data, which will no doubt be misleading in various ways, but it is also possible to underrate it.) My guess is that "prepared to tell people really hard truths" is at least somewhat anticorrelated with people skills and somewhat correlated with STEM background. (There is of course a trade-off where the people most prepared to tell hard truths are probably less good at selling those truths than more socially agreeable people.) For what it's worth, Matt Yglesias seems pretty similar to the median EA in personality, and I recall reading that Biden advisors did read his blog. Ezra Klein also seems like a genuinely politically influential figure who is fairly EA-ish. There is more than one way to contribute to a political movement.
I personally don't think EA should be doing much to combat authoritarianism (other than ideally stopping its occasional minor contributions to it via the right wing of rationalism, and being clear-eyed about what the 2nd Trump admin might mean for things like "we want democratic countries to beat China") because I don't think it is particularly tractable or neglected. But I don't think it is a skill issue, unless you're talking about completely EA-run projects (and even then, you don't necessarily have to put the median EA in charge; presumably some EAs have above-average social skills).