"Most articles seem to default to either full embrace of AI companies' claims or blanket skepticism, with relatively few spotlighting the strongest version of arguments on both sides of a debate."

Never agreed with anything as strongly in my life. Both these things are bad and we don't need to choose a side between them. And note that the issue here isn't about these things being "extreme". An article that actually tries to make a case for foom by 2027, or "this is all nonsense, it's just fancy autocomplete and overfitting on meaningless benchmarks", could easily be excellent. The problem is people not giving reasons for their stances, and either re-writing PR, or just expressing social distaste for Silicon Valley, as a substitute.
Not surprising they are getting rid of the safety people, but getting rid of CHIPS Act people seems to me to be evidence in favour of the "genuinely idiotic, rather than Machiavellian geniuses" theory of Trump and Musk. Presumably Trump still wants the US to be more powerful than China even if he moves away from hawkishness towards making friends. And Musk presumably wants Grok to be better than the best Chinese models. (In Musk's case, of course, it's possible he actually doesn't favour getting rid of the CHIPS staff.)
Fair point. I certainly don't think it is established (or even more than 50% likely) that SBF was purely motivated by narrow personal gain to the exclusion of any real utilitarian convictions at all. But I do think he misrepresented his political convictions.
"how much AGI companies have embedded with the national security state is a crux for the future of the lightcone"
What's the line of thought here?
I don't think cutting ties with Palantir would move the date of AGI much, and I doubt it is the key point of leverage for whether the US becomes a soft dictatorship under Trump. As for the other stuff, people could certainly try, but I think it is probably unlikely to succeed, since it basically requires getting the people who run Anthropic to act against the very clear interests of Anthropic and of the people who run it. (And I doubt that Amodei, in particular, sees himself as accountable to the EA community in any way whatsoever.)
For what it's worth, I also think this is complicated territory, that there is genuinely a risk of very bad outcomes from China winning an AI race too, and that the US might recover relatively quickly from its current disaster. I expect the US to remain somewhat less dictatorial than China even in the worst outcomes, though it is also true that even the democratic US has generally been a lot more keen to intervene, often but not always to bad effect, in other countries' business.
In fairness, SBF was also secretly a prominent Republican donor, right? Didn't he basically suggest in the infamous interview with Kelsey Piper that he was essentially cynical about politics and just trying to gain influence with both parties to help advance FTX and Alameda's interests?
I think you probably need multiple kinds of skill and some level of cognitive-style diversity within a political campaign. You definitely need a lot of people with people skills, and I am sure that the first gut instincts of people with good social skills about what messaging will work are better than those of people with worse social skills. Those socially skilled people should undoubtedly be doing detailed messaging and networking for the campaign. But you also need people who are prepared to tell campaigns things they don't want to hear, even when there is severe social pressure not to, including things about what data (rather than gut instinct) actually shows about public opinion and messaging. (Yes, it is possible to overrate such data, which will no doubt be misleading in various ways, but it is also possible to underrate it.)

My guess is that "prepared to tell people really hard truths" is at least somewhat anticorrelated with people skills and somewhat correlated with STEM background. (There is of course a trade-off where the people most prepared to tell hard truths are probably less good at selling those truths than more socially agreeable people.) For what it's worth, Matt Yglesias seems pretty similar to the median EA in personality, and I recall reading that Biden advisors did read his blog. Ezra Klein also seems like a genuinely politically influential figure who is fairly EA-ish. There is more than one way to contribute to a political movement.
I personally don't think EA should be doing much to combat authoritarianism (other than ideally stopping its occasional minor contributions to it via the right wing of rationalism, and being clear-eyed about what the 2nd Trump admin might mean for things like "we want democratic countries to beat China"), because I don't think it is particularly tractable or neglected. But I don't think it is a skill issue, unless you're talking about completely EA-run projects (and even then, you don't necessarily have to put the median EA in charge; presumably some EAs have above-average social skills.)
I think it'd be better to title this "the quest for a kidney-stone-free world". Keeping the words "kidney stone" till late in a long title means that people just browsing the forum front page don't see them, and so can't tell what the post is about. This is really substantial-looking work, and it'd be a shame if it got fewer clicks than it deserves.
Thanks, that is very helpful to me in clarifying your position.
I think for me, part of the issue with your posts on this (which I think are net positive, to be clear; they really push at significant weak points in ideas widely held in the community) is that you seem to be sort of vacillating between three different ideas, in a way that conceals that one of them, taken on its own, sounds super-crazy and evil:
1) Actually, if AI development were to literally lead to human extinction, that might be fine, because it might lead to higher utility.
2) We should care about humans harming sentient, human-like AIs as much as we care about AIs harming humans.
3) In practice, the benefits to current people from AI development outweigh the risks, and the only moral views which say that we should ignore this and pause in the face of even tiny risks of extinction from AI (because there are way more potential humans in the future), in fact, when taken seriously, imply 1), which nobody believes.
1) feels extremely bad to me, basically a sort of Nazi-style view on which genocide is fine if the replacing people are superior utility generators (or, I guess, inferior but sufficiently more numerous). 1) plausibly is a consequence of classical utilitarianism (even maybe on some person-affecting versions of classical utilitarianism, I think), but I take this to be a reason to reject pure classical utilitarianism, not a reason to endorse 1). 2) and 3), on the other hand, seem reasonable to me. But the thing is that you seem at least sometimes to be taking AI moral patienthood as a reason to push on in the face of uncertainty about whether AI will literally kill everyone. And that seems more like 1) than 2) or 3). 1-style reasoning supports the idea that AI moral patienthood is a reason for pushing on with AI development even in the face of human extinction risk, but as far as I can tell 2) and 3) don't. At the same time, though, I don't think you mean to endorse 1).
By creating certain agents in a scenario where it is (basically) guaranteed that there will be some agents or other, we determine the amount of unfulfilled preferences in the future. Sensible person-affecting views still prefer agent-creating decisions that lead to fewer frustrated future preferences existing over those that lead to more.
EDIT: Look at it this way: we are not choosing between futures with zero subjects of welfare and futures with non-zero subjects, where person-affecting views are indeed indifferent, so long as the future with subjects has net-positive utility. Rather, we are choosing between two agent-filled futures: one with human agents and another with AIs. Sensible person-affecting views prefer the future with fewer unfulfilled preferences over the one with more, when both futures contain agents. So to make a person-affecting case against AIs replacing humans, you need to take into account whether AIs replacing humans leads to more or fewer frustrated preferences existing in the future, not just whether it frustrates the preferences of currently existing agents.
It shows that merely being person-affecting doesn't let you argue that, since current human preferences are the only ones that exist now and they are against extinction, person-affecting utilitarians don't have to compare what a human-ruled future would be like to what an AI-ruled one would be like when deciding whether AIs replacing humans would be net bad from a utilitarian perspective. But maybe I was wrong to read you as denying that.
What is coherence here? Perfect coherence sounds like a very strong assumption to me, not a minimal one.
My point is that even if you believe in the asymmetry, you should still care whether humans or AIs being in charge leads to higher utility for those who do exist, even if you are indifferent between either of those outcomes and neither humans nor AIs existing in the future.
I don't think you can get from the procreation asymmetry to the claim that only current, and not future, preferences matter. Even if you think that people being brought into existence and having their preferences fulfilled has no greater value than them not coming into existence, you might still want to block the existence of unfulfilled future preferences. Indeed, it seems any sane view has to accept that harms to future people, if they do exist, are bad, otherwise it would be okay to bring about unlimited future suffering, so long as the people who will suffer don't exist yet.
Not an answer to your original question, but beware taking answers to the Metaculus question as reflecting when AGI will arrive, if by "AGI" you mean AI that will rapidly transform the world, or be able to perform literally every task humans perform as well as almost all humans. If you look at the resolution criteria for the question, all it requires for a "yes" resolution is that there is a model able to pass 4 specific hard benchmarks. Passing a benchmark is not the same as performing well at all aspects of an actual human office or lab job. Furthermore, none of these benchmarks actually requires being able to store memories long-term and act coherently on a time scale of weeks, two of the main things current models lack. It is a highly substantial assumption that any AI which can pass the Turing test, do well on a test of subject-matter knowledge, code like a top human over relatively small time scales, and put together a complicated model car can do every economically significant task, or succeed in carrying out plans long-term, or have enough common sense and adaptability in practice to fully replace a white-collar middle manager or a plumber.
Not that this means you shouldn't be thinking about how to optimize your career for an age where AI can do a lot of tasks currently done by humans, or even that AGI isn't imminent. But people using that particular Metaculus question to say "see, completely-human-level-or-above-on-everything transformative AI is coming soon", when that doesn't really match the resolution criteria, is a pet hate of mine.
I donât know enough about moral uncertainty and the parliamentary model to say.
It's worth saying that although in EA people favour approaches to moral uncertainty that reject "just pick the theory that you think is most likely to be true, and make decisions based on it, ignoring the others", I think some philosophers actually have defended views along those lines: https://brian.weatherson.org/RRM.pdf
It's pretty crucial how much less weight you place on future people, right? If you weight their lives at, say, 1/1000 of saving the life of a current person, and there are in expectation going to be 1 million x more people in the future than exist currently, then most of the value of preventing extinction will still come from the fact that it allows future people to come into existence.
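A rough back-of-the-envelope version of that point, using the 1/1000 weight and 1 million x population ratio above purely as illustrative placeholder numbers (not estimates), with $N$ standing for the number of people alive today:

$$
\frac{\text{discounted value of future lives}}{\text{value of current lives}} \approx \frac{(10^{6} \cdot N) \times \tfrac{1}{1000}}{N \times 1} = 1000,
$$

so even with the 1000-fold discount, enabling future people to exist would still account for roughly 99.9% of the expected value of preventing extinction on these assumed figures.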
"At least one of the young tech workers helping him feed foreign aid 'into the wood chipper' is also an avowed effective altruist."
Can you provide a link for this? Not that I find it implausible, just curious.
"It is unclear to me whether less democracy would increase or decrease economic growth, which has been very connected to human welfare. So I do not know whether less democracy would increase or decrease human welfare."
I usually think your posts are very good because you are prepared to honestly and clearly state unpopular beliefs. But this seems a bit glib: economic growth is not the only thing that affects well-being, by any means, and so simply being unsure about how democracy affects it is not a strong case on its own for being unsure whether democracy increases or decreases human well-being. Growth might be the most important thing, of course, but if you really are neutral on the effect of democracy on growth, other factors will still determine whether you should think democracy is net beneficial for humans in expectation.
Also, in the particular case of the US, to evaluate whether democracy continuing is a good thing for human well-being, what primarily matters is how democracy shapes up versus the realistic alternatives in the US, not whether democracy is the best possible system in principle, or even the best feasible system in most times and places. It's not like we are comparing democracy in the US to the Chinese communist system, market anarchism, sortition, or the knowledge-based restrictions on the franchise suggested by Jason Brennan in his book Against Democracy. We are comparing it to "on the surface democracy, but really Musk and Trump use the Justice Department to make it impossible for credible opponents to run against the Republican Party for many national offices, or against their favoured candidates in crucial Republican primaries, and also Musk can in practice stop any government payment to anyone so long as Trump himself doesn't prevent him from doing so." Maybe you think the risk of that is low, but that's what people are worried about. Maybe you also think that might be good, because Republican policies might be better for growth and that dominates all other factors, but even then, it's worth being clear about what you are advocating agnosticism about: it's not the merits of democracy in the abstract, but the current situation in the US.