Probably far beyond as well, right? There's nothing distinctive about EA projects that makes them [EDIT: more] subject to potential far-future bad consequences we don't know about. And even (sane) non-consequentialists should care about consequences amongst other things, even if they don't care only about consequences.
I still don't think you have posted anything from the bill which clearly shows that you only get sued if A) [you fail to follow precautions and cause critical harms], but not if B) [you fail to follow precautions the bill says are designed to prevent critical harms, and some loss of life occurs]. In both cases you could reasonably characterise it as "you fail to follow precautions the bill says are designed to prevent critical harms" and hence "violate" the "chapter".
Wait, what makes PauseAI "not EA" exactly? I'm extremely surprised to hear that claim: people post promoting it on here, it has clear connections to a central EA goal, and it has a founder with a background in EA. It might represent a minority view in the community, but so does "we should prioritise animal welfare above X-risk and development", and I've never thought of people who hold that view as "not EA".
I strongly disagree that Lincoln was correct to prioritize the union over ending slavery (though remember that this was when he was facing the risk of a massive war, a war which, when it did break out, killed hundreds of thousands). For one thing, he probably wasn't doing that to preserve "freedom" in some universalist sense after cost-benefit analysis, but rather because he valued US nationalism over Black lives. But I still think this is a little simplistic.

In the late 18th century, many, probably most, countries and cultures in the world either had slavery internally or used slavery as part of a colonial empire. For example, slavery was widespread within Africa, many European countries had empires that used slave labour, Arabs ran a large slave trade in East Africa, the Mughals sold slaves from India, and if you pick up the great 18th-century Chinese novel The Story of the Stone, you'll find many characters are slaves. Meanwhile, the founding ideals of the US were unusually liberal and egalitarian relative to the vast majority of places at the time, and this probably did affect the internal experience of the average US citizen. The US reached a relatively expanded franchise, with many working-class male citizens able to vote, long before almost anywhere else. So the US was not exceptional in its support for slavery or colonialist expansion (against Native Americans), but it was exceptional in its levels of internal (relative) liberal democracy. I think it's plausible that on net the existence of the US therefore advanced the cause of "freedom" in some sense.

Moving forward, it seems plausible that having the world's largest and most powerful country be a liberal democracy has advanced the cause of liberal democracy overall, and the US is primarily responsible for the fact that Germany and Japan, two other major powers, are liberal democracies. Against that, you can point to the fact that the US has certainly supported dictatorship when it's suited it, or when it's been in the private interests of US businesses (particularly egregiously in Guatemala, with genuinely genocidal results*). But there are also plenty of places where the US really has supported democracy (e.g. in the former socialist states of Eastern Europe), so I don't think this overcomes the prior that having the world's most powerful and one of its richest nations, with the dominant popular culture, be a liberal democracy was good for freedom overall.

Washington and the other revolutionaries plausibly bear a fair amount of responsibility for this. In particular, Washington's decision to leave power willingly, when he could probably have carried on being re-elected as a war hero until he died, did a lot to consolidate democracy (such as it was) at the time. Of course, those founders who DID oppose slavery are much more unambiguously admirable.
*More people should know about this; it was genuinely hideously evil: https://en.wikipedia.org/wiki/Guatemalan_genocide
I feel like this answer to the problem is easily forgotten by me, and probably a lot of similar-minded people who post here, because it's not a clever, principled philosophical solution. But on reflection, it sounds quite reasonable!
This doesn't really solve the problem, but most animal suffering is likely not in factory farms but in nature, so getting rid of humans isn't necessarily net good for animals. (To be clear, I am strongly against murdering humans even if it is net good for animals.)
Hiding your conclusions feels a bit sleazy and manipulative to me.
In fairness, expertise is not required in all university settings. Student groups invite non-expert political figures to speak, famous politicians give speeches at graduation ceremonies, etc. I am generally against universities banning student groups from having racist/offensive speakers, although I might allow exceptions in extreme cases.
Though I am nonetheless inclined to agree that the distinction between universities, which have as a central purpose free, objective, rational debate, and EA as a movement, which has a central purpose of carrying out a particular (already mildly controversial) ethical program, and which also, frankly, is in more danger of "be safe for witches, become 90% witch" than universities are, is important and means that EA should be less internally tolerant of speech expressing bad ideas.
Re: the first footnote: Max Tegmark has a Jewish father according to Wikipedia. I think that makes it genuinely very unlikely that he believes Holocaust denial specifically is OK. That doesn't necessarily mean that he is not racist in any way or that the grant to the Nazi newspaper was just an innocent mistake. But I think we can be fairly sure he is not literally a secret Nazi. Probably what he is guilty of is trusting his right-wing brother, who had written for the fascist paper, too much, and being too quick (initially) to believe that the Nazis were only "right-wing populists".
(Also posted this comment on Less Wrong): One way to understand this is that Dario was simply lying when he said he thinks AGI is close and carries non-negligible X-risk, and that he actually thinks we don't need regulation yet because it is either far away or the risk is negligible. There have always been people who have claimed that labs simply hype X-risk concerns as a weird kind of marketing strategy. I am somewhat dubious of this claim, but Anthropic's behaviour here would be well-explained by it being true.
This is a good comment, but I think I'd always seen Singapore classed as a soft authoritarian state where elections aren't really free and fair, because of things like state harassment of government critics, even though the votes are counted honestly and multiple parties can run? Though I don't know enough about Singapore to give an example. I have a vague sense Botswana might be a purer example of an actual liberal democracy where one party keeps winning because they have a good record in power. It's also usually a safe bet the LDP will be in power in Japan, though they have occasionally lost.
A NYT article I read a couple of days ago claimed Silicon Valley remains liberal overall.
Thanks, I will think about that.
If you know how to do this, maybe it'd be useful to do it. (Maybe not, though; I've never actually seen anyone defend "the market assigns a non-negligible probability to an intelligence explosion".)
I haven't had time to read the whole thing yet, but I disagree that the problem Wilkinson is pointing to with his argument is just that it is hard to know where to put the cut, because putting it anywhere is arbitrary. The issue to me seems more like this: for any of the individual pairs in the sequence, looked at in isolation, it seems insane to reject the option with the very, very slightly lower probability of the much, MUCH better outcome. Why would you ever reject an option with a trillion trillion times better outcome, just because it was 1x10^-999999999999999999999999999999999999 less likely to happen than the trillion trillion times worse outcome (assuming that for both options, if you don't get the prize, the result is neutral)? The fact that it is hard to say where in the sequence is the best place to first make that apparently insane choice also seems concerning, but less central to me.
I strongly endorse the overall vibe/message of titotal's post here, but I'd add, as a philosopher, that EA philosophers are also a fairly professionally impressive bunch.
Peter Singer is a leading academic ethicist by any standards. The broadly EA-aligned work of the GPI in Oxford is regularly published in leading journals. I think it is fair to say Derek Parfit was broadly aligned with EA, and a key influence on the actual EA philosophers, and many philosophers would tell you he was a genuinely great philosopher. Many of the most controversial EA ideas, like longtermism, have roots in his work. Longtermism is less like a view believed only by a few marginalised scientists, and more like, say, a controversial new interpretation of quantum mechanics that most physicists reject, but which some young people at top departments like and which you can publish work defending in leading journals.
I want to say just "trust the market", but unfortunately, if OpenAI has a high but not astronomical valuation, then even if the market is right, that could mean "almost certainly will be quite useful and profitable, chance of near-term AGI almost zero", or it could mean "probably won't be very useful or profitable at all, but a 1 in 1000 chance of near-term AGI supports a high valuation nonetheless", or many things in between those two poles. So I guess we are sort of stuck with our own judgment?
It's got nothing to do with crime is my main point.
There's no reason that I can see to blame the Rationalist influence on the community for SBF. What would the connection be?
Suggests Newsom is going to be very hostile to any legislation designed to deal with X-risk concerns, and that, frankly, he thinks they are bullshit. (I personally am also pretty skeptical of X-risk from AI, but I don't want nothing done given how bad the risk would be if it did manifest.)