I feel like this answer to the problem is easily forgotten by me, and probably a lot of similar-minded people who post here, because it’s not a clever, principled philosophical solution. But on reflection, it sounds quite reasonable!
David Mathers
This doesn’t really solve the problem, but most animal suffering is likely not in factory farms but in nature, so getting rid of humans isn’t necessarily net good for animals. (To be clear, I am strongly against murdering humans even if it is net good for animals.)
Hiding your conclusions feels a bit sleazy and manipulative to me.
In fairness, expertise is not required in all university settings. Student groups invite non-expert political figures to speak, famous politicians give speeches at graduation ceremonies, etc. I am generally against universities banning student groups from having racist/offensive speakers, although I might allow exceptions in extreme cases.
Though I am nonetheless inclined to agree that the distinction is important: universities have as a central purpose free, objective, rational debate, while EA as a movement has a central purpose of carrying out a particular (already mildly controversial) ethical program, and, frankly, is in more danger of "be safe for witches, become 90% witch" than universities are. That distinction means that EA should be less internally tolerant of speech expressing bad ideas.
Re: the first footnote: Max Tegmark has a Jewish father according to Wikipedia. I think that makes it genuinely very unlikely that he believes Holocaust denial specifically is OK. That doesn’t necessarily mean that he is not racist in any way or that the grant to the Nazi newspaper was just an innocent mistake. But I think we can be fairly sure he is not literally a secret Nazi. Probably what he is guilty of is trusting his right-wing brother, who had written for the fascist paper, too much, and being too quick (initially) to believe that the Nazis were only “right-wing populists”.
(Also posted this comment on Less Wrong): One way to understand this is that Dario was simply lying when he said he thinks AGI is close and carries non-negligible X-risk, and that he actually thinks we don’t need regulation yet because it is either far away or the risk is negligible. There have always been people who have claimed that labs simply hype X-risk concerns as a weird kind of marketing strategy. I am somewhat dubious of this claim, but Anthropic’s behaviour here would be well-explained by it being true.
This is a good comment, but I think I’d always seen Singapore classed as a soft authoritarian state where elections aren’t really free and fair, because of things like state harassment of government critics, even though the votes are counted honestly and multiple parties can run? Though I don’t know enough about Singapore to give an example. I have a vague sense Botswana might be a purer example of an actual liberal democracy where one party keeps winning because it has a good record in power. It’s also usually a safe bet the LDP will be in power in Japan, though they have occasionally lost.
A NYT article I read a couple of days ago claimed Silicon Valley remains liberal overall.
Thanks, I will think about that.
If you know how to do this, maybe it’d be useful to do it. (Maybe not though; I’ve never actually seen anyone defend “the market assigns a non-negligible probability to an intelligence explosion”.)
I haven’t had time to read the whole thing yet, but I disagree that the problem Wilkinson is pointing to with his argument is just that it is hard to know where to put the cut, because putting it anywhere is arbitrary. The issue to me seems more like: for any of the individual pairs in the sequence, looked at in isolation, rejecting the very, very slightly lower probability of the much, MUCH better outcome seems insane. Why would you ever reject an option with a trillion trillion times better outcome, just because it was 1x10^-999999999999999999999999999999999999 less likely to happen than the trillion trillion times worse outcome (assuming that for both options, if you don’t get the prize, the result is neutral)? The fact that it is hard to say where in the sequence is the best place to first make that apparently insane choice seems also concerning, but less central to me.
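The expected-value arithmetic behind any one pairwise comparison can be sketched with toy numbers (these magnitudes are illustrative stand-ins, not the ones in the argument; exact rational arithmetic is used because probabilities this small are far below float precision):

```python
from fractions import Fraction

# Option A: probability p of a prize worth v, else a neutral outcome (0).
# Option B: a very slightly lower probability of a prize 10^24
# ("a trillion trillion") times better, else the same neutral outcome.
p = Fraction(1, 2)
epsilon = Fraction(1, 10**30)   # the tiny probability penalty on option B
v = Fraction(1)

ev_a = p * v
ev_b = (p - epsilon) * (v * 10**24)

# Despite the probability penalty, B's expected value dwarfs A's,
# which is why rejecting B in any single pairwise comparison looks insane.
assert ev_b > ev_a
```

The puzzle is that chaining many such individually compelling comparisons leads somewhere that also looks insane, so the cut has to fall on some pair where the comparison still looks compelling.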
I strongly endorse the overall vibe/message of titotal’s post here, but I’d add, as a philosopher, that EA philosophers are also a fairly professionally impressive bunch.
Peter Singer is a leading academic ethicist by any standards. Work from Oxford’s broadly EA-aligned Global Priorities Institute (GPI) is regularly published in leading journals. I think it is fair to say Derek Parfit was broadly aligned with EA and a key influence on the actual EA philosophers, and many philosophers would tell you he was a genuinely great philosopher. Many of the most controversial EA ideas, like longtermism, have roots in his work. Longtermism is less like a view believed only by a few marginalised scientists, and more like, say, a controversial new interpretation of quantum mechanics that most physicists reject, but that some young people at top departments like, and which you can publish work defending in leading journals.
I want to say just “trust the market”, but unfortunately, if OpenAI has a high but not astronomical valuation, then even if the market is right, that could mean “almost certainly will be quite useful and profitable, chance of near-term AGI almost zero”, or it could mean “probably won’t be very useful or profitable at all, but a 1 in 1000 chance of near-term AGI supports a high valuation nonetheless”, or many things in between those two poles. So I guess we are sort of stuck with our own judgment?
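The underdetermination can be made concrete with toy numbers (all figures invented for illustration; none are real estimates of OpenAI’s value):

```python
# Two very different beliefs about the company that imply roughly
# the same market valuation, in dollars.

# Belief 1: near-certainly a useful, profitable company; tiny AGI chance.
valuation_1 = 0.99 * 100e9 + 0.0001 * 10e12   # ~ $100B

# Belief 2: probably not very profitable, but a 1-in-1000 shot at AGI-scale value.
valuation_2 = 0.50 * 10e9 + 0.001 * 100e12    # ~ $105B

# The price alone can't distinguish these scenarios, so the valuation
# doesn't pin down the market's implied probability of near-term AGI.
print(valuation_1, valuation_2)
```

Since many different probability/payoff mixes are consistent with the same price, reading an AGI probability off the valuation requires extra assumptions the market itself doesn’t supply.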
It’s got nothing to do with crime is my main point.
There’s no reason to blame the Rationalist influence on the community for SBF that I can see. What would the connection be?
I don’t see why we’d expect fewer factory farms under socialism, except via us being poorer in general. And “make everything worse for humans to make things better for animals” feels a bit “cartoon utilitarian supervillain”, even if I’m not sure what is wrong with it. It’s also not why socialists support socialism, even if many are also pro-animal. Indeed, if socialism worked as intended, why would factory farming decrease?
I think two things:

- Some people don’t like the big-R Rationalist community very much.
- Some people don’t think improving the world’s small-r rationality/epistemics should be a leading EA cause area.

are getting conflated into a third position no one holds:

- People don’t think it’s important to try hard at being small-r rational.

I agree that some people might be running together the first two claims, and that is bad, since they are independent, and it could easily be high impact to work on improving collective epistemics in the outside world even if the big-R Rationalist community was bad in various ways. But holding the first two claims (which I think I do, moderately) doesn’t imply the third. I think the Rationalists are often not that rational in practice, and are too open to racism and sexism. And I also (weakly) think that we don’t currently know enough about “improving epistemics” for it to be a tractable cause area. But obviously I still want us to make decisions rationally, in the small-r sense, internally. Who wouldn’t! Being against small-r rationality is like being against kindness or virtue; no one thinks of themselves as taking that stand.
For what it’s worth, I was one of the most anti-Hanania/Manifest people in the original big thread, and I don’t think I’m all that “cancel-y” overall. I’m opposed to people being fired from universities for edgy right-wing opinions on empirical matters, and I’m definitely opposed to them being cut off from all jobs. I do think people should not hire open neo-Nazis (or for that matter left-wingers who believe in genuinely deranged antisemitic conspiracy theories) for normal jobs, but I don’t think any of the Manifest speakers fell in that category. But I see a difference between the role of universities (finding out the truth no matter what by permitting very broad debate) and the role of a group like EA that has a particular viewpoint and no obligation to invite in people who disagree with it.
I strongly disagree that Lincoln was correct to prioritize the union over ending slavery (though remember that this was when he was facing the risk of a massive war, a war which, when it did break out, killed hundreds of thousands). For one thing, he probably wasn’t doing that to preserve “freedom” in some universalist sense after cost-benefit analysis, but rather because he valued US nationalism over Black lives. But I still think this is a little simplistic. In the late 18th century, many, probably most, countries and cultures in the world either had slavery internally or used slavery as part of a colonial empire. For example, slavery was widespread in Africa internally, many European countries had empires that used slave labour, Arabs had a large slave trade in East Africa, the Mughals sold slaves from India, and if you pick up the great 18th-century Chinese novel The Story of the Stone, you’ll find many characters are slaves. Meanwhile, the founding ideals of the US were unusually liberal and egalitarian relative to the vast majority of places at the time, and this probably did affect the internal experience of the average US citizen. The US reached a relatively expanded franchise, with many working-class male citizens able to vote, long before almost anywhere else. So the US was not exceptional in its support for slavery or colonialist expansion (against Native Americans), but it was exceptional in its levels of internal (relative) liberal democracy. I think it’s plausible that on net the existence of the US therefore advanced the cause of “freedom” in some sense. Moving forward, having the world’s largest and most powerful country be a liberal democracy has plausibly advanced the cause of liberal democracy overall, and the US is primarily responsible for the fact that Germany and Japan, two other major powers, are liberal democracies.
Against that, you can point to the fact that the US has certainly supported dictatorship when it’s suited it, or when it’s been in the private interests of US businesses (particularly egregiously in Guatemala, with genuinely genocidal results*). But there are also plenty of places where the US really has supported democracy (e.g. in the former socialist states of Eastern Europe), so I don’t think this overcomes the prior that having the world’s most powerful and one of its richest nations, with the dominant popular culture, be a liberal democracy was good for freedom overall. Washington and the other revolutionaries plausibly bear a fair amount of responsibility for this. And in particular, Washington’s decision to leave power willingly, when he could probably have carried on being re-elected as a war hero until he died, did a lot to consolidate democracy (such as it was) at the time. Of course, those founders who DID oppose slavery are much more unambiguously admirable.
*More people should know about this, it was genuinely hideously evil: https://en.wikipedia.org/wiki/Guatemalan_genocide