Most whites had abhorrent views on race at certain points in the past (probably not before 1500 though, unless Medieval antisemitism counts), but that is weak evidence that most people did, since whites were always a minority. I’m not sure many of us know what, if any, racial views people held in Nigeria, Iran, China or India in 1780.
Yeah, you’re probably right. It’s just I got a strong “history=Western history” vibe from the comment I was responding to, but maybe that was unfair!
One (probably surmountable but non-trivial in my view) problem with this is that once you start trying to draft a statement about exactly what attitude we have to capitalism/economics, you’ll start to see underlying diversity beneath “don’t want to abolish capitalism.” This, I predict, will make it trickier than it seems to come up with anything clear and punchy that everyone can sign onto. In particular, leaving aside for a minute people with actually anti-capitalist views, you’ll start to see a split between: people with actual neo-liberal or libertarian economic views they are confident of, who would give ringing endorsements of capitalism; people who are just skeptical that we know whether or not “capitalism is good” is true, or regard it as too vague to be worth assessing; and people who simply don’t think political activism is as good a use of the marginal dollar as other stuff, because they think it’s usually not very neglected or tractable. For example, I’d hesitate to sign onto “we are pro-capitalist”, but not because I’m anti, so much as because I have a mixture of the second two positions.
Incidentally, for what it’s worth, I strongly suspect that in developed countries with a traditional “party of business” and “party of labour”, a somewhat higher % of EAs in those countries vote for the “labour” one. I actually think that is consistent with what you’ve said about community attitudes to capitalism. But if I’m correct about it, I think saying we’re pro-capitalist economic rightists will actually confuse at least some outsiders about where we stand on a measure of political affiliation they really care about. (I’m thinking of people on the centre-left here primarily, rather than more radical socialists.)
Obvious point, but you could assign significant credence to this being the right take, and still think working on A.I. risk is very good in expectation, given exceptional neglectedness and how bad an A.I. takeover could be. Something feels sleazy and motivated about this line of defence to me, but I find it hard to see where it goes wrong.
In my view, Phil Torres’ stuff, whilst not entirely fair, and quite nasty rhetorically, is far from the worst this could get. He actually is familiar with what some people within EA think in detail, reports that information fairly accurately, even if he misleads by omission somewhat*, and makes criticisms of controversial philosophical assumptions of some leading EAs that have some genuine bite, and might be endorsed by many moral philosophers. His stuff actually falls into the dangerous sweet spot where legitimate ideas, like “is adding happy people actually good anyway?”, get associated with less fair criticism (“Nick Beckstead did white supremacy when he briefly talked about different flow-through effects of saving lives in different places”), potentially biasing us against the legit stuff in a dangerous way.
But there could, again in my view, easily be a wave of criticism coming from people who share Torres’ political viewpoint and tendency towards heated rhetoric, but who, unlike him, haven’t really taken the time to understand EA/longtermist/AI safety ideas in the first place. I’ve already seen one decently well-known anti-“tech” figure on twitter re-tweet a tweet that in its entirety consisted of “long-termism is eugenics!”. People should prepare emotionally (I have already mildly lost my temper on twitter in a way I shouldn’t have, but at least I’m not anyone important!) for keeping their cool in the face of criticism that is:
-Poorly argued
-Very rhetorically forceful
-Based on straightforward misunderstandings
-Full of infuriatingly confident statements of highly contestable philosophical and empirical assumptions
-Deploying guilt-by-association tactics of an obviously unreasonable sort**: i.e. so-and-so once attended a conference with Peter Thiel, therefore they share [authoritarian view] with Thiel
-Attacking motives, not just ideas
-Gendered in a way that will play directly to the personal insecurities of some male EAs.
Alas, stuff can be all those things and also identify some genuine errors we’re making. It’s important we remain open to that, and also don’t get too polarized politically by this kind of stuff ourselves.
* (i.e. he leaves out reasons to be longtermist that don’t depend on total utilitarianism or adding happy people being good, doesn’t discuss why you might reject person-affecting population ethics etc.)
** I say “of an unreasonable sort” because in principle people’s associations can be legitimately criticized if they have bad effects, just like anything else.
Also, I doubt Torres is writing in bad faith exactly. “Bad faith” to me has connotations of ‘is saying stuff they know to be untrue’, when with Torres I’m sure he believes what he’s saying; he’s just angry about it, and anger biases.
I suspect that it varies within the domain of X-risk focused work how weird and cultish it looks to the average person. I think both A.I. risk stuff and a generic “reduce extinction risk” framing will look more “religious” to the average person than “we are worried about pandemics and nuclear wars.”
For what it’s worth, as a layperson, I found it pretty hard to follow properly. I also think there’s a selection effect where people who found it easy will post but people who found it hard won’t.
Ah, I made an error here, I misread what was in which thread and thought Amber was talking about Gwern’s comment rather than your original post. The post itself is fine! Sorry!
Several thoughts:
-
I’m not sure I can argue for this, but it feels weird and off-putting to me that all this energy is being spent discussing how good a track-record one guy has, especially one guy with a very charismatic and assertive writing-style, and a history of attempting to provide very general guidance for how to think across all topics (though I guess any philosophical theory of rationality does the last thing.) It just feels like a bad sign to me, though that could just be for dubious social reasons.
-
The question of how much to defer to E.Y. isn’t answered just by things like “he has possibly the best track record in the world on this issue.” If he’s out of step with other experts, and by a long way, we need to have reason to think he outperforms the aggregate of experts before we weight him more than the aggregate and it’s entirely normal, I’d have thought, for the aggregate to significantly outperform the single best individual. (I’m not making as strong a claim as that the best individual outperforming the aggregate is super-unusual and unlikely.) Of course if you think he’s nearly as good as the aggregate, then you should still move a decent amount in his direction. But even that is quite a strong claim that goes beyond him being in the handful of individuals with the best track record.
-
It strikes me that some of the people criticizing this post on the grounds that E.Y. actually has a great track record keep citing “he’s been right that there is significant X-risk from A.I., when almost everyone else missed that.” I’m wary of this for a couple of reasons.
Firstly, this isn’t actually a prediction that has been resolved as correct in any kind of unambiguous way. Sure, a lot of very smart people in the EA community now agree. (And I agree the risk is worth assigning EA resources to as well, to be clear.) But we should be wary of substituting the judgment of the community that a prediction looks rational for a track record of predictions that have actually resolved successfully, in my view. (I think the latter is better evidence than the former in most cases.)
Secondly, I feel like E.Y. being right about the importance of A.I. risk is actually not very surprising, conditional on the key assumption about E.Y. that Ben is relying on in telling people to be cautious about the probabilities and timelines that E.Y. gives for A.I. doom; but even so, IF Ben’s assumption is correct, it’s still a good reason to doubt E.Y.’s p(doom). Suppose, as is being alleged here, someone has a general bias, for whatever reasons, towards the view that doom from some technological source or other is likely and imminent. Does that make it especially surprising that that individual finds an important source of doom most people have missed? Not especially, that I can see: sure, they will be less rational on the topic perhaps, but a) a bias towards p(doom) being high doesn’t necessarily imply being poor at ranking sources of doom-risk by relative importance, and b) there is probably a counter-effect where bias towards doom makes you more likely to find underrated doom-risks, because you spend more time looking. Of course, finding a doom-risk larger than most others that approximately everyone had missed would still be a very impressive achievement. But the question Ben’s addressing isn’t “is E.Y. a smart person with insights about A.I. risk?” but rather “how much should we update on E.Y.’s views about p(near-term A.I. doom)?” Suppose significant bias towards doom is genuinely evidenced by E.Y.’s earlier nanotech prediction (which to be fair is only 1 data point), and a good record at identifying neglected important doom sources is only weak evidence that E.Y. lacks the bias. Then we’d be right to only update a little towards doom, even if E.Y.’s record on A.I. risk was impressive in some ways.
-
For all I know, you may be right or not (insofar as I follow what’s being insinuated), but whilst I freely admit that I, like anyone who wants to work in EA, have self-interested incentives to not be too critical of Eliezer, there is no specific secret “latent issue” that I personally am aware of and am consciously avoiding talking about. Honest.
‘If you’d always assumed he’s wrong about literally everything, it should be telling for you that OP had to go 15 years back to get good examples.’ How strong evidence this is also depends on whether he has made many resolvable predictions since 15-years ago, right? If he hasn’t it’s not very telling. To be clear, I genuinely don’t know if he has or hasn’t.
‘Here’s one data point I can offer from my own life: Through a mixture of college classes and other reading, I’m pretty confident I had already encountered the heuristics and biases literature, Bayes’ theorem, Bayesian epistemology, the ethos of working to overcome bias, arguments for the many worlds interpretation, the expected utility framework, population ethics, and a number of other ‘rationalist-associated’ ideas before I engaged with the effective altruism or rationalist communities.’
I think some of this is just a result of being a community founded partly by analytic philosophers. (though as a philosopher I would say that!).
I think it’s normal to encounter some of these ideas in undergrad philosophy programs. At my undergrad back in 2005–09 there was a whole upper-level undergraduate course in decision theory. I don’t think that’s true everywhere all the time, but I’d be surprised if it was wildly unusual. I can’t remember if we covered population ethics in any class, but I do remember discovering Parfit on the Repugnant Conclusion in 2nd year of undergrad because one of my ethics lecturers said Reasons and Persons was a super-important book. In terms of the Oxford phil scene where the term “effective altruism” was born, the main titled professorship in ethics at that time was held by John Broome, a utilitarianism-sympathetic former economist, who had written famous stuff on expected utility theory. I can’t remember if he was the PhD supervisor of anyone important to the founding of EA, but I’d be astounded if some of the phil. people involved in that had not been reading his stuff and talking to him about it. Most of the phil. physics people at Oxford were gung-ho for many worlds; it’s not a fringe view in philosophy of physics as far as I know. (Though I think Oxford was kind of a centre for it and there was more dissent elsewhere.) As far as I can tell, Bayesian epistemology, in at least some senses of that term, is a fairly well-known approach in philosophy of science. Philosophers specializing in epistemology might more often ignore it, but they know it’s there. And not all of them ignore it! I’m not an epistemologist, but my doctoral supervisor was, and it’s not unusual for his work to refer to Bayesian ideas in modelling stuff about how to evaluate evidence. (I.e. in, uhm, defending the fine-tuning argument for the existence of God, which might not be the best use, but still!: https://www.yoaavisaacs.com/uploads/6/9/2/0/69204575/ms_for_fine-tuning_fine-tuning.pdf) (John was my supervisor, not Yoav.)
A high interest in bias stuff might genuinely be more an Eliezer/LessWrong legacy though.
It seems really bad, from a communications/PR point of view, to write something that was ambiguous in this way. Like, bad enough that it makes me slightly worried that MIRI will commit some kind of big communications error that gets into the newspapers and does big damage to the reputation of EA as a whole.
Old comment, so maybe this isn’t worth it, but: as someone diagnosed with Asperger’s as a kid, I’d really prefer if people didn’t attribute things you don’t like about people to their being autistic, in a causal manner and without providing supporting evidence. I don’t mean you can never be justified in saying that a group having a high prevalence of autism explains some negative feature of their behavior as a group. But I think care should be taken here, as when dealing with any minority.
I agree peer review is good, and people should not dismiss it, and too much speculation about how smart people are can be toxic. (I probably don’t avoid it as much as I should.) But that’s kind of part of my point: not all autists track some negative stereotype of cringe Silicon Valley people, even if, like most stereotypes, there is a grain of truth in it.
I’ve upvoted this because I think the parallels between A.I. worries and apocalyptic religious stuff are genuinely epistemically worrying, and I’m inclined to think that the most likely path is that A.I. risk turns out to be yet another failed apocalyptic prediction. (This is compatible with work on it being high value in expectation.)
But I think there’s an issue with your framing of “whose predictions of apocalypse should we trust more, climate scientists or A.I. risk people”: if apocalyptic predictions mean predictions of human extinction, it’s not clear to me that most climate scientists are making them (at least in official scientific work). At any rate, that’s how people prioritizing A.I. risk over climate change interpret the consensus among climate scientists.
What exactly do you mean by “have an objective axiology” and why do you think it makes it (distinctively) hard to defend asymmetry? (I have an eccentric philosophical view that the word “objective” nearly always causes more trouble than it’s worth and should be tabooed.)
Which version of the intuition? If you just mean ‘there is greater value in preventing the creation of a life with X net utils of suffering than in creating a life with X net utils of pleasure’, then maybe. But people often claim that ‘adding net-happy people is neutral, whilst adding net-suffering people is bad’ is intuitive, and there was a fairly recent paper claiming to find that this wasn’t what ordinary people thought when surveyed: https://www.iza.org/publications/dp/12537/the-asymmetry-of-population-ethics-experimental-social-choice-and-dual-process-moral-reasoning
I haven’t actually read the paper to check if it’s any good though...
I’m not sure I really follow (though I admit I’ve only read the comment, not the post you’ve linked to.) Is the argument something like we should only care about fulfilling preferences that already exist, and adding people to the world doesn’t automatically do that, so there’s no general reason to add happy people if it doesn’t satisfy a preference of someone who is here already? Couldn’t you show that adding suffering people isn’t automatically bad by the same reasoning, since it doesn’t necessarily violate an existing preference? (Also, on the word “objective”: you can definitely have a view of morality on which satisfying existing preference or doing what people value is all that matters, but it is mind-independently true that this is the correct morality, which makes it a realist view as academic philosophers classify things, and hence a view on which morality is objective in one sense of “objective”. Hence why I think “objective” should be tabooed.)
For what it’s worth, I think the basic critique of total utilitarianism of ‘it’s just obviously more important to save a life than to bring a new one into existence’ is actually very strong. I think insofar as longtermist folk don’t see that, it’s probably a) because it’s so obvious that they are bored with it now and b) because Torres’s tone is so obnoxious and plausibly motivated by personal animosity. But neither of those is a good reason to reject the objection!