Wei Dai
The salient thing to notice is that this person wants to burn your house down.
In your example, after I notice this, I would call the police to report this person. What do you think I should do (or what does David want me to do) after noticing the political agenda of the people he mentioned? My own natural inclination is to ignore them and keep doing what I was doing before, because it seems incredibly unlikely that their agenda would succeed, given the massive array of political enemies that such agenda has.
I was concerned that after the comment was initially downvoted to −12, it would be hidden from the front page and not enough people would see it to vote it back into positive territory. It didn’t work out that way, but perhaps it could have?
I want to note that within a few minutes of posting the parent comment, it received 3 downvotes totaling −14 (I think they were something like −4, −5, −5, i.e., probably all strong downvotes) with no agreement or disagreement votes, and subsequently received 5 upvotes spread over 20 hours (with no further downvotes AFAIK) that brought the net karma up to 16 as of this writing. Agreement/disagreement is currently 3/1.
This pattern of voting seems suspicious (e.g., why were all the downvotes clustered so closely in time?). I reported the initial cluster of downvotes to the mods in case they want to look into it, but have not heard back from them yet. Thought I’d note this publicly in case a similar thing happened or happens to anyone else.
I think too much moral certainty doesn’t necessarily cause someone to be dangerous by itself; there have to be other elements to their personality or beliefs. For example, lots of people are or were unreasonably certain about divine command theory[1], but only a minority of them caused much harm (e.g., by being involved in crusades and inquisitions). I’m not sure it has much to do with realism vs non-realism though. I can definitely imagine some anti-realist (e.g., one with strong negative utilitarian beliefs) causing a lot of damage if they were put in certain positions.
Uncertainty can transition to increased certainty later on, as people do more thinking. So, it doesn’t feel like a stable solution.
This seems like a fair point. I can think of some responses. Under realism (or if humans specifically tend to converge under reflection) people would tend to converge to similar values as they think more, so increased certainty should be less problematic. Under other metaethical alternatives, one might hope that as we mature overall in our philosophies and social systems, we’d be able to better handle divergent values through compromise/cooperation.
(Not to mention that, as EAs tell themselves it’s virtuous to remain uncertain, this impedes philosophical progress at the level of individuals.)
Yeah, there is perhaps a background disagreement between us, where I tend to think there’s little opportunity to make large amounts of genuine philosophical progress without doing much more cognitive work (i.e., to thoroughly explore the huge space of possible ideas/arguments/counterarguments), making your concern not significant for me in the near term.
[1] Self-nitpick: divine command theory is actually a meta-ethical theory. I should have said “various religious moralities”.
It’s entirely possible that I misinterpreted David. I asked for clarification from David in the original comment if that was the case, but he hasn’t responded so far. If you want to offer your own interpretation, I’d be happy to hear it out.
I’m saying that you can’t determine the truth about an aspect of reality (in this case, what causes group differences in IQ), when both sides of a debate over it are pushing political agendas, by looking at which political agenda is better. (I also think one side of it is not as benign as you think, but that’s beside the point.)
I actually don’t think this IQ debate is one that EAs should get involved in, and said as much to Ives Parr. But if people practice or advocate for what seem to me like bad epistemic norms, I feel an obligation to push back on that.
More specifically, you don’t need to talk about what causes group differences in IQ to make a consequentialist case for genetic enhancement, since there is no direct connection between what causes existing differences and what the best interventions are. So one possible way forward is just to directly compare the cost-effectiveness of different ways of raising intelligence.
Materialism is an important trait in individuals, and plausibly could be an important difference between groups. (Certainly the history of the Jewish people attests to the fact that it has been considered important in groups!) But the horrific recent history of false hypotheses about innate Jewish behavior helps us see how scientifically empty and morally bankrupt such ideas really are.
Coincidentally, I recently came across an academic paper that proposed a partial explanation of the current East Asian fertility crisis (e.g., South Korea’s fertility decreased from 0.78 to 0.7 in just one year, with 2.1 being replacement level) based on high materialism (which, interestingly, the paper suggests is really about status signaling rather than actual “material” concerns).
The paper did not propose a genetic explanation of this high materialism, but if it did, I would hope that people didn’t immediately dismiss it based on similarity to other hypotheses historically or currently misused by anti-Semites. (In other words, the logic of this article seems to lead to absurd conclusions that I can’t agree with.)
All this tends sadly to distract people from the fact that when white nationalists like Lynn talk about race/IQ stuff, they are trying to push a political agenda
From my perspective, both sides of this debate are often pushing political agendas. It would be natural, but unvirtuous, to focus our attention on the political agenda of only one side, or to pick sides of an epistemic divide based on which political agenda we like or dislike more. (If I misinterpreted you, please clarify what implications you wanted people to draw from this paragraph.)
Note that Will does say a bit in the interview about why he doesn’t view SBF’s utilitarian beliefs as a major explanatory factor here (the fraud was so obviously negative EV, and the big lesson he took from the Soltes book on white-collar crime was that such crime tends to be more the result of negligence and self-deception than deliberate, explicit planning to that end).
I disagree with Will a bit here, and think that SBF’s utilitarian beliefs probably did contribute significantly to what happened, but perhaps somewhat indirectly, by 1) giving him large-scale ambitions, 2) providing a background justification for being less risk-averse than most, and 3) convincing others to trust him more than they otherwise would. Without those beliefs, he may well not have gotten to a position where he started committing large-scale fraud through negligence and self-deception.
I basically agree with the lessons Will suggests in the interview, about the importance of better “governance” and institutional guard-rails to disincentivize bad behavior.
I’m pretty confused about the nature of morality, but it seems that one historical function of morality is to be a substitute for governance (which is generally difficult and costly; see many societies with poor governance despite near universal desire for better governance). Some credit the success of Western civilization in part to Christian morality, for example. (Again I’m pretty confused and don’t know how relevant this is, but it seems worth pointing out.)
I think it would be a big mistake to conflate that sort of “overconfidence in general” with specifically moral confidence (e.g. in the idea that we should fundamentally always prefer better outcomes over worse ones). It’s just very obvious that you can have the latter without the former, and it’s the former that’s the real problem here.
My view is that the two kinds of overconfidence seem to have interacted multiplicatively in causing the disaster that happened. I guess I can see why you might disagree, given your own moral views (conditional on utilitarianism being true/right, it would be surprising if high confidence in it is problematic/dangerous/blameworthy), but my original comment was written more with someone who has relatively low credence in utilitarianism in mind, e.g., Will.
BTW it would be interesting to hear/read a debate between you and Will about utilitarianism. (My views are similar to his in putting a lot of credence on anti-realism and “something nobody has thought of yet”, but I feel like his credence for “something like utilitarianism” is too low. I’m curious to understand both why your credence for it is so high, and why his is so low.)
My memory of the podcast (could be wrong, only listened once!) is that Will said that, conditional on error theory being false, his credence in consequentialism is about 0.5.
I think he meant conditional on error theory being false, and also on not “some moral view we’ve never thought of”.
Here’s a quote of what Will said starting at 01:31:21: “But yeah, I tried to work through my credences once and I think I ended up in like 3% in utilitarianism or something like. I mean large factions go to, you know, people often very surprised by this, but large factions go to, you know, to error theory. So there’s just no correct moral view. Very large faction to like some moral view we’ve never thought of. But even within positive moral views, and like 50-50 on non consequentialism or consequentialism, most people are not consequentialists. I don’t think I’m.”
Overall it seems like Will’s moral views are pretty different from SBF’s (or what SBF presented to Will as his moral views), so I’m still kind of puzzled about how they interacted with each other.
If future humans were in the driver’s seat instead, but with slightly more control over the process
Why only “slightly” more control? It’s surprising to see you say this without giving any reasons or linking to some arguments, as this degree of alignment difficulty seems like a very unusual position that I’ve never seen anyone argue for before.
The source code was available, but if someone wanted to claim compliance with the NIST standard (in order to sell their product to the federal government, for example), they had to use the pre-compiled executable version.
I guess there’s a possibility that someone could verify the executable by setting up an exact duplicate of the build environment and re-compiling from source. I don’t remember how much I looked into that possibility, and whether it was infeasible or just inconvenient. (Might have been the former; I seem to recall the linker randomizing some addresses in the binary.) I do know that I never documented a process to recreate the executable and nobody asked.
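For illustration, here is a minimal Python sketch of the kind of byte-for-byte check that rebuilding from source would enable (the file names are hypothetical, and as noted above, the check is only meaningful if the build process is fully deterministic):

```python
import hashlib

def sha256(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches(official_path, rebuilt_path):
    """Compare the distributed executable against an independent rebuild.

    A match is only meaningful if the toolchain is deterministic; a
    linker that randomizes addresses will produce a different digest on
    every build, even from identical source.
    """
    return sha256(official_path) == sha256(rebuilt_path)
```

Even when such a check fails, diffing the two binaries can sometimes localize the difference to benign metadata (timestamps, randomized addresses) rather than substantive code changes.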
It’s not clear to me why human vs. AIs would make war more likely to occur than in the human vs. human case, if by assumption the main difference here is that one side is more rational.
We have more empirical evidence that we can look at when it comes to human-human wars, making it easier to have well-calibrated beliefs about chances of winning. When it comes to human-AI wars, we’re more likely to have wildly irrational beliefs.
This is just one reason war could occur though. Perhaps a more likely reason is that there won’t be a way to maintain the peace that both sides can be convinced will work, and that is sufficiently cheap that its cost doesn’t eat up all of the gains from avoiding war. For example, how would the human faction know that if it agrees to peace, the AI faction won’t fully dispossess the humans at some future date when it’s even more powerful? Even if AIs are able to come up with some workable mechanisms, how would the humans know that it’s not just a trick?
Without credible assurances (which seem hard to come by), I think if humans do agree to peace, the most likely outcome is that they do get dispossessed in the not-too-distant future, either gradually (for example, by getting scammed/persuaded/blackmailed/stolen from in various ways) or all at once. I think society as a whole won’t have a strong incentive to protect humans because they’ll be almost pure consumers (not producing much relative to what they consume), and such classes of people are often killed or dispossessed in human history (e.g., landlords after communist takeovers).
I don’t think this follows. Humans presumably also had empathy in e.g. 1500, back when war was more common, so how could it explain our current relative peace?
I mainly mean that without empathy/altruism, we’d probably have even more wars, both now and then.
To the extent that changing human nature explains our current relatively peaceful era, this position seems to require that you believe human nature is fundamentally quite plastic and can be warped over time pretty easily due to cultural changes.
Well, yes, I’m also pretty scared of this. See this post where I talked about something similar. I guess overall I’m still inclined to push for a future where “AI alignment” and “human safety” are both solved, instead of settling for one in which neither is (which I’m tempted to summarize your position as, but I’m not sure if I’m being fair).
What are some failure modes of such an agency for Paul and others to look out for? (I shared one anecdote with him, about how a NIST standard for “crypto modules” made my open source cryptography library less secure, by having a requirement that had the side effect that the library could only be certified as standard-compliant if it was distributed in executable form, forcing people to trust me not to have inserted a backdoor into the executable binary, and then not budging when we tried to get an exception for this requirement.)
I’ve looked into the game theory of war literature a bit, and my impression is that economists are still pretty confused about war. As you mention, the simplest model predicts that rational agents should prefer negotiated settlements to war, and it seems unsettled what actually causes wars among humans. (People have proposed more complex models incorporating more elements of reality, but AFAIK there isn’t a consensus as to which model gives the best explanation of why wars occur.) I think it makes sense to be aware of this literature and its ideas, but there’s not a strong argument for deferring to it over one’s own ideas or intuitions.
My own thinking is that war between AIs and humans could happen in many ways. One simple (easy to understand) way is that agents will generally refuse a settlement worse than what they think they could obtain on their own (by going to war), so human irrationality could cause a war when, e.g., the AI faction thinks it will win with 99% probability and humans think they could win with 50% probability, so each side demands more of the lightcone (or resources in general) than the other side is willing to grant.
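To make the arithmetic concrete, here is a toy sketch of the simplest rationalist bargaining model of war (the kind the literature above starts from); all the numbers are illustrative:

```python
def settlement_possible(p_a, p_b, c_a, c_b):
    """Simplest bargaining model of war over a prize normalized to 1.

    Side A believes it wins with probability p_a and would pay cost c_a
    in a war; side B analogously. A's expected war payoff is p_a - c_a,
    so A rejects any share below that, while B will concede A at most
    1 - (p_b - c_b). A peaceful split exists iff the two are compatible,
    i.e. iff p_a + p_b <= 1 + c_a + c_b.
    """
    return p_a - c_a <= 1 - p_b + c_b

# Shared, consistent beliefs: peace is always possible, since war
# only destroys value.
settlement_possible(0.99, 0.01, c_a=0.05, c_b=0.05)  # True

# Mutual optimism as in the example above: the AI faction expects to
# win with 99% probability while the humans expect 50%; unless war
# costs eat up at least 49% of the prize, no split satisfies both.
settlement_possible(0.99, 0.50, c_a=0.05, c_b=0.05)  # False
```

With accurate shared beliefs (p_a + p_b = 1) the condition always holds, so disagreement about the odds of winning is one of the few ways war can occur even in this stripped-down model.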
To take this one step further, I would say that given that many deviations from the simplest game theoretic model do predict war, war among consequentialist agents may well be the default in some sense. Also, given that humans often do (or did) go to war with each other, our shared values (i.e. the extent to which we do have empathy/altruism for others) must contribute to the current relative peace in some way.
I followed the instructions here.
I was curious why given Will’s own moral uncertainty (in this interview he mentioned having only 3% credence in utilitarianism) he wasn’t concerned about SBF’s high confidence in utilitarianism, but didn’t hear the topic addressed. Maybe @William_MacAskill could comment on it here?
One guess is that apparently many young people in EA are “gung ho” on utilitarianism (mentioned by Spencer in this episode), so perhaps Will just thought that SBF isn’t unusual in that regard? One lesson could be that such youthful over-enthusiasm is more dangerous than it seems, and EA should do more to warn people about the dangers of too much moral certainty and overconfidence in general.
Made a transcript with Microsoft Word.
Some suggestions for you to consider:
- Target a different (non-EA) audience.
- Do not say anything or cite any data that could be interpreted or misinterpreted as racist (keeping in mind that some people will be highly motivated to interpret them in this way).
- Tailor your message to what you can say/cite. For example, perhaps frame the cause as one of pure justice/fairness (as opposed to consequentialist altruism), e.g., it’s simply unfair that some people cannot afford genetic enhancement while others can. (Added: But please think this through carefully to prevent undesirable side effects, e.g., making some people want to ban genetic enhancement altogether.)
- You may need to start a new identity in order to successfully do the above.
I of course also think that philosophical progress, done right, is a good thing. However I also think genuine philosophical progress is much harder than it looks (see Some Thoughts on Metaphilosophy for some relevant background views), and therefore am perhaps more worried than most about philosophical “progress”, done wrong, being a bad thing.