Wei Dai
But we’re so far away from having that alternative that pining after it is a distraction from the real world.
For one thing, we could try to make OpenAI/SamA toxic to invest in or do business with, and hope that other AI labs either already have better governance / safety cultures, or are greatly incentivized to improve on those fronts. If we (EA as well as the public in general) give him a pass (treat him as a typical/acceptable businessman), what lesson does that convey to others?
I should add that there may be a risk of over-correcting (focusing too much on OpenAI and Sam Altman), and we shouldn’t forget about other major AI labs and how to improve their transparency, governance, safety cultures, etc. This project (Zach Stein-Perlman’s AI Lab Watch) seems like a good start, if anyone is interested in a project to support or contribute ideas to.
I’m also concerned about many projects having negative impact, but think there are some with robustly positive impact:
Making governments and the public better informed about AI risk, including e.g. what x-safety cultures at AI labs are like, and the true state of alignment progress. Geoffrey Irving is doing this at UK AISI and recruiting, for example.
Trying to think of important new arguments/considerations, for example a new form of AI risk that nobody has considered, or new arguments for some alignment approach being likely or unlikely to succeed. (But take care not to be overconfident or cause others to be overconfident.)
Agreed with the general thrust of this post. I’m trying to do my part, despite a feeling of “PR/social/political skills is so far from what I think of as my comparative advantage. What kind of a world am I living in, that I’m compelled to do these things?”
Those low on the spectrum tend to shape the incentives around them proactively to create a culture that rewards what they don’t want to lose about their good qualities.
What percent of people do you think fall into this category? Any examples? Why are we so bad at distinguishing such people ahead of time and often handing power to the easily corrupted instead?
#5 seems off to me. I don’t know whether OpenAI uses nondisparagement agreements;
Details about OpenAI’s nondisparagement agreements have come out.
Unlike FTX, OpenAI has now had a second wave of resignations in protest of insufficient safety focus.
Personally, I think fascism should be more upsetting than woke debate!
I’m not very familiar with Reactionary philosophy myself, but was suspicious of your use of “fascism” here, so I asked Copilot (based on GPT-4) and it answered:
As an AI, I don’t form personal opinions. However, I can share that Reactionary philosophy and Fascism are distinct ideologies, even though they might share some common elements such as a critique of modernity and a preference for traditional social structures.
Fascism is typically characterized by dictatorial power, forcible suppression of opposition, and strong regimentation of society and of the economy which is not necessarily present in Reactionary philosophy. Reactionaries might advocate for a return to older forms of governance, but this does not inherently involve the authoritarian aspects seen in Fascism.
(Normally I wouldn’t chime in on some topic I know this little about, but I suspect others who are more informed might fear speaking up and getting associated with fascism in other people’s minds as a result.)
Also, I’m not Scott but I can share that I’m personally upset with wokeness, not because of how it changed debate, but based on more significant harms to my family and the community we live in (which I described in general terms in this post), to the extent that we’re moving half-way across the country to be in a more politically balanced area, where hopefully it has less influence. (Not to mention damage to other institutions I care about, such as academia and journalism.)
(Yes, that is melodramatic phrasing, but I am trying to shock people out of what I think is complacency on this topic.)
Not entirely sure what you’re referring to by “melodramatic phrasing”, but if this is an excuse for using “fascism” to describe “Reactionary philosophy” in order to manipulate people’s reactions to it and/or prevent dissent (I’ve often seen “racism” used this way in other places), I think I have to stand against that. If everyone started excusing themselves from following good discussion norms when they felt like others were complacent about something, that seems like a recipe for disaster.
That said, I very much agree about the “weirdness” of turning to philosophical uncertainty as a solution. Surely philosophical progress (done right) is a good thing, not a moral threat.
I of course also think that philosophical progress, done right, is a good thing. However I also think genuine philosophical progress is much harder than it looks (see Some Thoughts on Metaphilosophy for some relevant background views), and therefore am perhaps more worried than most about philosophical “progress”, done wrong, being a bad thing.
The salient thing to notice is that this person wants to burn your house down.
In your example, after I notice this, I would call the police to report this person. What do you think I should do (or what does David want me to do) after noticing the political agenda of the people he mentioned? My own natural inclination is to ignore them and keep doing what I was doing before, because it seems incredibly unlikely that their agenda would succeed, given the massive array of political enemies that such agenda has.
I was concerned that after the comment was initially downvoted to −12, it would be hidden from the front page and not enough people would see it to vote it back into positive territory. It didn’t work out that way, but perhaps could have?
I want to note that within a few minutes of posting the parent comment, it received 3 downvotes totaling −14 (I think they were something like −4, −5, −5, i.e., probably all strong downvotes) with no agreement or disagreement votes, and subsequently received 5 upvotes spread over 20 hours (with no further downvotes AFAIK) that brought the net karma up to 16 as of this writing. Agreement/disagreement is currently 3/1.
This pattern of voting seems suspicious (e.g., why were all the downvotes clustered so closely in time?). I reported the initial cluster of downvotes to the mods in case they want to look into it, but have not heard back from them yet. Thought I’d note this publicly in case a similar thing happened or happens to anyone else.
I think too much moral certainty doesn’t necessarily cause someone to be dangerous by itself; there have to be other elements to their personality or beliefs. For example, lots of people are or were unreasonably certain about divine command theory[1], but only a minority of them caused much harm (e.g. by being involved in crusades and inquisitions). I’m not sure it has much to do with realism vs non-realism though. I can definitely imagine some anti-realist (e.g., one with strong negative utilitarian beliefs) causing a lot of damage if they were put in certain positions.
Uncertainty can transition to increased certainty later on, as people do more thinking. So, it doesn’t feel like a stable solution.
This seems like a fair point. I can think of some responses. Under realism (or if humans specifically tend to converge under reflection) people would tend to converge to similar values as they think more, so increased certainty should be less problematic. Under other metaethical alternatives, one might hope that as we mature overall in our philosophies and social systems, we’d be able to better handle divergent values through compromise/cooperation.
(Not to mention that, as EAs tell themselves it’s virtuous to remain uncertain, this impedes philosophical progress at the level of individuals.)
Yeah, there is perhaps a background disagreement between us, where I tend to think there’s little opportunity to make large amounts of genuine philosophical progress without doing much more cognitive work (i.e., to thoroughly explore the huge space of possible ideas/arguments/counterarguments), which makes your concern not very significant for me in the near term.
[1] Self-nitpick: divine command theory is actually a meta-ethical theory. I should have said “various religious moralities”.
It’s entirely possible that I misinterpreted David. I asked for clarification from David in the original comment if that was the case, but he hasn’t responded so far. If you want to offer your own interpretation, I’d be happy to hear it out.
I’m saying that you can’t determine the truth about an aspect of reality (in this case, what causes group differences in IQ), when both sides of a debate over it are pushing political agendas, by looking at which political agenda is better. (I also think one side of it is not as benign as you think, but that’s beside the point.)
I actually don’t think this IQ debate is one that EAs should get involved in, and said as much to Ives Parr. But if people practice or advocate for what seem to me like bad epistemic norms, I feel an obligation to push back on that.
More specifically, you don’t need to talk about what causes group differences in IQ to make a consequentialist case for genetic enhancement, since there is no direct connection between what causes existing differences and what the best interventions are. So one possible way forward is just to directly compare the cost-effectiveness of different ways of raising intelligence.
Materialism is an important trait in individuals, and plausibly could be an important difference between groups. (Certainly the history of the Jewish people attests to the fact that it has been considered important in groups!) But the horrific recent history of false hypotheses about innate Jewish behavior helps us see how scientifically empty and morally bankrupt such ideas really are.
Coincidentally, I recently came across an academic paper that proposed a partial explanation of the current East Asian fertility crisis (e.g., South Korea’s fertility decreased from 0.78 to 0.7 in just one year, with 2.1 being replacement level) based on high materialism (which, interestingly, the paper suggests is really about status signaling rather than actual “material” concerns).
The paper did not propose a genetic explanation of this high materialism, but if it did, I would hope that people didn’t immediately dismiss it based on similarity to other hypotheses historically or currently misused by anti-Semites. (In other words, the logic of this article seems to lead to absurd conclusions that I can’t agree with.)
All this tends sadly to distract people from the fact that when white nationalists like Lynn talk about race/IQ stuff, they are trying to push a political agenda
From my perspective, both sides of this debate are often pushing political agendas. It would be natural, but unvirtuous, to focus our attention on the political agenda of only one side, or to pick sides of an epistemic divide based on which political agenda we like or dislike more. (If I misinterpreted you, please clarify what implications you wanted people to draw from this paragraph.)
Note that Will does say a bit in the interview about why he doesn’t view SBF’s utilitarian beliefs as a major explanatory factor here (the fraud was so obviously negative EV, and the big lesson he took from the Soltes book on white-collar crime was that such crime tends to be more the result of negligence and self-deception than deliberate, explicit planning to that end).
I disagree with Will a bit here, and think that SBF’s utilitarian beliefs probably did contribute significantly to what happened, but perhaps somewhat indirectly, by 1) giving him large-scale ambitions, 2) providing a background justification for being less risk-averse than most, and 3) convincing others to trust him more than they otherwise would. Without those beliefs, he may well not have gotten to a position where he started committing large-scale fraud through negligence and self-deception.
I basically agree with the lessons Will suggests in the interview about the importance of better “governance” and institutional guard-rails to disincentivize bad behavior.
I’m pretty confused about the nature of morality, but it seems that one historical function of morality is to be a substitute for governance (which is generally difficult and costly; see many societies with poor governance despite near universal desire for better governance). Some credit the success of Western civilization in part to Christian morality, for example. (Again I’m pretty confused and don’t know how relevant this is, but it seems worth pointing out.)
I think it would be a big mistake to conflate that sort of “overconfidence in general” with specifically moral confidence (e.g. in the idea that we should fundamentally always prefer better outcomes over worse ones). It’s just very obvious that you can have the latter without the former, and it’s the former that’s the real problem here.
My view is that the two kinds of overconfidence seem to have interacted multiplicatively in causing the disaster that happened. I guess I can see why you might disagree, given your own moral views (conditional on utilitarianism being true/right, it would be surprising if high confidence in it is problematic/dangerous/blameworthy), but my original comment was written more with someone who has relatively low credence in utilitarianism in mind, e.g., Will.
BTW it would be interesting to hear/read a debate between you and Will about utilitarianism. (My views are similar to his in putting a lot of credence on anti-realism and “something nobody has thought of yet”, but I feel like his credence for “something like utilitarianism” is too low. I’m curious to understand both why your credence for it is so high, and why his is so low.)
My memory of the podcast (could be wrong, only listened once!) is that Will said that, conditional on error theory being false, his credence in consequentialism is about 0.5.
I think he meant conditional on error theory being false, and also on not “some moral view we’ve never thought of”.
Here’s a quote of what Will said starting at 01:31:21: “But yeah, I tried to work through my credences once and I think I ended up in like 3% in utilitarianism or something like. I mean large factions go to, you know, people often very surprised by this, but large factions go to, you know, to error theory. So there’s just no correct moral view. Very large faction to like some moral view we’ve never thought of. But even within positive moral views, and like 50-50 on non consequentialism or consequentialism, most people are not consequentialists. I don’t think I’m.”
Overall it seems like Will’s moral views are pretty different from SBF’s (or what SBF presented to Will as his moral views), so I’m still kind of puzzled about how they interacted with each other.
Yeah, I also tried to point this out to Leopold on LW and via Twitter DM, but no response so far. It confuses me that he seems to completely ignore the possibility of international coordination, as that’s the obvious alternative to what he proposes, which others must have also brought up to him in private discussions.