If OpenPhil’s allocation is really so dependent on moral weight numbers, you should be spending significant money on research in this area, right? Are you doing this? Do you plan on doing more of this given the large divergence from Rethink’s numbers?
I mean, he said "the part I most regret was filing for bankruptcy" (i.e., the point at which he stopped hurting people and acknowledged his poor actions), said he has spent his entire career lying about his ethical beliefs, and in general showed absolutely no sign of remorse for the people he had hurt. This is borderline indistinguishable from the logic that horrific dictators use to justify themselves, and he did it all while being a well-known figure in a movement built around doing good! I don't know if sociopath is exactly the right word, but it is definitely the sign of someone who doesn't care about other human beings.
Shouldn’t we know better than to update in retrospect based on one highly uncertain datapoint?
We have a number of political data people in EA who thought donating to Flynn was a good investment early in the campaign cycle (later in the cycle, I heard, they no longer thought it was worth it). There was also good reason to believe Flynn could be high-impact if elected. Let's not overthink this.
There is a big difference between working in policy institutions and working in politics/campaigning directly. By working in Republican policy institutions (e.g., think tanks), you can have enormous impact that you couldn't have while working under Democrats. By working on Republican campaigns, you are contributing (non-negligibly, given the labor shortage you describe!) to the fall of US democracy and to a party that has much worse views on almost every subject under most moral frameworks.
For someone with a reasonably clear picture of the moral impacts of policy, working under Republicans is also enormously emotionally difficult. Valuable, yes, but not for the faint of heart.
The difference, from my perspective, is that mixing romantic and work relationships in a poly context can do much more widespread damage. In monogamous relationships, the worst that can happen is a single incident involving two or so people, which can be dealt with in a contained way. In poly relationships, where a relationship web can span a large part of an organization, the harm to the company and to potential future employees can be very large. I, frankly, would feel very uncomfortable if I were at an organization where most of my coworkers were in a polyamorous relationship.
I think a better way of looking at this is that EA is very inviting of criticism but not necessarily that responsive to it. There are like 10 million critiques on the EA Forum, most with serious discussion and replies. Probably very few elicit actual change in EA. (I am of the opinion that most criticism just isn’t very good, and that there is a reason it hasn’t been adopted, but obviously this is debatable).
Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn’t give specifics on his policy positions, this seems like something he is particularly interested in.
I know politics is discouraged on the EA forum, but I thought I would post this to say: EA should really be preparing for a Trump presidency. He's up in the polls and IMO has a >50% chance of winning the election. Right now politicians seem relatively receptive to EA ideas; that may change under a Trump administration.
I would caution people against reading too much into this. If you poll people about a concept they know nothing about ("AI will cause the end of the human race"), you will always get answers that don't reflect real belief. Such answers are very easily swayed; they don't cause people to take action the way real beliefs would; and they are not going to affect how people vote or which elites they trust, and so on.
Vox’s Future Perfect is pretty good for this!
Altman, like most people with power, doesn’t have a totally coherent vision for why him gaining power is beneficial for humanity, but can come up with some vague values or poorly-thought-out logic when pressed. He values safety, to some extent, but is skeptical of attempts to cut back on progress in the name of safety.
Not to be rude, but this seems like a lot of worrying about nothing. "AI is powerful and uncontrollable and could kill all of humanity, like seriously" is not a complicated message. It actually scares me if AI safety people are hesitant to communicate because they think misinterpretation will be as bad as you're describing here; that is a really strong assumption, an untested one at that, and the opportunity cost of not pursuing media coverage is enormous.
The primary purpose of media coverage is to introduce the problem, not to immediately push for the solution. I stated ways that different actors taking the problem more seriously would lead to progress; I’m not sure that a delay is actually the main impact. On this last point, note that (as I expected when it was first released) the main effect of the FLI letter is that a lot more people have heard of AI Safety and people who have heard of it are taking it more seriously (the latter based largely on Twitter observations), not that a delay is actually being considered.
I don't actually know where you're getting "these issues in communication...historically have led to a lot of x-risk" from. There was no large public discussion about nuclear weapons before their initial use (and afterwards we settled into the most reasonable available approach to preventing nuclear war, namely MAD), nor about gain-of-function research. The track record of "tell people about problems and they become more concerned about those problems," on the other hand, is very good.
(also: premature??? really???)
This might be a dumb question, but shouldn't we be preserving more elementary resources to rebuild a flourishing society? Current EA is really only meaningful in a society with resources abundant enough to support nonprofit work. It feels like there are bigger priorities in the case of sub-x-risk catastrophes.
Really good write-up!
Even after adjusting for controls, I find the proportion of people who have heard of EA to be implausibly high. I imagine some combination of response bias and people simply looking up the term is causing overestimation of EA knowledge.
Moreover, given that I expect EA knowledge to be extremely low in the general population, I'm not sure what the point of doing these surveys is. It seems to me you're always fighting against various forms of survey bias that will dwarf any real signal. Surveying specific populations seems like a more productive way of measuring knowledge.
I’ll update my priors a bit but I remain skeptical
This consideration is something I had never thought of before and blew my mind. Thank you for sharing.
Hopefully I can summarize it (assuming I interpreted it correctly) in a different way that might help people who were as befuddled as I was.
The point is that, when you put probabilistic weight on two different theories of sentience being true, you have to decide how the units of sentience under each theory convert into one another in order to compare them.
Say you have two theories of sentience that are similarly probable, one dependent on intelligence and one dependent on brain size. Call their units IQ-qualia and size-qualia. If you assign fruit flies a moral weight of 1, you are implicitly declaring a conversion rate of (to make up some random numbers) 1000 IQ-qualia = 1 size-qualia. If you instead assign elephants a moral weight of 1, you implicitly declare a conversion rate of (again made up) 1 IQ-qualia = 1000 size-qualia, because elephant brains are much larger than fruit flies' but elephants are not correspondingly smarter. These two conversion rates will give you very different numbers for the moral weight of humans (or, as Shulman was saying, of each other).
Rethink Priorities assigned humans a moral weight of 1, and thus assumed a particular conversion rate between the different theories, one that makes total sentience in the world dominated by small animals.
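To make the conversion-rate point concrete, here is a minimal sketch with entirely made-up numbers (mine, not Rethink's or Shulman's): under a 50/50 credence in the two hypothetical theories, the expected moral weight you compute for humans depends heavily on which species you normalize to 1.

```python
# Hypothetical sketch: how the choice of reference species changes implied
# moral weights under a 50/50 mixture of two sentience theories.
# All scores and credences below are illustrative, not real estimates.

iq_score   = {"fruit fly": 1, "elephant": 2,     "human": 100}   # intelligence-based theory
size_score = {"fruit fly": 1, "elephant": 10000, "human": 5000}  # brain-size-based theory

def expected_weight(animal, reference):
    """Expected moral weight when the reference species is fixed at 1 within
    each theory, under a 50/50 credence in the two theories."""
    w_iq   = iq_score[animal] / iq_score[reference]
    w_size = size_score[animal] / size_score[reference]
    return 0.5 * w_iq + 0.5 * w_size

for ref in ["fruit fly", "elephant"]:
    print(f"reference = {ref}: human weight = {expected_weight('human', ref):.2f}")
# reference = fruit fly: human weight = 2550.00
# reference = elephant: human weight = 25.25
```

The choice of reference species silently fixes the exchange rate between the two theories' units, which is why normalizing humans to 1 versus normalizing a small animal to 1 gives such different pictures.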
With all due respect I think people are reading way too far into this, Eliezer was just talking about the enforcement mechanism for a treaty. Yes, treaties are sometimes (often? always?) backed up by force. Stating this explicitly seems dumb because it leads to posts like this, but let’s not make this bigger than it is.
I don't think points about timelines reflect an accurate model of how AI regulations and guardrails are actually developed. What we need is for Congress to pass a law ordering some department within the executive branch to regulate AI, e.g. by developing permitting requirements or creating guidelines for legal AI research. Once that is done, the specifics of how AI is regulated are mostly up to that department, and they can and will change over time.
Because of this, it is never "too soon" to order the regulation of AI. We may not know exactly what the regulations should look like, but those details are very unlikely to be written into law anyway. What we want right now is to create mechanisms to develop and enforce safety standards. Similar arguments apply to internal safety standards at companies developing AI capabilities.
It seems really hard for us to know exactly when AGI (or ASI or whatever you want to call it) is actually imminent. Even if it was possible, however, I just don’t think last-minute panicking about AGI would actually accomplish much. It’s all but impossible to quickly create societal consensus that the world is about to end before any harm has actually occurred. I feel like there’s an unrealistic image of “we will panic and then everyone will agree to immediately stop AI research” implicit in this post. The smart thing to do is to develop mechanisms early and then use these mechanisms when we get closer to crunch time.
"...for most professional EA roles, and especially for 'thought leadership', English-language communication ability is one of the most critical skills for doing the job well"
Is it, really? Like, this is obviously true to some extent. But I'm guessing that English communication ability isn't much more important for most professional EA roles than it is for, e.g., academics or tech startup founders, and those fields are, I think, much more diverse in native language than EA.
The point of the letter is to raise awareness for AI safety, not because they actually think a pause will be implemented. We should take the win.
EDIT: After reflecting on this comment, I think I was too dismissive of the risk of EA being associated with Democrats, particularly because I think we're headed toward a period of Republican dominance of US politics, in which case that risk may plausibly outweigh the potential policy rewards. Anyway, my original thoughts are below.
Interesting post. I lean toward disagreeing, for a couple of reasons.
I think you would agree that Congress can, if it adopts EA legislation, be greatly helpful to the EA cause. It just has way more money and influence than EA can dream of at the moment. The questions are then:
1. Is having a self-proclaimed EA in Congress helpful to getting legislation passed?
2. Will the potential negative press and association with Democrats be too harmful to the EA movement to be worth it?
On (1), I think the answer is a resounding YES, and you have to overthink it really hard to reach a different conclusion. Congresspeople deal all the time with a ton of different interest groups trying to get their preferred policies into legislation. We are competing with all of them, which makes our odds of success quite low, especially since politicians generally consult the interest groups they already agree with rather than being persuaded by them. Having a congressman tirelessly devoted to the singular cause of getting EA legislation enacted, on the other hand, can be powerful.

I think the Tea Party/Squad comparison is quite a bad one, given that those groups focus on hyper-partisan legislation, which, as you say, EA legislation is not. A more apt comparison in my eyes is pork-barrel spending: politicians constantly secure funding for projects in their districts so they can report back to their constituents, and hundreds of these items get folded into omnibus bills to secure the vote of every single legislator. If Carrick Flynn is unwilling to vote for legislation without AI safety funding, and Democrats in the House need his vote due to a narrow majority, that boosts our odds significantly (especially since AI safety is pretty non-partisan and unlikely to be a sticking point in the Senate). I think you focus too much on the short term in this analysis: politicians stick around for a long time, and Flynn could very realistically have a lot of influence in future sessions. The upside here seems massive.
For question (2), first let me comment on negative press. I'm pretty skeptical. Flynn really didn't advertise EA at all during his campaign, and his opponents did not attack him for it (aside from crypto, due to its association with wealth and corruption), and for good reason: it's really hard to get voters to care about esoteric ideas one way or the other. Voters at large hold pretty authoritarian values and don't care about Republicans' attacks on democracy; very liberal whites are basically the only ones who care about climate change. And those are esoteric issues that are already heavily politicized; the idea that AI safety or animal welfare would become a campaign point is, in my view, laughable. What's more, if it ever becomes too much of an issue, we (as a movement) can always decide it's not worth it and stop running candidates. Flynn was a nice trial run that showed us crypto is a weakness, and it's worth having more tests.
Now onto the association with Democrats. EA will always be a left-dominated movement due to the extreme left lean of highly educated people. I do share your concern about Republicans being unwilling to pass EA legislation if it's associated with Democrats. But I think you vastly exaggerate how much more associated with Democrats EA would become if there were a couple of Democratic, EA-associated legislators in office, especially if they never really talk about EA in public. And besides, going back to question (1), I still think there is a (much) higher chance of getting legislation passed if we have an EA in Congress.
I really don't understand how you could read that whole interview and come away seeing SBF as incompetent rather than as a malicious sociopath. I know this is a very un-EA-Forum-like comment, but I think it's necessary to say.