If OpenPhil’s allocation is really so dependent on moral weight numbers, you should be spending significant money on research in this area, right? Are you doing this? Do you plan on doing more of this given the large divergence from Rethink’s numbers?
I mean, he said “the part I most regret was filing for bankruptcy” (i.e., the point at which he stopped hurting people and acknowledged his poor actions), said that he has spent his entire career lying about his ethical beliefs, and in general showed absolutely no sign of remorse for the people he had hurt. This is borderline-indistinguishable from the logic that horrific dictators use to justify themselves, and he did it all while being a well-known figure in a movement built around doing good! I don’t know if sociopath is exactly the right word, but it is definitely a sign of someone who doesn’t care about other human beings.
Shouldn’t we know better than to update in retrospect based on one highly uncertain datapoint?
We have a number of political data people in EA who thought donating to Flynn was a good investment early in the campaign cycle (later on, I heard they no longer thought it was worth it). There was also good reason to believe Flynn could be high-impact if elected. Let’s not overthink this.
There is a big difference between working in policy institutions and working in politics/campaigning directly. By working in Republican policy institutions (e.g., think tanks), you can have enormous impact that you couldn’t have while working under Democrats. By working on Republican campaigns, you are contributing (non-negligibly, given the labor shortage you describe!) to the fall of US democracy and to a party that has much worse views on almost every subject under most moral frameworks.
For someone with a reasonably clear picture of the moral impacts of policy, working under Republicans is also enormously emotionally difficult. Valuable, yes, but not for the faint of heart.
The difference, from my perspective, is that mixing romantic and work relationships in a poly context causes much more widespread damage. In monogamous relationships, the worst that can happen is a single incident involving two or so people, which can be dealt with in a contained way. In poly relationships, where a relationship web can span a large part of an organization, the result can be very large harm to the company and to potential future employees. I, frankly, would feel very uncomfortable at an organization where most of my coworkers were in a polyamorous relationship.
I think a better way of looking at this is that EA is very inviting of criticism but not necessarily that responsive to it. There are like 10 million critiques on the EA Forum, most with serious discussion and replies. Probably very few elicit actual change in EA. (I am of the opinion that most criticism just isn’t very good, and that there is a reason it hasn’t been adopted, but obviously this is debatable).
Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn’t give specifics on his policy positions, this seems like something he is particularly interested in.
I know politics is discouraged on the EA Forum, but I thought I would post this to say: EA should really be preparing for a Trump presidency. He’s up in the polls and IMO has a >50% chance of winning the election. Right now politicians seem relatively receptive to EA ideas; this may change under a Trump administration.
I would caution people against reading too much into this. If you poll people about a concept they know nothing about (“AI will cause the end of the human race”), you will always get answers that don’t reflect real belief. These answers are very easily swayed; they don’t cause people to take action the way real beliefs would, and they are not going to affect how people vote or which elites they trust.
Vox’s Future Perfect is pretty good for this!
Altman, like most people with power, doesn’t have a totally coherent vision for why him gaining power is beneficial for humanity, but can come up with some vague values or poorly-thought-out logic when pressed. He values safety, to some extent, but is skeptical of attempts to cut back on progress in the name of safety.
The standard person-affecting view doesn’t solve the Repugnant Conclusion.
Not to be rude but this seems like a lot of worrying about nothing. “AI is powerful and uncontrollable and could kill all of humanity, like seriously” is not a complicated message. I’m actually quite scared if AI Safety people are hesitant to communicate because they think the misinterpretation will be as bad as you are saying here; this is a really strong assumption, an untested one at that, and the opportunity cost of not pursuing media coverage is enormous.
The primary purpose of media coverage is to introduce the problem, not to immediately push for the solution. I stated ways that different actors taking the problem more seriously would lead to progress; I’m not sure that a delay is actually the main impact. On this last point, note that (as I expected when it was first released) the main effect of the FLI letter is that a lot more people have heard of AI Safety and people who have heard of it are taking it more seriously (the latter based largely on Twitter observations), not that a delay is actually being considered.
I don’t actually know where you’re getting “these issues in communication...historically have led to a lot of x-risk” from. There was no large public discussion about nuclear weapons before their initial use (and afterwards we settled into the most reasonable approach there was for preventing nuclear war, namely MAD), nor about gain-of-function research. The track record of “tell people about problems and they become more concerned about those problems”, on the other hand, is very good.
(also: premature??? really???)
This might be a dumb question, but shouldn’t we be preserving more elementary resources to rebuild a flourishing society? Current EA is kind of only meaningful in a society with sufficiently abundant resources to go into nonprofit work. It feels like there are bigger priorities in the case of sub-x-risk.
Really good write-up!
I find the proportion of people who have heard of EA even after adjusting for controls to be extremely high. I imagine some combination of response bias and just looking up the term is causing overestimation of EA knowledge.
Moreover, given that I expect EA knowledge to be extremely low in the general population, I’m not sure what the point of doing these surveys is. It seems to me you’re always fighting against various forms of survey bias that will swamp any real signal. Doing surveys of specific populations seems a more productive way of measuring knowledge.
I’ll update my priors a bit, but I remain skeptical.
This consideration is something I had never thought of before and blew my mind. Thank you for sharing.
Hopefully I can summarize it (assuming I interpreted it correctly) in a different way that might help people who were as befuddled as I was.
The point is that, when you assign probabilistic weight to two different theories of sentience, you have to assign units to sentience under each theory in order to compare them.
Say you have two theories of sentience that are similarly probable: one where sentience depends on intelligence and one where it depends on brain size. Call their units IQ-qualia and size-qualia. If you assign fruit flies a moral weight of 1, you are implicitly declaring a conversion rate of (to make up some random numbers) 1000 IQ-qualia = 1 size-qualia. If, however, you assign elephants a moral weight of 1, you implicitly declare a conversion rate of (again made up) 1 IQ-qualia = 1000 size-qualia, because elephant brains are much larger than fruit flies’ even though elephants are not correspondingly smarter. These two conversion rates will give you very different numbers for the moral weight of humans (or, as Shulman was saying, of each other).
Rethink Priorities assigned humans a moral weight of 1, and thus assumed a conversion rate between theories that made for a world whose sentience is dominated by small animals.
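To make this concrete, here is a minimal Python sketch of the anchoring effect. All of it is illustrative: the per-species qualia scores and the 50/50 split between the two theories are numbers I made up, not anything from Rethink’s actual model.

```python
# Toy model: two theories of sentience (IQ-qualia vs. size-qualia), mixed
# 50/50. All species scores below are made up purely for illustration.
species = {
    "fruit fly": (1, 1),        # (IQ-qualia, size-qualia)
    "elephant": (10, 100_000),  # far bigger brain, only somewhat smarter
    "human": (200, 1_000),
}

def moral_weight(name, anchor, p_iq=0.5, p_size=0.5):
    """Expected moral weight of `name` when `anchor` is normalized to 1
    under each theory before mixing the theories by probability."""
    iq, size = species[name]
    anchor_iq, anchor_size = species[anchor]
    return p_iq * (iq / anchor_iq) + p_size * (size / anchor_size)

for anchor in ("fruit fly", "elephant"):
    print(f"Anchoring {anchor} at 1: human moral weight = "
          f"{moral_weight('human', anchor):,.3f}")
# Anchoring fruit fly at 1: human moral weight = 600.000
# Anchoring elephant at 1: human moral weight = 10.005
```

Same underlying scores, but the human’s moral weight differs by nearly two orders of magnitude depending solely on which species gets anchored at 1.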
With all due respect I think people are reading way too far into this, Eliezer was just talking about the enforcement mechanism for a treaty. Yes, treaties are sometimes (often? always?) backed up by force. Stating this explicitly seems dumb because it leads to posts like this, but let’s not make this bigger than it is.
I really don’t understand how you could have read that whole interview and see SBF as incompetent rather than a malicious sociopath. I know this is a very un-EA-forum-like comment, but I think it’s necessary to say.