I think that trying to get safe, concrete demonstrations of risk through research is well worth pursuing (I don’t think you were saying it’s not).
Do you have any thoughts on how people should decide between working on groups at CEA and running a group on the ground themselves?
I imagine a lot of people considering applying could be asking themselves that question, and it doesn’t seem obvious to me how to decide.
To be fair, I think I’m partly making wrong assumptions about what exactly you’re arguing for here.
On a slightly closer read, you don’t actually argue in this piece that it’s as high as 90% - I assumed that because I think you’ve argued for that previously, and I think that’s what “high” p(doom) normally means.
Relatedly, I also think that your arguments for “p(doom|AGI)” being high aren’t convincing to people who don’t share your intuitions, and it looks like you’re relying on those (imo weak) arguments when actually you don’t need to.
I think you come across as over-confident, not alarmist, and I think that hurts how you’re received quite a lot. (We’ve talked a bit about the object level before.) I’d agree with John’s suggested approach.
Makes sense. To be clear, I think global health is very important, and I think it’s a great thing to devote one’s life to! I don’t think it should be underestimated how big a difference you can make improving the world now, and I admire people who focus on making that happen. It just happens that I’m concerned the future might be an even higher-priority thing that many people could be in a good position to address.
On your last point, if you believe that the EV from an “effective neartermism → effective longtermism” career change is greater than that from a “somewhat harmful career → effective neartermism” career change, then the downside of using a “somewhat harmful career → effective longtermism” example is that people might think the “stopped doing harm” part is more important than the “focused on longtermism” part.
More generally, I think your “arguments for the status quo” seem right to me! I think it’s great that you’re thinking clearly about the considerations on both sides, and my guess is that you and I would just weight these considerations differently.
Thank you for sharing these! I’m probably going to try the first three as a result of this post.
Another thing on my mind is that we should beware surprising and suspicious convergence—it would be surprising and suspicious if the same intervention (present-focused WAW work) was best for improving animals’ lives today and also happened to be best for improving animals’ lives in the distant future.
I worry about people interested in animal welfare justifying maintaining their existing work when they switch their focus to longtermism, when actually it would be better if they worked on something different.
Thanks for your reply! I can see your perspective.
On your last point, about future-focused WAW interventions, I’m thinking of things that you mention in the tractability section of your post:
Here is a list of ways we could work on this issue (directly copied from the post by saulius[9]):
“To reduce the probability of humans spreading wildlife in a way that causes a lot of suffering, we could:
Directly argue for caring about WAW if humans ever spread wildlife beyond Earth
Lobby to expand the application of an existing international law that tries to protect other planets from being contaminated with Earth life by spacecraft, so that it also covers planets outside of our solar system.
Continue building EA and WAW communities to ensure that there will be people in the future who care about WAW.
Spread the general concern for WAW (e.g., through WAW documentaries, outreach to academia).”
That is, things aimed at improving (wild) animals’ lives in the event of space colonisation.
Relatedly, I don’t think you necessarily need to show that “interfering with nature could be positive for welfare”, because not spreading wild animals in space wouldn’t be interfering with nature. That said, if we do end up spreading wild animals, then interventions to improve their welfare might look more like interfering with nature, so I agree it could be helpful.
My personal guess is that it would be good if there were a competent organisation that eventually advocated for humanity to care about the welfare of all sentient beings. It would probably have to start by doing a lot of research into people’s existing beliefs and testing what kinds of interventions get people to care. I’m sure there must be some existing research about how to get people to care about animals.
I’m not sure either way how important this would be compared with other priorities, though. I believe some existing organisations think the best way to reduce the expected amount of future suffering is to focus on preventing the scenarios where that suffering is very large. I haven’t thought about it, but that could be right.
For the kinds of reasons you give, I think it could be good to get people to care about the suffering of wild animals (and other sentient beings) in the event that we colonise the stars.
I think that the interventions that decrease the chance of future wild animal suffering are only a subset of all the WAW work you could do, though. For example, figuring out ways to make wild animals suffer less in the present would come under “WAW”, but I wouldn’t expect it to make any difference to the more distant future. That’s because if we care about wild animals in the future, we’ll figure out what to do sooner or later.
So rather than talking about “wild animal welfare interventions”, I’d argue that you’re really only talking about “future-focused wild animal welfare interventions”. And I think making that distinction is important, because I don’t think your reasoning supports present-focused WAW work.
I’d be curious what you think about that!
If I understand correctly, you put 0.01% on artificial sentience in the future. That seems overconfident to me—why are you so certain it won’t happen?
I’ve only skimmed this, but just want to say I think it’s awesome that you’re doing your own thinking trying to compare these two approaches! In my view, you don’t need to be “qualified” to try to form your own view, which depends on understanding the kinds of considerations you raise. This decision matters a lot, and I’m glad you’re thinking carefully about it and sharing your thoughts.
I interpreted the title of this post as a bill banning autonomous AI systems from paying people to do things! I did think it was slightly early.
Would you be eligible for the graduate visa? https://www.gov.uk/graduate-visa
If so, would that meet your needs?
(I’ve just realised this is close to just a rephrasing of some of the other suggestions. Could be a helpful rephrasing though.)
The Superalignment team’s goal is “to build a roughly human-level automated alignment researcher”.
Human-level AI systems sound capable enough to cause a global catastrophe if misaligned. So is the plan to make sure that these systems are definitely aligned (if so, how?), or to make sure that they are deployed in such a way that they couldn’t take catastrophic actions even if they wanted to (if so, what would that look like?)?
Thanks David, that’s just the kind of reply I was hoping for! Those three goals do seem to me like three of the most important. It might be worth adding that context to your write-up.
I’m curious whether there’s much you did specifically to achieve your third goal (inspiring people to take action based on high-quality reasoning) beyond just running an event where people might talk to others who are doing that. I wouldn’t expect so, but I’d be interested if there was.
Thanks for writing this up! I’d be interested if you had time to say more about what you think the main theory of change of the event was (or should have been).
Is this a problem? It seems fine to me, because the meaning is often clear, as in two of your examples, and I think it adds value in those contexts. And if it’s not clear, it doesn’t seem like a big loss compared to the counterfactual of having none of these types of vote available.