Right, but this requires believing the future will be better if humans survive. I take OP's point as saying she doesn't agree, or is at least skeptical.
I think the post isn't clear between the stances "it would make the far future better to end factory farming now" and "the only path by which the far future is net positive requires ending factory farming". More generally, it's unclear how much of the claim that we should try to end factory farming now is motivated by "if we can't do that, we shouldn't attempt longtermist interventions because they will probably fail" vs. "if we can't do that, we shouldn't attempt longtermist interventions because they are less valuable, because the EV of the future is worse".
Anyway, working to cause humans to survive requires (or at least, is probably motivated by) thinking the future will be better that way. Not all longtermism is about that (see e.g. s-risk mitigation), and those parts are also relevant to the hinge of history question.
I think, again, the point OP is trying to make is that we have very little proof of concept of getting people to go against their own best interests. So if doing what's right isn't in the AI companies' best interest, OP wouldn't believe we can get them to do what we think they should.
I am saying aligning AI is in the best interests of AI companies, unlike the situation with ending factory farming and animal-ag companies, which is a relevant difference. Any AI company that could align their AIs once and for all for $10M would do it in a heartbeat. I don't think they will do nearly enough to align their AIs given the stakes (so in that sense, their incentives are not humanity's incentives), but they do want to, at least a little.
Yeah, my original framing was a little confused wrt the "vs." dichotomy you present in paragraph one, good shout. I guess I actually meant a little bit of each, though. My interpretation of the post is basically: (1) insofar as we need to defeat powerful people or thought patterns, we (EA, or humans generally) haven't proven we can, and (2) it's somewhat likely we will need to do this to create the world we want.
I.e., given that future s-risk efforts are probably not going to be successful, current extinction-risk efforts are therefore also less useful.
"I am saying aligning AI is in the best interests of AI companies"
If you define it in a specifically narrow AI-takeover way, yes. Making sure AI doesn't allow a dictator to take power, or gradual disempowerment scenarios, not really. Nor is it, to the extent that ensuring alignment requires slowing down progress.
Anyway, I'm mostly in agreement with your points/worldview. I definitely think we should be focusing on AI right now, and I think our goals and those of the AI companies/US gov are sufficiently aligned atm that we aren't swimming upstream. But I resonate with OP that it would alleviate some concerns if we actually racked up some wins in hard-fought, politically unpopular battles before trying to steer the whole future.
It certainly seems possible (>1%) that within the next two US administrations (current plus next), AI safety becomes so toxic that all the EA-adjacent AI safety people in the government get purged and it stops listening to most AI safety researchers. If this co-occurs with some sort of AI nationalization, most of our theory of change is cooked.