[Just noting that an online AI detector says the above comment is most likely written by a human and then “AI polished”; I strongly prefer that you just write the unpolished version even if you think it’s “worse”.]
Yeah, you’re right. I mostly use AI for translation, but this time I asked it to rewrite some parts that had come out a bit tangled. It said the same things, but expressed them a bit too much in its own way, and afterwards I half-regretted leaving the text like that, too.
But I do also think HIA and reprogenetics are very good interventions even if there were no AGI X-risk, so for anyone who cares about interventions like that, they should be a top cause area.
I mostly agree with this, though less so on whether they should be a top cause area. As long as it stays framed as a marginal investment or a “secondary strategy” against AI catastrophic risk, it seems more justifiable and defensible to the general public, institutions, or people who might join EA. Making it a top cause area would mean going all-in on it. I realize the post was arguing exactly for that, but it seems like a pretty divisive topic even within the EA community itself, and it raises a lot of risks and open questions that other interventions don’t face to the same degree (both reputationally for EA and in terms of actual risks from adopting the technology).
This is largely true, yeah. However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI.
That’s a good point, but even if HIA demonstrated that we don’t really need AGI, it seems unlikely that society as a whole would give up pursuing it if it could get there first. That said, I agree that even a small increase in the chances of avoiding the risk matters a lot given the stakes.
not remotely on track to being solved in time, and pouring more resources into the problem basically doesn’t help at the moment.
I’m not too optimistic about AI alignment. But does that mean you’d estimate, for example, that an extra dollar put into HIA has a better chance of solving the problem than a dollar spent directly on AI alignment? Or even that taking a dollar away from alignment right now to move it to HIA would better reduce AI existential risk? (setting aside the case for just a marginal investment, perhaps?)
Ok, thanks for noting! (It occurred to me after I wrote that that translation would be a major use case and obviously a good one.)
I realize the post was arguing exactly for that, but it seems like a pretty divisive topic even within the EA community itself, and it raises a lot of risks and open questions that other interventions don’t face to the same degree
You’re right, yeah; it certainly wouldn’t make sense for it to immediately jump to being a top priority cause, even if I’m arguing it should maybe be one eventually. If we’re being granular about the computations I’m bidding for, it would be more like “some EAs should do some more investigation into whether this could make sense as a cause area for substantially more investment”.
it seems more justifiable and defensible to the general public, institutions, or people who might join EA.
Interesting. Regarding people who might join EA, I don’t think I quite see it, but I’ll maybe think about it a bit more.
That said, in terms of societal justification, I would want to distinguish between motivations about AGI X-risk, and concrete aims and intentions with reprogenetics. The latter is what I’d propose to collectively work on. That would still involve intelligence amplification, and transparently so, as is owed to society. But the actual plan, and the pitch to society, would be broader. It would be about the whole of reprogenetics. So it would include empowering parents to give their kids an exceptionally healthy, happy life, and so on, and it would include policy, professional, social, and moral safeguards against the major downside risks.
In other words, to borrow from an old CFAR tagline, I’m saying something like “reprogenetics for its own sake, for the sake of X-risk reduction”, if that makes any sense.
In a bunch more detail, I want to distinguish:
(motivation) my background motivation for devoting a lot of effort to HIA and reprogenetics (HIA helping decrease AGI X-risk)
(explanation of motivation) how I describe/explain/justify my background motivation to people / the public / etc.
(concrete aims) the concrete aims/targets that I pursue with my actions within the space of reprogenetics
(explanation of aims) how I describe/explain/justify/commit-to concrete aims
(proposed societal motivation) what I’m putting forward as a vision / motivation for developing and deploying reprogenetics that would be good and would justify doing so
For honesty’s sake, I personally strongly aim to think and communicate so that:
My public explanation of my motivation gives an honest (truthful, open, salient, clear) presentation of my actual motivation.
My public explanation of my concrete aims is likewise honest.
Both my motivations and my concrete aims are clearly presented.
My concrete aims have clear boundaries around them. For example, I might commit to certain actions on the basis of my publicly stated concrete aims.
My concrete aims are consonant with my proposed societal motivation.
This serves multiple purposes. For example:
I want to work out how reprogenetics is good “on its own terms”, and argue that to the public; in particular, I want to argue that it’s good even if you don’t buy into anything about AGI X-risk. This is a stronger position: I want to argue for it, and to expose my position to critique on that basis.
I want to work out and communicate to the public / stakeholders a vision of how society can orient around reprogenetics that is beneficial to ~everyone. This involves working out societal coordination. The flag of [figuring out what to coordinate on and how] would be more about the concrete aims and the proposed societal motivation, and not about my background motivations.
I would suggest that EA could do something similar. That might work differently, or not work at all, in the context of a large social movement. I haven’t thought about that; it’s an interesting question.
it seems unlikely that society as a whole would give up pursuing it if it could get there first.
Yeah, I’m quite uncertain on this point. I’m interested in understanding better the details of why AGI is actually being pursued, and under what conditions various capabilities researchers might walk away from that research. But that’s a whole other intellectual project that I don’t have bandwidth for; I’d strongly encourage someone to pick that one up though!
I’m not too optimistic about AI alignment. But does that mean you’d estimate, for example, that an extra dollar put into HIA has a better chance of solving the problem than a dollar spent directly on AI alignment? Or even that taking a dollar away from alignment right now to move it to HIA would better reduce AI existential risk? (setting aside the case for just a marginal investment, perhaps?)
I do think that the current marginal dollar is much better spent on supporting a global ban on AGI research and/or on HIA, compared to marginal alignment research. That’s definitely a controversial opinion, but I’ll stand on it (and FWIW, not that I should remotely be taken to speak for them, but I would suspect that, for example, Yudkowsky and Soares would agree with this judgement). I’m actually unsure whether I personally think the benefit of HIA is more in “some of the kids might solve alignment” vs. “some of the kids might figure out some other way to make the world safe”; I’ve become quite pessimistic about solving AGI alignment, but that’s kinda idiosyncratic.