I think poverty reduction can decrease the risk of unaligned AGI through two channels:
Poverty reduces (or is at least associated with lower levels of) patience, trust, and support for global cooperation. The evidence on these points is of varying quality, as I discuss in the relevant sections above, but if it holds, societies with less poverty will be more cautious around existential risks like AGI.
More egalitarian economies expand access to markets, and since lots of AI is trained on transaction data, AI systems trained in those economies will better represent the diversity of humanity. This could in turn make them more robust.
I admit these are somewhat speculative, and I'd like to explore higher-quality causal chains, as well as other channels like whether we're missing out on talent to address these risks, as you suggest (though that talent could go into AI in general without necessarily working on alignment).
I think there are then three pieces from EA to UBI:
Establishing poverty as a broad longtermist cause area. Per the above, direct evidence here is somewhat sparse, so more perspectives could improve the precision of our estimates of the relationship.
Establishing UBI as an effective antipoverty policy. I think the case here is pretty strong, and it allows us to define UBI as a policy direction; i.e., steps toward cash benefits over in-kind benefits, and toward universality and away from targeting and conditionality, will generally be positive in terms of poverty reduction and cost (other than for public health, per GiveWell).
Establishing tax and benefit policy research and advocacy as an EA cause area. Scale is clearly high, neglectedness depends on the scope (tax policy in general is not at all neglected, but I think technology for jointly analyzing tax and benefit policy is pretty neglected), and tractability is probably medium-low (people don’t like taxes, and tax reform is realistically required to substantially improve benefits).