Thanks for this post. I’d like to see more about this in the future. I admit I’m very pro-UBI, so I’m a bit biased.
I don’t quite follow the associations you make with global catastrophic risks, though; for example, it seems unlikely to me that a global UBI would significantly decrease the risk of unaligned AGI. On the other hand, perhaps with a UBI we wouldn’t have almost 10% of humanity living in extreme poverty (or 37 million in the US), which might help add a lot more smart people to those cause areas. Is this consistent with your case?
Also, it’s arguable that inequality and poverty imply a significant decrease in the expected welfare of future generations under uncertain growth (justifying a lower social discount rate, or SDR). I sometimes think some EAs underestimate the problem of inequality (how it affects social stability and welfare measures) and the importance of having effective redistributive policies. Do you think this makes sense?
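To spell out the mechanism I have in mind: a standard formalization is the Ramsey discounting formula extended for uncertain growth (this assumes CRRA utility and normally distributed log consumption growth). With pure rate of time preference $\delta$, elasticity of marginal utility $\eta$ (which also encodes aversion to inequality), and log consumption growth with mean $g$ and variance $\sigma^2$:

$$r = \delta + \eta g - \tfrac{1}{2}\eta^{2}\sigma^{2}$$

The last term is precautionary: more uncertainty about future growth lowers the SDR, and it scales with $\eta^{2}$, so the more weight we give to the worse-off, the stronger that effect.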
It sounds like you’re implying that 10% of the US lives in “extreme poverty,” mirroring the 10% figure for the world, but this isn’t the case. The article you cite gives 37 million as the number below the US poverty line, which is not the same thing. (Possibly you know this already, but I thought the sentence was a bit confusing for onlookers.)
You are right. I rephrased it to avoid this misunderstanding. Thank you very much.
I think it still suggests that 37 million people in the US are in extreme poverty.
I’d also suggest using the Supplemental Poverty Measure for US poverty. Unlike the Official Poverty Measure, it incorporates taxes, non-cash benefits, and variation in local housing costs. The current SPM poverty rate is 9.1%.
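To make the mechanical difference concrete, here’s a toy sketch (made-up incomes and thresholds, simplified resource definitions; the real SPM is more detailed) of how an SPM-style measure can classify households differently from an OPM-style one:

```python
# Toy illustration (made-up numbers): how an SPM-style poverty measure can
# classify households differently from an OPM-style one. Real SPM resource
# and threshold definitions are more detailed; this only shows the mechanics.

households = [
    # cash income, taxes paid, non-cash benefits, local housing cost factor
    {"cash": 14_000, "taxes": 500,   "noncash": 6_000, "housing_factor": 0.9},
    {"cash": 20_000, "taxes": 1_000, "noncash": 0,     "housing_factor": 1.3},
    {"cash": 30_000, "taxes": 3_000, "noncash": 1_000, "housing_factor": 1.0},
]

OPM_THRESHOLD = 18_000      # hypothetical cash-income poverty line
SPM_BASE_THRESHOLD = 18_000  # hypothetical base threshold before housing adjustment

def opm_poor(h):
    # OPM-style: pre-tax cash income against a fixed national threshold.
    return h["cash"] < OPM_THRESHOLD

def spm_poor(h):
    # SPM-style: cash plus non-cash benefits minus taxes, against a
    # threshold adjusted for local housing costs.
    resources = h["cash"] + h["noncash"] - h["taxes"]
    threshold = SPM_BASE_THRESHOLD * h["housing_factor"]
    return resources < threshold

opm_rate = sum(map(opm_poor, households)) / len(households)
spm_rate = sum(map(spm_poor, households)) / len(households)
print(f"OPM-style rate: {opm_rate:.0%}, SPM-style rate: {spm_rate:.0%}")
```

Note that the first household escapes poverty under the SPM-style measure because of its non-cash benefits, while the second falls into it because of high local housing costs.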
I think poverty reduction can decrease the risk of unaligned AGI through two channels:
Poverty reduces (or is at least associated with lower levels of) patience, trust, and support for global cooperation. The evidence on these points is of varying quality, as I discuss in the relevant sections above, but if it holds, societies with less poverty will be more cautious around existential risks like AGI.
More egalitarian economies expand access to markets, and since a lot of AI is trained on transaction data, models trained in those economies will better represent the diversity of humanity. This could in turn make them more robust.
I admit these are somewhat speculative, and I’d like to explore higher-quality causal chains, as well as other channels, like whether we’re missing out on talent to address these risks, as you suggest (though that talent could go into AI in general without necessarily working on alignment).
I think there are then three pieces to the case from EA to UBI:
Establishing poverty as a broad longtermist cause area. Per the above, direct evidence here is somewhat sparse, so more perspectives could sharpen the precision of the estimated relationship.
Establishing UBI as an effective antipoverty policy. I think the case is pretty strong here, and it allows us to define UBI as a policy direction; i.e., steps toward cash benefits over in-kind benefits, and toward universality away from targeting and conditionality, will generally be positive in terms of both poverty reduction and cost (other than public health, per GiveWell). See the toy sketch after this list.
Establishing tax and benefit policy research and advocacy as an EA cause area. Scale is clearly high; neglectedness depends on the scope (tax policy in general is not at all neglected, but I think technology for jointly analyzing tax and benefit policy is pretty neglected; see the second sketch below); and tractability is probably medium-low (people don’t like taxes, and tax reform is realistically required to substantially improve benefits).
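Here’s the toy sketch referenced in point 2: an equal-cost comparison of a targeted transfer with incomplete take-up vs. a universal payment. The numbers are invented; the only empirically grounded ingredient is that means-tested programs have incomplete take-up (exactly who misses out here is an assumption for illustration):

```python
# Toy comparison (not a simulation of any real policy): an equal-cost
# targeted transfer vs. a universal payment, where one eligible person
# fails to claim the targeted benefit. Incomplete take-up in means-tested
# programs is well documented; who misses out here is an assumption.

POVERTY_LINE = 15_000
BUDGET = 80_000
incomes = [0, 5_000, 10_000, 15_000, 25_000, 40_000, 60_000, 100_000]

def headcount(incs):
    # Share of people below the poverty line.
    return sum(i < POVERTY_LINE for i in incs) / len(incs)

def poverty_gap(incs):
    # Total dollars needed to bring everyone up to the poverty line.
    return sum(max(0, POVERTY_LINE - i) for i in incs)

# Targeted: budget split among those below the line who actually claim.
# Assume the person with zero income fails to claim (e.g., paperwork
# barriers, no fixed address).
claimants = [i for i in incomes if i < POVERTY_LINE and i != 0]
targeted = [i + BUDGET / len(claimants) if i in claimants else i for i in incomes]

# Universal: same budget split evenly across everyone, full take-up.
universal = [i + BUDGET / len(incomes) for i in incomes]

for name, incs in [("baseline", incomes), ("targeted", targeted), ("universal", universal)]:
    print(f"{name:9s} headcount={headcount(incs):.0%} gap=${poverty_gap(incs):,.0f}")
```

In this setup the headcount rates tie, but the universal payment leaves a much smaller poverty gap at the same cost, purely because it reaches the person targeting misses.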
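And the second sketch, on what I mean by jointly analyzing tax and benefit policy: effective marginal tax rates (EMTRs) emerge from taxes and benefit phase-outs together, so analyzing either in isolation understates the rate households actually face. All parameters here are hypothetical:

```python
# Minimal sketch (hypothetical parameters, not any real tax/benefit system):
# effective marginal tax rates (EMTRs) come from taxes and benefit
# phase-outs jointly, so analyzing either alone understates them.

TAX_RATE = 0.20        # flat income tax (assumed)
BENEFIT_MAX = 8_000    # maximum benefit (assumed)
PHASE_OUT_RATE = 0.50  # benefit reduction per dollar earned (assumed)

def net_income(gross):
    tax = TAX_RATE * gross
    benefit = max(0.0, BENEFIT_MAX - PHASE_OUT_RATE * gross)
    return gross - tax + benefit

def emtr(gross, delta=100.0):
    # Share of an extra $100 of earnings lost to taxes plus benefit withdrawal.
    return 1 - (net_income(gross + delta) - net_income(gross)) / delta

for gross in [5_000, 15_000, 20_000]:
    print(f"gross=${gross:,}: EMTR={emtr(gross):.0%}")
```

With a 20% statutory tax and a 50% phase-out, households in the phase-out range face a 70% EMTR; this interaction is why substantially improving benefits realistically requires analyzing them jointly with tax reform.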