See explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
Note: I am no longer part of EA because of the community’s/philosophy’s overreaches. I still post here about AI safety.
I spent time digging into Uganda Community Farm’s plans last year, and ended up becoming a regular donor. From reading the write-ups and later asking Anthony about the sorghum training and grain-processing plant projects, I came to see him as thoughtful and strategic about actually relieving poverty in the Kamuli & Buyende region.
Here are short explainers worth reading:
https://www.ugandafarm.org/sorghum/
https://www.ugandafarm.org/plant-details/
UCF focuses on training farmers and giving them the materials and tools they need to build up their own incomes, which is a more targeted approach than simply transferring money (though differences in local income levels need to be accounted for too).
Personally, I think the EA community has often focused on measuring and mapping out the consequences of global poverty interventions from afar, and less on enabling charity entrepreneurs on the ground who have first-hand contextual knowledge of what’s holding their community back. My sense is that robust approaches will tend to draw on both.