to move your understanding of some EA and rationality concepts to the gut level
Do you have any particular concepts in mind that you think I might be missing?
Presumably neither of us knows most of the things that are known about EA and rationality… You probably know more about EA than rationality, more about animals than tech risks, and more about EA theory than EA orgs? One insight that I picked up in my travels is that in a certain sense, asteroid detection is the most ‘robust’ cause, since we know a lot more about how to do it than about how to intervene in a complex human system like global poverty. It makes for an interesting meditation on whether we should pivot to asteroid deflection, whether we want ‘robustness’, and what people actually mean by ‘robustness’.
Seems like another uncharitable implicit argument against the EAs known for favouring robustness (GiveWell, the Vancouverites, people skeptical of leafleting, metacharities, and x-risk on those grounds). I’ve heard experts say the most important parts of asteroid detection are fully funded. If they weren’t, people would generally accept funding them as a priority.
I’m not trying to say folks who espouse robustness are fools; until I encountered it, I had not thought of this line of reasoning myself. As I understand it, the point is that sometimes the connotations of such words lead in different directions than more careful thought would. Yes, >1km asteroid detection is well covered now. So is the next thing to move on to asteroid deflection? You can see how the argument would run: since physical annihilation is so final and so well understood, it wins on robustness grounds...