Luke from Open Phil on net-negative interventions in AI safety (maybe AI governance specifically): https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism#6yFEBSgDiAfGHHKTD
If I had to guess, I would predict Luke is more careful than various other EA leaders (mostly because of Luke's ties to Eliezer). But you can look at the observed behavior of Open Phil/80K/etc., and I don't think they are behaving as carefully as I would endorse with respect to the most dangerous possible topic (besides maybe gain-of-function research, which EA would not fund). It doesn't make sense to write leadership a blank check. But it also doesn't make sense to worry about the 'unilateralist's curse' when deciding whether to buy your friend a laptop!