Rationalists do a lot of argument mapping under the label double crux (and similar derivative names, such as crux-mapping). I would even argue that the double crux approach to argument mapping is better than the standard one, and that rationalists integrate explicit argument mapping into their lives more than likely any other identifiable group.
Also: more argument mapping / double-cruxing / … is currently unlikely to create more clarity around AI safety, because we are constrained by Limits to Legibility, not by our ability to map arguments.