I think of EA as a cluster of values and related actions that people can hold and practice to different extents: for instance, caring about social impact, seeking comparative advantage, thinking about long-term positive impacts, and being concerned about existential risks, including AI. He touched on all of those.
It’s true that he doesn’t mention donations, but I don’t think that discounts his alignment with EA in other ways.
Thanks for the input!
Useful to know he might not be genuine though.