I’m not sure I have as good a handle on the broader EA ecosystem as others, so consider my thoughts provisional, but I’d suggest adding:
A special subset of low-status blindness: there’s a bias toward more conventional projects that are easy to understand, since it’s easier to get affirmation from others when they understand what you’re working on. (Lifted from Jaan Tallinn’s Singularity Summit 2011 talk.)
I suspect EAs may prefer going down the nonprofit route, which seems very noble, but more long-term utility may often be produced by starting a for-profit business. For example, Elon Musk is one of the most effective EAs on the planet precisely because he chose the for-profit route.
I’m not sure whether to add basic research to the list or not; the QALY is a pretty creaky foundation, but I grant there’s a lot of uncertainty about how to improve it.
Hi Nate,
Thanks for the AMA. I’m most curious what MIRI’s working definition is of what has intrinsic value. MIRI’s core worry has been that it’s easy to get the AI value problem wrong, i.e., to build AIs that don’t value the right things. But how do we humans get the value problem right? What should we value?
Max Tegmark alludes to this in “Friendly Artificial Intelligence: the Physics Challenge”:
So I have two questions: (1) Do you see this (i.e., what Tegmark is speaking about above) as part of MIRI’s bailiwick? (2) If so, do you have any thoughts or research directions you can share publicly?