Interesting, I wasn’t at all thinking about the orthogonality thesis or moral realism when writing that. I was thinking a bit about people who:
1) Start out wanting to do lots of tech or AI work.
2) Find out about AGI and AGI risks.
3) Arrive at a worldview on which doing lots of tech or AI work is the best thing for AGI success.