Investment professional for most of my career (equity research, managing proprietary investments for a financial group, private equity and VC advisory, investment algorithms). In my free time, I founded an educational NGO and ran it for 13 years.
MBA (Oxon), MA (sociology).
This timeline is very interesting, and it raises a question beyond just *when*: what happens when AI becomes capable of independent thought and goal-setting? We're focused on mitigating risks, which is crucial. However, we should also consider the moral implications of creating beings capable of:
a) thinking independently, beyond merely fulfilling human-designed requests
b) setting their own goals
How can we ensure a future where humans and advanced AI can co-exist, minimising suffering for both and maximising the potential benefits of collaboration, from scientific discovery to solving global challenges?