I proposed the Nonlinear Emergency Fund and Superlinear as a Nonlinear intern.[1]
I co-founded Singapore’s Fridays For Future (featured on Al Jazeera and the BBC). After arrests and a year of campaigning, Singapore adopted all our demands (Net Zero by 2050, an $80 carbon tax, and fossil fuel divestment).
I developed a student forum with >300k active users and a study site with >25k users. I founded an education reform campaign with the Singapore Ministry of Education.
[1] I proposed both ideas at the same time as the Nonlinear team, so we worked on these together.
> “AI is getting more powerful. It also makes a lot of mistakes. And it’s being used more often. How do we make sure (a) it’s being used for good, and (b) it doesn’t accidentally do terrible things that we didn’t want?”
Very similar to what I currently use!
I’ve been practising AI safety messaging for a while, and I’ve stuck to these principles:
1. Use simple, agreeable language.
2. Refrain from immediately introducing concepts that people hold misconceptions about.
So mine is something like:
1. AI is being given a lot of power and influence.
2. Large tech companies are pouring billions into making AI much more capable.
3. We do not know how to ensure these complex machines respect our human values and don’t cause great harm.
I do agree that this understates the risks associated with superintelligence, but in my experience speaking with laymen, if you introduce superintelligence as the central concept first, the debate becomes “Will AI be smarter than me?”, which provokes a weird kind of adversarial defensiveness. So I prioritise getting people to agree with me before engaging with “weirder” arguments.