I’m a senior software developer in Canada (earning ~US$70K in a good year) who, being late to the EA party, earns to give. Historically I’ve had a chronic lack of interest in making money; instead I’ve developed an unhealthy interest in foundational software that free markets don’t build, because its effects would consist almost entirely of positive externalities.
I dream of making the world better by improving programming languages and developer tools, but as far as I know, no funding is available for this kind of work outside academia. My open-source projects can be seen at loyc.net, core.loyc.net, ungglish.loyc.net and ecsharp.net (among others).
A key implication here is that we need models of how AI will transform the world with many qualitative and quantitative details. Individual EAs working in global health, for example, cannot be expected to broadly predict how the world will change.
My view, having thought about this a fair bit, is that the possible outcomes span an extremely broad range, from human extinction, to various dystopias, to utopia or “utopia”. But there are probably a lot of effects that are relatively predictable, especially in the near term.
Of course, EAs in field X can think about how AI affects X. But they should be able to do this better after learning about whatever broad changes superforecasters (or whoever) can predict.