Epistemic status: Raw thoughts that I’ve just started to think about. I’m highly uncertain about a lot of this.
Some works that have inspired my thinking recently:
Ben Garfinkel on scrutinising classic AI risk arguments − 80,000 Hours
Unpacking Classic Arguments for AI Risk
Reframing Superintelligence: Comprehensive AI Services as General Intelligence (I’ve read a little bit of this)
Reading/listening to these works has caused me to reevaluate the risks posed by advanced artificial intelligence. While AI risk is currently the top cause in x-risk reduction, I don’t think this is necessarily warranted. I think the CAIS model is a more plausible description of how AI is likely to evolve in the near future, but I haven’t read enough to assess whether it makes AI more or less of a risk (to humanity, civilization, liberal democracy, etc.) than it would be under the classic “Superintelligence” model.
User research case study: Developing Effective Altruism in Asia
I’m strongly interested in improving diversity in EA, and I think this is an interesting case study about how one could do that. Right now, it seems like there is a core/middle/periphery of the EA community where the core includes people and orgs in countries like the US, UK, and Australia, and I think the EA movement would be stronger if we actively tried to bring more people in more countries into the core.
I’m also interested in how we could use qualitative methods like those employed in user experience research (UXR) to solve problems in EA causes. I’m familiar enough with design thinking (the application of design methods to practical problems) that I could do some of this given enough time and training.
Have you read “Weapons of Math Destruction” or “Invisible Women”? Both are about how bias among mostly white, mostly well-off, mostly male developers leads to unfair but self-reinforcing AI systems.