Epistemic status: Raw thoughts that I've just started to think about. I'm highly uncertain about a lot of this.
Some works that have inspired my thinking recently:
- Ben Garfinkel on scrutinising classic AI risk arguments (80,000 Hours podcast)
- Unpacking Classic Arguments for AI Risk
- Reframing Superintelligence: Comprehensive AI Services as General Intelligence (I've read a little bit of this)
Reading/listening to these works has caused me to reevaluate the risks posed by advanced artificial intelligence. While AI risk is currently the top cause in x-risk reduction, I don't think this prioritization is necessarily warranted. I think the CAIS model is a more plausible description of how AI is likely to evolve in the near future, but I haven't read enough to assess whether it makes AI more or less of a risk (to humanity, civilization, liberal democracy, etc.) than it would be under the classic "Superintelligence" model.
User research case study: Developing Effective Altruism in Asia
I'm strongly interested in improving diversity in EA, and I think this is an interesting case study of how one could do that. Right now, it seems like there is a core/middle/periphery structure in the EA community, where the core includes people and orgs in countries like the US, UK, and Australia. I think the EA movement would be stronger if we actively tried to bring more people from more countries into the core.
I'm also interested in how we could use qualitative methods like those employed in user experience research (UXR) to solve problems in EA causes. I'm familiar enough with design thinking (the application of design methods to practical problems) that I could do some of this given enough time and training.
Have you read "Weapons of Math Destruction" or "Invisible Women"? Both are about how bias among mostly white, mostly well-off, mostly male developers leads to unfair but self-reinforcing AI systems.