Thank you!
Norman Borlaug Stan
If you’re interested in more resources to help you decide, may I recommend https://80000hours.org/?
It has a pretty good set of decision-making tips for someone like you. They also occasionally offer personalized career advice, which might be helpful.
I find it a bit frustrating that most critiques of AI safety work, or of longtermism in general, seem to start by constructing a strawman of the movement. I’ve read a ton of writing by self-described longtermists, and would consider myself one, and I don’t think I’ve ever heard anyone seriously propose reducing existential risk by .0000001 percent instead of lifting a billion people out of poverty. I’m sure some people have, but it’s certainly not a mainstream view in the community.
And as others have rightly pointed out, there’s a strong case to be made for caring about AI safety or engineered pandemics or nuclear war even if all you care about are the people alive today.
The critique also does the “guilt by association” thing, trying to make the movement look bad by linking it to people the author knows are unpopular with their audience.
Hello everyone,
I have a quick question: if I want to have the maximum impact on mitigating climate change, what’s the best use of a small monthly donation? I was planning to pay my utility company a little extra every month for renewable energy, but I figured there might be a more effective use of that same money. Any suggestions?
Wow, this is a well-written, well-researched post. Thanks for putting it together!
Factors that would lend themselves to AI restraint
Maintaining the constant percentage improvements of Moore’s law has gotten more and more expensive. State-of-the-art chip fabs now cost many billions of dollars to build.
Preventing new AI discoveries from being published might be in the near-term interest of countries that view a lead in AI progress as a strategic advantage. The same can be said of companies.
Generative models that rely on open-source datasets like those from Common Crawl could run into copyright issues if they start encroaching on the markets of the artists and writers whose work was used to train them.
Factors working against restraint
The widespread distribution of computational resources makes it difficult to prevent near-term progress on AI.
Many countries (including China and the US) view AI as relevant to their strategic political dominance.
The general public does not yet take the idea of dangerous AI seriously, aside from a narrow focus on AI bias, which does not seem particularly relevant to the most concerning aspects of AGI. It will be very difficult to rally public support for legislation unless this changes.
The short-term benefits of Moore’s law continuing are widespread. If people can’t buy a better iPhone next year because we banned new fabs, they are going to be upset.
Possible improvements to the post
It would have been nice to see some in-text examples of the ban-enabling features that AI lacks. I clicked on the links you provided, but there was too much material for it to be worth my time to go through it all.