Dear EAs,
After the interview with Tristan Harris, Rob asked EAs to do their own research on aligning recommender systems / short-term AI risks. A few weeks ago I already posted a short question about donating against these risks. Since then I have listened to the 2.5-hour podcast on the topic and read multiple articles about it on this forum (https://forum.effectivealtruism.org/posts/E4gfMSqmznDwMrv9q/are-social-media-algorithms-an-existential-risk, https://forum.effectivealtruism.org/posts/xzjQvqDYahigHcwgQ/aligning-recommender-systems-as-cause-area, https://forum.effectivealtruism.org/posts/ptrY5McTdQfDy8o23/short-term-ai-alignment-as-a-priority-cause)
I would love to collaborate with others on collecting more in-depth arguments, including some back-of-the-envelope calculations / Fermi estimates of the possible scale of the problem. I've spent some hours creating a structure, including some next steps. In particular, we should focus on the first argument, since the mental-health argument looks very shaky after recent research.
I haven't started reading the mentioned papers yet (except for the abstracts). We can define multiple work streams and divide them between collaborators. My expertise is mostly in back-of-the-envelope calculations / Fermi estimates, given my background as a management consultant. I am especially looking for people who like to / are good at assessing the value of scientific papers.
Please drop a message below or send an e-mail to jan-willem@effectiefaltruisme.nl if you want to participate.
Arguments in favour of aligning recommender systems as a cause area

- Social media causes political polarization
  - Polarization causes less international collaboration
    - Is this proven by a paper?
    - Brexit: what are the odds that social media was decisive in the outcome?
    - Trump: what are the odds that social media was decisive in the outcome?
    - Less appetite for the EU
  - This increases other (x-)risks:
    - Extreme climate change scenarios
      - Estimate Trump's impact on climate (what is the chance that an event like this causes certain amplifiers of climate change to push us towards extreme scenarios?)
        - Direct
        - Indirect, through other countries doing less
      - Show that more, not less, international collaboration is needed to decrease the probability of extreme scenarios
    - Nuclear war (increasing Sino-American tensions)
    - AI safety risks from misuse (through Sino-American tensions)
    - (Engineered) pandemics
      - Can we already calculate extra deaths caused by misinformation?
      - Increased chances of biowarfare
    - Lower economic growth because of trade barriers / protectionist measures
      - Calculate the extra economic growth from the EU, to estimate what it would cost if the EU falls apart
      - Convert to possible additional QALYs?
  - Next steps here:
    - Find papers for all the relevant claims
    - Look at Stefan_Schubert's counterarguments below (https://forum.effectivealtruism.org/posts/E4gfMSqmznDwMrv9q/are-social-media-algorithms-an-existential-risk)
    - Synthesize findings from papers showing that social media drives political polarisation
    - Find papers showing that political polarisation leads to less international cooperation
    - Look for cases (Trump / Brexit) where social media is blamed, and estimate the chance that social media actually flipped the outcome
    - Make back-of-the-envelope calculations / Fermi estimates for all relevant negative consequences
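As a sketch of what such a Fermi estimate could look like, here is a minimal Python version of the election-flipping chain (social media decisive → less international cooperation → added existential risk). Every number below is an illustrative placeholder to be replaced by the research we collect, not an evidence-based figure:

```python
# Hedged Fermi sketch: expected marginal x-risk contribution of
# recommender-driven polarization via one political event (e.g. Brexit/Trump).
# ALL inputs are placeholders, not research-backed estimates.

p_social_media_decisive = 0.1    # P(social media flipped the outcome)
p_less_cooperation = 0.5         # P(outcome materially reduces intl. cooperation)
marginal_xrisk_increase = 1e-4   # added x-risk this century given less cooperation

# Multiply through the chain to get the expected added existential risk
expected_xrisk_contribution = (
    p_social_media_decisive * p_less_cooperation * marginal_xrisk_increase
)
print(f"Expected added x-risk from this event: {expected_xrisk_contribution:.2e}")
```

The point of the sketch is the structure, not the numbers: each next step above corresponds to pinning down one of the three factors with papers or base rates.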
- Social media causes declining mental health
  - This doesn't seem to be the case, based on a short review
  - This one also looks interesting: https://docs.google.com/document/d/1w-HOfseF2wF9YIpXwUUtP65-olnkPyWcgF5BiAtBEy0/edit#
  - Next steps here:
    - Countercheck these papers
    - If we do find evidence that social media causes mental health to decline: calculate the lost QALYs (see https://forum.effectivealtruism.org/posts/xzjQvqDYahigHcwgQ/aligning-recommender-systems-as-cause-area#Scale for a first estimate)
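For the QALY calculation, a minimal back-of-the-envelope sketch could look like the following. All three inputs are hypothetical placeholders standing in for numbers the literature review would have to supply:

```python
# Hedged Fermi sketch: annual QALYs lost IF social media worsens mental health.
# ALL inputs are illustrative placeholders, not evidence-based figures.

heavy_users = 1e9                  # heavy social-media users worldwide
p_affected = 0.05                  # fraction with an attributable mental-health decline
qaly_loss_per_affected_year = 0.1  # average QALY weight lost per affected person-year

qalys_lost_per_year = heavy_users * p_affected * qaly_loss_per_affected_year
print(f"QALYs lost per year: {qalys_lost_per_year:.2e}")
```

Even rough bounds on the three factors would tell us whether this argument is competitive with the polarization one.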
- Aligning recommender systems is "training practice" for larger AI alignment problems
  - Next step here:
    - Should we expand this argument?
- The problem is solvable (but we need more research capacity)
  - See e.g. https://www.turing.ac.uk/sites/default/files/2020-10/epistemic-security-report_final.pdf
  - Next steps here:
    - Collect additional papers on solutions
    - What kind of research is interesting and would be worth investing in?
I was informed of this thread by someone in the EA community who suggested I help. I have deep subject matter expertise in this domain (depending on how you count, I’ve been working in it full-time for 5 years, and toward it for 10+ years).
The reason I started working on this could be characterized as resulting from my beliefs about the "threat multiplier" impact that broken information ecosystems have on catastrophic risk.
A few caveats about all this though:
Most of the public dialogue around these issues is very simplistic and reductionist (which leads to the following two issues...).
The framing of the questions you provide may not be ideal for getting at your underlying goals/questions. I would think more about that.
Much of the academic research is terrible, simply due to the lack of quality data and the newness and interdisciplinary nature of the fields; even "top researchers" sometimes draw unsubstantiated conclusions from their studies.
All that said, I continue to believe that the set of problems around information systems (and, relatedly, governance) is a prerequisite for addressing catastrophic global risks—that these are among the most urgent and important issues that we could be addressing—and that we are still heading in the wrong direction at an ever-faster rate.
I have very limited bandwidth, with a number of other projects in the space, but if people are putting significant money and time toward this, I may be able to put in some time in an advisory role, at least to help direct that energy effectively. My contact info and more context about me are at aviv.me.
Thanks for this! I've sent you an email. Regarding caveat #2 especially, I believe you can help with relatively little time and resources.