I’m not convinced this area is really neglected. For example, Internet usage/addiction has been recognized as a national healthcare issue in China since at least the late 1990s to early 2000s. Public policy measures have been taken for several years now, and increasingly strict limits on minors’ screen time have been introduced. These policies are fairly new in their severity, and I haven’t been able to find scientific studies researching their impact.
Sources:
Jiang, Q. (2022, September 15). Development and Effects of Internet Addiction in China. Oxford Research Encyclopedia of Communication. Retrieved 15 May 2025, from https://oxfordre.com/communication/view/10.1093/acrefore/9780190228613.001.0001/acrefore-9780190228613-e-1142.
In some discussions I had with people at EAG, it was interesting to discover that there might be a significant lack of EA-aligned people in the hardware space of AI, which seems to translate into difficulties in getting industry contacts for co-development of hardware-level AI safety measures. To the degree that there are EA members in these companies, it might make sense to create some kind of communication space for exchanging ideas between people working on hardware AI safety and people at hardware-relevant companies (think Broadcom, Samsung, Nvidia, GlobalFoundries, TSMC, etc.). Unfortunately, I feel that culturally these fields (electrical engineering/computer engineering) are not very receptive to EA ideas, and the boom in ML/AI has caused significant self-selection of people towards hotter topics.
I believe there could be significant benefit in accelerating realistic safety designs if these discussions can be moved into industry as fast as possible.