Okay, thanks, so FAI — the Foundation for American Innovation. What’s the relation between FAI and Coefficient Giving (formerly Open Philanthropy)? Has Coefficient Giving given grant money to FAI?
Oh, you must just be referring to the fact that FAI “co-hosted” the Abundance 2025 conference. I actually have no idea what the list of “co-hosts” on the website means — there are 15 of them, and I have no context for what that designation entails.
Yes.

You disapprove even of those grants related to AI safety?
For me, it’s all very theoretical because AI capabilities currently aren’t very consequential for good or for ill, and the returns to scaling compute and data seem to be very much in decline. So, I don’t buy that either immediate-term, mundane AI safety or near-term AI x-risk is a particularly serious concern.
There are some immediate-term, mundane concerns with how chatbots talk to users with certain kinds of mental health problems, and things of that nature, but these are comparatively small problems in the grand scheme of things. Social media is probably 10x to 1,000x more problematic.
Social media recommendation algorithms are typically based on machine learning and generally fall under the purview of near-term AI ethics.

Uh huh, you got me on a technicality. Let me clarify that I see the social problems associated with social media, including the ML-based recommender systems they use, as far more consequential than the social problems associated with LLM-based chatbots.
The recommender systems are one part of why social media is problematic, but not nearly the whole story.
I think looking at the problems of social media through the lens of “AI safety” would be too limiting and not helpful.
Sorry, I don’t know where I got that R from.