Relatedly, it’s also time to start focusing on the increased conflicts of interest and epistemic challenges that an influx of AI industry insider cash could bring. As Nathan implies in his comment, proximity to massive amounts of money can have significant adverse effects in addition to positive ones. And I worry that if and when a relevant IPO or cashout is announced, the aroma of expected funds will not improve our ability to navigate these challenges well.
Most people are very hesitant to bite the hand that feeds them. Orgs may be hesitant to do things that could adversely affect their ability to access future donations from current or expected donors. We might expect that AI-insider donors will disproportionately choose to fund charities that align fairly well with—or at least are consonant with—their personal interests and viewpoints.
(I am aware that significant conflicts of interest with the AI industry have existed in the past and continue to exist. But there’s not much I can do about that, and the conflict for the hypothesized new funding sources seems potentially even more acute. I imagine that some of these donors will retain significant financial interests in frontier AI labs even if they cash out part of their equity, as opposed to old-school donors who have a lesser portion of their wealth in AI. Also, Dustin and Cari donated their Anthropic stake, which addresses their personal conflict of interest on that front (although it may create a conflict for wherever that donation went)).
For purposes of the rest of this comment, a significantly AI-involved source has a continuing role at a frontier AI lab, or has a significant portion of their wealth still tied up in AI-related equity. The term does not include those who have exited their AI-related positions.
What Sorts of Adverse Effects Could Happen?
There are various ways in which the new donors’ personal financial interests could bias the community’s actions and beliefs. I use the word bias here because those personal interests should not have an effect on what the community believes and says.
Take stop/pause advocacy for an obvious example. Without expressing a view about the merits of such advocacy, significantly AI-involved sources have an obvious conflict of interest that creates a bias against that sort of work. To be fair, it is their choice on how to spend their money.
But—one could imagine the community changing its behavior and/or beliefs in ways that are problematic. Maybe people don’t write posts and comments in support of stop/pause advocacy because they don’t want to irritate the new funders. Maybe grantmakers don’t recommend stop/pause advocacy grants for their other clients because their AI-involved clients could view their money as indirectly supporting such advocacy via funging.
There’s also a risk of losing public credibility—it would not be hard to cast orgs that took funds from AI-involved sources as something like a lobbying arm of Anthropic equity holders.
What Types of Things Could Be Done to Mitigate This?
This is tougher, but some low-hanging fruit might include:
Orgs could commit to identifying whether, and how much, of their funding comes from significantly AI-involved sources.
Many orgs could have a limit on the percentage of their budget they will accept from significantly AI-involved sources (a rough sketch of what tracking this could look like follows this list). Some orgs—those with particular sensitivity on AI knowledge and policy—should probably avoid any major gifts from AI-involved sources at all.
Particularly sensitive orgs could be granted extended runways and/or funding agreements with some sort of independent protection against non-renewal.
Other donors could provide more funding for red-teaming AI work, especially work that could adversely affect the interests of AI-involved donors.
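To make the first two suggestions a bit more concrete, here’s a minimal sketch (in Python, with entirely hypothetical donor names, amounts, and a made-up 25% cap) of what the underlying bookkeeping could look like. The hard part, of course, is classifying donors and choosing the threshold, not the arithmetic:

```python
# Minimal sketch, not any real org's data: compute the share of an org's
# funding that comes from "significantly AI-involved" sources and check it
# against a self-imposed cap. All names, amounts, and the cap are hypothetical.
from dataclasses import dataclass


@dataclass
class Donation:
    donor: str
    amount: float       # in dollars
    ai_involved: bool   # donor has a continuing lab role or large AI equity stake


def ai_involved_share(donations: list[Donation]) -> float:
    """Return the fraction of total funding from significantly AI-involved sources."""
    total = sum(d.amount for d in donations)
    if total == 0:
        return 0.0
    ai_total = sum(d.amount for d in donations if d.ai_involved)
    return ai_total / total


# Hypothetical example data and a hypothetical self-imposed ceiling
donations = [
    Donation("Foundation A", 400_000, ai_involved=False),
    Donation("Lab employee B", 150_000, ai_involved=True),
    Donation("Individual C", 50_000, ai_involved=False),
]
CAP = 0.25

share = ai_involved_share(donations)
print(f"AI-involved share of funding: {share:.1%}")
if share > CAP:
    print(f"Warning: exceeds the self-imposed cap of {CAP:.0%}")
```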
Anyway, it is this sort of thing that concerns me more than (e.g.) some university student scamming a free trip to some location by simulating interest in EA.
Thank you for writing this. To be honest, I’m pretty shocked that the main discussions around the Anthropic IPO have been about “patient philanthropy” concerns and not the massive, earth-shattering conflicts of interest (both for us as non-Anthropic members of the EA community and for Anthropic itself, which will now have a “fiduciary responsibility”). I think this shortform does a pretty good job summarizing my concern. The missing mood is big. I also just have a sense that way too many of us are living in group houses and attending the same parties together, and that AI employees are included in this. I think if you actually hear those conversations at parties, they are less like “man, I am so scared” and more like “holy shit, that new proto-memory paper is sick”. Conflicts of interest, nepotism, etc. are not taken seriously enough by the community, and this just isn’t a new problem or something I have confidence in us fixing.
Without trying to heavily engage in a timelines debate, I’ll just say it’s pretty obvious we are in go time. I don’t think anyone should be able to confidently say that we are more than a single 10x or breakthrough away from machines being smarter than us. I’m not personally big on beating the drum for pause AI; I think there are probably better ways to regulate than that. That being said, I genuinely think it might be time for people to start disclosing their investments. I’m paranoid about everyone’s motives (including my own).
You are talking about the movement-scale issues, with the awareness that crashing Anthropic stock could crash EA wealth. That’s charitable, but let’s be less charitable—way too many people here have YOLO’d significant parts of their net worth into AI stocks, low-delta S&P/AI calls, etc., and are still holding the bag. Assuming many of you are anything like me, you feel in your brain that you want the world to go well, but I personally feel happier when my brokerage account goes up 3% than when I hear news that AI timelines are way longer than we thought because of xyz.
Again, I’m kind of just repeating you here, but I think it’s important and underdiscussed.
I don’t think anyone should be able to confidently say that we are more than a single 10x or breakthrough away from machines being smarter than us.
Very prominent deep learning experts who are otherwise among the most bullish public figures in the world on AI, such as Ilya Sutskever (AlexNet co-author, OpenAI co-founder and Chief Scientist, now runs Safe Superintelligence) and Demis Hassabis (DeepMind co-founder, Google DeepMind CEO, Nobel Prize winner for AI work), both say that multiple research breakthroughs are needed. Sutskever specifically said that another 100x scaling of AI wouldn’t be that meaningful. Hassabis specifically names three breakthroughs that are needed: continual learning, world models, and System 2 thinking (reasoning, planning) — that last one seems like it might be more than a single research breakthrough, but this is how Hassabis frames the matter. Sutskever and Hassabis are the kind of AI capabilities optimists that people cite to bolster arguments for short timelines, and even they’re saying this.
There are other world-class experts who say similar things, but they are better known as skeptics of LLMs. Yann LeCun (Meta AI’s departing Chief Scientist, who won the Turing Award for his pioneering work in deep learning) and Richard Sutton (who won the Turing Award for his pioneering work in reinforcement learning) have both argued that AGI or human-level AI will take a lot of fundamental research work. LeCun and Sutton have also both taken the exceptional step of sketching out a research roadmap to AGI/human-level AI, i.e., LeCun’s APTAMI and Sutton and co-authors’ Alberta Plan. They are serious about this, and they are both actively working on this research.
I’m not cherry-picking; this seems to be the majority view. According to a survey from early this year, 76% of AI experts don’t think LLMs or other current AI techniques will scale to AGI.
I don’t see this strain of argument as particularly action-relevant. I feel like you are getting way too caught up in the abstractions of what “AGI” is and such. This is obviously a big deal, this is obviously going to happen “soon” and/or is already “happening”, and it’s obviously time to take this very seriously and act like responsible adults.
Ok, so you think “AGI” is likely 5+ years away. Are you not worried about Anthropic having a fiduciary responsibility to its shareholders to maximize profits? I guess, reading between the lines, you see very little value in slowing down or regulating AI? While leaving room for the chance that our whole disagreement does revolve around our object-level timeline differences, I think you are probably missing the forest for the trees here in your quest to prove the incorrectness of people with shorter timelines.
I am not a doom maximalist, in the sense that I think this technology is already profoundly world-bending and scary today. I am worried about my cousin becoming a short-form-addicted goonbot with an AI best friend right now—whether or not robot bees are about to gouge my eyes out.
I think there is a reasonably long list of sensible regulations around this stuff (both x-risk related and more minor stuff) that would probably result in a large drawdown in these companies’ valuations, and really the stock market at large. For example (but not limited to): AI companionship, romance, and porn should probably be on pause right now while the government performs large-scale A/B testing. That’s the same thing we should have done with social media and cellphone use, especially in children, and that our government horribly failed to do because of its inability to utilize RCTs and the absolutely horrifying average age of our president and both houses of Congress.
I was specifically responding to your assertion that no one should be able to confidently say X. There are world-class experts like Ilya Sutskever and Demis Hassabis who do confidently say X, and they’re even on the bullish, optimistic end of the spectrum in terms of AI capabilities forecasts/AGI forecasts, such that they’re some of the public figures in AI that people cite when they want to make an argument for near-term AGI. I was only directly responding to that narrow point.
It doesn’t really have anything to do with different specific definitions of AGI. I’m not sure if Sutskever and Hassabis even define AGI the same way, for example. It’s just what both of them have said about what it will take to get to AGI, which is the thing you specifically said no one should be able to confidently say.
On your more general argument that AGI (or something close enough to AGI) is obviously going to be developed soon, or has already been developed: no, I don’t agree with that general argument. To try to quickly boil down the main cruxes of my counterargument: AI isn’t that useful for anything, and there are a lot of thorny research problems people have already been banging their heads against for years that we need to make progress on to make AI more useful.
But I was just trying to respond to your narrow point about no one being able to confidently say X. I wasn’t trying to open up a general debate about near-term AGI (let alone about regulating the generative AI systems that currently exist). However, if you’re eager, I would be happy to have that debate in the comments of another post (e.g. any of the ones I’ve written on the topic, such as the two I just linked to).
Thanks for restarting this conversation!
“Orgs could commit to identifying whether, and how much, of their funding comes from significantly AI-involved sources.”
This should be the bare minimum.