The same conflict of interest argument applies to ML engineers who have every reason to argue that their work isn’t leading to the potential death of everyone on Earth.
And also to people who are significantly invested in other cause areas and feel it diminishes the importance of their work.
Unfortunately, I think the conflict of interest line of thought ends up being far more expansive, in a way that implicates basically everyone.
It’s a lot more direct with AI, though. AI safety org people and EA org people are often the same people, or are personal friends, or at least know each other in some capacity. This undeniably grants them advantages compared to some far-off animal rights org. Those social ties give their ideas more access, more consideration, and less temptation to write them off as crazy. If someone found decisive proof that AI safety was nonsense, I’m sure they would publish it, but they might be sad about putting some of their personal friends out of jobs, making them look foolish, etc. I think this bias seeps, at least a little bit, into how AI safety is considered.
There is a difference. ML engineers actually have to follow up their claims by making products that work and earn revenue, or by successfully convincing a VC to keep funding their ventures. The source of funding and the ones appealing for the funding have different interests. In this regard, ML engineers have more of an incentive to oversell the capabilities of their products than to downplay them. It’s still possible for someone to burn their money funding something that won’t pan out, and that is the risk investors have to take (I don’t know of any top VCs who are as bullish on AI capabilities, on timelines as aggressive, as EA folks are). In the case of AI safety, some of the folks in charge of the funding are also the loudest advocates for the cause, as well as some of the leading researchers. The source of funding and the ones using the funding are commingled in a way that creates a conflict of interest that seems considerably more problematic than anything I’ve noticed in other cause areas. But if such serious conflicts do exist elsewhere, then those too are a problem, not an excuse to ignore conflicts of interest here.
Not really? Yes, I do think that EA probably has a conflict of interest re AI, but I don’t understand why having to actually deliver working capabilities is a defense against the criticism that ML engineers are incentivized to ignore the risk. The conflict-of-interest point is symmetrical: it should lead us to adjust our credences about anything we have a stake in, but it doesn’t create an asymmetry favoring either side.
I think they would have to believe there is a risk, but they are actually just trying to figure out how to make headway on basic issues. The point of my comment was not to argue about AI risk, since I think that is a waste of time: those who believe in it seem to hold it more as an ideological/religious belief, and I don’t think there is any amount of argumentation or evidence that can convince them (there is also a lot of material online where top researchers are interviewed and discuss some of these issues, for anyone actually interested in what the state of AI is outside the EA bubble). My intention was just to name that there is a conflict of interest in this particular domain that is having a lot of influence in the community, and I doubt there will be much done about it.