Thanks for the extensive reply Derek :)

> Even if you think that AI welfare is important (which I do!), the field doesn't have the existing talent pipelines or clear strategy to absorb $50 million in new funding each year.
Yep, completely agree here, and as Siebe pointed out I did go to the extreme end of "make the changes right now". It could be structured in a more gradual way, and potentially with more external funding.
> The fact that something might have a huge scale and we might be able to do something about it is enough for it to be taken seriously and provides prima facie evidence that it should be a priority.
I agree in principle on the huge-scale point, but much less so on "might be able to do something". I think we need a lot more than that: we need something tractable to get going, especially for something to be considered a priority. The general form of argument I've seen this week is that AI Welfare could have a huge scale, therefore it should be an EA priority, without much to flesh out the "do something" part.
> AI persons (or things that look like AI persons) could easily be here in the next decade... AI people (of some form or other) are not exactly a purely hypothetical technology.
I think I disagree empirically here. Counterfeit "people" might be here soon, but I am not much moved by arguments that digital "life" with full agency, self-awareness, autopoiesis, moral values, moral patienthood, etc. will be here in the next decade, and certainly not easily here. I definitely think that case hasn't been made, and I think (contra Chris in the other thread) that claims of this sort should have been argued much more strongly during AWDW.
> We might have that opportunity now with AI welfare. Perhaps this means that we only need a small core group, but I do think some people should make it a priority.
A small group of people should, I agree. Funding Jeff Sebo and Rob Long? Sounds great. Giving them 438 research assistants and $49M in funding taken from other EA causes? Hell to the naw. We weren't discussing whether AI Welfare should be a priority for some EAs; we were discussing the specific terms set out in the week's statement, and I feel like I'm the only person this week who paid any attention to them.
Secondly, the "we might have that opportunity" point is very unconvincing to me. It carries about the same convincingness as saying in 2008: "If the LHC is turned on, it may create a black hole that destroys the world. Nobody else is listening. We might only have the opportunity to act now!" It's just not enough to be action-guiding, in my opinion.
I'm pretty aware the above is unfair to strong advocates of AI Safety and AI Welfare, but at the moment that's roughly where the quality of arguments this week has stood from my viewpoint.