Even if you think that AI welfare is important (which I do!), the field doesn’t have the existing talent pipelines or clear strategy to absorb $50 million in new funding each year.
Yep, completely agree here, and as Siebe pointed out, I did go to the extreme end of 'make the changes right now'. It could be structured in a more gradual way, potentially with more external funding.
The fact that something might have a huge scale and we might be able to do something about it is enough for it to be taken seriously and provides prima facie evidence that it should be a priority.
I agree in principle on the huge-scale point, but much less so on the 'might be able to do something'. I think we need a lot more than that: we need something tractable to get going, especially for something to be considered a priority. The general form of argument I've seen this week is that AI Welfare could have a huge scale, therefore it should be an EA priority, without much to flesh out the 'do something' part.
AI persons (or things that look like AI persons) could easily be here in the next decade... AI people (of some form or other) are not exactly a purely hypothetical technology,
I think I disagree empirically here. Counterfeit “people” might be here soon, but I am not moved much by arguments that digital 'life' with full agency, self-awareness, autopoiesis, moral values, moral patienthood, etc. will be here in the next decade. Especially not easily here. I definitely think that case hasn't been made, and I think (contra Chris in the other thread) that claims of this sort should have been made much more strongly during AWDW.
We might have that opportunity now with AI welfare. Perhaps this means that we only need a small core group, but I do think some people should make it a priority.
Some small number of people should, I agree. Funding Jeff Sebo and Rob Long? Sounds great. Giving them 438 research assistants and $49M in funding taken from other EA causes? Hell to the naw. We weren't discussing whether AI Welfare should be a priority for some EAs; we were discussing the specific terms set out in the week's statement, and I feel like I'm the only person this week who paid any attention to them.
Secondly, the 'we might have that opportunity' argument is very unconvincing to me. It has about the same convincingness to me as saying in 2008: "If CERN is turned on, it may create a black hole that destroys the world. Nobody else is listening. We might only have the opportunity to act now!" It's just not enough to be action-guiding in my opinion.
I'm pretty aware the above is unfair to strong advocates of AI Safety and AI Welfare, but at the moment that's roughly where the quality of the arguments this week has stood from my viewpoint.
Thanks for the extensive reply, Derek :)