I think your claim is not that “all value-alignment is bad” but rather “when EAs talk about value-alignment, they’re talking about something much more specific and constraining than this tame interpretation”.
To attempt an answer on behalf of the author: they say "an increasingly narrow definition of value-alignment", and I think the idea is that seeking "value-alignment" has become narrower and narrower over time, drifting further from the goal of wanting to do good.
In my time in EA, "value alignment" has, among some folk, shifted from the tame meaning you provide (really wanting to figure out how to do good) to a narrower one: you must also think preventing human extinction is the most important thing.