I believe PornHub is a bigger company than most of today’s AI companies (~150 employees, roughly half of them software engineers, according to Glassdoor). If Brave AI is to be believed, they have $100B in annual revenue and handle 15TB of uploads per day.
If this is the benchmark for the limits of an AI company in a world where AI research is stigmatized, then I think all that stigmatization will accomplish is ensuring that the people who are OK working in the dark get to decide what gets built. PornHub-sized companies seem big enough to produce AGI.
I agree with you that porn is a very distributed industry overall, and I suspect that is partially because of the stigmatization. However, this has resulted in a rather robust organizational arrangement in which individuals work independently while a few large companies (like PornHub) focus on handling the IT side of things.
In a stigmatized AI future, perhaps individuals all over the world would work on different pieces of the AI stack while a small number of big AI companies handle bulk training or coordination. Interestingly, this sort of decentralized approach to building could result in a better AI outcome: rather than a handful of very powerful people deciding the trajectory, we would have a large number of individuals working independently and in competition with each other.
I do like your idea of comparing to other stigmatized industries! Gambling and drugs are, of course, other great examples of how an absolutely massive industry can grow in the face of stigmatization!
How can you align AI with humans when humans are not internally aligned?
AI alignment researchers often talk about aligning AIs with humans, but humans are not aligned with each other as a species. There are groups whose goals directly conflict, and I don’t think there is any single goal that all humans share.
As an extreme example, one might say “keep humans alive” is a shared goal among humans, but there are people who consider that an anti-goal and believe humans should be wiped off the planet (e.g., eco-terrorists). “Humans should be happy” is another goal not everyone shares; there are entire religions that discourage pleasure and enjoyment.
You could try to simplify further to “keep the species around,” but some would be fine with a wire-headed future while others would not, and some would be fine with humans merely existing in a zoo while others would not.
Almost every time I hear alignment researchers speak about aligning AI with humans, they seem to start from the premise that there is a cohesive worldview to align with. The best “solution” to this problem that I have heard suggested is that there should be multiple AIs that compete with each other on behalf of different groups of humans, or perhaps individual humans, with each separately representing the goals of those humans. However, the people who suggest this strategy are generally not AI alignment researchers but rather people arguing against them.
What is the implied alignment target that AI alignment researchers are trying to work towards?