I believe PornHub is a bigger company than most of today’s AI companies: ~150 employees, half of them software engineers, according to Glassdoor. If Brave AI is to be believed, it has $100B in annual revenue and handles 15TB of uploads per day.
If this is the benchmark for what an AI company can be in a world where AI research is stigmatized, then my view is that all stigmatization will accomplish is ensuring that the people who are comfortable working in the dark get to decide what gets built. I feel like PornHub-sized companies are big enough to produce AGI.
I agree with you that porn is a very distributed industry overall, and I suspect that is partly because of the stigmatization. However, this has resulted in a rather robust organizational arrangement in which individuals work independently and a few large companies (like PornHub) handle the IT side of things.
In a stigmatized-AI future, perhaps individuals all over the world will work on different pieces of the AI stack while a small number of big AI companies handle bulk training or coordination. Interestingly, this sort of decentralized approach could result in a better AI outcome: instead of a small number of very powerful people deciding the trajectory, we would have a large number of individuals working independently and in competition with each other.
I do like your idea of comparing to other stigmatized industries! Gambling and drugs are, of course, other great examples of how an absolutely massive industry can grow in the face of stigmatization.
How do we choose which human the AI gets aligned with?
Is everyone willing to accept that “whichever human happens to build the hard-takeoff AI gets to be the human the AI is aligned with”? Do AI alignment researchers realize that this human may not be them, and may not share their values? Are they all OK with Vladimir Putin, Kim Jong Un, or Xi Jinping being the alignment target? What about someone like Ted Kaczynski?
If the idea is “we’ll just decide collectively”, then in the most optimistic scenario we can assume (based on our history with democracy) that the alignment target will be something akin to today’s world leaders, none of whom I would be comfortable having an AI aligned with.
If the plan is “we’ll decide collectively, but using a better mechanism than any existing one”, then the implication seems to be that not only can we solve AI alignment, but we can also solve human alignment (something humans have been trying and failing to do for millennia).
Separately, I’m curious why my post got downvoted on quality (not sure if it was you or someone else). I’m new to this community, so if there is some rule I unintentionally broke, I’d like to know about it.