I don’t think anyone can win a bidding war against OpenAI right now, because they’ve established themselves as the current “top dog”. Even if another company could pay a researcher more, they’d probably still choose to work at OpenAI, just because it’s OpenAI. But not everyone can work at OpenAI, so that still leaves us a lot of opportunity.

I don’t think value drift would be much of a problem, as long as the metrics for success are set in advance. As mentioned above, an x gain in interpretability is something that can be demonstrated, and at that point it doesn’t matter who achieves it or why. Other areas of alignment are harder to set metrics for, but there are still a good number of unsolved sub-problems whose solutions would be demonstrable. Set the metrics for success, and then you don’t have to worry about value drift.