I thought this was a really useful framework for looking at things at the system level. Thank you for posting this!
Quick points after just reading through it:
1) Your phrasing seems to convey too much certainty to me / flows too much into a coherent story. I'm not sure whether you did this to bring across your points more strongly, or because that's the confidence level you actually have in your arguments.
2)
If you want to acquire control over something, that implies that you think you can manage it more sensibly than whoever is in control already.
To me, it appears that you view Holden's position of influence at OpenAI as something like a zero-sum alpha investment decision (where his amount of control replaces someone else's commensurate control). I don't see why Holden couldn't also have a supportive role, where his feedback and different perspectives help OpenAI correct for aspects they've overlooked.
3) The overall principle I took from this: correct for model error with external data and outside views.
I don't see why Holden couldn't also have a supportive role, where his feedback and different perspectives help OpenAI correct for aspects they've overlooked.
I agree this can be the case, and that in the optimistic scenario this is a large part of OpenAI’s motivation.