Could you build a sequence for the AI Regulatory Landscape Review? It would be easier to link to it than to individual posts.
Lucie Philippon
Overview of introductory resources in AI Governance
It seems to me that you're not maintaining at least two hypotheses consistent with the data.
A hypothesis you do not seem to consider is that she did make an attempt at communicating "I made my decision and do not need more of your input", and that you did not understand this message.
This hypothesis seems more probable to me than her straightforwardly saying a false thing, as there seem to have been multiple similar misunderstandings of this sort between you.
Another misunderstanding example:
he usually did not accept my answers when I gave them but continued to argue with me, either straight up or by insisting I didn't really understand his argument or was contradicting myself somehow.
It seems to me that this quote points to another similar misunderstanding, and that it was this misunderstanding that initially led to a breakdown in communication.
Please tell me if I'm missing something. If I'm not, you're either continuing to be inattentive to the facts, or you're directly lying.
I'd be happy to share all of our message exchanges with a third party, such as CEA Community Health, or share them publicly, if you agree to that.
You seem to be paying lip service to the "missing something" hypothesis, but framing this as an issue of someone deliberately lying is not cooperative with Holly in the world where you are in fact missing something.
Asking to share messages publicly or to show them to a third party seems to raise the stakes unnecessarily. I'm not sure why you're suggesting that.
It seems to me that you are not considering the possibility that you may in fact not have said this clearly, and that this was a misunderstanding you could have prevented by communicating in another way.
I don't think the miscommunication can be blamed on any one party specifically. Both could have acted differently to reduce the risk of misunderstanding. I find it reasonable for each of them to think they had more important things to do than spend 10x the time reducing the risk of misunderstanding, and to think the responsibility was on the other person.
To give my two cents on this, each time I talked with Mikhail he had really good points on lots of topics, and the conversations helped me improve my models a lot.
However, I do have a harder time understanding Mikhail than I do the average person, and I definitely feel the need to put in lots of work to get his points. In particular, his statements tend to feel a lot like attacks (such as saying you're being deceptive), and it takes real effort to decouple, not get defensive, and just consider the factual point he's making.
I think this halo effect could be reduced by making small UI changes:
Removing the OpenAI logo
Replacing the "OpenAI" name in the search results with "Harmful frontier AI lab" or similar
Starting with a disclaimer on why this specific job might be good despite the overall org being bad
I would be all for a cleanup of 80k material to remove mentions of OpenAI as a place to improve the world.