TL;DR: In this comment I share my experience being coached by Kat.
I care about the world and about making sure that we develop and implement effective solutions to the many global challenges we face. To accomplish this, we need more people actively working on these issues. I think that Kat plays an important role in facilitating this.
Since I have not followed or analyzed the recent developments surrounding Nonlinear in detail, I cannot and will not offer an opinion on them.
However, I think it is still useful to share my experience with Kat, because I believe it would be highly valuable if more people had the opportunity to speak with her about their projects and challenges, provided those conversations go as mine did. I had three calls with Kat, two of which took place in July and August 2023.
So, what was my experience being coached by Kat? It was very positive. During our conversations, I felt listened to, and she directly addressed the challenges I raised. What particularly stood out was Kat’s energy and enthusiasm, which are infectious. Starting a new organization is challenging, and I remember a call where I felt somewhat discouraged about a development at my project. After the call, I felt re-energized and had gained new perspectives on tackling the issues we discussed. She encouraged me to reach out again if I needed further discussion, which made me feel supported.
Having someone to bounce ideas off, especially someone who has co-founded multiple organizations, is incredibly helpful. Kat’s directness was both amusing and beneficial in ensuring clear communication. This frank approach is refreshing compared to the often indirect and confusing hints others may give.
A significant aspect of coaching is understanding the coachee’s needs in depth to provide tailored solutions. Different coaching styles work for different people. In my case, while I felt listened to, the coaching could have been even more effective if Kat had spent more time initially asking questions. This would have allowed for a more nuanced understanding before she passionately began offering resources and solutions to my problems. However, this point didn’t detract from the overall value of the calls. I always felt that I made significant progress and found the calls highly beneficial.
Another aspect of my interaction with Kat that I greatly appreciated was her warm and bubbly nature. This demeanor added a sense of comfort and positivity to our discussions. Working on reducing existential risks can often be a daunting and emotionally taxing endeavor. It’s rare to find someone who can blend professional insight with a genuinely uplifting attitude, and Kat does this exceptionally well. Her ability to lighten the mood without undermining the seriousness of the topics we discussed was a skill that significantly enhanced the coaching experience.
Overall, I would rate her 9 out of 10, considering these points. I am grateful for having had the opportunity to receive guidance and coaching from Kat and hope that she can assist many more individuals in their efforts to do good better.
AI safety is largely about ensuring that humanity can reap the benefits of AI in the long term. To effectively address the risks of AI, it’s useful to keep in mind what we haven’t yet figured out.
I am exploring the implications of our current situation and the best ways to contribute to the positive development of AI. I am eager to hear your perspective on the gaps we have not yet addressed. Here is my quick take on things we do not seem to have figured out yet:
We have not figured out how to solve the alignment problem, and we don’t know whether it is solvable in the first place, even though we hope so.
We don’t know the exact timelines (I define ‘timelines’ here as the moment when an AI system becomes capable of recursively self-improving). This might range from having already happened to 100 years or more away.
We don’t know what takeoff will look like once we develop AGI.
We don’t know how likely it is that AI will become uncontrollable, and if it does become uncontrollable, how likely it is to cause human extinction.
We haven’t figured out the most effective ways to govern and regulate AI development and deployment, especially at an international level.
We don’t know how likely it is that rogue actors will use sophisticated open-source AI to cause large-scale harm to the world.
I think it is fair to say “we have not figured X out” when there is no consensus on it: people in the community hold very different probability estimates for each of these points, spread across the whole range.
Do you disagree with any of these points? And what other points should we add to the list?
I hope to read your take!