I support people in moving forward on meaningful and important projects by listening to their wisdom instead of their worry. When you know what you should do but aren't sure why you aren't doing it, I can help you clear the way.
You may be coming across my profile because of my involvement with the EA hiring agency High Impact Recruitment (HIRE). If you're looking for hiring help, please contact Neil Ferro at neil@eahire.org. He can help!
My coaching training and style draw on a few areas:
-Modern neuroscience of how to become aware of and change your mind, using tools like self-observation, body awareness, mindfulness, and self-hypnosis.
-Ancient wisdom principles common to Stoicism, Buddhism, and other traditions that focus on self-agency and integrity.
-Systems thinking: looking beyond yourself to see how you are connected to the rest of the world, so you can decide where to focus your attention to change the world for the better.
I offer 1:1 coaching and occasional trainings and group workshops! Check out my website: leemcc.com
Thanks for this post! Seeing how many global challenges are, in a sense, alignment problems also brought me on board with AI safety. Climate change and social media are good touchstones for what I think of as social/political alignment issues.
I don't know if this is exactly correct (so someone correct me if I'm off base), but I find AI alignment especially hard to wrap my head around because we don't seem to have good solutions yet at almost any level, technical or social/political. Here's how I think of the two:
Technical alignment: Can we get an inconceivably smart optimizing machine to follow what we really want it to do, rather than taking the letter of its programming down paths that would be bad for us? Can we look inside the black box to know what is going on, so that we can stop it if needed?
AND
Social/political alignment: Can we as humans create and uphold fair, effective rules regulating power in a globalized economy without a strong world government? Can we design laws and social norms that prevent catastrophe as more and more people and businesses gain access to increasingly powerful machines that do exactly what they are asked (blow people up with enormous accuracy, if you want them to) and have unintended side effects (influencing elections through social media algorithms)?
With AI we don't have either. It is as if runaway climate change were happening and we didn't yet understand that CO2 was part of the root cause.
The fact that many x-risk issues share common threads in the social/political alignment sphere is interesting to me, and it is one of my main arguments for why EAs should pay more attention to climate change. Climate shares some of the global game-theory elements of other issues like pandemics and AI regulation, and work on x-risks as a whole may be stronger with a lot of cross-pollination of strategies and learnings, ESPECIALLY because climate change is less neglected and has seen some real progress in recent decades.