Thanks for writing this up!
As someone who is getting started in AIS movement building, this was great to read!
I would be curious, how does your take differ from others’ takes?
I have read Three pillars for avoiding AGI catastrophe: Technical alignment, deployment decisions, and coordination, and I feel the two posts are trying to answer slightly different questions, but I would be keen to learn about other ways people have conceptualised the problem.
Hey, thanks for asking about this! I am writing this in a bit of a rush, so sorry if it’s disjointed!
Yeah, I don’t really have a good answer for that question. What happened was that I asked for feedback on an article outlining my theory of what AI Safety movement building should focus on.
The people who responded started mentioning terms like funding, field building, community building, and ‘buying time’. I was unsure if/where all of these concepts overlapped with AI community/movement building.
So it was more that I felt unclear on my own take and other people’s takes than that I had noticed some clear and specific differences between them.
When I posted this, it was partially to test my ideas on what the AI Safety community is and how it functions and learn if people had different takes from me. So far, Jonathan Claybrough is the only one who has offered a new take.
I am not sure how my take aligns with Three pillars for avoiding AGI catastrophe: Technical alignment, deployment decisions, and coordination.
At this stage, I haven’t really been able to read and integrate much of the general perspective on what the AI Safety community as a whole should do. I think that this is ok for now anyway, because I want to focus on the smaller space of what movement builders should think about.
I am also unsure if I will explore the macro strategy space much in the future because of the trade-offs. I think that in the long term, people are going to need to think about strategy at different levels of scope (e.g., around governance, overall strategy, movement building in governance, etc.). It’s going to be very hard for me to have a high-fidelity model of macro-strategy and all the concepts and actors involved while also really understanding what I need to do for movement building.
I therefore suspect that in the future, I will probably rely on an expert source of information to understand strategy (e.g., what do these three people or this community survey suggest I should think) rather than try to have my own in-depth understanding. It’s perhaps like how a recruiter for a tech company will probably just rely on a contact to learn what the company needs for its hires, rather than understanding the company’s structure and goals in great detail. However, I could be wrong about all of that and change my mind once I get more feedback!
Fair enough, thank you!