My attempt to help with AI Safety
Meta: This feels like something emotional, where if somebody looked at my plan from the outside they’d have obvious and good feedback; but my own social circle is not worried or knowledgeable about AGI, and so I hope someone will read this.
Best bet: Meta software projects
This would be my best personal fit: running one or more software projects that require product work, such as understanding what the users actually want.
My bottleneck: Talking to actual users with pain points (researchers? meta orgs with software problems? funders? I don’t know)
Plan B: Advocacy
I think I have potential to grow into a role where I explain complicated things in a simple way, without annoying people. Advocacy seems scary, but I think my experience strongly suggests I should try.
Plan C: Research?
Usually when I look closely at a field, I have new stuff to contribute. I do have impostor syndrome around AGI Safety research, but again, probably people like me should try (?) [I am not a mathematician at all. Am I just wrong here?]
Bottleneck for Plans B+C: Getting a better model
What model specifically: If you erased all the information I’ve heard from experts speculating about “when will we have AGI?” and “what’s the chance it will kill us all?”, could I re-invent it? Could I figure out which expert is right? This seems like the first layer, and an important one.
My actionable items:
Talk to friends about AGI. They ask questions, like “can’t the AGI simply ADVISE us on what to do?”, and I answer.
We both improve our model (specifically, if what I say doesn’t seem convincing, then maybe it’s wrong?)
I slowly exit my comfort zone of “being the weird person talking about AGI”
Write my own model, post it for comments
Maybe my agreements/disagreements with this?
Seems hard and tiring
What am I missing?
Give me the obvious stuff
It sounds like you’re a fairly senior software engineer, so my first thought is to look at engineering roles at AI safety orgs. There are a bunch of them! You’ve probably already seen this post, but just in case: AI Safety Needs Great Engineers.
It sounds to me like you’re concerned about a gap between the type of engineering work you’re good at and the type of engineering work that AI safety orgs need. This is something I’ve also been thinking about a lot recently. I’m a full-stack developer for a consumer product, which means I spend a lot of time discussing plans with product managers, writing React code, and sometimes working on backend APIs. AI safety orgs, on the other hand, seem to mostly need great backend engineers who are very comfortable setting up infrastructure and working with distributed systems, and/or machine learning engineers.
This suggests two options to me, if you want to stay focused on software engineering rather than research or something else:
Find a way to help using your existing skills. This sounds like your option A above, but to me option A reads like you want to work independently, as a contractor or something? I don’t know, it sounds like you’re not too sure what it would look like in practice. But there are AI safety orgs with job postings for full-stack or frontend/UX engineers. If this lines up with your skillset and personal fit, this could be a really good option. One example is Ought. They’re unusual in the AI safety space in that they’re building a user-facing product, so all of the frontend skills that apply at any other startup would apply there. I know other AI safety orgs have frontend roles too, but I think those are more focused on building internal tooling.
Build up your backend/infrastructure/ML skills enough that you could fill one of the more common AI safety engineering roles, like this one. I don’t know how easy it is for a great frontend engineer to become a great backend/infra engineer. I expect that leap is MUCH faster than the one from complete novice to great backend engineer, but how quickly you can make it depends on a lot of things, like your existing experience and how good a learning environment you’re able to put yourself in for the new material.
I’m personally trying to decide between these options right now. The first thing to check is whether you feel excited at all about option 2. If ramping up in those new areas sounds super unpleasant, then I think you can rule that option out right away. But if you feel excited about both options and think you could be successful at either (which is the situation I’m in), then it’s a tougher question. I’m planning to talk to a bunch of AI safety folks at EAG in a few weeks to help figure out how to maximize my impact, and I hope to have more clarity on the matter then. I’ll update this comment afterwards if I have anything new to add.
What I’m good at:
I think my experience is probably sufficient to apply to Anthropic or Redwood or any other place that doesn’t need an ML background, and that includes my backend/infra background. I did many “tech lead” roles where I was basically in charge of everything, so I’m up for that.
What I enjoy:
The thing I would be missing, I imagine, is the social interaction or something like that.
I don’t think I’d enjoy sitting on a hard problem alone for weeks or months; I imagine I’d be sad.
Location:
I don’t want to relocate (at least not a full-time relocation), so Anthropic is off the table.
Why do you think that Anthropic or Redwood etc. would be missing social interaction? I wouldn’t have assumed that… in the Anthropic post I linked, they mention that they love pair programming.
Anthropic and Redwood will hire you with zero ML experience, so please don’t spend time learning ML before applying.
[I think this deserves its own comment]
Yes, good point, I shouldn’t have included ML in the list of things to learn in option 2.
> Give me the obvious stuff
I expect that the people who read shortforms on the EA Forum are not the ones who would give the most useful advice here, and I think there are a lot of people who would be happy to advise someone with your skills.
Related, “my own social circle is not worried or knowledgeable about AGI”, might it make sense to spend time networking with people working on AI Safety and getting a feel for needs and opportunities in the area, e.g. joining discussion groups?
Still, some random questions on Plan A, as someone not knowledgeable but worried about AI:
Why product work only for meta orgs? A random example that I know you know about: Senior Software Engineer at Anthropic; they were looking for someone to help with some dev tooling. That role seems to require product skills / understanding what the users actually need. (Not asking about Anthropic in particular, but about non-meta orgs in general.)
What would make it easier to clear the bottleneck of talking to actual users with pain points?
What happened to the idea of internal prediction markets for EA orgs? I think it has potential, and an MVP could be simple enough. For example, I received this proposal for a freelance project a few days ago from a longtermist (non-AI-safety) EA org, which made me update positively towards the general idea (see the sketch after the quote):
> we want an app that lets people bet “[edited] bucks” via Slack, and then when the bet expires a moderator says whether they won or lost, and this adjusts their balance. If this data was fed into Airtable, I could then build some visualisations etc.
> This would involve a Slack bot/app hosted in the cloud and an Airtable integration.
> Let me know what you think! I’m super excited about this for helping us hone our decision-making over time, i.e. getting everyone in the habit of betting on outcomes, which apparently is a great way to get around things like the planning fallacy :D Also the /bet Slack interface seems very low friction and would be very easy for people to interact with.
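For what it’s worth, here’s roughly how small that MVP could be. This is a minimal, untested sketch assuming Python with the slack_bolt and pyairtable libraries; the /bet command format, the “Bets” table and its field names, and the environment variable names are all my own illustrative assumptions, not something from the proposal:

```python
# Minimal /bet Slack bot sketch (untested). Assumes a Slack app with a
# /bet slash command and an Airtable base containing a "Bets" table;
# both of those, and all field names below, are hypothetical.
import os

from pyairtable import Api
from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

# One Airtable record per bet; a moderator later edits "Status" by hand.
bets = Api(os.environ["AIRTABLE_API_KEY"]).table(
    os.environ["AIRTABLE_BASE_ID"], "Bets"
)

@app.command("/bet")
def handle_bet(ack, respond, command):
    """Handle e.g. `/bet 10 we ship the new site by March`."""
    ack()  # Slack expects an acknowledgement within 3 seconds
    amount, _, claim = command["text"].partition(" ")
    if not amount.isdigit() or not claim:
        respond("Usage: /bet <amount> <claim>")
        return
    bets.create({
        "User": command["user_name"],
        "Amount": int(amount),
        "Claim": claim,
        "Status": "open",  # moderator flips to "won"/"lost" on expiry
    })
    respond(f"{command['user_name']} bet {amount} bucks that: {claim}")

if __name__ == "__main__":
    app.start(port=int(os.environ.get("PORT", 3000)))
```

One nice property of leaning on Airtable like this: resolution and balance adjustment can start out as manual edits plus a rollup field in Airtable itself, and the visualisations come from Airtable’s own charts, so the bot stays tiny.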
Not sure if any of this helps, but I’m really excited to see whatever you end up choosing!
> Related, “my own social circle is not worried or knowledgeable about AGI”, might it make sense to spend time networking with people working on AI Safety
I don’t think it will help with the social aspect I’m trying to point at.
> and getting a feel for needs and opportunities in the area
[...]
> What would make it easier to clear the bottleneck of talking to actual users with pain points?
I think it’s best if one person does the user research, instead of each person like me bothering the AGI researchers (?)
I’m happy to talk to any such person who’ll talk to me, and to summarize whatever there is for others to follow, if I don’t pick it up myself.
> e.g. joining discussion groups?
Could be nice, actually.
> Why product work only for meta orgs?
I mean “figure out what AGI researchers need” [which is a “product” task] and help do that [which helps the community, rather than helping the research directly].
> Internal prediction markets
I’m in touch with them and basically said “yes”, but they want someone full-time and by default I don’t think I’ll be available; still, I’m moving it forward and checking.