I expect that people who read shortforms on the EA Forum are not those who would give useful advice, and I think there are a lot of people who would be happy to give advice to someone with your skills.
Related to "my own social circle is not worried or knowledgeable about AGI": might it make sense to spend time networking with people working on AI Safety and getting a feel for needs and opportunities in the area, e.g. by joining discussion groups?
Still, some random questions on plan A, as someone not knowledgeable but worried about AI:
Why product work only for meta orgs? A random example that I know you know about: Senior Software Engineer at Anthropic, where they were looking for someone to help with some dev tooling. That role seems to require product skills / understanding what the users actually need. (Not asking about Anthropic in particular, but about non-meta orgs in general.)
What would make it easier to clear the bottleneck of talking to actual users with pain points?
What happened to the idea of internal prediction markets for EA orgs? I think it has potential, and an MVP could be simple enough. For example, I received this proposal for a freelance project a few days ago from a longtermist (non-AI-safety) EA org, and it made me update positively towards the general idea:
> we want an app that lets people bet "[edited] bucks" via Slack, and then when the bet expires a moderator says whether they won or lost, and this adjusts their balance. If this data was fed into Airtable, I could then build some visualisations etc.
> This would involve a Slack bot/app hosted in the cloud and an Airtable integration.
> Let me know what you think! I'm super excited about this for helping us hone our decision making over time, i.e. getting everyone in the habit of betting on outcomes, which apparently is a great way to get around things like the planning fallacy :D Also the /bet Slack interface seems very low friction & would be very easy for people to interact with.
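The core logic of that MVP could be sketched as a tiny in-memory ledger. This is only my own illustrative sketch of the proposal, not part of it: all class and method names here are assumptions, the payout is simple even odds, and a real version would sit behind a Slack slash-command handler and sync rows to Airtable.

```python
from dataclasses import dataclass


@dataclass
class Bet:
    """One outstanding bet, placed via a hypothetical /bet command."""
    bettor: str
    amount: int
    claim: str
    resolved: bool = False


class BetLedger:
    """Minimal in-memory ledger for the proposed Slack bet bot.

    Illustrative sketch only: a real bot would call this from a Slack
    slash-command handler and persist balances/bets to Airtable.
    """

    def __init__(self, starting_balance: int = 100):
        self.starting_balance = starting_balance
        self.balances: dict[str, int] = {}
        self.bets: list[Bet] = []

    def balance(self, user: str) -> int:
        # New users start with the default balance of "[edited] bucks".
        return self.balances.setdefault(user, self.starting_balance)

    def place_bet(self, user: str, amount: int, claim: str) -> Bet:
        if amount <= 0 or amount > self.balance(user):
            raise ValueError("stake must be positive and within balance")
        bet = Bet(user, amount, claim)
        self.bets.append(bet)
        return bet

    def resolve(self, bet: Bet, won: bool) -> int:
        """A moderator settles the bet; even-odds payout for simplicity."""
        if bet.resolved:
            raise ValueError("bet already resolved")
        bet.resolved = True
        self.balances[bet.bettor] += bet.amount if won else -bet.amount
        return self.balances[bet.bettor]


ledger = BetLedger()
bet = ledger.place_bet("alice", 10, "project ships by Friday")
ledger.resolve(bet, won=True)
print(ledger.balance("alice"))  # 110
```

Feeding `ledger.bets` and `ledger.balances` into Airtable would then be a thin serialization layer on top, which is where the visualisations mentioned above would come from.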
Not sure if any of this helps, but I am really excited to see whatever you will end up choosing!
> Related to "my own social circle is not worried or knowledgeable about AGI": might it make sense to spend time networking with people working on AI Safety
I don't think it will help with the social aspect which I'm trying to point at
> and getting a feel for needs and opportunities in the area
[...]
> What would make it easier to clear the bottleneck of talking to actual users with pain points?
I think it's best if one person goes and does the user research, instead of each person like me bothering the AGI researchers (?)
I'm happy to talk to any such person who'll talk to me and summarize whatever there is for others to follow, if I don't pick it up myself
> e.g. by joining discussion groups?
Could be nice, actually
> Why product work only for meta orgs?
I mean "figure out what AGI researchers need" [which is a "product" task] and help do that [which helps the community, rather than helping the research directly]
> Internal prediction markets
I'm in touch with them and basically said "yes", but they want full time, and by default I don't think I'll be available; I'm still looking into it, though