Two lists I’m considering making:
Software developers who are interested in doing paid EA work (According to 80,000 Hours, it seems to be hard to hire software developers for EA orgs even though lots of software developers seem to exist in our community. Seems confusing. This would be a cheap first try at solving it)
Pain points that could potentially be solved by software, sourced from EA orgs (see #6 here. The post is about looking for places to invest in software. I think the correct way to approach this would be to start from actual needs, but there’s no place for orgs to surface such needs beyond posting a job)
Any thoughts?
I’ll note:
When you say “paid”, do you mean full-time? I’ve found that “part-time” people often drop off very quickly. Full-time people would be the domain of 80,000 Hours, so I’d suggest working with them on this.
“no place for orgs to surface such needs beyond posting a job” → This is complicated. I think that software consultancy models could be neat, and of course, full-time software engineering jobs do happen. Both are a lot of work. I’m much less excited about volunteer-type arrangements, outside of being used to effectively help filter candidates for later hiring.
I think that a lot of people just really can’t understand or predict what would be useful without working in an EA org or in an EA group/hub. It took me a while! The obvious advice, for people who want to really kickstart things, is to first try to work in or right next to an EA org for a year or so; then you’ll have a much better sense.
Developers who’d like to do EA work: Not only full time
I’m talking about discovering needs here. I’m not talking at all about how the needs would be solved
Working at an EA org to discover needs: This seems much slower than asking people who work there, no? (I am not trying to guess the needs myself)
It really depends on how sophisticated the work is and how tied it is to existing systems.
For example, if you wanted to build tooling that would be useful to Google, it would probably be easier just to start a job at Google, where you can see everything and get used to the codebases, than to try to become a consultant for Google, where you’d ask for very narrow tasks that don’t require you to be part of their confidential workflows and similar.
I agree I won’t get everything
Still, I don’t think Google is a good example. It is full of developers who have a culture of automating things and even free time every week to do side projects. This is really extreme.
A better example would be some organization that has 0 developers. If you ask someone in such an organization if there’s anything they want to automate, or some repetitive task they’re doing a lot, or an idea for an app (which is probably terrible but will indicate an underlying need) - things come up
But also, I tried, and I think 0 such needs surfaced
:)
Just throwing a thought: if many EA orgs have software needs and are struggling to employ people who’ll solve them; and on the other hand, part-time employees or volunteer directories don’t help that much—would it make sense to start a SaaS org aimed at helping EA orgs?
I could see a space for software consultancies that work with EA orgs, that basically help build and maintain software for them.
I’m not sure what you mean by SaaS in this case. If you only have 2-10 clients, it’s sort of weird to have a standard SaaS business model. I was imagining more of the regular consultancy payment structure.
EA Software Consultancy: In case you don’t know these posts:
Part 1
Part 2
Part 3
Yeah, I was briefly familiar with them.
I think it’s still tough, and agree with Ben’s comment here.
https://forum.effectivealtruism.org/posts/kQ2kwpSkTwekyypKu/part-1-ea-tech-work-is-inefficiently-allocated-and-bad-for?commentId=ypo3SzDMPGkhF3GfP
But I think consultancy engineers could be a fit for maybe ~20-40% of EA software talent.
Both sound to me probably at least somewhat useful! I’m ~agnostic on how likely they are to be very useful, how they compare to other things you could spend your time on, or how best to do them, which is mostly because I haven’t thought much about software development.
I expect some other people in the community (e.g., Ozzie Gooen, Nuno Sempere, JP Addison) would have more thoughts on that. But it might make sense to just spend like 0.5-4 hours on MVPs before asking anyone else, if you already have a clear enough idea in your head.
I can also imagine that a Slack workspace / Slack channel in an existing workspace for people in EA who are doing software development or are interested in it could perhaps be useful.
(Sidenote: You may also be interested in posts tagged software engineering and/or looking into their authors/commenters.)
Great work, Michael! I’ve already included this Airtable in the curriculum of Training For Good’s upcoming impactful policy careers workshop. Well done, this work is of high value!
Glad to hear that you think this’ll be helpful!
(Btw, your comment also made me realise I should add Training For Good to the database, so I’ve now done so. )
Also note that there are EA Forum Wiki entries for many of the orgs in this database, which will in some cases be worth checking out either for the text of the entry itself, for the links in the Bibliography section, or for the tagged posts.
Cool that you made this, and that you even made a Softr page! That said, I think the Softr page is worse than just sharing a public grid view of the Airtable.
I realize it would be cool to have a similar database for all EA-related organisations. Jamie Gittins made one on Notion and has a Forum post here listing EA orgs, but neither is easily filterable. It could have similar attributes to the Airtable you have. I saw that Taymon also has a Google Sheet, but it would be nice to have it in an Airtable with more attributes, to make it more easily filterable and more colorful.
Can you share a public grid view of the Airtable in a way that allows people to filter and/or sort however they want but then doesn’t make that the filtering/sorting that everyone else sees? I wasn’t aware of how to do that, which is the sole reason I added the Softr option. I think the set of Airtable views I also link people to is probably indeed better if people are happy with the views (i.e., combos of filters and orders) that I’ve already set up.
Agreed that an all-of-EA version of this would also be useful, and that Airtable would be better for that than Notion, a Forum post, or a Google Sheet. I also expect it’s something that literally anyone reading this could set up in less than a day, by:
duplicating my database
manually adding things from Gittins’ and Taymon’s databases (for anyone who’d rather script that step, there’s a rough sketch just after this list)
maybe removing anything that was in mine that might be out of scope for them (e.g., if they want to limit the scope to just orgs that are in or “aware of & friendly to” EA, since a database of all orgs that are merely quite relevant to any EA cause area may be too large a scope)
looking up how to do Airtable stuff whenever stuck (I found the basics fairly easy, more so than expected)
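If whoever does this would rather not type every entry in by hand, here’s a minimal sketch of bulk-adding rows via Airtable’s REST API. To be clear, this isn’t part of anything I’ve set up: the base ID, table name, field names, and example orgs below are placeholders/assumptions that would need to match the duplicated base, and you’d need an Airtable personal access token with write access to it.

```python
# Rough sketch (placeholders throughout): bulk-add org entries to a duplicated
# Airtable base via Airtable's REST API instead of entering them manually.
import os

import requests

AIRTABLE_TOKEN = os.environ["AIRTABLE_TOKEN"]  # personal access token with write access
BASE_ID = "appXXXXXXXXXXXXXX"                   # placeholder: ID of the duplicated base
TABLE_NAME = "Organisations"                    # placeholder: whatever the table is called

URL = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE_NAME}"
HEADERS = {"Authorization": f"Bearer {AIRTABLE_TOKEN}"}

# Entries copied by hand from Gittins' and Taymon's lists (illustrative examples only).
new_orgs = [
    {"Name": "Example Org A", "Focus": "AI governance"},
    {"Name": "Example Org B", "Focus": "Biosecurity"},
]

# Airtable's create-records endpoint accepts at most 10 records per request,
# so send the entries in chunks of 10.
for i in range(0, len(new_orgs), 10):
    chunk = new_orgs[i : i + 10]
    resp = requests.post(
        URL,
        headers=HEADERS,
        json={"records": [{"fields": fields} for fields in chunk]},
    )
    resp.raise_for_status()
    print(f"Created {len(resp.json()['records'])} records")
```

Field names have to match the base’s columns exactly, so it’s probably worth creating one record manually first and checking what the API returns for it before running anything like the above.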
You can share this link instead, which is better than the Softr view, and this means people don’t need to get comment access to be able to view the Airtable grid. It also prevents people from being able to see each other’s emails if they check the base collaborators. To find that link, I just pressed “Share” at the top right of the base, and scrolled down to the bottom of that modal/pop-up to find the link.
Ah, nice, thanks for that! It seems that that indeed allows for changing both “Filtered by” and “Sorted by”, including from each of my pre-set views, without that changing things for other people, so that’s perfect!
I still want to provide the comment access version as well, so people can more easily make suggestions on specific entries. But I’ll edit my post to swap the Softr link for the link you suggested and to make the comment access link less prominent.
No problem!
I just wanted to leave a note saying that I found this database useful in my work.
I suggested as one possible next step “People could duplicate and then adapt this database in order to make [a] version that’s relevant to all EA cause areas”
I think such a database has now been made! (Though I’m not sure if that was done by duplicating & adapting my one.) Specifically, Michel Justen has made A Database of EA Organizations & Initiatives. I imagine this’d be useful to some people who find their way to this post.*
Here’s the summary section of their post, for convenience:
*I guess I should flag that I haven’t looked closely at Michel’s post or database, so can’t personally vouch for its accuracy, comprehensiveness, etc.
Some orgs that should maybe be added (I’d be keen for someone to fill in the form to add them, including relevant info on them):
Aligned AI
See https://forum.effectivealtruism.org/posts/emKDqNjyE2h22MJ2T/we-re-aligned-ai-we-re-aiming-to-align-ai
Conjecture
See https://forum.effectivealtruism.org/posts/m5EBkgivRypdqm3zi/we-are-conjecture-a-new-alignment-research-startup
ML Progress research group
See https://www.lesswrong.com/s/T9pBzinPXYB3mxSGi
Cohere?
See https://forum.effectivealtruism.org/posts/DDDyTvuZxoKStm92M/ai-safety-needs-great-engineers, but see also my comment there
Czech Priorities
See https://ceskepriority.cz/o-nas/
Sage
(Not sure if there are public writings on them yet)
Arb
See https://forum.effectivealtruism.org/users/arb
Samotsvety
See https://forum.effectivealtruism.org/posts/KRFXjCqqfGQAYirm5/samotsvety-nuclear-risk-forecasts-march-2022
Epoch
Labour for the Long Term
EffiSciences
Palisade Research
“At Palisade, our mission is to help humanity find the safest possible routes to powerful AI systems aligned with human values. Our current approach is to research offensive AI capabilities to better understand and communicate the threats posed by agentic AI systems.”
Jeffrey Ladish is the Executive Director.
Admond
“Admond is an independent Danish think tank that works to promote the safe and beneficial development of artificial intelligence.”
“Artificial intelligence is going to change Denmark. Our mission is to ensure that this change happens safely and for the benefit of our democracy.”
Senter for Langsiktig Politikk
“A politically independent organisation aimed at creating a better and safer future”
A think tank based in Norway.
Confido Institute
Epistea
Transformative Futures Institute
Led by Ross Gruetzemacher
SaferAI
Orthogonal: A new agent foundations alignment organization
Apart Research
Also the European Network for AI Safety (ENAIS)
Riesgos Catastróficos Globales
International Center for Future Generations
As of today, their website lists their priorities as:
Climate crisis
Technology [including AI] and democracy
Biosecurity
Harvard AI Safety Team (HAIST), MIT AI Alignment (MAIA), and Cambridge Boston Alignment Initiative (CBAI)
These are three distinct but somewhat overlapping field-building initiatives. More info at Update on Harvard AI Safety Team and MIT AI Alignment and at the things that post links to.
Policy Foundry
The Collective Intelligence Project
Also Cavendish Labs:
Also the Forecasting Research Institute
Also School of Thinking
Also Research in Effective Altruism and Political Science (REAPS)
Also AFTER (Action Fund for Technology and Emerging Risk)
Also Future Academy (but maybe that’s not an org and instead a project of EA Sweden?).
Also anything in Alignment Org Cheat Sheet that’s not in here. And maybe adding that post’s 1-sentence descriptions to the info this database has on each org listed in that post.
Also fp21 and maybe Humanity Forward.
(Reminder: This is a database of orgs relevant to longtermist/x-risk work, and includes some orgs that are not part of the longtermist/x-risk-reduction community, don’t associate with those labels, and/or don’t focus specifically on those issues.)
Also Alvea and Nucleic Acid Observatory
Also Apollo Fellowship, Atlas Fellowship, Condor Camp, and Pathfinder
Successif
Also Apollo Academic Surveys
Also AI Safety Field Building Hub and Center for AI Safety
Also Space Futures Initiative and Center for Space Governance
Also EA Engineers
Also Fund for Alignment Research
Also Institute for Progress
Also Encultured AI
Also Pour Demain
To the best of my knowledge, Samotsvety is a group of forecasters, not an organization (although some of its members have recently launched or will soon launch forecasting-related orgs).
Times I have used this post in the course of my research: II.
Is that 11 or 2?
(Either way, thanks for letting me know :) )
2. Cheers.
See also Description of some organizations in or adjacent to long-term AI governance (non-exhaustive) (2021) (linked to from https://forum.effectivealtruism.org/posts/68ANc8KhEn6sbQ3P9/ai-governance-fundamentals-curriculum-and-application ).
How do I submit notes / corrections on orgs in the table?
“If you spot any errors or if you know any relevant info I failed to mention about these orgs, let me know via an EA Forum message or via following this link and then commenting there.”
(The very first link I provide in this post allows changing the filtering & sorting, but not commenting, so you have to instead either send a message or use that other link.)
Thanks for your interest in suggesting extra info / correction :)