I’m not sure this directly addresses the questions, but it might be useful to onlookers?
There is currently a sense at major established animal welfare orgs (headcount >30) that it’s hard to recruit “tech” talent:
The need is less for strong senior generalist engineers who can build complex systems (the kind of skillset that would be useful at Anthropic), and more for data- or tech-savvy specialists who can work with systems like CRMs and websites and integrate them well.
It’s unclear how much this affects impact. There is some sense of a “streetlight” effect (e.g., middle/senior leaders say: we might do things better and save a lot of time if we had someone else look at this and build a system).
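To make “integrate them well” concrete, here is a toy sketch of the sort of glue work being described: deduplicating new signups against a CRM export by normalized email. The function name, data shape, and all records are invented for illustration; a real integration would work against a specific CRM’s export format or API.

```python
# Hypothetical example of CRM "glue" work: finding which mailing-list
# signups are not already in a CRM export. All data here is invented.

def dedupe_contacts(crm_export, mailing_list):
    """Return mailing-list entries whose (normalized) email is not in the CRM."""
    # Normalize emails so "Ada@example.org " and "ada@example.org" match.
    known = {c["email"].strip().lower() for c in crm_export}
    new = []
    for entry in mailing_list:
        email = entry["email"].strip().lower()
        if email not in known:
            known.add(email)  # also dedupe within the signup list itself
            new.append(entry)
    return new

crm = [{"email": "Ada@example.org", "name": "Ada"}]
signups = [
    {"email": "ada@example.org ", "name": "Ada L."},   # duplicate: case/whitespace differ
    {"email": "grace@example.org", "name": "Grace"},
]
print(dedupe_contacts(crm, signups))
# → [{'email': 'grace@example.org', 'name': 'Grace'}]
```

The point of the example is that none of this is hard engineering; the value comes from someone owning the details (normalization, edge cases, keeping it running) inside the org.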
Given that this sentiment exists, an outside agency presumably isn’t entirely adequate, I’m guessing for the usual reasons:
Automating processes well takes a lot of ownership and investment from both the engineer and the organization, so an internal hire is best.
Pretty much every EA is competent, there are a lot of tech services out there, and half-good solutions are probably common. It’s getting to the next level that is needed, and that level of quality and focus on the user is harder to achieve.
The root issues:
They might be pretty mundane and related to the extreme global demand for any experienced software talent. Any serious tech entity has a powerful, effective recruiting arm and large salaries.
Despite the pedigree, I suspect EA has a much weaker talent flow of devs and engineers than it appears.
Solutions:
Maybe just daydreaming, but a reasonable solution might be to have strong developers (or even junior ones with good taste and communication skills) move across a few orgs, talk to people to see what the needs are, and then lay out a plan.
The key is for this person not to use much exec or leadership time, yet have strong perspective, a high signal-to-noise ratio, and the trust of junior and middle managers. This is a rarer skill and in demand, but there are probably many EAs who could do it.
It would be well worth getting EAIF money for this, as the general lessons and knowledge would be useful to everyone (many EA orgs are small and would benefit from outside perspective).
Yeah, I agree that recruitment and compensation are major obstacles. For example, Mercy for Animals is currently hiring a global director of data and analytics and is willing to pay up to $87,000 in salary. That role involves more responsibility than I currently have, for half of my current total comp. In writing this post, however, I was hoping there’d be potential roles for volunteers.
On the other hand, my impression is that AI safety orgs pay software and ML engineers competitively, because these are their core competencies.
This is really well said, thanks for laying out your perspective in searching for volunteers.
Another, longer-term solution is broad and big-picture, and definitely daydreaming, but important:
There could be a major project focused on building a strong generalist EA software community (by deliberately hiring and stocking a pool of mentors and generalists, and paying large salaries).
This would help build up a strong culture that consistently attracts high-quality SWEs to EA, which could then support newer and growing organizations, for example nascent AI orgs.
This is hard(er) to do, but extremely valuable (the impact of a good version is probably over $100M, justifying significant up-front investment).
It’s probably generalist software talent that is the limiting factor at AI orgs, and it seems like this won’t change for the next few years.
To calibrate: several orgs are paying hundreds of thousands, and some over seven figures, for generalist software talent right now. Comp is only one of the things that attracts talent, and creating these conditions is hard.
The corollary is that this talent allocation, not forum writing or cause prioritization, decides the fate of much object-level work.