Some of these are good enough questions that I am just raising an eyebrow, nodding, and hoping someone writes them up.
A few miscellaneous thoughts on the rest, which seem more tractable:
Are there cheap and easy ways to kill fish quickly?
Maybe you’re already aware of ikejime and have concluded that it can’t be cheaply scaled, but in case you haven’t, check it out.
Figure out how to put to good use some greater proportion of the approximately 1 billion recent college grads who want to work at an “EA org”.
This might look like a collective of independent-ish researchers?
Agree that this sounds promising. I think this could be an org that collected well-scoped, well-defined research questions that would be useful for important decisions and then provided enough mentorship and supervision to get the work done in a competent way; I might be trying to do this this year, starting at a small scale. E.g., there are tons of tricky questions in AI governance that I suspect could be broken down into lots of difficult but slightly simpler research questions. DM me for a partial list.
For anyone who does think that improving human welfare in the developing world is the best thing to do: do AMF-type charities actually increase the number of human life-years lived?
Is this different from GiveWell because GiveWell doesn’t try to estimate, like, the nth-order effects of AMF? I think I’m convinced by the cluelessness explanation that those would cancel out in expectation so we should be fine with first and maybe second-order effects.
(As I asked on Twitter) What jobs/tasks/roles are high impact (by normal EA standards) but relatively low status within EA?
I think one of the big ways EA could screw up is by having intra-EA status incongruent (at least ordinally) with expected impact.
(As I responded on Twitter and hope to turn into a forum post) I think aligning intra-EA status with impact is basically the whole point of EA community-building, so this is very important. I would guess that organizational operations is still too low-status and neglected: we need more people who are willing to set up payroll. (Low confidence, willing to be talked out of this, but it seems like the case to me.)
What positions of power and/or influence in the world are most neglected or easiest to access, perhaps because they’re low prestige and/or low pay?
An early and low-confidence guess: political careers that begin outside the Northeast Corridor or California.
Agree that this sounds promising. I think this could be an org that collected well-scoped, well-defined research questions that would be useful for important decisions and then provided enough mentorship and supervision to get the work done in a competent way; I might be trying to do this this year, starting at a small scale. E.g., there are tons of tricky questions in AI governance that I suspect could be broken down into lots of difficult but slightly simpler research questions. DM me for a partial list.
You may be able to draw lessons from management consulting firms. One big idea behind these firms is that bright 20-somethings can make big contributions to projects in subject areas they don’t have much experience in as long as they are put on teams with the right structure.
Projects at these firms are typically led by a partner and an engagement manager who are fairly familiar with the subject area at hand. Actual execution and research is mostly done by lower-level consultants, who typically have little background in the relevant subject area.
Some high-level points on how these teams work:
The team leads formulate a structure for what specific tasks need to be done to make progress on the project
There is a lot of hand-holding and specific direction of lower-level consultants, at least until they prove they can do more substantial tasks on their own
There are regular check-ins and regular deliverables to ensure people are on the right track and to switch course if necessary
Maybe you’re already aware of ikejime and have concluded that it can’t be cheaply scaled, but in case you haven’t, check it out.
Yeah, I consider that the best-case slaughter method and regret that it seems so labor-intensive, but it seems like there might be other methods that are less bad than the current status quo.
Is this different from GiveWell because GiveWell doesn’t try to estimate, like, the nth-order effects of AMF? I think I’m convinced by the cluelessness explanation that those would cancel out in expectation so we should be fine with first and maybe second-order effects.
Sure, I think 4th-and-higher-order effects are likely impossible to model, but 2nd- and maybe even 3rd-order effects not so much. I’d bet (though I’m far from certain) you could get a well-identified study of the causal effect of e.g. malaria nets on total life-years lived/population/population growth in a certain geographic region, at least for some period of time.
(As I responded on Twitter and hope to turn into a forum post) I think aligning intra-EA status with impact is basically the whole point of EA community-building, so this is very important. I would guess that organizational operations is still too low-status and neglected: we need more people who are willing to set up payroll. (Low confidence, willing to be talked out of this, but it seems like the case to me.)
Strong +1 on this, would be a super interesting and productive post IMO!
Good points, thanks!