A related question: which fraction of your and RSP’s impact do you expect to come from direct and from community/field-building?
E.g.
When working on a paper, do you think the value comes from field-building or from a small personal chance of, say, coming up with a crucial consideration?
Will most of RSP’s value come from direct work done by scholars, or from scholars [and the program] indirectly influencing other people/organizations? [I would count consulting policy-makers as direct work.]
Oh, even better! Slide four of your What Does (and Doesn’t) AI Mean for Effective Altruism? talk speaks about different timelines: immediate (~5 years), this generation (~15), next-generation (~40), distant (~100). Which of these timelines are you optimizing RSP for?
When working on a paper, do you think the value comes from field-building or from a small personal chance of, say, coming up with a crucial consideration?
This question doesn’t quite feel right to me. I think that when working on a paper I normally have an idea of what insights I want it to convey. The value might be in field-building, or in the direct value of disseminating that insight (not counting its spillover into field-building).
Work that might find crucial insights feels like it happens before the paper-writing stage. I try to spend some time in that mode.
Yeah, on reflection, the framing of “working on a paper” is not quite right. So let me be more specific:
Prospecting for Gold’s impact comes from promoting a certain established way of thinking [≈ econ 101 and ITN] within the EA community, and also (whether intended or not is unclear) from providing local communities with an excellent discussion topic.
The expected value of cost-effectiveness research seems to be dominated by the chance of stumbling on considerations relevant to EA researchers, GiveWell, 80K’s career recommendations, etc.
The impact of work on moral uncertainty seems to come primarily from field-building: doing EA-relevant research within a prestigious branch of philosophy increases the odds that more pressing EA questions will be addressed by the next generation of academics.
There are other potential reasons to do research: say, one might prefer to concentrate fully on mentoring but still need to do research for its second-order effects (prestige for hiring; scholars’ respect for better mentorship; fresh meta-cognitive observations to empathize with mentees for better advising). I am curious which impact pathways you prioritize.
I feel the most confused about moral uncertainty because it doesn’t resonate with my taste, and my knowledge of the subject and of field politics is very limited. I hope my oversimplification doesn’t diminish/misrepresent your work too much.
Will most of RSP’s value come from direct work done by scholars, or from scholars [and the program] indirectly influencing other people/organizations? [I would count consulting policy-makers as direct work.]
I want to say “yes, by indirect influence”, but I expect that this will also be true of most cases of consulting policy-makers (and it would remain true even if you got to set policies directly, as I think that most things we do have their value filtered through what future people do). This makes me think I’m somehow using a different lens on the world, which makes it hard to answer this question directly.
Oh, even better! Slide four of your What Does (and Doesn’t) AI Mean for Effective Altruism? talk speaks about different timelines: immediate (~5 years), this generation (~15), next-generation (~40), distant (~100). Which of these timelines are you optimizing RSP for?
Of these, I think RSP is most aiming at “next-generation”, with “this generation” a significant secondary target.