Are there any ways that the EA community can help RP that we might not be aware of? Or any that we do already that you would like more of?
Commenting on our public output, particularly if you have specialized technical expertise, can be anywhere from mildly to really helpful. RP has a lot of knowledge, but so does the rest of the EA community and the extended EA network, so if you can route our reports to the relevant experts among your connections, that can be really valuable for improving the quality of our reasoning and epistemics.
One thing the EA community can help us with is by encouraging suitable candidates to apply to our jobs. (New ones will be posted here and announced in our newsletter.) Some of our most recent hires have transitioned from fields which, at first sight, would seem unlikely to produce typical applicants. But we’re open to anyone proving to us that they can do the job during the application process (we do blinded skills assessments). I think we’re really not credentialist (i.e. we don’t care much about formal degrees if people have gained the skills we’re looking for). So whenever you read a job ad and think “Oh, this friend could actually do that job!”, do tell them to apply if they’re interested.
More importantly, I think EA community builders in all geographies and fields can greatly help us by training people to become good at the kind of reasoning that’s important in EA jobs. I’m particularly thinking of reasoning transparency: expressing degrees of (un)certainty and clarifying the epistemic status of what you write. Other valuable skills include probabilistic thinking and Bayesian updating, learning to build models and getting familiar with tools like Guesstimate and Causal, and forecasting (e.g. on Metaculus). I think EAs anywhere in the world can set up groups where people train such skills together.
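To make this concrete, here’s a minimal, hypothetical sketch (in Python) of two of the techniques mentioned above: a single Bayesian update, and a Guesstimate-style Monte Carlo model that propagates uncertainty through a toy cost-effectiveness estimate. All the input numbers are made up purely for illustration; they’re not anything RP actually uses.

```python
import numpy as np

# --- Bayesian updating: revise a credence after seeing a piece of evidence ---
prior = 0.30           # P(H): credence in a hypothesis before the evidence
p_e_given_h = 0.80     # P(E | H): how likely the evidence is if H is true
p_e_given_not_h = 0.20 # P(E | not H): how likely the evidence is otherwise

# Bayes' rule: P(H | E) = P(E | H) P(H) / P(E)
posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(f"posterior = {posterior:.2f}")  # ~0.63

# --- Guesstimate-style Monte Carlo: sample uncertain inputs, not point values ---
rng = np.random.default_rng(seed=0)
n = 100_000
cost = rng.lognormal(mean=np.log(50_000), sigma=0.3, size=n)        # $ per project
people_reached = rng.lognormal(mean=np.log(10_000), sigma=0.5, size=n)
effect_per_person = rng.normal(loc=0.02, scale=0.01, size=n)        # e.g. QALYs

qalys_per_dollar = people_reached * effect_per_person / cost
low, median, high = np.percentile(qalys_per_dollar, [5, 50, 95])
print(f"QALYs per $: 5th pct {low:.5f}, median {median:.5f}, 95th pct {high:.5f}")
```

A skill-building group could, for example, build a small model like this for a real question, compare the resulting interval to a Guesstimate model of the same thing, and discuss where the estimates disagree and why.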
I like this answer.
Some additional possible ideas:
Letting us know about or connecting us to stakeholders who could use our work to make better decisions
E.g., philanthropists, policy makers, policy advisers, or think tanks who could make better funding, policy, or research decisions if guided by our published work, by conversations with our researchers, or by future work we might do (partly in light of learning that it could have this additional path to impact)
Letting us know if you have areas of expertise that are relevant to our work and you’d be willing to review draft reports and/or have conversations with us
Letting us know about or connecting us to actors who could likewise provide us with feedback, advice, etc.
Letting us know if there are projects you think it might be very valuable for us to do
We (at least the longtermism department) are already drowning in good project ideas and lacking the capacity to do them all, but I think it costs little to hear an additional idea, and it’s plausible that some would be better than, or could be nicely merged with, our existing ideas.
Testing & building fit for research management
See also Collection of collections of resources relevant to (research) management, mentorship, training, etc.
Testing & building fit for ops roles
Donating
(In all cases, I mean either doing this thing yourself or encouraging other people to do so.)