[My views only]
Thanks for putting up with my follow-up questions.
Out of the areas you mention, I’d be very interested in:
Improving science: projects like Academia.edu and Sci-Hub have been interesting, as are efforts to replace LaTeX and to reform publishing incentives. In general, there seems to be plenty of room for improvement!
I’d be interested in:
Improving political institutions and political wisdom: EA might need to escalate its involvement in many areas adjacent to this, such as policy as it intersects with great-power relations or pivotal technologies. It would be very interesting to better understand what can be done with funding alone.
Reducing political bias and partisanship: this seems hard, but somewhat important. Most lobbyists are not trying to do this, and Russia is actively trying to do the opposite. It would be interesting to see whether more can be done in this space. Fact-checking websites and investigative journalism (e.g. Bellingcat) are interesting here too. Another interesting area is counteracting political corruption.
Sundry existential risks/GCRs
I’d be a little interested in:
Increasing economic growth
I think the other area might be disadvantageous, based on my understanding that it’s better for EA to train people up in longtermist-relevant areas, and to be perceived as being focused on the same.
Out of those you haven’t mentioned, but that seem similar, I’d also be interested in:
Promotion of effective altruism
Scholarships for people working on high-impact research
More on AI safety—Open Phil seems to be funding high-prestige, mostly-aligned figures (e.g. Stuart Russell, OpenAI) and high-prestige, unaligned figures (e.g. their fellows), but has mostly not funded low-to-mid prestige, highly-aligned figures (with the notable exceptions of MIRI, Michael C and Dima K). Other small but comparably informed funders mostly favor low-to-mid prestige, highly-aligned targets to a greater extent, e.g. Paul’s funding for AI safety research, and Paul and Carl argued to Open Phil that they should fund MIRI more. I think there are residual opportunities to fund other low-to-mid prestige, highly-aligned figures. [edited for clarity]
+1 to doing something with Sci-Hub.
Sci-Hub has had a huge positive impact. Finding ways to support it, put it on firmer legal footing, or defend it from rent-seeking academic publishers would be great.
Thanks a lot for this Ryan. Re promoting science, what do you make of the worry that the long-term sign of the effect of improving science is unclear, because it doesn’t produce differential technological development and instead broadly accelerates the growth of all knowledge, including potentially harmful knowledge?
I think it’s a reasonable concern, especially for AI and bio, and I guess that is part of what a grantmaker might investigate. Any such negative effect could be offset by: (1) associating scientific quality with EA and recruiting competent scientists into EA, (2) improving the quality of risk-reducing research, and (3) improving commentary and reflection on science (which could help with identifying risky research). My instinct is that (1)–(3) outweigh the risk-increasing effects, at least for many projects in this space, and that most relevant experts would think so, but it would be worth asking around.
I’m curious what this is referring to. Are there specific instances of such pressure being applied on Open Phil that you could point to?
Not sure if this counts, but I did make a critique that Open Phil seemed to have evaluated MIRI in a biased way relative to OpenAI.
I don’t have any inside info, and perhaps “pressure” is too strong, but Holden reported receiving advice in that direction in 2016:
“Paul Christiano and Carl Shulman–a couple of individuals I place great trust in (on this topic)–have argued to me that Open Phil’s grant to MIRI should have been larger. (Note that these individuals have some connections to MIRI and are not wholly impartial.) Some other people I significantly trust on this topic are very non-enthusiastic about MIRI’s work, but having a couple of people making the argument in favor carries substantial weight with me from a “let many flowers bloom”/”cover your bases” perspective. (However, I expect that the non-enthusiastic people will be less publicly vocal, which I think is worth keeping in mind in this context.)”