Global society will have a lot to learn from the current pandemic. Which lesson would be most useful to “push” from EA’s side?
I assume this question sits between the “best lesson to learn” and the “lesson most likely to be learned”: we probably want to push a lesson that is both useful to learn and one our push can actually help bring into policy.
Given the high uncertainty of this question, would you (Toby) consider giving imprecise credences?
Not a funding opportunity, but I think a grassroots effort to use social norms to enforce social distancing could be effective in countries in the early stages where authorities are not enforcing it, e.g. the Netherlands, the UK, the US, etc.
Activists (student EAs?) could stand with signs in public places, non-aggressively asking people to please go home.
I think this article very nicely undercuts the following common sense research ethics:
If your research advances the field more towards a positive outcome than it moves the field towards a negative outcome, then your research is net-positive
Whether research is net-positive depends on the current field’s position relative to both outcomes (assuming that when either outcome is achieved, the other can no longer be achieved). It replaces this with another heuristic:
To make a net-positive impact with research, move the field towards the positive outcome more than towards the negative outcome, in a ratio at least as large as distance-to-positive : distance-to-negative.
If we add uncertainty to the mix, we could calculate how risk-averse we should be (risk aversion should be larger when the research step is larger, as small projects probably carry much less risk of accidentally taking a big step towards UAI).
The ratio and risk aversion could lead to some semi-concrete technology policy. For example, if the distances to FAI and UAI are (100, 10), technology policy could prevent funding any project that either has a distance-ratio (for lack of a better term) lower than 10 or that has a 1% or higher probability of taking a step of size 10 towards UAI.
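To make the heuristic concrete, here is a toy sketch of such a funding rule. All the numbers and parameter names are hypothetical, following the (100, 10) example above; this is an illustration of the idea, not a proposed implementation:

```python
def required_ratio(dist_to_positive, dist_to_negative):
    """Minimum acceptable ratio of (progress towards the positive outcome)
    to (progress towards the negative outcome) for a project to count as
    net-positive under the heuristic above."""
    return dist_to_positive / dist_to_negative

def approve_project(step_towards_positive, step_towards_negative,
                    p_large_negative_step,
                    dist_to_positive=100, dist_to_negative=10,
                    large_step_threshold=0.01):
    """Hypothetical funding rule: reject a project if its progress ratio
    falls below the distance ratio, or if it is too likely to take a
    large step towards the negative outcome (UAI)."""
    needed = required_ratio(dist_to_positive, dist_to_negative)  # 10 in the example
    ratio = step_towards_positive / step_towards_negative
    if ratio < needed:
        return False
    if p_large_negative_step >= large_step_threshold:
        return False
    return True
```

For instance, a project expected to move the field 100 units towards FAI and 5 towards UAI (ratio 20) with a 0.1% chance of a large UAI step would pass, while the same project with a 2% chance would be rejected.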
Of course, the real issue is whether such a policy can be plausibly and cost-effectively enforced or not, especially given that there is competition with other regulatory areas (China/US/EU).
Without policy, the concepts can still be used for self-assessment. And when a researcher/inventor/sponsor assesses the risk-benefit profile of a technology themselves, they should discount for their own bias as well, because they are likely to have an overly optimistic view of their own project.
I really love Charity Entrepreneurship :) A remark and a question:
1. I notice one strength you mention at family planning is “Strong funding outside of EA”—I think this is a very interesting and important factor that’s somewhat neglected in EA analyses because it goes beyond cost-effectiveness. We are not asking the ‘given our resources, how can we spend them most effectively?’ but the more general (and more relevant) ‘how can we do the most good?’ I’d like to see ‘how much funding is available outside of EA for this intervention/cause area’ as a standard question in EA’s cost-effectiveness analyses :)
2. Is there anything you can share about expanding to two of the other cause areas: long-termism and meta-EA?
A consulting organisation aimed at EA(-aligned) organisations, as far as I’m aware: https://www.goodgrowth.io/.
Mark McCoy, mentioned in this post, is the Director of Strategy for it.
This might just be restating what you wrote, but regarding learning unusual and valuable skills outside of standard EA career paths:
I believe the context in which a skill is learned makes a large difference. Two 90th-percentile historians with the same training would end up very differently useful for EA topics if one learned the skills with EA topics in mind, while the other only started thinking about EA topics after their training. There is something about immediately relating and applying skills and knowledge to real topics that creates more tailored skills and produces useful insights throughout the process, which cannot be recreated by combining EA ideas with the content knowledge/skills at the end of the learning process. I think this relates to something Owen Cotton-Barratt said somewhere, but I can’t find where. As far as I recall, his point was that ‘doing work that actually makes an impact’ is a skill that needs to be trained, and you can’t just first get general skills and then decide to make an impact.
Personally, even though I did a master’s degree in Strategic Innovation Management with longtermism ideas in mind, I didn’t have enough context and engagement with ideas on emerging technology to apply the things I learned to EA topics. In addition, I didn’t have the freedom to apply the skills. Besides the thesis, all grades were based on either group assignments or exams. So some degree of freedom is also an important aspect to look for in non-standard careers.
Can I add the importance of patience and trust/faith here?
I think a lot of non-standard career paths involve doing a lot of standard stuff to build skill and reputation, while maintaining a connection with EA ideas and values and keeping an eye open for unusual opportunities. It may be 10 or 20 years before someone transitions into an impactful position, but I see a lot of people disengaging from the community after 2-3 years if they haven’t gotten into an impactful position yet.
Furthermore, trusting that one’s commitment to EA and self-improvement is strong enough to lead to an impactful career 10 years down the line can create a self-fulfilling prophecy where one views their career path as “on the way to impact” rather than “failing to get an EA job”. (I’m not saying it’s easy to build, maintain, and trust one’s commitment though.)
In addition, I think having good language for this is really important for keeping these people motivated and involved. We have “building career capital” and Tara MacAulay’s term “Journeymen”, but I’m afraid these are not catchy enough.
Is tagging users going to be a feature on the Forum someday? It’d be quite useful! Especially for asking a question of non-OPs where the answer can be shared and would be publicly useful.
Will any changes be made to the application and funding process in light of how this project went? I can imagine that it would be valuable to plan a go/no-go decision for projects with medium to large uncertainty/downside risk, and perhaps add a question or two (e.g., ‘what information would you need to learn to make a go/no-go decision?’) if that does not bloat the application process too much. I think this could be very valuable to explore more risky funding opportunities. For example, a two-stage funding commitment can be made where the involved parties can pre-agree to a number of conditions that would decide the go/no-go decision, making follow-up funding much more efficient than going through a new complete funding round.
I wonder what is currently happening with Good Growth and how it relates to this current, so-far-nameless operations project. It seems like an unfunded merging of the two projects? Could you briefly elaborate on the plans and funding situation for the project?
Props for making a no-go decision and switching the focus of the project—I think that is very commendable!
I am very curious about what is going to happen further, and have a few questions:
@EA Norway: Do you have any ideas/opinions on addressing operations bottlenecks that might also be highly impactful, such as
a) organisations doing highly impactful work but not explicitly branded as EA (e.g. top charities, research labs) and
b) other EA projects, such as large local/national groups, and early-stage projects.
This is a really interesting idea and I’m glad you are taking this up! Some considerations off the top of my head:
1. This set-up would probably not only ‘take away’ money that would otherwise have been donated directly; there is some percentage of ‘extra’ money it would attract. So the discussion should not be decided solely by ‘would the money be better invested or donated now?’
2. There is probably a formal set-up for this (optimization) problem, and I think some economist or computer scientist would find it a worthwhile and publishable research question to work on. I’m sure there is related work somewhere, but I suppose the problem is somewhat new with the assumptions of ‘full altruism’, time-neutrality, and letting go of the fixed-resource assumption.
3. There is a difference between investing money for a) later opportunities that seem high-value and can be found by careful evaluation, and b) later opportunities that seem high-value and require a short time frame to respond to. I hope this fund would address both, and I think the case for b) might be stronger than for a). One option for b) would be a global catastrophe response fund: as far as I am aware, there is no coordinated protocol to respond to global catastrophes or catastrophic crises, and the speed of funding can play a crucial role. A non-governmental fund would be much faster than trying to coordinate an international response. Furthermore, I think a) and b) play substantially different roles in the optimization problem.
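On point 2, a deliberately toy sketch of how the formal set-up might start. The return rate, time horizon, and relative-effectiveness numbers are all hypothetical, and this ignores the ‘extra money’ effect from point 1 and the fast-response value from point 3:

```python
def value_donate_now(amount, effectiveness_now=1.0):
    """Impact of donating immediately, normalising today's best
    opportunity to cost-effectiveness 1.0."""
    return amount * effectiveness_now

def value_invest_then_donate(amount, years, annual_return, future_effectiveness):
    """Impact of investing for `years` at `annual_return`, then donating
    to a later opportunity whose cost-effectiveness is measured relative
    to today's best opportunity (all parameters hypothetical)."""
    return amount * (1 + annual_return) ** years * future_effectiveness

# With 7% annual returns over 10 years the fund roughly doubles, so in
# this toy model investing beats donating now whenever the later
# opportunity is at least about half as effective as today's best.
```

The real optimization problem would of course add uncertainty over when good opportunities arrive and how effective they are, which is where I suspect it becomes a publishable research question.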
Sam, this is a good post on an important topic! I believe EA’s policy thinking is very underdeveloped and I’m glad you’re taking the lead here! I look forward to seeing more posts and discussions on effective policy.
Is there an active network/meeting point for people to learn more about policy from an EA perspective?
Thanks! Late replies are better than no replies ;)
I don’t think this type of efficiency deals with the practical problem of impact-credit allocation though, because the problem there appears to be that it’s difficult to find a common denominator for people’s contributions. You can’t just use person-hours, and I don’t think the market value of those hours would do much better (although it is a step in the right direction).
Hey Matt, good points! This all relates to what Avin et al. call the spread mechanism of global catastrophic risk. If you haven’t read it already, I’m sure you’ll like their paper!
For some of these we actually do have an inkling of knowledge though! Nuclear winter is more likely to affect the northern hemisphere, given that practically every nuclear target is located there. And my impression is that in biosecurity, geographical containment is a big issue: an extra case in the same location is much less threatening than an extra case in a new country. As a result, there are checks for hazardous diseases at borders where one might expect them to cross (e.g. currently the borders with the Democratic Republic of the Congo).
Yes, s-risks are definitely an important concept here! I mention them only at point 7, but not because I thought they weren’t important :)
Yeah, the first point is what I’m referring to by timelines. And we should also discount the risk of a particular hazard by the probability of achieving invulnerability.