Disappointed it’s not dedicated to asteroid risk.
Michael_Wiebe
This is a particularly hot topic when it comes to near-term vs. long-term causes: do we think humans today matter morally more than humans in 10,000 years, and how, if at all, should we discount the value of humans over time?
Is there much debate on this? I’d expect most EAs to answer ‘no’ and ‘discount rate=0’.
I’d expect more debate over the tractability of longtermist interventions.
New replication: I find that the results in Moretti (AER 2021) are driven by coding errors.
The paper studies agglomeration effects for innovation, but the results supporting a causal interpretation don’t hold up. https://twitter.com/michael_wiebe/status/1749462957132759489
Would ‘countercyclical altruism’ also capture this view?
Flipping the question around, we might also ask “where is the EA in social justice”? What has the social justice movement done to prioritize their efforts, to focus on cost-effectiveness, to ask how they can do the most good?
Yes, it’s a bit question-begging to assert that the actions with the highest marginal utility per dollar are those targeting long-term outcomes.
I don’t share your optimistic view of research. You write:
it is reasonable to think that research would make progress because:
Very little research has been done on this so far.
That’s because cause prioritization research is extremely difficult, not because no one has thought to do this.
Human history reflects positively on our ability to build a collective understanding of a difficult subject and eventually make headway.
Survivorship bias: what about all of the difficult subjects where we couldn’t make any progress and gave up?
Even if difficult, we should at least try! We would learn why such research is hard and should keep going until we reach a point of diminishing returns.
No, we should try if the expected returns are better than the next alternative. What if we’ve already hit diminishing returns?
We should disaggregate down to the level of specific funding opportunities. Eg, suppose the top three interventions for hits-based development are {funding think tanks in developing countries, funding academic research, charter cities} with corresponding MU/$ {1000, 200, 100}. Suppose it takes $100M to fully fund developing-country think tanks, after which there’s a large drop in MU/$ (moving to the next intervention, academic research). In this case, despite economic development being a huge problem area, we do see diminishing returns at the intervention level within the range of the EA budget.
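The allocation rule implied here can be sketched as a greedy algorithm: fund the highest-MU/$ opportunity until its funding cap binds, then move to the next. A minimal sketch using the numbers above (the $100M cap for think tanks is from the comment; the caps for the other two interventions are made up for illustration):

```python
# Hypothetical interventions: constant MU/$ up to a funding cap, after
# which marginal utility drops to the next intervention's level.
interventions = {
    "think tanks": {"mu_per_dollar": 1000, "capacity": 100e6},
    "academic research": {"mu_per_dollar": 200, "capacity": 500e6},   # assumed cap
    "charter cities": {"mu_per_dollar": 100, "capacity": 1e9},        # assumed cap
}

def allocate(budget, interventions):
    """Greedy allocation: give the next dollar to the intervention with
    the highest MU/$, moving down the ranking as caps bind."""
    allocation = {}
    ranked = sorted(interventions.items(),
                    key=lambda kv: kv[1]["mu_per_dollar"], reverse=True)
    for name, info in ranked:
        spend = min(budget, info["capacity"])
        if spend > 0:
            allocation[name] = spend
        budget -= spend
        if budget <= 0:
            break
    return allocation

print(allocate(150e6, interventions))
# → {'think tanks': 100000000.0, 'academic research': 50000000.0}
```

With a $150M budget, the think tanks are fully funded and the remaining $50M spills over to the next-best intervention, illustrating the intervention-level diminishing returns described above.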
What empirical claims are baked into EA?
What’s a ‘trade’?
Our AI focus area is part of our longtermism-motivated portfolio of grants,[2] and we focus on AI alignment and AI governance grantmaking that seems especially helpful from a longtermist perspective. On the governance side, I sometimes refer to this longtermism-motivated subset of work as “transformative AI governance” for relative concreteness, but a more precise concept for this subset of work is “longtermist AI governance.”[3]
What work is “from a longtermist perspective” doing here? (This phrase is used 8 times in the article.) Is it: longtermists have pure time preference = 0, while neartermists have >0, so longtermists care a lot more about extinction than neartermists do (because they care more about future generations). Hence, longtermist AI governance means focusing on extinction-level AI risks, while neartermist AI governance is about non-extinction AI risks (eg. racial discrimination in predicting recidivism).
If so, I think this is misleading. Neartermists also care a lot about extinction, because everyone dying is really bad.
Is there another interpretation that I’m missing? Eg. would neartermists and longtermists have different focuses within extinction-level AI risks?
More precisely, longtermists have zero pure time preference. They still discount for exogenous extinction risk and diminishing marginal utility.
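For concreteness, this decomposition can be written in Ramsey-style notation (a sketch; the symbols are mine, not from the comment):

```latex
r \;=\; \underbrace{\delta}_{\text{pure time preference}}
\;+\; \underbrace{\varepsilon}_{\text{exogenous extinction risk}}
\;+\; \underbrace{\eta g}_{\text{diminishing marginal utility}}
```

Setting $\delta = 0$ (the longtermist position) still leaves a positive discount rate $r$ whenever the extinction hazard $\varepsilon > 0$ or consumption growth $g > 0$.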
More generally, research isn’t magic. Hiring a researcher and having them work 9-5 is no guarantee of solving a problem. You write:
What empirical evidence is there that we can reliably impact the long run trajectory of humanity and how have similar efforts gone in the past? [...]
I think there needs to be much better research into how to make complex decisions despite high uncertainty.
Isn’t it obvious that allocating researcher hours to these questions would be a waste of money? Almost by definition, we can’t have good evidence that we can impact the long-run (ie. centuries) trajectory of humanity, because we haven’t been collecting data for that long. And making complex decisions under high uncertainty will always be incredibly difficult; in the best case scenario, more research might yield small improvements in decision-making.
Note that RCTs are still a minority in published academic research. I think Pritchett’s criticism is that NGOs have been dominated by randomistas; eg, even the International Growth Centre does a lot of RCTs, instead of following his preferred growth diagnostics approach.
Do you think economic growth is key to popular acceptance of longtermism, as increased wealth leads people to adopt post-materialist values?
Tweet-thread promoting Rotblat Day on Aug. 31, to commemorate the spirit of questioning whether a dangerous project should be continued.
There seems to be an “intentions don’t matter, results do” lesson that’s relevant here. Intending to solve AI alignment is secondary, and doesn’t mean that you’re making progress on the problem.
And we don’t want people saying “I’m working on AI” just for the social status, if that’s not their comparative advantage and they’re not actually being productive.
When looking for new opportunities, a less cost-effective (in terms of social good per dollar spent) opportunity that is more scalable (in terms of total dollars that can be spent to achieve the target cost-effectiveness) can sometimes be more exciting and more helpful to the overall EA portfolio than a more cost-effective but less scalable opportunity.
In the ITC framework, this is captured by diminishing returns. To optimally allocate resources, you give your next dollar to the intervention with the highest marginal utility per dollar. This means funding the low-scale intervention until its MU/$ is below that of the high-scale intervention, and then switching to allocating your next dollar to the high-scale intervention.
Restating your point: if you have a huge budget, then you need to have scalable opportunities (ie. with low diminishing returns) in order to spend your whole budget. There might be a bunch of small interventions (ie. fully funding them would use up 0.0000001% of your budget) with the highest MU/$, but if there are transaction costs to identifying and funding them, it could be optimal to ignore them and focus on more scalable interventions.
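This trade-off can be made concrete with a small sketch (all numbers hypothetical): a tiny grant beats the scalable fallback on MU/$, but vetting it consumes staff time whose opportunity cost is dollars that could have gone to the fallback.

```python
def net_gain(mu_small, mu_fallback, capacity, fixed_cost):
    """Net utility of picking up a small grant instead of leaving the
    money in the scalable fallback intervention.

    mu_small:    MU/$ of the small grant
    mu_fallback: MU/$ of the scalable fallback
    capacity:    room for more funding in the small grant ($)
    fixed_cost:  transaction cost of identifying/vetting it, in dollars
                 of staff time valued at the fallback's MU/$
    """
    extra_utility = (mu_small - mu_fallback) * capacity
    opportunity_cost = mu_fallback * fixed_cost
    return extra_utility - opportunity_cost

# $10k grant at MU/$ = 2000 vs. a scalable fallback at MU/$ = 100:
print(net_gain(2000, 100, 10_000, 250_000))  # → -6000000: skip it
print(net_gain(2000, 100, 10_000, 50_000))   # → 14000000: worth picking up
```

When the fixed cost of due diligence is large relative to the grant’s room for funding, ignoring the small opportunity is optimal even though its MU/$ is higher, which is the point above.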
So far, the effective altruist strategy for global poverty has followed a high-certainty, low-reward approach. GiveWell only looks at charities with a strong evidence base, such as bednets and cash transfers. But there’s also a low-certainty, high-reward approach: promote catch-up economic growth. Poverty falls sharply with economic development (urbanization, industrialization, etc.), so encouraging development would have large effects on poverty. Whereas cash transfers have a large probability of a small effect, economic growth is a small probability of a large effect. (In general, we should diversify across high- and low-risk strategies.) In short, can we do “hits-based development”?
How can we affect growth? Tractability is the main problem for hits-based development, since GDP growth rates are notoriously difficult to change. However, there are a few promising options. One specific mechanism is to train developing-country economists, who can then work in developing-country governments and influence policy. Lant Pritchett gives the example of a think tank in India that influenced its liberalizing reforms, which preceded a large growth episode. This translates into a concrete goal: get X economists working in government in every developing country (where X might be proxied by the number in developed countries). Note that local experts are more likely than foreign World Bank advisors to positively affect growth, since they have local knowledge of culture, politics, law, etc.
I will focus on two instruments for achieving this goal: funding scholarships for developing-country scholars to get PhDs in economics, and funding think tanks and universities in developing countries. First, there are several funding sources within economics for developing-country students, such as Econometric Society scholarships, CEGA programs, and fee waivers at conferences. I will map out this funding space, contacting departments and conference organizers, and determine if more money could be used profitably. For example, are conference fees a bottleneck for developing-country researchers? Would earmarked scholarships make economics PhD programs accept more developing-country students? (We have to be careful in designing the funding mechanism, so that recipients don’t simply reduce funding elsewhere.) Next, I will organize fundraisers, so that donors have a ‘one-click’ opportunity to give money to hits-based development. (This might take the form of small recurring donations, or larger funding drives, or an endowment.) Then I will advertise these donation opportunities to effective altruists and others who want to promote hits-based development. (One potential large funder is the EA Global Health and Development Fund.)
My second approach is based on funding developing-country think tanks. Recently, IDRC led the Think Tank Initiative (TTI), which funded over 40 think tanks in 20 countries over 2009-2019. This program has not been renewed. My first step here would be to analyze the effectiveness of the TTI, and figure out whether it deserves to be renewed. While causal effects are hard to estimate, it seems reasonable to measure the number of think tanks, their progress under the program, and their effects on policy. To do this I will interview think tank employees, development experts, and the TTI organizers. Next I will determine what funding exists for renewing the program, as well as investigate whether a decentralized funding approach would work.
I think it’s important to frame longtermism as a particular subset of EA. We should be EAs first and longtermists second. EA says to follow the importance-tractability-crowdedness framework, and allocate funding to the most effective causes. This can mean funding longtermist interventions, if they are the most cost-effective. If longtermist interventions get a lot of funding and hit diminishing returns, then they won’t be the most cost-effective anymore. The ITC framework is more general than the longtermist framing of “focus on the long-term future”, and allows us to pivot as funding and tractability change.