Disappointed it’s not dedicated to asteroid risk.
Michael_Wiebe
Solving the replication crisis (FTX proposal)
Hits-based development: funding developing-country economists
Formalizing the cause prioritization framework
This is a particularly hot topic when it comes to near-term vs. long-term causes: do we think humans today morally matter more than humans in 10,000 years, and how, if at all, should we discount the value of humans over time?
Is there much debate on this? I’d expect most EAs to answer ‘no’ and ‘discount rate=0’.
I’d expect more debate over the tractability of longtermist interventions.
New replication: I find that the results in Moretti (AER 2021) are caused by coding errors.
The paper studies agglomeration effects for innovation, but the results supporting a causal interpretation don’t hold up. https://twitter.com/michael_wiebe/status/1749462957132759489
Would ‘countercyclical altruism’ also capture this view?
Flipping the question around, we might also ask “where is the EA in social justice”? What has the social justice movement done to prioritize their efforts, to focus on cost-effectiveness, to ask how they can do the most good?
Yes, it’s a bit question-begging to assert that the actions with the highest marginal utility per dollar are those targeting long-term outcomes.
I don’t share your optimistic view of research. You write:
it is reasonable to think that research would make progress because:
Very little research has been done on this so far.
That’s because cause prioritization research is extremely difficult, not because no one has thought to do this.
Human history reflects positively on our ability to build a collective understanding of a difficult subject and eventually make headway.
Survivorship bias: what about all of the difficult subjects where we couldn’t make any progress and gave up?
Even if difficult, we should at least try! We would learn why such research is hard and should keep going until we reach a point of diminishing returns.
No, we should try if the expected returns are better than the next alternative. What if we’ve already hit diminishing returns?
We should disaggregate down to the level of specific funding opportunities. Eg, suppose the top three interventions for hits-based development are {funding think tanks in developing countries, funding academic research, charter cities} with corresponding MU/$ {1000, 200, 100}. Suppose it takes $100M to fully fund developing-country think tanks, after which there’s a large drop in MU/$ (moving to the next intervention, academic research). In this case, despite economic development being a huge problem area, we do see diminishing returns at the intervention level within the range of the EA budget.
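The allocation logic above can be sketched as a greedy algorithm: fund the highest-MU/$ intervention until its room for funding is exhausted, then move down the list. (The names, MU/$ figures, and funding caps below are the hypothetical numbers from my example, not real grant data.)

```python
def allocate(budget, interventions):
    """Allocate `budget` greedily by marginal utility per dollar.

    `interventions` is a list of (name, mu_per_dollar, room) tuples,
    where `room` is the maximum funding an intervention can absorb
    before a large drop in MU/$. Returns (allocations, total utility).
    """
    allocations = {}
    total_utility = 0.0
    # Fund interventions in descending order of MU/$.
    for name, mu, room in sorted(interventions, key=lambda t: -t[1]):
        spend = min(budget, room)
        if spend <= 0:
            break
        allocations[name] = spend
        total_utility += mu * spend
        budget -= spend
    return allocations, total_utility

# Hypothetical hits-based-development portfolio from the comment above.
interventions = [
    ("think tanks in developing countries", 1000, 100e6),
    ("academic research", 200, 50e6),
    ("charter cities", 100, 80e6),
]

allocs, utility = allocate(120e6, interventions)
print(allocs)
# With a $120M budget, the first $100M fully funds think tanks; the
# remaining $20M spills over to academic research at 5x lower MU/$.
```

The point of the sketch is that diminishing returns show up at the intervention level: once the $100M of room is filled, the marginal dollar drops from 1000 to 200 MU/$, even though the problem area as a whole remains huge.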
What empirical claims are baked into EA?
What’s a ‘trade’?
Our AI focus area is part of our longtermism-motivated portfolio of grants,[2] and we focus on AI alignment and AI governance grantmaking that seems especially helpful from a longtermist perspective. On the governance side, I sometimes refer to this longtermism-motivated subset of work as “transformative AI governance” for relative concreteness, but a more precise concept for this subset of work is “longtermist AI governance.”[3]
What work is “from a longtermist perspective” doing here? (This phrase is used 8 times in the article.) Is it that longtermists have a pure time preference of 0, while neartermists have one greater than 0, so longtermists care much more about extinction than neartermists do (because they care more about future generations)? On that reading, longtermist AI governance means focusing on extinction-level AI risks, while neartermist AI governance is about non-extinction AI risks (eg. racial discrimination in predicting recidivism).
If so, I think this is misleading. Neartermists also care a lot about extinction, because everyone dying is really bad.
Is there another interpretation that I’m missing? Eg. would neartermists and longtermists have different focuses within extinction-level AI risks?
More precisely, longtermists have zero pure time preference. They still discount for exogenous extinction risk and diminishing marginal utility.
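One standard way to make this precise is the Ramsey discount rate, extended with an extinction hazard (my notation, not the original comment’s):

```latex
% Social discount rate:
%   r = \delta + \eta g + \epsilon
% \delta   = pure time preference
% \eta     = elasticity of marginal utility of consumption
%            (captures diminishing marginal utility)
% g        = consumption growth rate
% \epsilon = exogenous extinction hazard rate
r = \delta + \eta g + \epsilon
```

A longtermist sets $\delta = 0$ but can still have $r > 0$ whenever $\eta g > 0$ or $\epsilon > 0$, which is exactly the point: zero pure time preference does not mean zero discounting overall.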
More generally, research isn’t magic. Hiring a researcher and having them work 9-5 is no guarantee of solving a problem. You write:
What empirical evidence is there that we can reliably impact the long run trajectory of humanity and how have similar efforts gone in the past? [...]
I think there needs to be much better research into how to make complex decisions despite high uncertainty.
Isn’t it obvious that allocating researcher hours to these questions would be a waste of money? Almost by definition, we can’t have good evidence that we can impact the long-run (ie. centuries) trajectory of humanity, because we haven’t been collecting data for that long. And making complex decisions under high uncertainty will always be incredibly difficult; in the best case scenario, more research might yield small improvements in decision-making.
Note that RCTs are still a minority in published academic research. I think Pritchett’s criticism is that NGOs have been dominated by randomistas; eg, even the International Growth Centre does a lot of RCTs, instead of following his preferred growth diagnostics approach.
Do you think economic growth is key to popular acceptance of longtermism, as increased wealth leads people to adopt post-materialist values?
Tweet-thread promoting Rotblat Day on Aug. 31, to commemorate the spirit of questioning whether a dangerous project should be continued.
I think it’s important to frame longtermism as a particular subset of EA. We should be EAs first and longtermists second. EA says to follow the importance-tractability-crowdedness framework and allocate funding to the most effective causes. This can mean funding longtermist interventions, if they are the most cost-effective. If longtermist interventions get a lot of funding and hit diminishing returns, then they won’t be the most cost-effective anymore. The ITC framework is more general than the longtermist framing of “focus on the long-term future”, and allows us to pivot as funding and tractability change.