even if we’re coming from a position that thinks they’re not the most effective causes
How do you interpret “most effective cause”? Is it “most effective given the current funding landscape”?
The EAecon Retreat will be a ~30-person retreat for facilitating connections between EA economists of all levels. [...] We are open to applications from advanced undergraduate, master’s, and early-stage Ph.D. students interested in EAeconomics who may not have yet been exposed to the area more in-depth.
So ‘all levels’ does not include late-stage or post-PhD economists?
Great to see this!
I don’t think the recent diff-in-diff literature is a huge issue here—you’re computing a linear approximation, which might be bad if the actual effect size isn’t linear, but this is just the usual issue with linear regression.
What is this referring to?
I don’t think that’s the bottleneck in economic development
I think it’s too simplistic to say there’s a single bottleneck.
such as economic classes for youngsters, or funding more economists in these countries, or sending experts from top universities to teach there, etc.
The latter two seem consistent with my proposal. Part of the problem is that there aren’t many economists in developing countries, hence the need to train more. And ASE does bring experts to teach at their campus.
My brief response: I think it’s bad form to move the discussion to the meta-level (ie. “your comments are too terse”) instead of directly discussing the object-level issues.
Instead of what you are suggesting in this ellipsis, it seems like a reasonable first pass perspective is given directly by the interview you quoted from. I think omitting this is unreasonable.
To be clear, you’re using the linguistic sense of ‘ellipsis’, and not the punctuation mark?
Also, in OLS and most variants, order doesn’t matter
See the Gelbach paper linked above.
Yes, but there are often many plausible sets of control variables that (hopefully) get you conditional independence. I find it easier to plot everything, with the understanding that some specifications are better than others.
Why wouldn’t FTX just refer this to the Global Health and Development Fund?
Did you run additional robustness checks? I like to see a multiverse analysis, aka specification curve (see here). This involves running all combinations of control variables, since the order in which the controls are added matters, and the authors could have selected only the significant ones. (See also.)
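To make the multiverse idea concrete, here's a minimal sketch on simulated data, assuming a setup with one treatment variable and four candidate controls (the column names, the data-generating process, and the use of statsmodels are my own choices for illustration, not anything from the paper):

```python
# Minimal specification-curve sketch on simulated data (illustrative only).
from itertools import combinations

import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame(rng.normal(size=(n, 4)), columns=["c1", "c2", "c3", "c4"])
df["treat"] = rng.integers(0, 2, size=n)
df["y"] = 0.5 * df["treat"] + df["c1"] - 0.3 * df["c2"] + rng.normal(size=n)

controls = ["c1", "c2", "c3", "c4"]
estimates = []
for k in range(len(controls) + 1):
    for subset in combinations(controls, k):
        X = sm.add_constant(df[["treat", *subset]])
        res = sm.OLS(df["y"], X).fit(cov_type="HC1")
        estimates.append(res.params["treat"])

# One point per specification: 2^4 = 16 subsets of controls.
plt.plot(sorted(estimates), marker="o")
plt.xlabel("specification (sorted by estimate)")
plt.ylabel("estimated treatment effect")
plt.show()
```

If the headline estimate only shows up in a small corner of that plot, that's the selective-reporting worry I had in mind.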
Thanks!
Here’s another framing: if you claim that asteroid detection saves 300K lives per $100, pandemic prevention saves 200M lives per $100, and GiveWell interventions save 0.025 lives per $100, isn’t it a bit odd to fund the latter?
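Spelling out the arithmetic, taking those figures at face value (they're the claims I'm questioning, not numbers I'm endorsing):

```python
# Implied cost per life saved, using the quoted lives-per-$100 figures as given.
lives_per_100_dollars = {
    "asteroid detection": 300_000,
    "pandemic prevention": 200_000_000,
    "GiveWell interventions": 0.025,
}
for cause, lives in lives_per_100_dollars.items():
    print(f"{cause}: ${100 / lives:.6g} per life saved")
# asteroid detection:     ~$0.0003 per life
# pandemic prevention:    ~$0.0000005 per life
# GiveWell interventions:  $4,000 per life
```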
Or: longtermists claim that what matters most is the very long-term effects of our actions. How is that being implemented here?
Casting everything into some longtermist/neartermist thing online seems unhealthy.
Longtermists make very strong claims (eg. “positively influencing the longterm future is *the* key moral priority of our time”). It seems healthy to follow up on those claims, and not sweep under the rug any seeming contradictions.
what does “unanimously” mean?
I chose that word to reflect Will’s statement that everyone at FTX was “totally on board”, in contrast to his expectations of an internal fight. Does that make sense?
FTX Future Fund says they support “ambitious projects to improve humanity’s long-term prospects”. Does it seem weird that they’re unanimously funding neartermist global health interventions like lead elimination?
Lead Exposure Elimination Project. [...] So I saw the talk, I made sure that Clare was applying to [FTX] Future Fund. And I was like, “OK, we’ve got to fund this.” And because the focus [at FTX] is longtermist giving, I was thinking maybe it’s going to be a bit of a fight internally. Then it came up in the Slack, and everyone was like, “Oh yeah, we’ve got to fund this.” So it was just easy. No brainer. Everyone was just totally on board.
Disappointed it’s not dedicated to asteroid risk.
However, we are trying to reward the creation and sharing of new work, so you may not submit work that has previously been published or posted publicly on the internet (e.g., on a blog, preprint server, or academic article). (source)
I posted my FTX application here, but it was for a specific project (launching a replication institute). I could write up a broader proposal for scientific reproducibility as a cause area. Would that be allowed?
other better reasons like population axiologies or tractability concerns
In that case, “how long into the future you’re willing to look” doesn’t seem to capture what’s going on, since ‘neartermists’ are equally willing to look into the future.
Continuing the aside: yes, you might split the marginal dollar because of uncertainty, like playing a mixed strategy. Alternatively, you might have strongly diminishing returns, so that you go all-in on one intervention for a certain amount of funding until the marginal EV drops below that of the next best intervention, at which point you switch to funding that one; this also results in diversification.
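Here's a toy version of that second mechanism; the two marginal-EV curves and all the numbers are made up purely to illustrate the shape of the argument:

```python
# Greedy allocation under diminishing returns (curves and numbers are made up).
BUDGET = 1_000   # total dollars to allocate
STEP = 1         # allocate one dollar at a time

# Marginal EV of the next dollar, given dollars already allocated to that intervention.
marginal_ev = {
    "A": lambda x: 10 / (1 + x / 100),  # starts higher, diminishes faster
    "B": lambda x: 6 / (1 + x / 500),   # starts lower, diminishes more slowly
}

allocation = {name: 0 for name in marginal_ev}
for _ in range(BUDGET // STEP):
    best = max(marginal_ev, key=lambda name: marginal_ev[name](allocation[name]))
    allocation[best] += STEP

print(allocation)
# The first ~$67 all go to A; once A's marginal EV falls to B's level,
# further dollars are split to keep the two roughly equal, i.e. diversification.
```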
I interpret this as using different discount rates (specifically, pure time preference, to distinguish from discounting for marginal utility or exogenous extinction risk). Is that right? That is, temporal radicalists have pure time preference = 0, while the others have pure time preference > 0.
Or do you mean something else by “how long into the future you’re willing to look”?
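For concreteness, here's the decomposition I have in mind, in standard Ramsey-style notation (the symbols are mine, not the article's):

```latex
% Discount rate \rho applied to future (consumption-equivalent) benefits:
%   \delta  : pure time preference
%   r       : exogenous extinction / catastrophe hazard
%   \eta g  : discounting because future people are richer (marginal utility of consumption)
\rho = \delta + r + \eta g
```

On that reading, 'temporal radicalists' set delta = 0 but can still have rho > 0 through the other two terms, while the others allow delta > 0 as well.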
What work is “from a longtermist perspective” doing here? (This phrase is used 8 times in the article.) Is it: longtermists have pure time preference = 0, while neartermists have >0, so longtermists care a lot more about extinction than neartermists do (because they care more about future generations). Hence, longtermist AI governance means focusing on extinction-level AI risks, while neartermist AI governance is about non-extinction AI risks (eg. racial discrimination in predicting recidivism).
If so, I think this is misleading. Neartermists also care a lot about extinction, because everyone dying is really bad.
Is there another interpretation that I’m missing? Eg. would neartermists and longtermists have different focuses within extinction-level AI risks?