FTX Future Fund says they support “ambitious projects to improve humanity’s long-term prospects”. Does it seem weird that they’re unanimously funding neartermist global health interventions like lead elimination?
Lead Exposure Elimination Project. [...] So I saw the talk, I made sure that Clare was applying to [FTX] Future Fund. And I was like, “OK, we’ve got to fund this.” And because the focus [at FTX] is longtermist giving, I was thinking maybe it’s going to be a bit of a fight internally. Then it came up in the Slack, and everyone was like, “Oh yeah, we’ve got to fund this.” So it was just easy. No brainer. Everyone was just totally on board.
LEEP is led by a very talented team of strong “neartermist” EAs.
In the real world and real EA, a lot of interest and grantmaking can depend on the team and its execution (especially given the funding situation). Very good work and good leaders are always valuable.
Casting everything into some longtermist/neartermist thing online seems unhealthy.
This particular comment seems poorly written (what does “unanimously” mean?) and seems to be pulling at some issue, but it just reads as though everyone likes MacAskill, everyone likes LEEP, and so they decided to make a move.
Here’s another framing: if you claim that asteroid detection saves 300K lives per $100, pandemic prevention saves 200M lives per $100, and GiveWell interventions save 0.025 lives per $100, isn’t it a bit odd to fund the latter?
Or: longtermists claim that what matters most is the very long term effects of our actions. How is that being implemented here?
Casting everything into some longtermist/neartermist thing online seems unhealthy.
Longtermists make very strong claims (e.g. “positively influencing the longterm future is *the* key moral priority of our time”). It seems healthy to follow up on those claims, and not to sweep any seeming contradictions under the rug.
what does “unanimously” mean?
I chose that word to reflect Will’s statement that everyone at FTX was “totally on board”, in contrast to his expectations of an internal fight. Does that make sense?
Why wouldn’t FTX just refer this to the Global Health and Development Fund?
My fanfiction (which is maybe “60% true”, and so has somewhat more signal than noise) is:
The EA fund you mentioned is basically GiveWell.
GiveWell has a sort of institutional momentum, related to its aesthetics around decisions and its conditions for funding, that makes bigger grants costly or harder. (Alternatively, the deeper reason here is that global health and development has a different neglectedness and history of public intervention than any other EA cause area, raising the bar, but elaborating too much will cause Khorton to hunt me down.)
In a way that doesn’t make GiveWell’s choices or institutional role wrong, MacAskill saw that LEEP was great and that there was an opportunity to fund it through his involvement in FTX.
So why FTX?
There’s a cheap answer I can make here about “grantmaker diversity”; however, I don’t fully believe it (or rather, I’m just clueless). For example, there might be some value in GiveWell having a say in deciding whether to scale up EA global health orgs, like they did with Fortify Health. (Not sure about this paragraph, I am sort of wildly LARPing.)
More importantly, this doesn’t answer your point about the “longtermist” FTX funding a “neartermist intervention”.
So, then, why FTX?
This pulls on another thread (or rather one that you pulled on in your other comment).
A part of the answer is that the FTX “team” believes there is some conjunction between certain cause areas, such as highly cost-effective health and development, and longtermism.
A big part of the answer is that this “conjunction” is sort of heavily influenced by the people involved (read: SBF and MacAskill). The issue with pulling on this thread is that this conjunctiveness isn’t perfectly EA canon, it’s hard to formalize, and the decisions involved probably put the senior EA figures involved into too much focus or authority (more than anyone, including themselves, wants).
I want to remind anyone reading this comment that this is fanfiction that is only “60% true”.
I wrote the above comment because I feel like no one else will.
I feel that some of your comments are stilted and choose content in a way that invites interpretations that are confrontational and overbearing, making them too difficult to answer. I view this as a form of bad rhetoric (sort of created by bad forum norms that have produced other pathologies), and it doesn’t lend itself to truth or good discussion.
To be specific, when you say,
FTX Future Fund says they support “ambitious projects to improve humanity’s long-term prospects”. Does it seem weird that they’re unanimously funding neartermist global health interventions like lead elimination?
and
Here’s another framing: if you claim that asteroid detection saves 300K lives per $100, pandemic prevention saves 200M lives per $100, and GiveWell interventions save 0.025 lives per $100, isn’t it a bit odd to fund the latter?
This is terse and omits a lot.
A short, direct read of your comments is that you are implying that “MacAskill has clearly delineated the cost effectiveness of all EA cause areas/interventions and has ranked certain x-risks as the only principled cost-effective ones” and “MacAskill is violating his own ranking of cost-effective interventions”.
Instead of what you are suggesting by this ellipsis, a reasonable first-pass perspective is given directly by the interview you quoted from. I think omitting this is unreasonable.
To be specific, MacAskill is saying in the interview:
Will MacAskill: That’s just amazing, what quick turnaround to impact, doing this thing that’s just very clearly, very broadly making the world better. So in terms of things that get me up in the morning and make me excited to be part of this community, learning about that project is definitely one of them.
Will MacAskill: I think one reason I just love stuff like this, just for the EA community as a whole, the value of getting concrete wins is just really high. And you can imagine a community that is entirely focused on movement building and technical AI safety.
Will MacAskill: [laughs] One could imagine. I mean, obviously those are big parts of the EA community. Well, if the EA community was all of that, it’s like, are you actually doing anything? It is really helpful in terms of just the health of the overall community and culture of the community to be doing many things that are concretely, demonstrably making the world better. And I think there’s a misunderstanding that people often have of core longtermist thought, where you might think — and certainly on the basis of what people tend to say, at least in writing when talking in the abstract — “Oh, you just think everyone should work on AI safety or AI risk, and if not, then bio, and then nothing else really matters.”
Will MacAskill: It’s pretty striking that when you actually ask people and get them making decisions, they’re interested in a way broader variety of things, often in ways that are not that predictable necessarily from what they’ve just said in writing. Like the case of Lead Exposure Elimination Project. One thing that’s funny is EAs and names: there’s just always the most literal names. The Shrimp Welfare Project.
Will MacAskill: And why is that? Well, it’s because there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear. Whereas something in this broad longtermist area — like reducing people’s exposure to lead, improving brain and other health development — especially if it’s like, “We’re actually making real concrete progress on this, on really quite a small budget as well,” that just looks really good. We can just fund this and it’s no downside as well. And I think that’s something that people might not appreciate: just how much that sort of work is valued, even by the most hardcore longtermists.
Rob Wiblin: Yeah. I think that the level of intuitive, emotional enthusiasm that people have about these things as well would actually really surprise folks who have the impression that, if you talk to you or me, we’re just like “AI or bust” or something like that.
So, without agreeing or disagreeing with him, MacAskill is saying there is real value to the EA community from these interventions, in several ways. (At the risk of being stilted myself here, maybe you could call this “flow-through effects”, good “PR”, or just “healthy for the EA soul”.)
MacAskill can be right or wrong here, but this isn’t mentioned at all in the thread you raised.
(Yes, there are some issues with MacAskill’s reasoning, but it’s less that he’s wrong and more that it’s just a big, awkward thread to pull on, as mentioned in my comment above.)
I want to emphasize that I personally don’t mind the aggressiveness, the poking at things.
However, the terseness, combined with the lack of context and not addressing the heart of the matter, is what is overbearing.
The ellipsis here is malign, especially combined with the headspace needed to address all of the threads being pulled at (resulting in this giant two-part comment).
For example, this allows the presenter to pretend that they never made the implication, and then rake the respondent through their lengthy reply.
Many people, looking ahead at all of this, won’t answer the (implied) questions. As a result, the presenter can then stand with his pregnant, implied criticisms. This isn’t good for discourse.
Yes, it sounds like MacAskill’s motivation is about PR and community health (“getting people out of bed in the morning”). I think it’s important to note when we’re funding things because of direct expected value, vs these indirect effects.
I think what you wrote is a fair take.
Just to be clear, I’m pretty sure the idea “The non-longtermist interventions are just community health and PR” is impractical and will be wobbly (a long-term weakness) because:
The people leading these projects (and their large communities), who are substantial EA talent, won’t at all accept the idea that they are window dressing or there to make longtermists feel good.
Many would find that a slur, and that’s not the healthiest thing to propagate from a community cohesion standpoint.
Even if the “indirect effects” model is mostly correct, it’s unclear at best who gets to decide which neartermist project is a “look/feel-good project” that EA should fund, and this is problematic.
Basically, as a lowly peasant, IMO, I’m OK with MacAskill and Holden deciding this, because there is more information about the faculties of these people and how they think, and they seem pretty reasonable.
But having this perspective and decision-making apparatus seems wonky. Like, will neartermist leaders just spend a lot of their time pitching and analyzing flow-through effects?
$1B a year (to GiveWell) seems large for PR and community health, especially since the spend on EA human capital from those funds is lower than in other cause areas.
To get a sense of the problems, this post here is centered entirely around the anomaly of EA vegan diets, which they correctly point out doesn’t pass a literal cost-effectiveness test. They then spend the rest of the post drawing on this to promote their alternate cause area.
I think you can see how this would be problematic and self-defeating if EAs actually used this particular theory of change to fund interventions.
So I think drawing the straight line here, that these interventions are just community health and PR, is stilted and probably bad.
MacAskill is making the point that these interventions have value that longtermists recognize, and that longtermists love this stuff in a very positive, emotional sense that everyone can relate to.
I’m really doubtful (but I didn’t read the whole interview) that MacAskill believes that the main model of funding these interventions should be the instrumental utility in some narrow sense of PR or emotions.
My brief response: I think it’s bad form to move the discussion to the meta-level (i.e. “your comments are too terse”) instead of directly discussing the object-level issues.
Can this really be your complete response to my direct, fulsome answer to your question, which you have asked several times?
For example, can you explain why my lengthy comment isn’t a direct, object-level response?
Much of my second comment, too, points out that you omitted MacAskill expressly answering why he supported funding LEEP, which is another object-level response.
To be clear, I accuse you of engaging in bad-faith rhetoric in your above comment and your last response, with an evasion that I specifically anticipated (“this allows the presenter to pretend that they never made the implication, and then rake the respondent through their lengthy reply”).
Here are some previous comments of yours that are more direct, that do not use the same patterns you are now using, and where your views and attitudes are clearer.
Is this not laughable? How could anyone think that “looking at the 1000+ year effects of an action” is workable?
Strong waterism: dying of thirst is very bad, because it prevents all of the positive contributions you could make in your life. Therefore, the most important feature of our actions today is their impact on the stockpile of potable water.
If you had just kept it to this longtermism/neartermism online thing (and drafted on the sentiment from one of the factions there), that would be OK.
This seems bad because I suspect you are entering into unrelated, technical discussions (for example, in economics) using some of the same rhetorical patterns, which I view as pretty bad, especially as it’s sort of flying under the radar.
To be clear, you’re using the linguistic sense of ‘ellipsis’, and not the punctuation mark?
Yes, that is correct, I am using the linguistic sense, similar to “implication” or “suggestion”.