I wrote the above comment because I feel like no one else will.
I feel that some of your comments are stilted and select content in a way that invites confrontational, overbearing interpretations, making them too difficult to answer. I view this as a form of bad rhetoric (partly created by bad forum norms that have produced other pathologies), and it doesn't lend itself to truth or good discussion.
To be specific, when you say,
FTX Future Fund says they support “ambitious projects to improve humanity’s long-term prospects”. Does it seem weird that they’re unanimously funding neartermist global health interventions like lead elimination?
and
Here’s another framing: if you claim that asteroid detection saves 300K lives per $100, pandemic prevention saves 200M lives per $100, and GiveWell interventions save 0.025 lives per $100, isn’t it a bit odd to fund the latter?
These are terse and omit a lot.
A short, direct reading of your comments is that you are implying "MacAskill has clearly delineated the cost-effectiveness of all EA cause areas/interventions and has ranked certain x-risks as the only principled cost-effective ones" and "MacAskill is violating his own ranking of cost-effective interventions".
Instead of what you are suggesting by this ellipsis, a reasonable first-pass perspective is given directly in the interview you quoted. I think omitting this is unreasonable.
To be specific, MacAskill is saying in the interview:
Will MacAskill: That’s just amazing, what quick turnaround to impact, doing this thing that’s just very clearly, very broadly making the world better. So in terms of things that get me up in the morning and make me excited to be part of this community, learning about that project is definitely one of them.
Will MacAskill: I think one reason I just love stuff like this, just for the EA community as a whole, the value of getting concrete wins is just really high. And you can imagine a community that is entirely focused on movement building and technical AI safety.
Will MacAskill: [laughs] One could imagine. I mean, obviously those are big parts of the EA community. Well, if the EA community was all of that, it’s like, are you actually doing anything? It is really helpful in terms of just the health of the overall community and culture of the community to be doing many things that are concretely, demonstrably making the world better. And I think there’s a misunderstanding that people often have of core longtermist thought, where you might think — and certainly on the basis of what people tend to say, at least in writing when talking in the abstract — “Oh, you just think everyone should work on AI safety or AI risk, and if not, then bio, and then nothing else really matters.”
Will MacAskill: It’s pretty striking that when you actually ask people and get them making decisions, they’re interested in a way broader variety of things, often in ways that are not that predictable necessarily from what they’ve just said in writing. Like the case of Lead Exposure Elimination Project. One thing that’s funny is EAs and names: there’s just always the most literal names. The Shrimp Welfare Project.
Will MacAskill: And why is that? Well, it’s because there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear. Whereas something in this broad longtermist area — like reducing people’s exposure to lead, improving brain and other health development — especially if it’s like, “We’re actually making real concrete progress on this, on really quite a small budget as well,” that just looks really good. We can just fund this and it’s no downside as well. And I think that’s something that people might not appreciate: just how much that sort of work is valued, even by the most hardcore longtermists.
Rob Wiblin: Yeah. I think that the level of intuitive, emotional enthusiasm that people have about these things as well would actually really surprise folks who have the impression that, if you talk to you or me, we’re just like “AI or bust” or something like that.
So, without agreeing or disagreeing with him: MacAskill is saying there is real value to the EA community in these interventions, in several ways. (At the risk of being stilted myself, you might call this "flow-through effects", good "PR", or just "healthy for the EA soul".)
MacAskill can be right or wrong here, but none of this is mentioned in the thread you raised.
(Yes, there are some issues with MacAskill's reasoning, but it's less that he's wrong and more that this is a big, awkward thread to pull on, as mentioned in my comment above.)
I want to emphasize that I personally don't mind the aggressiveness or the poking at things.
However, the terseness, combined with the lack of context and the failure to address the heart of the matter, is what is overbearing.
The ellipsis here is malign, especially combined with the headspace needed to address all of the threads being pulled at (resulting in this giant two-part comment).
For example, it allows the presenter to pretend that they never made the implication, and then rake the respondent through their lengthy reply.
Many people, seeing all of this ahead of them, won't answer the (implied) questions. As a result, the presenter can stand on his pregnant, implied criticisms. This isn't good for discourse.
Yes, it sounds like MacAskill's motivation is about PR and community health ("getting people out of bed in the morning"). I think it's important to note when we're funding things because of direct expected value versus these indirect effects.
Just to be clear, I'm pretty sure the idea that "the non-longtermist interventions are just community health and PR" is impractical and will be wobbly (a long-term weakness), because:
The people leading these projects (and their large communities), who represent substantial EA talent, won't accept the idea that they are window dressing, or that they exist to make longtermists feel good.
Many would find that a slur, and that isn't healthy to propagate from a community-cohesion standpoint.
Even if the "indirect effects" model is mostly correct, it's unclear at best who gets to decide which neartermist projects are "look/feel-good projects" that EA should fund, and this is problematic.
Basically, as a lowly peasant, IMO, I'm OK with MacAskill and Holden deciding this, because I think there is more information about the faculties of these people and how they think, and they seem pretty reasonable.
But having this perspective and decision-making apparatus seems wonky. Like, will neartermist leaders just spend a lot of their time pitching and analyzing flow-through effects?
$1B a year (to GiveWell) seems large for PR and community health, especially since the spend on EA human capital from those funds is lower than in other cause areas.
To get a sense of the problems, this post here is centered entirely around the anomaly of EA vegan diets, which the author correctly points out don't pass a literal cost-effectiveness test. They then spend the rest of the post drawing on this to promote their alternative cause area.
I think you can see how this would be problematic and self-defeating if EAs actually used this particular theory of change to fund interventions.
So I think drawing a straight line here, concluding that these interventions are just community health and PR, is stilted and probably bad.
MacAskill is making the point that these interventions have value that longtermists recognize, and that longtermists love this stuff in a very positive, emotional sense that everyone can relate to.
I'm really doubtful (though I didn't read the whole interview) that MacAskill believes the main model for funding these interventions should be their instrumental utility in some narrow sense of PR or emotions.
My brief response: I think it’s bad form to move the discussion to the meta-level (ie. “your comments are too terse”) instead of directly discussing the object-level issues.
Can this really be your complete response to my direct, thorough answer to your question, which you have asked several times?
For example, can you explain why my lengthy comment isn't a direct, object-level response?
Even much of my second comment points out that you omitted MacAskill expressly answering why he supported funding LEEP, which is another object-level response.
To be clear, I accuse you of engaging in bad-faith rhetoric in your above comment and your last response, with an evasion that I specifically anticipated ("it allows the presenter to pretend that they never made the implication, and then rake the respondent through their lengthy reply").
Here are some previous comments of yours that are more direct and do not use the patterns you are now using, where your views and attitudes are clearer.
Is this not laughable? How could anyone think that “looking at the 1000+ year effects of an action” is workable?
Strong waterism: dying of thirst is very bad, because it prevents all of the positive contributions you could make in your life. Therefore, the most important feature of our actions today is their impact on the stockpile of potable water.
If you had just kept this to the online longtermism/neartermism debate (and drafted on the sentiment from one of the factions there), that would be OK.
This seems bad because I suspect you are entering unrelated, technical discussions, for example in economics, using some of the same rhetorical patterns, which I view as pretty bad, especially as it's sort of flying under the radar.
Yes, it sounds like MacAskill’s motivation is about PR and community health (“getting people out of bed in the morning”). I think it’s important to note when we’re funding things because of direct expected value, vs these indirect effects.
I think what you wrote is a fair take.
To be clear, you’re using the linguistic sense of ‘ellipsis’, and not the punctuation mark?
Yes, that is correct; I am using the linguistic sense, similar to "implication" or "suggestion".