I live for a high disagree-to-upvote ratio
huw
I am surprised at this, only because I remember the Gulf states were quite keen on bringing production into their countries, and I would’ve thought they’d have declared it halal sooner!
(Yep, I’m not having a go at the mission here, more at the nuances of measurement)
Small drive-by question for you: In your opinion, if C. elegans is conscious and has some moral significance, and suppose we could hypothetically train artificial neural networks to simulate a C. elegans, would the resulting simulation have moral significance?
If so, what other consequences flow from this—do image recognition networks running on my phone have moral significance? Do LLMs? Are we already torturing billions of digital minds?
If not, what special sauce does C. elegans have that an artificial neural network does not? (If you’re not sure, where do you think it might lie?)
(Asking out of genuine curiosity—haven’t had a lot of time to interface with this stuff)
I guess I don’t find your conclusion intuitive. I’m sure there are a range of preference questions you could ask these extreme sufferers. For example, whether they, at a 5⁄10 life satisfaction, would trade places with someone in a low-income country with a life satisfaction of 2⁄10 who does not have their condition.
If you believe that they would make this trade, then surely there is something that their life satisfaction score is simply failing to capture
If you believe that they wouldn’t make this trade, then either that preference game isn’t eliciting some true value of suffering, or otherwise, why should we allocate hypothetical marginal dollars to their suffering and not that of those with lower life satisfaction?
My hunch is that the former is true, that there is something you can elicit from these people that isn’t being captured in the Cantril Ladder. (In my work, we’ve found the Cantril Ladder to be unreliable in other ways). But on the other side of this, I do worry about rejecting people’s own accounts of their experiences—it may literally be true that these people are somewhat happy with their lives, and that we should focus our resources on those who report that they aren’t!
I take this as an indicator that we need to work harder to demonstrate that global mental health is a cause area worth investing in :)
Thank you!
Do you think it’s valuable to specifically measure negative affect instead of overall affect, in this case? Or would overall affect suffice?
Anthropic are now offering Claude for up to 75% off for Goodstack-eligible non-profits :)
Why do you think people who suffer so frequently and deeply rate their life satisfaction relatively highly?
(My best sense is some combination of:
The Cantril Ladder is a remembered rather than experiential measure; if we just captured the area under the curve of their hedonic states, we would see a much lower value
The Cantril Ladder asks respondents to anchor on ‘the best life for you’, and respondents may not be able to imagine a life for themselves without their suffering
Suicide removes those who are most likely to view their lives as not worth living, thereby selecting for optimists
The Cantril Ladder is on a scale of 0–10, which respondents perceive as linear, but extreme suffering is exponentially worse than mild suffering
It’s not clear where the ‘life not worth living’ point actually is; it may genuinely be around 4 points, in which case these people actually are reporting living awful lives)
Like, I can’t see a reason why wellbeing measures shouldn’t, in theory, capture these extremely negative states.
Go Samantha!!!!
I was born in Sydney, but this is like, a minor part of the reason I’ve decided to stay here for the time being. However, there aren’t a lot of working EAs here, especially not in global health.
However, I make up for this by travelling long-term for big parts of the year. I spend a fair chunk of time in country or doing long stints in London during their summer (which is quite nice). You could pick a top EA hub to live in, just spend summer there, and travel the rest of the time—or live somewhere nice, and travel to the EA hub for a few months.
Alternatively, you could move to a nice city near a lot of EA hubs and with easy transport options. I’ve considered Barcelona for this purpose, since it’s a day’s train from London and 2 days from Berlin, but has great weather year-round. I know a few European digital nomads tend to be based around the Côte d’Azur for this reason (independently of EA).
The ‘deportation abundance’ guy, Charles Lehman, was not merely associating abundance with deportations in a stray tweet: he was a speaker on a panel at Abundance 2025. He himself claims not to be associated with the abundance coalition.
(I’m not taking a position here on whether I think Abundance 2025 should have invited speakers it explicitly disagrees with, or whether my impression is that Abundance 2025 endorses or disendorses his views—just correcting you on that specific point)
I’ll look at this properly later but just wanted to confirm that I got it wrong about WelcomeFest. I’d read a tweet about Open Philanthropy sponsoring Abundance 2025 around the same time WelcomeFest was happening, and conflated the two due to having similar speakers and an explicit pro-abundance position.
I’ve been meaning to write a longer post about my concerns with this cause area, including the high levels of political risk it exposes the EA movement to, and why we should be wary of that post-FTX. For example, I think it was unwise to sponsor a conference which invited a guy championing ‘deportation abundance’.
And that’s not even the most controversial conference they sponsored this year (the author here has already formed an association with effective altruism, though thankfully didn’t notice who funded the conference). (I will get to the rest of Yarrow’s comment later, but this was a bad memory of mine; I had read that Open Philanthropy were going to fund the Abundance festival well before it happened, and assumed they had funded WelcomeFest due to sharing speakers and the abundance ideology.) (I don’t have this critique fully formed enough to share it on the forum in much more detail.)
Really excited about this—having chatted extensively with Oli and Rowan, they mean business! Also quite excited to see lung health take off more and more as a cause area, could be the new lead, ya never know :)
Just to add additional context from the email attendees received last week (I am refraining from passing judgment either way, only to note that the organising team are taking additional precautions):
The venue is easily accessible via three nearby metro stations — Khan Market (Violet Line) and Lok Kalyan Marg or Jor Bagh (Yellow Line). Staying along either of these lines will likely save you a transfer. Given the air quality, we recommend avoiding autos and bikes.
Check the Air Quality Index (AQI) on Google Maps when choosing your stay; some areas will have better air quality than others.
We will provide N95 anti-pollution masks, but if you have allergies or have high sensitivity to air quality, please do carry your own precautions. Some people feel dryness in the eyes as well, so if this happens to you, having eye drops handy would be good.
If you feel that the pollution may severely affect your health, please let our team know—we can help make arrangements. If it’s still not feasible, it may be best not to attend, as your health comes first.
And on the linked FAQ in that email:
There is quite a high likelihood of severe air pollution in Delhi, especially during winter months (October to February) due to intense smog, dust, vehicle pollution, and cold weather.
At the conference, we’re making arrangements like having air purifiers at the venue to minimise impact on health, but we also encourage you to bring N95 masks if this is something that you’re concerned might affect you, such as if you have pre-existing medical or respiratory conditions. We also suggest taking precautions such as avoiding exercise first thing in the morning and in the evening (when the air quality is at its worst).
For those with serious respiratory conditions, we ask that you consider any health risks before deciding to attend EAGxIndia 2025 in Delhi.
I’m driving by on this; I know little about the space. What kinds of efforts have there been to blanket a small area (perhaps a local government) in good purifiers, far-UVC, etc. (thinking about passive technologies only, so not things like masking)? If such efforts existed, surely they would cause measurable changes in productivity by decreasing common colds, flus, etc., and this would essentially make the case for them as an economic investment? Or am I way off the mark here?
EA jobs pay great if you’re from the global south—and yet!
(This discussion seems to be anchoring on diversity as it’s practiced in wealthy economies, which I don’t think necessarily has to be the main way of making EA diverse)
I mostly just want to join the chorus of people welcoming you here and repudiate the negative reaction that a very reasonable question is getting. It’s worth adding three things:
The demographic balance is improving over time, with new EAs in 2023 and 2024 being substantially more diverse.
Anecdotally, the EA forum skews whiter, more male, and more Bay Area. I personally feel that the forum is increasingly out of touch with ‘EA on the ground’, especially in cause areas such as global health. Implementing EAs very rarely post here, unfortunately. Don’t take any reaction here as representative of broader EA.
Anecdotally (and I believe I’ve seen some stats to back this up), some cause areas are more diverse than others. Global health and animal welfare seem to be substantially more diverse, specifically.
A few points:
There is still a lot of progress to be made in low-income country psychotherapy, which I think many EAs find counterintuitive. StrongMinds and Friendship Bench could both be about 5× cheaper, and have found ways to get substantially cheaper every year for the past half decade or so. At Kaya Guides, we’re exploring further improvements and should share more soon.
Plausibly, you could double cost-effectiveness again if it were possible to replace human counsellors with AI in a way that maintained retention (the jury is still out here)
The Happier Lives Institute has been looking at these kinds of interventions; their Promising Charities Pure Earth and Taimaka both appear to improve long-run mental health sustainably, by treating lead poisoning and malnutrition.