I live for a high disagree-to-upvote ratio
huw
This is clearly an outstanding issue for a non-negligible proportion of the community. It doesn’t matter if some people consider the issue closed, or the investigation superfluous; this investigation would bring that closure to the rest of EA. Everyone here should be interested in the unity that would come from this.
What beings are inside and outside of your moral circle these days? If your views (e.g. on insects) have meaningfully changed recently, why?
With those criteria, you would be extremely hard-pressed to find any global health charities that avoid the meat-eater problem (or, for that matter, any GCR charities, since those would save the lives of rich people).
However, I would suggest that a focus on culturally vegetarian countries such as India could still meet those criteria. Kaya Guides currently operates there.
I looked into this a number of years ago, and it doesn’t seem like Founders Pledge’s methodology has changed since then. You can read their Cause Area Report for more depth, but the primary metric they rate on is tonnes of CO2-equivalent pollutants averted per year per US dollar (CO2-equivalence uses simple weights to compare different greenhouse gases, such as methane, to CO2). They have reasonably strong estimates per charity: in 2018, the Clean Air Task Force and the Coalition for Rainforest Nations came out ahead, with the proviso that this extrapolated past performance into the future, which isn’t a given for lobbying organisations.
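For intuition, CO2-equivalence is just a weighted sum over gases, where each gas’s weight is its global warming potential (GWP). A minimal sketch — the GWP100 values approximate the standard IPCC AR5 figures, and the emissions numbers are made up for illustration:

```python
# Convert a basket of greenhouse-gas emissions into tonnes of CO2-equivalent
# using 100-year global warming potentials (approximate IPCC AR5 values).
GWP100 = {
    "co2": 1,    # baseline
    "ch4": 28,   # methane traps far more heat per tonne than CO2
    "n2o": 265,  # nitrous oxide
}

def co2_equivalent(emissions_tonnes: dict) -> float:
    """Weighted sum of per-gas emissions (tonnes) -> tonnes of CO2e."""
    return sum(GWP100[gas] * tonnes for gas, tonnes in emissions_tonnes.items())

# Hypothetical charity intervention: averts 100 t of CO2 and 10 t of methane
print(co2_equivalent({"co2": 100, "ch4": 10}))  # 100*1 + 10*28 = 380
```

Dividing a figure like this by the dollars spent gives the tonnes-averted-per-dollar metric described above.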
I agree that GWWC could use more depth here, and at the same time I tend to agree that they’re right to recommend Founders Pledge first.
Plenty of mental health charities are likely to directly reduce human suffering for people whose lives they don’t save. It’s less clear how many lives they directly save (some screen out suicidal participants completely), but we know that the number of suicides is relatively low in most countries (India records around 200k suicides per year out of a population of 1.4b, roughly 14 per 100,000).
EA mental health charities (in LMICs) include StrongMinds, Vida Plena, and Kaya Guides.
Microsoft have backed out of their OpenAI board observer seat, and Apple will refuse a rumoured seat, both in response to antitrust threats from US regulators, per Reuters.
I don’t know how to parse this—I think it’s likely that the US regulators don’t care much about safety in this decision, and nor do I think it meaningfully changes Microsoft’s power over the firm. Apple’s rumoured seat was interesting, but unlikely to have any bearing either.
Piggybacking off this:
How do you evaluate an opportunity once you have it?
How do you decide whether to invite Jimmy to make something a main channel video vs a Beast Philanthropy video?
What kinds of charities perform the best in terms of views? What about funds raised? (Do either of these metrics influence what charities you pick?)
Sorry, could you explain why ‘many people in the community think this is a necessary first step’, or provide a link? I must’ve missed that one, and it sounds surprising to me that outright repealing it (or, in the case of the GOP’s platform, replacing it with nothing) would be desirable.
Feels like we’re talking past each other—I was explaining why a government might want to behave this way because I felt like it was missing from the discussion. Specifically, I think at the very least that reasonable people can disagree on whether a government with the goal of minimising suffering would paradoxically take longer to develop & test a vaccine (and also I wanted to suggest that the evidence is consistent with the German government having this in mind when shutting it down, rather than, say, the desire to establish their authority for its own sake).
I didn’t pass comment on whether it’s morally justified; that depends on your conception of personal liberty (which we clearly disagree on, but I doubt I’m gonna persuade you here).
This argument ignores that ‘serious’ COVID vaccine candidates were available and beginning human trials in March 2020 (some of which became the vaccines you and I probably have in our systems). The counterfactual world still never develops this vaccine; one in which governments were willing to take and push more risks would have just hastened the existing trials rather than encouraging new people to jump into the game.
Even so, given the rich and voluminous history of people that have used ‘vaccines’ to sterilise/kill/deceive minority populations, there’s just no way to trust a random savant who expressly dislikes immigrants to inject anything into you (especially if you’re not in his ingroup). So I also think this story misunderstands the causes of vaccine hesitancy & why a government would prefer—no, require—a vaccine to go through established, non-partisan, accountable organisations & multiple rounds of safety trials.
Having QURI’s code in open source forms explicitly helped me improve Squiggle’s Observable integration & then develop my own smaller subset of Squiggle, so even though I didn’t fork & deploy your code it was super helpful for debugging & adapting!
Co-founder Daniel Gross’ thoughts on AI safety are, at best, unclear beyond this statement. Here is an article he wrote a year ago: The Climate Justice of AI Safety, and he’s also appeared on the Stratechery podcast a few times and spoken about AI safety once or twice. In this space, he’s best known as an investor, including in Leopold Aschenbrenner’s fund.
I think it would be good for Daniel Gross & Daniel Levy to clarify their positions on AI safety, and what exactly ‘commercial pressure’ means (do they just care about short-term pressure and intend to profit immensely from AGI?).
(Disclosure: I received a ~$10k grant from Daniel in 2019 that was AI-related)
I don’t understand why we should trust Ilya after he played a very significant role in legitimising Sam’s return to OpenAI. If he had not endorsed this, the board’s resolve would’ve been a lot stronger. So I find it hard to believe him when he says ‘we will not bend to commercial pressures’, as in some sense, this is literally what he did.
The best meta-analysis of deterioration (i.e. negative effects) rates in guided self-help (k = 18, N = 2,079) found that deterioration was lower in the intervention condition, although they did find a moderating effect whereby participants with low education didn’t see this decrease in deterioration rates (but nor did they see an increase)[1].
So, on balance, I think it’s very unlikely that any of the dropped-out participants were worse off for having tried the programme, especially since the counterfactual in low-income countries is almost always no treatment. Given that your interest is top-line cost-effectiveness, only counting completed participants in effect size estimates likely underestimates cost-effectiveness if anything, since churned participants would be estimated at 0.
1. Ebert, D. D. et al. (2016) Does Internet-based guided self-help for depression cause harm? An individual participant data meta-analysis on deterioration rates and its moderators in randomized controlled trials, Psychological Medicine, vol. 46, pp. 2679–2693.
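To illustrate the accounting point above with entirely hypothetical numbers: if the programme’s costs cover everyone enrolled but dropouts are scored as zero effect, the resulting cost-per-unit-of-effect figure is an upper bound — any real benefit to dropouts only makes the true number better.

```python
# Hypothetical illustration: scoring dropouts as zero effect yields a
# conservative cost-effectiveness estimate (all numbers invented).
enrolled = 1000
completed = 600
cost_per_participant = 50.0   # the programme pays for everyone enrolled
effect_completers = 0.5       # e.g. standardised improvement per completer

total_cost = enrolled * cost_per_participant  # 50,000

# Conservative accounting: dropouts contribute 0 effect
conservative_effect = completed * effect_completers  # 300 units

# If dropouts actually received even a small partial benefit...
partial_effect = conservative_effect + (enrolled - completed) * 0.1  # 340 units

print(total_cost / conservative_effect)  # ~166.7 dollars per unit (upper bound)
print(total_cost / partial_effect)       # ~147.1 dollars per unit (better)
```

In other words, the zero-effect assumption for churned participants can only overstate cost per unit of effect, never understate it.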
Specifically on the cited RCTs: the Step-by-Step intervention was specifically designed to be adaptable across multiple countries & cultures[1][2][3][4][5]. Although they initially focused on displaced Syrians, they have also expanded to locals in Lebanon across multiple studies[6][7][8] and found no statistically significant differences in effect sizes[8:1] (the latter is one of the studies cited in the OP). Given this, I would by default be surprised if the intervention, when adapted, failed to produce similar results in new contexts.
1. Carswell, Kenneth et al. (2018) Step-by-Step: a new WHO digital mental health intervention for depression, mHealth, vol. 4, p. 34.
2. Sijbrandij, Marit et al. (2017) Strengthening mental health care systems for Syrian refugees in Europe and the Middle East: integrating scalable psychological interventions in eight countries, European Journal of Psychotraumatology, vol. 8, p. 1388102.
3. Burchert, Sebastian et al. (2019) User-Centered App Adaptation of a Low-Intensity E-Mental Health Intervention for Syrian Refugees, Frontiers in Psychiatry, vol. 9, p. 663.
4. Abi Ramia, J. et al. (2018) Community cognitive interviewing to inform local adaptations of an e-mental health intervention in Lebanon, Global Mental Health, vol. 5, p. e39.
5. Woodward, Aniek et al. (2023) Scalability of digital psychological innovations for refugees: A comparative analysis in Egypt, Germany, and Sweden, SSM—Mental Health, vol. 4, p. 100231.
6. Cuijpers, Pim et al. (2022) Guided digital health intervention for depression in Lebanon: randomised trial, Evidence Based Mental Health, vol. 25, pp. e34–e40.
7. Abi Ramia, Jinane et al. (2024) Feasibility and uptake of a digital mental health intervention for depression among Lebanese and Syrian displaced people in Lebanon: a qualitative study, Frontiers in Public Health, vol. 11, p. 1293187.
8. Heim, Eva et al. (2021) Step-by-step: Feasibility randomised controlled trial of a mobile-based intervention for depression among populations affected by adversity in Lebanon, Internet Interventions, vol. 24, p. 100380.
For those who are not deep China nerds but want a somewhat approachable lowdown, I can highly recommend Bill Bishop’s newsletter Sinocism (enough free issues to be worthwhile) and his podcast Sharp China (the latter is a bit more approachable but requires a subscription to Stratechery).
I’m not a China expert so I won’t make strong claims, but I generally agree that we should not treat China as an unknowable, evil adversary who has exactly the same imperial desires as ‘the west’ or past non-Western regimes. I think it was irresponsible of Aschenbrenner to assume this without better research & understanding, since so much of his argument relies on China behaving in a particular way.
❤️ I do wanna add that every interaction I had with you, Rachel, Saul, and all staff & volunteers was overwhelmingly positive, and I’d love to hang again IRL :) Were it not for the issue at hand, I would’ve also rated Manifest an 8–9 on my feedback form; you put on one hell of an event! I also appreciate your openness to feedback; there’s no way I would’ve posted publicly under my real name if I felt like I would get any grief or repercussions for it—that’s rare. (I don’t think I have much else persuasive to say on the main topic)
I guess I am trying to elucidate that the paradox of intolerance applies to this kind of extreme openness/transparency. The more open Manifest is to offensive, incorrect, and harmful ideas, the less of any other kinds of ideas it will attract. I don’t think there is an effective way to signpost that openness without losing the rest of their audience; nobody but scientific racists would go to a conference that signposted ‘it’s acceptable to be scientifically racist here’.
Anyway. It’s obviously their prerogative to host such a conference if they want. But it is equally up to EA to decide where to draw the line in its own best interests. If that line isn’t an outright intolerance of scientific racism and eugenics, I don’t think EA will be able to draw in enough new members to survive.
I was at Manifest as a volunteer, and I also saw much of the same behaviour as you. If I had known scientific racism or eugenics were acceptable topics of conversation there, I wouldn’t have gone. I’m increasingly glad I decided not to organise a talk.
EA needs to recognise that even associating with scientific racists and eugenicists turns away many of the kinds of bright, kind, ambitious people the movement needs. I am exhausted at having to tell people I am an EA ‘but not one of those ones’. If the movement truly values diversity of views, we should value the people we’re turning away just as much.
Edit: David Thorstad levelled a very good criticism of this comment, which I fully endorse & agree with. I did write this strategically to be persuasive in the forum context, at the cost of expressing my stronger beliefs that scientific racism & eugenics are factually & morally wrong over and above just being reputational or strategic concerns for EA.
Congrats! These are great results and it looks like you’re scaling really well for a very early-stage org :)
I’m curious about that 9-point PHQ-9 reduction goal. How did you decide on it? Do you think it’s achievable (especially since you saw a much larger reduction in your pilot)? Why do you think you saw such a large difference in reductions between the pilot and now? And finally, do you think focusing on increasing effect size will take effort away from cost-reduction efforts?