Co-founder Daniel Gross’s thoughts on AI safety are at best unclear beyond this statement. Here is an article he wrote a year ago: The Climate Justice of AI Safety. He’s also appeared on the Stratechery podcast a few times and spoken about AI safety once or twice. In this space, he’s best known as an investor, including in Leopold Aschenbrenner’s fund.
I think it would be good for Daniel Gross & Daniel Levy to clarify their positions on AI safety, and what exactly ‘commercial pressure’ means (do they just care about short-term pressure and intend to profit immensely from AGI?).
(Disclosure: I received a ~$10k AI-related grant from Daniel in 2019.)
I don’t understand why we should trust Ilya after he played a very significant role in legitimising Sam’s return to OpenAI. If he had not endorsed this, the board’s resolve would’ve been a lot stronger. So I find it hard to believe him when he says ‘we will not bend to commercial pressures’, since, in some sense, that is exactly what he did.
The best meta-analysis of deterioration (i.e. negative-effect) rates in guided self-help (k = 18 trials, N = 2,079 participants) found that deterioration was lower in the intervention condition, although they did find a moderating effect whereby participants with low education didn’t see this decrease in deterioration rates (but nor did they see an increase)[1].
So, on balance, I think it’s very unlikely that any of the dropped-out participants were worse off for having tried the programme, especially since the counterfactual in low-income countries is almost always no treatment. And given that your interest is top-line cost-effectiveness, counting only completed participants in effect-size estimates likely underestimates cost-effectiveness if anything, since churned participants would be counted at 0.
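To make that accounting concrete, here’s a minimal sketch (all numbers are hypothetical, not drawn from the cited studies) of why zeroing out churned participants can only bias the cost-effectiveness estimate downward, so long as dropouts aren’t actively harmed:

```python
# Toy model (hypothetical numbers): cost-effectiveness when churned
# participants are counted at zero effect vs. a small residual benefit.

n_enrolled = 1000         # participants who started the programme
completion_rate = 0.4     # share who finished
effect_completer = 0.5    # effect size (e.g. SMD) among completers
cost_total = 50_000       # total programme cost in USD

n_completers = int(n_enrolled * completion_rate)
n_dropouts = n_enrolled - n_completers

def cost_effectiveness(effect_dropout: float) -> float:
    """Effect-units produced per $1,000 spent, given an assumed
    average effect among participants who dropped out."""
    total_effect = n_completers * effect_completer + n_dropouts * effect_dropout
    return total_effect / (cost_total / 1000)

print(cost_effectiveness(0.0))  # 4.0 — conservative: dropouts imputed at zero
print(cost_effectiveness(0.1))  # 5.2 — higher if dropouts retain any benefit
```

As long as the dropouts’ true average effect is non-negative (which the deterioration evidence above suggests), the zero-imputation figure is a lower bound.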
1. Ebert, D. D. et al. (2016) Does Internet-based guided self-help for depression cause harm? An individual participant data meta-analysis on deterioration rates and its moderators in randomized controlled trials, Psychological Medicine, vol. 46, pp. 2679–2693.
On the cited RCTs specifically: the Step-by-Step intervention was designed from the outset to be adaptable across multiple countries & cultures[1][2][3][4][5]. Although they initially focused on displaced Syrians, they have also expanded to locals in Lebanon across multiple studies[6][7][8] and found no statistically significant differences in effect sizes[8:1] (the latter is one of the studies cited in the OP). Given this, I would by default be surprised if the intervention, when adapted, failed to produce similar results in new contexts.
1. Carswell, Kenneth et al. (2018) Step-by-Step: a new WHO digital mental health intervention for depression, mHealth, vol. 4, p. 34.
2. Sijbrandij, Marit et al. (2017) Strengthening mental health care systems for Syrian refugees in Europe and the Middle East: integrating scalable psychological interventions in eight countries, European Journal of Psychotraumatology, vol. 8, p. 1388102.
3. Burchert, Sebastian et al. (2019) User-Centered App Adaptation of a Low-Intensity E-Mental Health Intervention for Syrian Refugees, Frontiers in Psychiatry, vol. 9, p. 663.
4. Abi Ramia, J. et al. (2018) Community cognitive interviewing to inform local adaptations of an e-mental health intervention in Lebanon, Global Mental Health, vol. 5, p. e39.
5. Woodward, Aniek et al. (2023) Scalability of digital psychological innovations for refugees: A comparative analysis in Egypt, Germany, and Sweden, SSM—Mental Health, vol. 4, p. 100231.
6. Cuijpers, Pim et al. (2022) Guided digital health intervention for depression in Lebanon: randomised trial, Evidence Based Mental Health, vol. 25, pp. e34–e40.
7. Abi Ramia, Jinane et al. (2024) Feasibility and uptake of a digital mental health intervention for depression among Lebanese and Syrian displaced people in Lebanon: a qualitative study, Frontiers in Public Health, vol. 11, p. 1293187.
8. Heim, Eva et al. (2021) Step-by-step: Feasibility randomised controlled trial of a mobile-based intervention for depression among populations affected by adversity in Lebanon, Internet Interventions, vol. 24, p. 100380.
For those who are not deep China nerds but want a somewhat approachable lowdown, I can highly recommend Bill Bishop’s newsletter Sinocism (enough free issues to be worthwhile) and his podcast Sharp China (the latter is a bit more approachable but requires a subscription to Stratechery).
I’m not a China expert so I won’t make strong claims, but I generally agree that we should not treat China as an unknowable, evil adversary who has exactly the same imperial desires as ‘the west’ or past non-Western regimes. I think it was irresponsible of Aschenbrenner to assume this without better research & understanding, since so much of his argument relies on China behaving in a particular way.
❤️ I do wanna add that every interaction I had with you, Rachel, Saul, and all staff & volunteers was overwhelmingly positive, and I’d love to hang again IRL :) Were it not for the issue at hand, I would’ve also rated Manifest an 8–9 on my feedback form; you put on one hell of an event! I also appreciate your openness to feedback; there’s no way I would’ve posted publicly under my real name if I felt like I would get any grief or repercussions for it—that’s rare. (I don’t think I have much else persuasive to say on the main topic.)
I guess I am trying to elucidate that the paradox of tolerance applies to this kind of extreme openness/transparency. The more open Manifest is to offensive, incorrect, and harmful ideas, the fewer other kinds of ideas it will attract. I don’t think there is an effective way to signpost that openness without losing the rest of their audience; nobody but scientific racists would go to a conference that signposted ‘it’s acceptable to be scientifically racist here’.
Anyway. It’s obviously their prerogative to host such a conference if they want. But it is equally up to EA to decide where to draw the line in its own best interests. If that line isn’t an outright intolerance of scientific racism and eugenics, I don’t think EA will be able to draw in enough new members to survive.
I was at Manifest as a volunteer, and I also saw much of the same behaviour as you. If I had known scientific racism or eugenics were acceptable topics of conversation there, I wouldn’t have gone. I’m increasingly glad I decided not to organise a talk.
EA needs to recognise that even associating with scientific racists and eugenicists turns away many of the kinds of bright, kind, ambitious people the movement needs. I am exhausted by having to tell people I am an EA ‘but not one of those ones’. If the movement truly values diversity of views, we should value the people we’re turning away just as much.
Edit: David Thorstad levelled a very good criticism of this comment, which I fully endorse & agree with. I wrote this strategically, to be persuasive in the forum context, at the cost of not fully expressing my stronger belief that scientific racism & eugenics are factually & morally wrong, over and above being reputational or strategic concerns for EA.
OpenAI appoints Retired U.S. Army General Paul M. Nakasone to Board of Directors
I don’t know anything about Nakasone in particular, but it should be of interest (and concern)—especially after Situational Awareness—that OpenAI is moving itself closer to the U.S. military-industrial complex. The article itself specifically mentions Nakasone’s cybersecurity experience as a benefit of having him on the board, and that he will be placed on OpenAI’s board’s Safety and Security Committee. None of this seems good for avoiding an arms race.
Is that just a kind of availability bias—in the ‘marketplace of ideas’ (scare quotes) they’re competing against pure speculation about architecture & compute requirements, which is much harder to make estimates around & generally feels less concrete?
I was under the impression that most people in AI safety felt this way—that transformers (or diffusion models) weren’t going to be the major underpinning of AGI. As has been noted a lot, they’re really good at achieving human-level performance on most tasks, particularly with more data & training, but they can’t generalise well and are hence unlikely to be the ‘G’ in AGI. Rather:
Existing models will be economically devastating for large sections of the economy anyway
The rate of progress across multiple domains of AI is concerning, and the increased funding to AI more generally will flow back into new development domains
Even if neither of these things is true, we still want to advocate for increased controls around the development of future architectures
But please forgive me if I had the wrong impression here.
I’m a bit confused. I was just calling Aschenbrenner unimaginative, because I think trying to avoid stable totalitarianism while bringing about the conditions he identified for stable totalitarianism lacked imagination. I think the onus is on him to be imaginative if he is taking what he identifies as extremely significant risks, in order to reduce those risks. It is intellectually lazy to claim that your very risky project is inevitable (in many cases by literally extrapolating straight lines on charts and saying ‘this will happen’) and then work to bring it about as quickly and as urgently as possible.
To make this clear: by corollary, I would support an unimaginative solution that doesn’t involve taking these risks, such as not building AGI. I think the burden of imagination is higher if you are taking more risks, because you could use that imagination to come up with a win-win solution.
You are right—thank you for clarifying. This is also what Torres says in their TESCREAL FAQ. I’ve retracted the comment to reflect that misunderstanding, although I’d still love Ozy’s take on the eugenics criticism.
If you are concerned about extinction and stable totalitarianism, ‘we should continue to develop AI but the good guys will have it’ sounds like a very unimaginative and naïve solution.
This sounds great! Have you given any thought to how we could make suzetrigine available to people in LMICs, or people who can’t afford it/don’t have health insurance?
Very curious what the actual play is here. I suspect, at worst, xAI just gets to be a holding company for GPUs and can flip them at a profit. At best, maybe Elon thinks generative Twitter will restore its original value for a sale? Regardless, his ability to fundraise for mid ideas is remarkable.
Equally, the best talent from non-Western countries usually migrates to Western countries where wages are orders of magnitude higher. So this ends up being self-reinforcing.
Hey there! For what it’s worth, did you look at the Global Burden of Disease study? They define ‘cause’ and ‘risk factor’ separately. So they have direct drug overdoses in causes, but also calculate death & DALY burdens that are attributable to drug addiction, tobacco use, and high alcohol use (you can play around with the models here). Note that all estimates below have wide credible intervals in their models, but I’ve omitted them for readability. I also don’t know how they perform their risk factor attribution, but since a lot of experts contribute to this I can’t imagine it’s worse than your analysis or missing something crucial.
In their data, tobacco contributes 195M DALYs/year (6.76% of the total DALY burden suffered by all humanity), high alcohol use contributes 72M or 2.51%, and drug use 28M or 0.96%.
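As a quick sanity check on these figures (a sketch using only the numbers quoted above), each risk factor’s DALY count and percentage share should imply roughly the same total global burden:

```python
# Each (DALYs, share) pair quoted above should imply roughly the
# same total global DALY burden if the figures are consistent.
risk_factors = {
    "tobacco": (195e6, 0.0676),
    "high alcohol use": (72e6, 0.0251),
    "drug use": (28e6, 0.0096),
}

for name, (dalys, share) in risk_factors.items():
    implied_total = dalys / share
    print(f"{name}: implied total ≈ {implied_total / 1e9:.2f}B DALYs/year")
```

All three imply a total burden of roughly 2.9B DALYs/year, so the counts and percentages are at least internally consistent.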
In the U.S., these risk factors contribute 22M DALYs/year. Combined, this is more than any single level-2 direct cause of death in the GBD (cardiovascular disease, at #1, has 18M DALYs/year). Equivalently, it would be the 5th-largest level-2 direct cause globally, behind cardiovascular, respiratory, neoplasms, and maternal disorders. But I’d warn against making these sorts of comparisons, because they obviously depend on how you slice up your data (for the same reason, the chart you made from WHO data doesn’t hold much water with me on its face).
I think the best next steps for you would be to create a strong case that addiction is neglected relative to top EA cause areas such as malaria, childhood vaccinations, maternal health, and so on. You could try to find good estimates on the amount of global or per-country funding going to each issue relative to their contribution to the global burden of disease. I am not sure how that analysis would play out, but I’d love to see it on the forum!
Yeah, I think Ozy’s article is a great retort to Torres specifically, but probably doesn’t extrapolate well to anyone who has used the TESCREAL label to explain this phenomenon, many of whom probably have stronger arguments.
Having QURI’s code open-sourced explicitly helped me improve Squiggle’s Observable integration & then develop my own smaller subset of Squiggle. So even though I didn’t fork & deploy your code, it was super helpful for debugging & adapting!