I live for a high disagree-to-upvote ratio
Kalshi, specifically, seems to have essentially become a backdoor for deregulating sports gambling in every US state. The mass deregulation of gambling in the US this decade feels harmful, and like something we’ll probably really regret (legalisation seems fine, but not like this).
Criticising the gambling aspects of prediction markets doesn’t seem popular here, but it strikes me as strange that EAs care a lot about reducing harms from tobacco and alcohol yet seem indifferent to gambling.
Yes, sorry, I should be clear: I’m arguing for new tobacco legislation to be paired with stronger enforcement, not for new tobacco legislation to be avoided.
Here’s the data from Australia (FMC: factory-made cigarettes; RYO: roll-your-own):
Australia instituted a full vaping ban in 2024 to combat the rise in total nicotine use, a trend that had been building since before 2020. It’s a little early to tell, but a concurrent rise in cigarette taxation seems to have really pumped up the illicit market, because the price of illegal cigarettes fell below the price of legal ones. Anecdotally, there are way more drugstores than before straight-up selling illegal cigarettes, and enforcement has been lacking.
I think this kind of generational ban can be done, but governments need to enforce it strictly and tamp down on the underground market. I don’t know what the right policies are here.
The problem here is that mental health is just unbelievably neglected and cheap. You can plausibly provide a WELLBY (a tenth of a year of full wellbeing) for $20 or so. Saving lives or reducing disease is often substantially more expensive, to the point where it washes out even if the per-unit gains are massive. If you naïvely valued WELLBYs 1:1 with life years, you could spend around $200 per DALY, but that assumes people saved by GiveWell interventions live 10/10 lives, which they don’t.
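To make that arithmetic explicit, here’s a minimal sketch (the $20/WELLBY and 10-WELLBYs-per-year figures are from above; the 6/10 average-wellbeing figure is purely an assumption for illustration):

```python
# Back-of-envelope: converting a $/WELLBY figure into a $/life-year benchmark.
# All numbers are illustrative, not real estimates.

cost_per_wellby = 20        # $ per WELLBY (a tenth of a year of full wellbeing)
wellbys_per_full_year = 10  # a year at 10/10 wellbeing = 10 WELLBYs

# Naive 1:1 valuation of WELLBYs against life years:
naive_cost_per_life_year = cost_per_wellby * wellbys_per_full_year
print(naive_cost_per_life_year)  # 200 -> ~$200 per DALY-equivalent

# But saved life years aren't lived at 10/10. If (hypothetically) they
# average 6/10 wellbeing, a saved life year yields only 6 WELLBYs, so a
# life-saving intervention has to beat a lower cost bar to match $20/WELLBY:
avg_wellbeing = 0.6  # assumed for illustration only
break_even_cost = cost_per_wellby * wellbys_per_full_year * avg_wellbeing
print(break_even_cost)  # 120.0 -> ~$120 per life year saved
```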
There are some promising NCD (non-communicable disease) interventions, usually around nutritional deficiencies or poisonings, that could be more cost-effective than that (see HLI for more). Livelihoods may also fall into this category, as a way of systematically preventing some diseases of despair.
Anyhow, the crux of my point was more that an evaluator with different moral weights could produce different results from GiveWell’s, which is the thesis (and, to my understanding, the conclusion) of GWWC’s Evaluations of Evaluators project, which I think we broadly agree on.
Some great discussion in the original report here too, if you wanna deep dive
FWIW, I was mostly referring to this article and the one it’s responding to. Given that StrongMinds’ cost per participant is now 75% lower, it should now appear competitive with AMF on cost-effectiveness under GiveWell’s assumptions. However, my understanding is that they simply don’t take a worldview that values wellbeing a priori, and existing DALY computations undercount WELLBYs for, say, mental health. So if they changed their worldview, it seems reasonable that they could rank mental health as a top cost-effective option, and a future EA Global Health Fund could try to hedge against these positions if it wanted to expand.
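As a quick sanity check on that claim (a sketch with hypothetical round numbers, not StrongMinds’ actual figures):

```python
# Illustrative only: how a 75% cost reduction scales cost-effectiveness,
# holding the effect per participant constant. All numbers are hypothetical.

old_cost = 200.0  # $ per participant (hypothetical)
effect = 2.0      # WELLBYs per participant (hypothetical)

new_cost = old_cost * (1 - 0.75)  # 75% lower -> $50
old_ce = effect / old_cost        # WELLBYs per dollar, before
new_ce = effect / new_cost        # WELLBYs per dollar, after

print(f"{new_ce / old_ce:.0f}x")  # -> 4x
```

i.e. holding effects constant, a 75% cost reduction quadruples WELLBYs per dollar, which is why the comparison with AMF could plausibly flip.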
Yeah, this is a trend I’ve seen a lot in these circles. I think the broader thing you’re looking at is people who are very good at systems thinking (e.g. programming) assuming that social dynamics are ordered and well-behaved enough to be manipulated in a particular way, when in fact they are highly chaotic and unpredictable.
(I think the same impulse leads people to believe in longtermism, specifically its assumptions about tractability/cluelessness)
I’m interested in exploring whether there are additional ways GHDF can add value beyond that model. But this is not the most immediate priority given GiveWell’s continued support.
FWIW, it could be interesting to look at cruxes in GHD where GiveWell have taken a specific philosophical position, and hedge against those. I’m in mental health, so we see this in our field: lots of great EA charities with a lot of potential, but it really depends on your worldview, and GiveWell don’t share one that would make mental health compelling.
I think this is an excellent idea!
Out of curiosity, is CE’s ideal outcome to have two specialist founders on each idea, or to pair one specialist with one non-specialist for a kind of balance?
This is the kind of GHD discussion that makes the Forum so good! Thanks for putting it together!
(Small nitpick: you shouldn’t really add DALYs and WELLBYs. WELLBYs are, in theory, a strict superset of DALYs, so summing the two double-counts the health burden.)
Very well said—always appreciate your clarity of thought on things like this.
I’ve been surprised at how little rigour funders have asked for with monitoring. There’s definitely an extent to which it matters less for smaller orgs (the theory of change funders are operating with is much more catalytic), but I was expecting to at least have to show someone around our systems a bit. I’ve spent a lot of time trying to think about and reduce measurement biases, etc., because knowing my real impact matters a lot to me! I’m sure Evidence Action felt the same way and still failed, which is as good a proof as any that these kinds of evaluations should be handled externally, by genuine critics, for anything serious. Incentive blindness is so real and so pervasive.
One other reason worth considering is that AIM may be hiring to a set bar, rather than filling a pre-determined number of seats. People who miss this bar, in AIM’s eyes, won’t make exceptional founders. Of course, many people grow and improve between rounds, but equally, many don’t.
AIM do maintain a list of second-placers internally, who are offered as candidates for high-level roles within their incubated charities.
(Disclosure: I know people on the team, but don’t actually know if this is the case)
This is really nice; I like it a lot. Millenarianism feels all too easy to reach for in AI risk. As you note, there is a subtle self-satisfaction in predicting the end of the world, and we have to be careful not to use it as a crutch. If we do succeed, it will have mattered that we succeeded pro-socially, so that the world after has any chance of being worth living in.
Hmm. Not a super well-thought-out take here, but it seems to me that Situational Awareness’s biggest crux is whether an arms-race dynamic will develop between the U.S. and China, and Aschenbrenner lays out a few specific ways in which that might happen.
I don’t see any evidence of such an arms race taking place. China don’t have any frontier labs, only labs that distill other models. They haven’t yet produced a capable chip and seem to be at least a few years to half a decade off (much slower than Aschenbrenner’s predictions). They haven’t launched a state-sponsored cyberattack to steal model weights or algorithmic secrets, though I suppose you could argue it’s cheaper and easier to just distill in the short term?
In fact, given the ease of distillation and the proliferation of open-source models, it might be more reasonable to argue that such an arms race won’t occur at all, because intelligence will be cheap and easy to access.
One reason this is important is that AOC is very likely to run for president in 2028, and has so far been quite judicious about which policies she publicly supports and endorses.
Either this is an attempt to test the waters on AI regulation, to see whether it could become part of her platform, or she is already convinced it will be. If she runs, she will be in a position to leverage this policy to push other Democratic presidential candidates toward similar measures (or a rhetorical anti-AI framing). The other most likely candidate is Gavin Newsom, in whose state most of the leading AI companies are headquartered.
What would you say to a potential attendee who has a legitimate interest in reprogenetics’ emancipatory capacity, but is concerned that the conference will be taken over by discussions of human biodiversity (HBD)? Two of the featured speakers, Jonathan Anomaly and Steve Hsu, have pretty clearly endorsed HBD, or at least, given the ambiguities in their statements, never explicitly disavowed it.
Would you be interested in screening out certain problematic attendees, or explicitly disavowing human biodiversity on the conference website, in order to create an environment welcoming to open discussion of reprogenetics?
(Can you point me to something about the moral weight of fish eggs? I have never heard of this before)
One other thing that feels missing from these comments is that a more mature field has a bunch of other interesting discussion points. If all the philosophical questions in EA GHD were one day solved, we could still have invigorating debates about how to develop and manage interventions, about who the payer should be, and so on.
So I’m not sure this is all just a dearth of topics to discuss. Perhaps the nuance is that this forum tends to like the more philosophical or intellectual discussions, and those aren’t generally the kinds of debates most GHD practitioners I know are having?
To me, wellbeing is the most exciting topic in EA GHD at the moment, because, with serious engagement from the kinds of players attending that workshop, it has the greatest potential to credibly upend the currently accepted wisdom in the field. There are a lot of questions that you and others have been chipping away at for some time, which many people assume are either solved or unlikely to yield field-altering results; I think that impression is wrong!
Ex-DeepMind scientist David Silver has just raised funding for his new startup at a $5 billion valuation, and pledged to donate 100% of the proceeds from his equity stake via Founders Pledge.
Are we prepared for the AI money to start hitting?