I live for a high disagree-to-upvote ratio
huw
Reading between the lines, the narrative that the UK want to push here is that, due to Trump’s presumed defunding of NATO & withdrawal of the general U.S. nuclear umbrella, they have to increase defence spending and cut aid? So if you buy this narrative, this is a follow-on consequence of Trump’s election?
Thank you MHFC! As with past grantees, I can attest that working with MHFC was extremely easy & pleasant. They take cost-effectiveness & future prospects seriously, but aren’t onerous in their requirements. If you’re in the mental health space they’re an excellent partner to have!
This opens a bit of a can of worms. FWIW, the World Inequality Database (founded by Thomas Piketty, and OWID’s main source on wealth data) reports Sweden as having a top-10% wealth share of 58.9%, on the lower end and much less than the US’ 70.7%:
I looked at the source for the infographic you linked, which is UBS’ Global Wealth Report (presumably from 2023, but undated). [Here’s the full data](https://rev01ution.red/wp-content/uploads/2024/03/global-wealth-databook-2023-ubs.pdf). Table 4–5 reports Sweden’s top-10% share at 74.4%, which would make it a highly unequal country, much more on par with the U.S., and much higher than its neighbours.
Although I’m interested, I don’t really have the time to dive deep into the different methodologies, but some light reading of section 1.1 of the UBS report and the [WID’s methodological report](https://wid.world/document/distributional-national-accounts-guidelines-2020-concepts-and-methods-used-in-the-world-inequality-database/) makes me think the WID have the more comprehensive methodology.
My suspicion from this read is that the UBS report is extrapolating from 2007 data (since this is the latest Sweden-specific dataset they cite). 2007 was the last year that Sweden had a wealth tax (which, FWIW, [they had from 1911–2007](http://piketty.pse.ens.fr/files/DuRietzHenrekson2015.pdf)), so that year would have produced good official estimates of the wealth distribution, but they might be skewed, especially because the wealth tax only applied above a threshold. The WID, by contrast, appear to take into account income tax data from capital gains ([this paper finds a top 10% share of 65.9% in 2012 using this methodology](https://www.ifn.se/media/wbldgg0m/wp1131.pdf)) and a bunch of other normalisations.
But I really wanna emphasise that I’m not sure here. I would generally lean toward trusting the WID over UBS on this, but that’s probably not enough to make a point on the internet about Scandi anticapitalism.
I am trying my hardest to disambiguate ‘market/economic freedom’ from ‘unrestrained accumulation of wealth’. Europe produces a huge amount of tax revenue (see below; I don’t have an anticapitalism index at hand so this is as close as I could get in a 5 minute search) while maintaining similar levels of economic freedom to the US, and manages much higher life satisfaction and equality despite lower GDP per capita. That’s insane!
Anticapitalism in a strict economic sense is merely the opposition to unrestrained accumulation of wealth and/or concentration of ownership of the means of production in private hands. It doesn’t have to take a position on anything to do with markets. (Obviously, the popular Western conception of anticapitalism is also often anti-market, but actually-existing-anticapitalism in Europe is pro-market!)
Yeah sorry, to emphasise further, I’m referring to the position that we should place strong restrictions on wealth accumulation when it leads to market failures. The difference between this and the mainstream (in this conception) is that mainstream views take more of a siloed approach to these outcomes, preferring income taxes or targeted laws to remedy them.
An anticapitalist view contrasts with this by identifying wealth accumulation / concentrated ownership of the means of production as a root cause of these issues and works to restrain it in a more preventative capacity. As you identified, such a view typically advocates for policies like wealth taxes, worker co-determination on boards, and high tax surveillance.
Also loosely on your claim that anticapitalism is incompatible with EA because anticapitalists foreground equality over utility—I disagree. First, EA is scoped to ‘altruism’, not to ‘all policy worldwide’, so a view that aims to maximise altruism also maximises equality under regular conditions. Second, it’s not necessarily the case that there is a tradeoff between equality and global utility, and highly socialist societies such as the Scandis enjoy both higher equality and higher utility than more capitalist countries such as the United States or the UK.
(I’ve read Piketty and don’t remember him ever suggesting he would trade one for the other; can’t speak to the other authors you cite)
Yes. That’s why I only scoped my comment around weak anticapitalism (specifically: placing strong restrictions on wealth accumulation when it leads to market failures), rather than full-scale revolution. I’m personally probably more reformist and generally pretty pro-market, but anti-accumulation, FWIW.
As I said, I think that instead of siloed advocacy in distinct cause areas, EAs could realise that they have common cause around opposing bad economic incentives. AI safety, farmed animal welfare, and some global health concerns come from the same roots, and there are already large movements on the political left well-placed to solve these problems (e.g. labour unions, veganism, environmentalism, internationalist political groups). Indeed, vegan EAs have already allied well with the existing movement to huge success, but this is the exception.
Frankly, I don’t see how that leads to bread lines but I am open to a clearer mechanism if you have one?
Great paper & a strong argument. I would even take it further to argue that most EAs and indeed, longtermists, probably already agree with weak anticapitalism; most EA projects are trying to compensate for externalities or market failures in one form or another, and the increasing turn to policy, rather than altruism, to settle these issues is a good sign.
I think a bigger issue, as you’ve confronted on this forum before, is an unwillingness (mostly down to optics / ideological inoculation) to identify these issues as having structural causes in capitalism. This prevents EAs/longtermists from drawing on centuries of knowledge & movement-building, or more to the point, even from representing a coherent common cause that could be addressed through deploying pooled resources (for instance, donating to anticapitalist candidates in US elections). It breaks my heart a bit tbh, but I’ve long accepted it probably won’t happen.
The Trump administration has indefinitely paused NIH grant review meetings, effectively halting US-government-funded biomedical research.
There are good criticisms of the NIH, but we are kidding ourselves if we believe this is about anything but vindictiveness over COVID-19, or at best, a loss of public trust in health institutions among a minority of the US public. But this action will not rectify that. Instead of one public health institution with valid flaws that a minority of the public distrust, we now have none. Clinical trials have been paused too, so it’s likely that people will die from this.
I don’t have a great sense of what to do other than lament. Thankfully, there are good research funders globally—in my case, a lot of the research Kaya Guides relies on is funded by the WHO (😔) or the EU. We’re still waiting to see how the WHO withdrawal will affect us, but we’re lucky that there are other global leaders willing to pick up the slack. I hope that US philanthropic funding also doesn’t dry up over the coming years…
I think that the appropriate medium-term fit for the movement will be with organised labour (whether left or right!), as I’ve said before here. The economic impacts are not currently strong enough to have been felt in the unemployment rate, particularly since anti-inflationary policies typically prop up the employment rate a bit. But they will presumably be felt soon, and the natural home for those affected will be in the labour movement, which despite its currently weakened state will always be bigger and more mobile than, say, PauseAI.
(Specifically in tech, where I have more experience in labour organising, the largest political contingent among the workers has always been on the labour left. For example, [Bernie Sanders was far and away the most-donated-to candidate among big tech employees in 2020](https://www.theguardian.com/us-news/2020/mar/02/election-2020-tech-workers-donations-bernie-sanders).)
In that world, the best thing EAs can do is support that movement. Not necessarily explicitly or directly—I can see a world where Open Phil lobbies to strengthen the U.S. NLRB and overturn key Supreme Court decisions such as Janus. But, such a move will be perceived as highly political, and I wonder if the allergy to labour-left politics within EA precludes it.
Someone noted that at the rate of US GHD spending, this would cost ~12,000 counterfactual lives. A tremendous tragedy.
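For intuition, the structure of a BOTEC like that is just the size of the cut divided by the cost per life averted. A minimal sketch with purely hypothetical placeholder figures (not the original commenter’s numbers):

```python
# Rough structure of the counterfactual-lives BOTEC.
# Both figures below are hypothetical placeholders, not sourced estimates.
aid_cut_usd = 60_000_000     # hypothetical size of the cut, in USD
cost_per_life_usd = 5_000    # hypothetical cost per life saved at GHD-level cost-effectiveness
lives_lost = aid_cut_usd / cost_per_life_usd
print(f"{lives_lost:,.0f} counterfactual lives")  # -> 12,000
```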
I think that’s a false dichotomy. It should be possible to have uncomfortable/weird ideas here while treating them with nuance and respect. (Are you instead trying to argue that having a higher bar for these kinds of posts is a bad idea?)
Equally, the original post doesn’t try to understand the perspective that abortion might be net good for the world. So I think the crux might actually be more about who you think should shoulder the burden of attempting-to-understand.
I sort of think that Twitter/Bluesky is the place for that, to be honest. I’m not sure that the forum needs to be that.
We legalise abortion because it helps people live their lives on their own terms, which is good (and, in a smaller set of cases, because abortions are medical procedures that directly prevent death or physical harm). Young people can take risks and be stupid without it changing the course of their lives; or, in more extreme cases, escape their abusers.
So, in the sort of Quixotic spirit of trying to avoid this thread getting out of hand, I want to be constructive. I think that such an obviously fraught and tense issue deserves more thought and care than a quick BOTEC. I get the broader point that you’re making, but you’re making it in a pretty crude way that feels insensitive to the very real harms people face due to restricted abortion access; I am not sure that the comparison was needed to make that point either.
Legendary! Thank you!
Has someone built donor screening as a service? I feel like a lot of this labour would be pretty repetitive and generalisable (you could modularise the different risk factors so that different orgs can tailor to their preferences).
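To sketch what I mean by modularising, something like the below: each risk factor is a small, swappable check, and each org picks which checks it runs and how to weight them (hypothetical names and factors, not an existing service):

```python
# Hypothetical sketch of modular donor screening.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Donor:
    name: str
    industry: str
    sanctioned: bool

# Each risk factor maps a donor to a score in [0, 1].
RiskFactor = Callable[[Donor], float]

def sanctions_check(donor: Donor) -> float:
    return 1.0 if donor.sanctioned else 0.0

def industry_check(donor: Donor) -> float:
    flagged = {"tobacco", "gambling", "fossil fuels"}
    return 0.5 if donor.industry in flagged else 0.0

def screen(donor: Donor, weights: dict[str, float]) -> float:
    """Weighted sum of whichever risk factors this org has opted into."""
    factors: dict[str, RiskFactor] = {
        "sanctions": sanctions_check,
        "industry": industry_check,
    }
    return sum(weights.get(name, 0.0) * check(donor) for name, check in factors.items())

# One org might weight reputational risk heavily; another might only care about sanctions.
print(screen(Donor("Acme Corp", "tobacco", False), {"sanctions": 1.0, "industry": 0.8}))
```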
Mmm, it is not merely the case that finance is drying up, but that according to OECD data, in 2023 net financial flows to the Global South were actually negative (i.e. they paid more in repayments than they received in new finance).
Nvidia’s moat comes from a few things. As you pointed out, they have CUDA, which is a proprietary set of APIs for running parallelised math operations. But they also have the best-performing chips on the market by a long way. This is not merely a function of having strong optimisation on the software side (possibly replicable by o3, but I would need to see more evidence to be convinced that an LLM would be good at optimisation), or on the hardware side (much, MUCH trickier for an LLM, given that a lot of the hardware has to operate at the nanometre scale, which can be hard to simulate), but also because having the most money and a strong track record & relationships means they can get preferential access to next-gen fabs at TSMC.
It is also true that the recent boom has increased investment in running CUDA code on other GPUs; the SCALE project is one such example. This implies (a) that the bottleneck is not replicating CUDA’s functionality (which SCALE does), but rather replicating its performance (they might have gains to make there), and/or (b) that the actual moat really does lie in the hardware. Again, probably a mix of both.
However, this hasn’t stopped other companies from making progress here. I think it’s indicative that Deepseek v3 was allegedly trained for less than $10m. If this is true, it suggests to me that:
- Frontier labs might currently be using their hardware very inefficiently, and if these efficiencies were capitalised on, demand for Nvidia hardware would fall (both because labs would need fewer GPUs, and because you wouldn’t need the best of the best to do well)
- If it turns out to be cheap to train good LLMs, captured value might shift back to frontier labs, or even to downstream applications. This would reduce Nvidia’s pricing power.
Also, it looks like the competition is catching up anyway. It seems very reasonable to do inference on Apple or Google chips (Apple Intelligence runs on M2-series chips, which also have top TSMC node access; Google run a lot of inference on their own TPUs). I was particularly impressed that you can run a 600B+ parameter model on 8 Mac Minis, not even running Apple’s best chips. Even if it’s only inference, that’s a huge chunk of the market that might fall to competitors soon.
So I’m not exactly counting on Nvidia’s moat to hold, but if it breaks, I think it will be for reasons other than automation. Even if you are very AI-pilled, we still live in a world where market dynamics are much stronger than labour-automation effects. For now :)
I don’t know enough about AMF to answer your question directly, but I can shed some light on market failures by way of analogy to my employer, Kaya Guides, which provides free psychotherapy in India:
- Our beneficiaries usually can’t afford psychotherapy outright
- They sometimes live rurally, and can’t travel to places that do psychotherapy in person
- There are not enough psychotherapists in India for everyone to receive it
- The government, equally, don’t have the capacity or interest to develop the mental health sector enough (against competing health priorities) to make free treatment available
- Our beneficiaries usually don’t know what psychotherapy is, or that they have a problem at all, nor that it can be treated
- We are incentivised to make psychotherapy as cheap as possible to reach the worst-served portion of the market, while for-profits are incentivised to compete in more lucrative parts of the market
I can see how many, if not all, of these would be analogous to AMF. The market doesn’t and can’t solve every problem!
Heya, I’m not an AI guy anymore so I find these posts kinda tricky to wrap my head around. So I’m earnestly interested in understanding: If AGI is that close, surely the outcomes are completely overdetermined already? Or if they’re not, surely you only get to push the outcomes by at most 0.1% on the margins (which is meaningless if the outcome is extinction/not extinction)? Why do you feel like you have agency in this future?
Nowhere in their RFP do they place restrictions on what kinds of energy capacity they want built. They are asking for a 4% increase in U.S. energy capacity—this is a serious amount of additional CO2 emissions if that capacity isn’t built renewably. But that’s just what they’re asking for now; if they’re serious about building & scaling AGI, they would be asking for much bigger increases, without a strong precedent of carbon-neutrality to back it up. That seems really bad?
Also to pre-empt—the energy capacity has to come before you build an AI powerful enough to ‘solve climate change’. So if they fail to do that, the downside is that they make the problem significantly worse. I think the environmental downsides of attempting to build AGI should be a meaningful part of one’s calculus.