Anything I write here is written purely on my own behalf, and does not represent my employer’s views (unless otherwise noted).
Can you specify what you mean by “2.7x is a ridiculous number”?
I ask because it does happen that economies grow like that in a fairly short amount of time. For example, since the year 2000:
- China’s GDPpc 2.7x’d about 2.6 times
- Vietnam’s did it ~2.4 times
- Ethiopia’s ~2.1 times
- India’s ~1.7 times
- Rwanda’s ~1.3 times
- The US’s GDPpc is on track to 2.7x from 2000 in about 2029, assuming a 4% annual increase
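(To spell out the arithmetic behind that list: the number of times a quantity has “2.7x’d” over a period is log(end/start) / log(2.7). A minimal sketch in Python; the function name and the illustrative ~13x ratio, which is roughly China’s nominal GDPpc trajectory since 2000, are my own placeholders rather than figures from the comment I’m replying to:)

```python
import math

def times_2_7x(start, end):
    """Number of times a quantity has multiplied by 2.7 over a period."""
    return math.log(end / start) / math.log(2.7)

# Illustrative placeholder: a roughly 13x rise in GDPpc since 2000
# corresponds to about 2.6 "2.7x-ings".
print(times_2_7x(1.0, 13.0))  # ~2.58
```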
So I assume you don’t mean something like “2.7x never happens”. Do you mean something more like “it’s hard to find policies that produce 2.7x growth in a reasonable amount of time” or “typically it takes economies decades to 2.7x”?
> I think the biggest danger to that reasoning is the premise that they are caused by GDP, and only by GDP, which I quite flatly dispute.
Well, this seems like something that is actually worth finding out. Because if it is the case that GDP (/ GDP per capita) does have a significant causal influence on one (or more) of them, then you are conditioning on a mediator, (partially) hiding the causal effect of GDP on the outcome. It seems to me like your model assumes that GDP does not have any causal influence on any of these variables, which seems like a pretty strong assumption. Unless I am misunderstanding something.
(ETA: Similarly, if both GDP and life satisfaction causally influence one of the variables, you are conditioning on a collider. That could introduce a spurious negative correlation masking a real correlation between GDP and life satisfaction, via Berkson’s paradox. For example, suppose both life satisfaction and GDP cause social stability. Then, when you stratify by social stability, it would not be surprising to find a spurious negative correlation between GDP and life satisfaction, because a high-social-stability country, if it happens to have relatively low GDP, must have very high life satisfaction in order to achieve high social stability, and vice versa.)
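(To make the collider point concrete, here is a minimal simulation; a sketch in Python with NumPy, where the variable names and effect sizes are made up for illustration, not taken from any real data:)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# By construction, GDP and life satisfaction are independent here,
# and both causally increase social stability (the collider).
gdp = rng.normal(size=n)
satisfaction = rng.normal(size=n)
stability = gdp + satisfaction + rng.normal(size=n)

# Unconditionally, the correlation is ~0, as constructed.
print(np.corrcoef(gdp, satisfaction)[0, 1])

# Restricting to high-stability observations (conditioning on the collider)
# induces a spurious negative correlation: Berkson's paradox.
high = stability > 1
print(np.corrcoef(gdp[high], satisfaction[high])[0, 1])  # clearly negative
```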
> Any attempt at a defense of GDP, specifically, needs to take into account the fact that it’s just a deeply flawed measure of value. That’s why econ nobelists have been arguing against it for over a decade (and likely much longer, given that whole international reports were being published on it in 2012). So even if it *were* more predictive than the model suggests, that still wouldn’t address the fact it’s known to be misleading, all on its own, and not something I would spend a lot of time defending on the merits.
My understanding of these critiques is that they say either that (1) GDP is not intrinsically valuable, (2) GDP does not perfectly measure anything that we care about, or fails to measure many things that we care about, and/or (3) GDP focuses too narrowly on quantifiable economic transactions.
But if you were to find empirically that GDP causes something we do care about, e.g., life satisfaction, then I don’t understand how those critiques would be relevant? (1) would not be relevant because we don’t care about increasing GDP for its own sake, only in order to increase life satisfaction. (2) would not be relevant because whatever GDP would or would not succeed in measuring, it does measure something, and it would be desirable to increase whatever it measures (since whatever that is causes life satisfaction). (3) would not be relevant because whatever does or does not go into the measure, again, it does measure something, and it would be desirable to increase whatever it measures.
> But perhaps the most definitive argument against the unique value of GDP is in simple counterexamples. Between 2005 and 2022, Costa Rica had a higher life satisfaction than the United States, with less than a third of the GDPpc. This simply wouldn’t be possible if GDP just bought you happiness. Ergo, that simply cannot be the answer.
Your reductio shows that GDP cannot be the only thing that has a causal influence on life satisfaction (assuming measurements are good, etc.). But I don’t think OP or anyone else in this comment section is saying that GDP/wealth/money is the only thing that influences life satisfaction, only at most that it is one thing that has a comparatively strong influence on it. And your counterexample does not disprove that.
I don’t know if these things make it robustly good, but some considerations:
Raising and killing donkeys for their skin seems like it could scale up more than the use of working donkeys, since (1) there may be increasing demand for donkey skin as China develops economically, and (2) there may be diminishing demand for working donkeys as Africa develops economically. So it could be valuable to have a preemptive norm/ban against slaughtering donkeys for this use, even if the short-term effect is net-negative.
It is not obvious that working donkeys have net-negative lives. My impression is that their lives are substantially better than the lives of most factory-farmed animals, though that is a low bar. One reason to think that is the case is that working donkeys’ owners live closer to, and are more dependent on, their animals than operators of factory farms do, meaning they benefit more from their animals being healthy and happy.
Markets in donkey skin could have some pretty bad externalities, e.g., with people who rely on working donkeys for a living seeing their animals illegally poached. (On the other hand, this ban could also make such effects worse, by pushing the market underground.) Meanwhile, working donkeys do useful work, so they probably improve human welfare a bit. (I doubt donkey skin used for TCM improves human welfare.)
On non-utilitarian views, you may place relatively more value on not killing animals, and/or relatively less value on reducing suffering. So if you give some weight to those views, that may be another reason to think this ban is net positive.
That makes sense. I agree that capitalism likely advances AI faster than other economic systems would. I just don’t think the difference is large enough for the economic system to be a very useful frame of analysis (or point of intervention) when it comes to existential risk, let alone the primary frame.
Thank you for writing this. I thought it was an interesting article. I want to push back a bit against the claim that AI risk should primarily or even significantly be seen as a problem of capitalism. You write:
> [I]n one corner are trillion-dollar companies trying to make AI models more powerful and profitable; in another, you find civil society groups trying to make AI reflect values that routinely clash with profit maximization.
>
> In short, it’s capitalism versus humanity.
I do think it is true that the drive to advance AI technology faster makes AI safety harder, and that competition under the capitalist system is one thing generating this drive. But I don’t think this is unique to capitalism, or that things would be much better under some other economic system.
The Soviet Union was not capitalist, and yet it developed dangerous nuclear weapons and bioweapons. It put tremendous resources into technological development, e.g., space technology, missile systems, military aircraft, etc. I couldn’t find figures for the Cold War as a whole, but in 1980 the USSR outspent the US (and Japan, and Germany) on R&D, in terms of % of GDP. And it did not seem to do better at developing these technologies in a safe way than capitalist countries did (cf. Sverdlovsk, Chernobyl).
If you look at what is probably the second most capable country when it comes to AI, China, you see an AI industry driven largely by priorities set by the state, and investment partly directed or provided by the state. China also has markets, but the Chinese state is highly interested in advancing AI progress, and I see no reason why this would be different under a non-market-based system. This is pretty clear from e.g., its AI development plan and Made in China 2025, and it has much more to do with national priorities (of security, control, strategic competition, and economic growth) than free market competition.
I think the idea is that lots of money is spent on treating diseases caused by aging, but little is spent on preventing aging in the first place. So I don’t see a contradiction.
I reckon my donations this year will amount to about:
- $3.7K to animal welfare, via Effektiv Spenden.
- $1.7K to global health and development, via Effektiv Spenden.
- $1.1K to the Donation Election Fund.
And my labour goes to mitigating risks from AI. In a way, this amounts to way more than the above, given that I would be earning 2x+ what I am earning now if I were doing what I did before, i.e., software engineering.
However, I recently reconfigured my giving to be about 85% animal welfare and 15% global health, for reasons similar to those spelled out in this post (I think; I only skimmed that post, and came to my decision independently).
Some non-fiction books I enjoyed this year were James Gleick’s The Information (a sprawling book about information theory, communication, and much else), Wealth and Power by Orville Schell & John Delury (about the intellectual history of modern China), Fawn M. Brodie’s No Man Knows My History (about Joseph Smith and the early days of the LDS Church, or Mormonism), and David Stove’s The Plato Cult (polemics against Popper, Nozick, idealism, and more). Some of these are obviously rather narrow, and you probably would not enjoy them if you are not at all interested in the subject matter.
You can find it here, but use this power responsibly as I assume the author deleted it for a reason.
I agree that the idea could be restated in a clearer way. Here is an alternative way of saying essentially the same thing:
The project of doing good is a project of making better decisions. One important way of evaluating decisions is to compare the consequences they have to the consequences of alternative choices. Of course we don’t know the consequences of our decisions before we make them, so we must predict the consequences that a decision will have.
Those predictions are influenced by some of our beliefs. For example, do I believe animals are sentient? If so, perhaps I should donate more to animal charities, and less to charities aiming to help people. These beliefs pay rent in the sense that they help us make better decisions (they get to occupy some space in our heads since they provide us with benefits). Other beliefs do not influence our predictions about the consequences of important decisions. For example, whether or not I believe that Kanye West is a moral person does not seem important for any choice I care about. It is not decision-relevant, and does not “pay rent”.
In order to predict the consequences of our decisions well, it helps to have beliefs that accurately reflect the world as it is. There are a number of things we can do to get more accurate beliefs—for example, we can seek out evidence, and reason about said evidence. But we have only so much time and energy to do so. So we should focus that time and energy on the beliefs that actually matter, in that they help us make important decisions.
> It’s embarrassing for the EA movement, too. It’s another SBF situation. Some EAs get control over billions of dollars, and act completely irresponsibly with that power.
Probably disagree? Hard to say for sure since we lack details, but it’s not obvious to me that the board acted irresponsibly, let alone to the degree that SBF did. I guess one, it seems fairly likely that Ilya Sutskever initiated the whole thing, not the EAs on the board. And two, the board members have fiduciary duties to further the OAI nonprofit’s mission, i.e., to ensure that AGI benefits all of humanity. (They do not have a duty to ensure OAI is valued at billions of dollars, except in so far as that helps further its mission.)
If the board members had reason to believe that Sam Altman was acting contrary to OAI’s mission of ensuring that AGI benefits all humanity, perhaps moving to fire him was the responsible thing to do (even if it turns out to be bad ex post), and what has been irresponsible are the efforts of investors and others to try to reinstate him. I guess we will know better within the next weeks, but I think it’s premature to say that the board acted irresponsibly right now.
That looks like a great interview subject!
> Hugo argues that while many people believe that human beings are gullible and easily persuaded of false ideas, in fact people are surprisingly good at telling who is trustworthy, and generally aren’t easily convinced of anything they don’t already think.
>
> That’s because communication couldn’t evolve among humans unless it was beneficial to both the sender and receiver of information. If the receiver generally lost out, they would stop listening entirely.
I’m confused. I thought the general take was “people are tricked into believing things that are not true”, not “people are tricked into believing things that are bad for them”. The above argument is a reason to think the second claim is false, but not the first claim (since you can have false beliefs that are nonetheless not bad for you).
Also, could you not have communication evolve even if people are gullible, so long as it is good for groups to have unity/cohesion/obedience? Groups and tribes with more gullible members might have outcompeted groups with more independent-minded members if the former were more united/cohesive.
Some other questions:
What does he make of the claim that all cognitive biases at heart are just confirmation bias based around a few “fundamental prior” beliefs?
Is he an atheist, and if so what does he make of humanity’s history of belief in religion? I am thinking especially of times and places that were especially fertile ground for new religious ideas, e.g., the Mediterranean prior to and during the spread of Christianity, the Second Great Awakening, and the Taiping Rebellion in China. I think those were times when many people readily believed false ideas—why?
On social media and fake news, can he imagine any plausible information ecologies that would cause major problems? How would those look, and why will we avoid them?
Similarly, can he imagine an ideal information ecology? How different is it from what we have today, and how much would things change if we could switch over?
You could argue that fake news is a problem not because it convinces people of falsehoods, but because it spurs them into action, or extremizes their beliefs (e.g., by providing more extreme evidence of their beliefs’ truth than does reality). What does he make of that argument?
Presumably people sometimes do change their mind. What’s his model of how that typically happens? (Presumably it mostly involves things you would not call persuasion.)
Does he think LLMs and voice synthesis will be widely used for scams in the next decade? If not, why not? If yes, does scamming not involve persuasion?
Why did the ad media industry have over $800B revenue last year?
> I have a little bit of a different perspective in that I don’t really consider earning “only” 70k a “sacrifice”. Maybe it could be considered a “relative sacrifice”? But even that language makes me uncomfortable.
Any sacrifice is relative. You can only sacrifice something if you had or could have had it in the first place.
Do you think it was a mistake (ex ante) for some folks to de-emphasize earning to give a few years back?
What sorts of field building efforts around earning to give are you more excited about? E.g., focusing on promising students versus trying to recruit high-net-worth individuals (aka rich people).
Which of your past donations do you feel best/worst about?
You may draw some ideas from when this topic was previously discussed here.
Nice work. Do you have any intuitions about whether the same patterns also apply to federal regulations in the US?
Does that figure adjust for number of episodes released? I imagine there are many people who listen to every episode, or who listen to every episode that they’re interested in. If that is the case, and because the podcast now has two hosts and seems to publish more content, the indicator may not be a useful proxy for EA growth this year (given the “number of episodes available” confounder).
How many EAs are vegan/vegetarian? Based on the 2022 ACX survey, and assuming my calculations are correct, people who identify as EA are about 40% vegan/vegetarian, and about 70% veg-leaning (i.e., vegan, vegetarian, or trying to eat less meat and/or offsetting meat-eating for moral reasons). For comparison, about 8% of non-EA ACX readers are vegan/vegetarian, and about 30% of non-EA ACX readers are veg-leaning.
(That’s conditioning on identifying as an LW rationalist, since anecdotally I think being vegan/vegetarian is somewhat less common among Bay Area EAs, and the ACX sample is likely to skew pretty heavily rationalist, but the results are not that different if you don’t condition. Take with a grain of salt in general as there are likely strong selection effects in the ACX survey data.)
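(For transparency, the calculation is just conditional proportions over the survey responses. A rough sketch of the kind of thing I did, in Python with pandas; the filename, column names, and codings below are hypothetical placeholders, not the actual ACX 2022 survey schema:)

```python
import pandas as pd

# Hypothetical sketch: the filename, column names, and codings are
# placeholders, not the actual ACX 2022 survey schema.
df = pd.read_csv("acx_2022_survey.csv")

is_ea = df["ea_identifies"] == "Yes"
is_veg = df["diet"].isin(["Vegan", "Vegetarian"])

print(is_veg[is_ea].mean())   # share of EA respondents who are vegan/vegetarian
print(is_veg[~is_ea].mean())  # same share among non-EA respondents
```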
I mostly agree with this (and did also buy some semiconductor stock last winter).
Besides plausibly accelerating AI a bit (which I think is a tiny effect at most unless one plans to invest millions), a possible drawback is motivated reasoning (e.g., one may feel less inclined to think critically about the semiconductor industry, and/or less inclined to favor approaches to AI governance that reduce these companies’ revenue). This may only matter for people who work in AI governance, and especially compute governance.
Thank you for writing this! I love rats and found this—and especially watching the video of the rodent farm and reading your account of the breeder visit—distressing and pitiful.