Follow me on hauke.substack.com
I’m an independent researcher working on EA topics (Global Priorities Research, Longtermism, Global Catastrophic Risks, and Economics).
Aschenbrenner and his investors will gain financially from more and accelerated AGI progress.
Not necessarily: they could just invest in publicly traded companies, where the counterfactual impact is not very large (even a large hedge fund buying, say, some Google stock wouldn't move the market cap much). They could also short certain companies, which might reduce economically inefficient overinvestment in AI, which might also have x-risk externalities. It would be different if he ran a VC fund and invested in getting the next, say, Anthropic off the ground. Especially if the profits are donated and used for mission hedging, this might be good.
The hedge fund could position Aschenbrenner to have a deep understanding of and connections within the AI landscape, making the think tank outputs very good, and causing important future decisions to be made better.
Yes, the outputs might be better as the incentives are aligned: the hedge fund / think tank has ‘skin in the game’ to get the correct answers on the future of AI progress (though maybe some big banks are also trying to move markets with their publications).
Good point.
Fair point, but as I wrote, this is just the optimistic, boring 'business as usual' scenario in the absence of catastrophes (e.g. a new cold war). I still think it's a somewhat likely outcome.
On environment / energy: perhaps we’ll decouple growth and environmental externalities.
The models from consultancies are based on standard growth models and correlate strongly with IMF projections.
Excellent point. I do cite the Our World in Data article "Meat consumption tends to rise as we get richer", which includes the figure you pasted.
I agree that we should try to decouple this trend—I think the most promising approach is increasing alternative protein R&D (GFI.org is working on this).
Thanks! Excellent comment.
My ambition here was perhaps simpler than you might have assumed: my point was just to highlight an even weaker version of Basil's finding that I thought was worth noting: even if GDP percentage growth slows down, a smaller growth rate can still mean more $ every year in absolute terms.
Sorry, I also don't know much more about this and don't have the cognitive capacity right now to think it through for utility increases; maybe this breaks down at certain ηs.
Maybe it doesn’t make sense to think of just ‘one true average η’, like 1.5 for OECD countries, but rather specific ηs for different comparisons and doublings.
There was a related post on this recently—would love for someone to get to the bottom of it.
Good catch! Fixed. It should be:
"The next $1k/cap increase in a country at $10k/cap is worth 10x as much as in a country at $100k/cap, because the utility gained from increasing consumption from $10k to $11k is much greater than the utility gained from increasing consumption from $100k to $101k, even though the absolute dollar amounts are the same."
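For intuition, here is a minimal sketch of that arithmetic using the standard isoelastic (CRRA) utility function (an assumption on my part; the underlying discussion may use a different functional form). With η = 1 (log utility) the ratio of utility gains is roughly the 10x in the quote; with η = 1.5 it is closer to 30x, since the ratio scales roughly as (100k/10k)^η:

```python
import numpy as np

def iso_utility(c, eta):
    """Isoelastic (CRRA) utility; eta is the elasticity of marginal utility."""
    return np.log(c) if eta == 1 else (c**(1 - eta) - 1) / (1 - eta)

for eta in (1.0, 1.5):
    gain_poor = iso_utility(11_000, eta) - iso_utility(10_000, eta)    # $10k -> $11k
    gain_rich = iso_utility(101_000, eta) - iso_utility(100_000, eta)  # $100k -> $101k
    print(f"eta={eta}: ratio of utility gains ~ {gain_poor / gain_rich:.1f}x")
# eta=1.0 -> ~9.6x (the "10x" above); eta=1.5 -> ~30x
```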
Same with salaries, actually. If you let people filter by salary range, that would force orgs to give up some leverage during negotiation.
You should display how many people have already applied for a job and let applicants filter by that, so they can target neglected jobs. Ideally via your own application forms, but click-through statistics would do. Big orgs might not like that, because they want as many applicants as possible and do not internalize the externalities of wasted application time, but for candidates it would be better.
AI labs tend to partner with Big Tech for money, data, compute, scale etc. (e.g. Google DeepMind, Microsoft/OpenAI, and Amazon/Anthropic). Presumably to compete better? If they're already competing hard now, then it seems unlikely that they'll coordinate much on slowing down in the future.
Also, it seems like a function of timelines: antitrust advocates argue that breaking up firms / preventing mergers would slow an industry down in the short run but speed it up in the long run by increasing competition; but if competition is usually already healthy, as libertarians often argue, then antitrust interventions might slow industries down even in the long run.
AI policy folks and research economists could engage with the arguments and the cited literature.
Grassroots folks like Pause AI sympathizers could put pressure on politicians and regulators to investigate this more (some claims, like the tax avoidance point, seem the most robustly correct and good).
I only said we should look into this more and review the pros and cons from different angles (e.g. not only consumer harms). As you say, the standard argument is that breaking up monopolists like Google increases consumer surplus, and this might also apply here.
But I'm not sure to what extent, in the short and long run, this increases or decreases AI risks and/or race dynamics, within the West or between countries. This approach might be more elegant than Pausing AI, which definitely reduces consumer surplus.
Cool instance of black box evaluation—seems like a relatively simple study technically but really informative.
Do you have more ideas for future research along those lines you’d like to see?
It's AI-generated with Gemini 1.5 Pro. I had initially indicated that, but then I had formatting issues, had to repaste, and forgot to add it back. Now fixed.
Reimagining Malevolence: A Primer on Malevolence and Implications for EA—AI Summary
This extensive post delves into the concept of malevolence, particularly within the context of effective altruism (EA).
Key points:
Defining Malevolence:
The post critiques the limitations of the Dark Triad/Tetrad framework and proposes the Dark Factor (D) as a more comprehensive model. D focuses on the willingness to cause disutility to others, encompassing traits like callousness, sadism, and vindictiveness.
The post also distinguishes between callousness (lack of empathy) and antagonism (active desire to harm), and further differentiates reactive antagonism (vengefulness) from instrumental antagonism (premeditated harm for personal gain).
Why Malevolence Persists:
Despite its negative consequences, malevolence persists due to evolutionary factors such as varying environmental pressures, frequency-dependent selection, and polygenic mutation-selection balance.
Chaotic and lawless environments tend to favor individuals with malevolent traits, providing them with opportunities for power and survival.
Factors Amplifying Malevolence:
Admiration: The desire for power and recognition can drive individuals to seek positions of influence, amplifying the impact of their malevolent tendencies.
Boldness: The ability to remain calm and focused in stressful situations can be advantageous in attaining power.
Disinhibition/Planfulness: A balance of impulsivity and self-control can be effective in achieving goals, both good and bad.
Conscientiousness: Hard work and orderliness contribute to success in various domains, including those with potential for harm.
General Intelligence: Higher intelligence can enhance an individual’s ability to plan and execute harmful actions.
Psychoticism: Paranoia and impaired reality testing can lead to harmful decisions and actions.
Recommendations for EA:
Screening: Implementing psychometric measures to assess malevolence in individuals seeking positions of power.
Awareness: Recognizing that malevolence is not always linked to overt antisocial behavior or mental illness.
Intervention: While challenging, interventions should ideally target the neurological and biological underpinnings of malevolence, particularly during early development.
EA Community: While EA’s values and selection processes may offer some protection against malevolent actors, its emphasis on rationality and risk-neutrality could inadvertently attract or benefit such individuals. Vigilance and robust institutions are crucial.
Compassion and Action:
The post concludes by acknowledging the complexity of human nature and the potential for evil within all individuals. However, it emphasizes the need to draw lines and prevent individuals with high levels of malevolence from attaining positions of power. This requires a combination of compassion, understanding, and decisive action to safeguard the well-being of society.
Great comment—thanks so much!
Regarding CCEI’s effect of shifting deploy$ to RD&D$:
Yes, in the Guesstimate model the confidence interval went from 0.1% to 1%, lognormally distributed, with a mean of ~0.4%.
With UseCarlo I used a metalog distribution with parameters 0%, 0.1%, 2%, 10%, resulting in a mean of ~5%.
So you’re right, there is indeed about an order of magnitude difference between the two estimates:
This is mostly driven by my assigning some credence to the possibility that CCEI might have had as much as a 10% influence, which I wouldn’t rule out entirely.
However, the confidence intervals of the two estimates are overlapping.
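For readers who want to check the Guesstimate figure, here is a minimal sketch of how the ~0.4% mean follows from the 0.1%-1% interval, assuming (as Guesstimate does by default) that the range is a 90% credible interval of a lognormal; the UseCarlo metalog is harder to reproduce without the tool itself:

```python
import numpy as np

# Minimal sketch: implied mean of a lognormal whose 90% CI runs from 0.1% to 1%
# (assumes the 0.1%-1% range is a 90% interval, Guesstimate's default).
lo, hi = 0.001, 0.01
z90 = 1.645  # z-score bounding the central 90% of a normal
mu = (np.log(lo) + np.log(hi)) / 2
sigma = (np.log(hi) - np.log(lo)) / (2 * z90)
print(f"implied mean: {np.exp(mu + sigma**2 / 2):.2%}")  # ~0.40%
```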
I agree this is the weakest part of the analysis. As I highlighted, it’s a guesstimate motivated by the qualitative analysis that CCEI is part of the coalition of key movers and shakers that shifted budget increases to energy RD&D.
I think both estimates are roughly valid given the information available. Without further analysis, I don’t have enough precision to zero in on the most likely value.
I lost access to UseCarlo during the write-up, and then the analysis was delayed for quite some time (I had initially pitched it to FTX as an Impact NFT).
I just wanted to get the post out rather than delay further. With more resources, one could certainly dig deeper and make the analysis more rigorous and detailed. But I hope it provides a useful starting point for discussion and further research.
One could further nuance this analysis, e.g. by calculating the marginal effect of our $1M on US climate policy philanthropy at the then-current ~$55M level vs. what it is now.
Thanks also for the astute observation about estimating expected cost-effectiveness in t/$ vs $/t. You raise excellent points and I agree it would be more elegant to estimate it as t/$ for the reasons you outlined.
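To illustrate one standard reason for preferring t/$ (with made-up numbers of my own, and possibly not the exact reasons the commenter outlined): when effectiveness is uncertain, the expectation of $/t is not the reciprocal of the expectation of t/$, because averaging the reciprocal overweights low-effectiveness scenarios:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical uncertainty over tonnes abated per dollar (illustrative numbers only)
t_per_dollar = rng.lognormal(mean=np.log(0.01), sigma=1.0, size=100_000)

print(1 / t_per_dollar.mean())    # $/t implied by E[t/$]: ~60
print((1 / t_per_dollar).mean())  # E[$/t]: ~165, much larger (Jensen's inequality)
```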
I really appreciate you taking the time to engage substantively with the post.
Excellent post!