Follow me on hauke.substack.com
I’m an independent researcher working on EA topics (Global Priorities Research, Longtermism, Global Catastrophic Risks, and Economics).
Aschenbrenner and his investors will gain financially from more and accelerated AGI progress.
Not necessarily: they could just invest in publicly traded companies, where the counterfactual impact is not very large (even a large hedge fund buying, say, some Google stock wouldn't much move the market cap). They could also be shorting certain companies, which might reduce economically inefficient overinvestment in AI (overinvestment that might also have x-risk externalities). It would be different if he ran a VC fund and invested in getting the next, say, Anthropic off the ground. Especially if the profits are donated and used for mission hedging, this might be good.
The hedge fund could position Aschenbrenner to have a deep understanding of and connections within the AI landscape, making the think tank outputs very good, and causing important future decisions to be made better.
Yes, the outputs might be better as the incentives are aligned: the hedge fund / think tank has ‘skin in the game’ to get the correct answers on the future of AI progress (though maybe some big banks are also trying to move markets with their publications).
Good point.
Fair point, but as I wrote, this is just the optimistic, boring 'business as usual' scenario in the absence of catastrophes (e.g. a new cold war). I still think it's a somewhat likely outcome.
On environment / energy: perhaps we’ll decouple growth and environmental externalities.
The models from consultancies are based on standard growth models and correlate strongly with IMF projections.
Excellent point: I do cite the Our World in Data article “Meat consumption tends to rise as we get richer”, which includes the figure you pasted.
I agree that we should try to decouple this trend—I think the most promising approach is increasing alternative protein R&D (GFI.org is working on this).
Thanks! Excellent comment.
My ambition here was perhaps simpler than you might have assumed: I just wanted to highlight an even weaker version of Basil's finding that I thought was worth noting: even if GDP percentage growth slows down, a smaller growth rate can still mean more dollars added every year in absolute terms.
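A minimal worked example with made-up round numbers (purely illustrative, not projections):

```python
# Illustrative only: hypothetical GDP levels and growth rates.
gdp_now, growth_now = 100e12, 0.03        # $100T economy growing at 3%
gdp_later, growth_later = 200e12, 0.02    # $200T economy growing at only 2%

print(gdp_now * growth_now)      # 3e12  -> +$3T of output added this year
print(gdp_later * growth_later)  # 4e12  -> +$4T added, despite the slower growth rate
```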
Sorry, I also don't know much more about this and don't have the cognitive capacity right now to think this through for utility increases; maybe this breaks down at certain ηs.
Maybe it doesn’t make sense to think of just ‘one true average η’, like 1.5 for OECD countries, but rather specific ηs for different comparisons and doublings.
There was a related post on this recently—would love for someone to get to the bottom of it.
Good catch! I've fixed this; it should be:
“The next $1k/cap increase in a country at $10k/cap is worth 10x as much as in a country at $100k/cap, because the utility gained from increasing consumption from $10k to $11k is much greater than the utility gained from increasing consumption from $100k to $101k, even though the absolute dollar amounts are the same.”
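For intuition, here is a minimal sketch of the arithmetic behind the '10x' figure, assuming isoelastic (CRRA) utility with η = 1, i.e. log utility; the exact ratio depends on which η you use:

```python
import math

def utility_gain(c_from, c_to, eta=1.0):
    """Gain in isoelastic (CRRA) utility from raising consumption c_from -> c_to."""
    if eta == 1.0:                      # eta = 1 is the log-utility case
        return math.log(c_to) - math.log(c_from)
    return (c_to ** (1 - eta) - c_from ** (1 - eta)) / (1 - eta)

poor = utility_gain(10_000, 11_000)     # $10k -> $11k
rich = utility_gain(100_000, 101_000)   # $100k -> $101k
print(poor / rich)                      # ~9.6, i.e. roughly the 10x in the quote
```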
Same with salaries, actually. If you let people filter by salary ranges, that would force orgs to give up some leverage during negotiation.
You should display how many people have already applied for a job and let applicants filter by that, so they can target neglected jobs. Ideally via your own application forms, but click-through statistics would do. Big orgs might not like that, because they want as many applicants as possible and don't internalize the externalities of wasted application time, but for candidates it would be better.
AI labs tend to partner with Big Tech for money, data, compute, scale, etc. (e.g. Google/DeepMind, Microsoft/OpenAI, and Amazon/Anthropic). Presumably to compete better? If they're already competing hard now, then it seems unlikely that they'll coordinate much on slowing down in the future.
Also, it seems like a function of timelines: antitrust advocates argue that breaking up firms or preventing mergers slows an industry down in the short run but speeds it up in the long run by increasing competition; but if competition is usually already healthy, as libertarians often argue, then antitrust interventions might slow down industries in the long run too.
AI policy folks and research economists could engage with the arguments and the cited literature.
Grassroots folks like Pause AI sympathizers could put pressure on politicians and regulators to investigate this more (some claims, like the tax avoidance ones, seem the most robustly correct and good).
I only said we should look into this more and have reviewed the pros and cons from different angles (e.g. not only consumer harms). As you say, the standard argument is that breaking up monopolists like Google increases consumer surplus and this might also apply here.
But I'm not sure to what extent, in the short and long run, this increases or decreases AI risks and/or race dynamics, within the West or between countries. This approach might be more elegant than Pausing AI, which definitely reduces consumer surplus.
Cool instance of black box evaluation—seems like a relatively simple study technically but really informative.
Do you have more ideas for future research along those lines you’d like to see?
It's AI-generated with Gemini 1.5 Pro. I had initially indicated that, but then had formatting issues, had to repaste it, and forgot to add the note back. Now fixed.
Reimagining Malevolence: A Primer on Malevolence and Implications for EA—AI Summary
This extensive post delves into the concept of malevolence, particularly within the context of effective altruism (EA).
Key points:
Defining Malevolence:
The post critiques the limitations of the Dark Triad/Tetrad framework and proposes the Dark Factor (D) as a more comprehensive model. D focuses on the willingness to cause disutility to others, encompassing traits like callousness, sadism, and vindictiveness.
The post also distinguishes between callousness (lack of empathy) and antagonism (active desire to harm), and further differentiates reactive antagonism (vengefulness) from instrumental antagonism (premeditated harm for personal gain).
Why Malevolence Persists:
Despite its negative consequences, malevolence persists due to evolutionary factors such as varying environmental pressures, frequency-dependent selection, and polygenic mutation-selection balance.
Chaotic and lawless environments tend to favor individuals with malevolent traits, providing them with opportunities for power and survival.
Factors Amplifying Malevolence:
Admiration: The desire for power and recognition can drive individuals to seek positions of influence, amplifying the impact of their malevolent tendencies.
Boldness: The ability to remain calm and focused in stressful situations can be advantageous in attaining power.
Disinhibition/Planfulness: A balance of impulsivity and self-control can be effective in achieving goals, both good and bad.
Conscientiousness: Hard work and orderliness contribute to success in various domains, including those with potential for harm.
General Intelligence: Higher intelligence can enhance an individual’s ability to plan and execute harmful actions.
Psychoticism: Paranoia and impaired reality testing can lead to harmful decisions and actions.
Recommendations for EA:
Screening: Implementing psychometric measures to assess malevolence in individuals seeking positions of power.
Awareness: Recognizing that malevolence is not always linked to overt antisocial behavior or mental illness.
Intervention: While challenging, interventions should ideally target the neurological and biological underpinnings of malevolence, particularly during early development.
EA Community: While EA’s values and selection processes may offer some protection against malevolent actors, its emphasis on rationality and risk-neutrality could inadvertently attract or benefit such individuals. Vigilance and robust institutions are crucial.
Compassion and Action:
The post concludes by acknowledging the complexity of human nature and the potential for evil within all individuals. However, it emphasizes the need to draw lines and prevent individuals with high levels of malevolence from attaining positions of power. This requires a combination of compassion, understanding, and decisive action to safeguard the well-being of society.
Great comment—thanks so much!
Regarding CCEI's effect of shifting deployment $ to RD&D $:
Yes, in the Guesstimate model the confidence interval went from 0.1% to 1%, lognormally distributed, with a mean of ~0.4%.
With UseCarlo I used a metalog distribution with parameters 0%, 0.1%, 2%, 10%, resulting in a mean of ~5%.
So you’re right, there is indeed about an order of magnitude difference between the two estimates:
This is mostly driven by my assigning some credence to the possibility that CCEI might have had as much as a 10% influence, which I wouldn’t rule out entirely.
However, the confidence intervals of the two estimates are overlapping.
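For what it's worth, here's a rough sanity check of the Guesstimate figure: a minimal sketch that reconstructs a lognormal from the stated 90% interval (it doesn't re-run the original model, and it takes the ~5% metalog mean from UseCarlo as given):

```python
import math

# Back out a lognormal from the stated 90% CI of 0.1%-1% (illustrative reconstruction).
lo, hi = 0.001, 0.01                             # 5th and 95th percentiles
z = 1.645                                        # z-score of the 95th percentile
mu = (math.log(lo) + math.log(hi)) / 2           # log-space mean (median = exp(mu))
sigma = (math.log(hi) - math.log(lo)) / (2 * z)  # log-space standard deviation
mean = math.exp(mu + sigma ** 2 / 2)             # mean of a lognormal
print(f"lognormal mean ≈ {mean:.2%}")            # ≈ 0.40%, vs. the ~5% metalog mean
```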
I agree this is the weakest part of the analysis. As I highlighted, it’s a guesstimate motivated by the qualitative analysis that CCEI is part of the coalition of key movers and shakers that shifted budget increases to energy RD&D.
I think both estimates are roughly valid given the information available. Without further analysis, I don’t have enough precision to zero in on the most likely value.
I lost access to UseCarlo during the write-up, and then the analysis was delayed for quite some time (I had initially pitched it to FTX as an Impact NFT).
I just wanted to get the post out rather than delay further. With more resources, one could certainly dig deeper and make the analysis more rigorous and detailed. But I hope it provides a useful starting point for discussion and further research.
One could further nuance this analysis, e.g. by calculating the marginal effect of our $1M on US climate policy philanthropy at the ~$55M level used in the analysis vs. where it stands now.
Thanks also for the astute observation about estimating expected cost-effectiveness in t/$ vs $/t. You raise excellent points and I agree it would be more elegant to estimate it as t/$ for the reasons you outlined.
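A minimal illustration, with made-up numbers, of why the two framings can disagree: averaging $/t across uncertain scenarios and then inverting is not the same as averaging t/$ directly, since E[1/X] ≠ 1/E[X].

```python
# Hypothetical, equally likely cost-per-tonne scenarios ($/t); purely illustrative.
cost_per_tonne = [0.1, 1.0, 10.0]

mean_dollars_per_t = sum(cost_per_tonne) / len(cost_per_tonne)                 # ~3.70 $/t
mean_t_per_dollar = sum(1 / c for c in cost_per_tonne) / len(cost_per_tonne)   # ~3.70 t/$

print(1 / mean_dollars_per_t)  # ~0.27 t/$: naively inverting the averaged $/t
print(mean_t_per_dollar)       # ~3.70 t/$: averaging t/$ directly
```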
I really appreciate you taking the time to engage substantively with the post.
AI Summary of the “Quick Update on Leaving the Board of EV” Thread (including comments):
Rebecca Kagan’s resignation from the board of Effective Ventures (EV) due to disagreements regarding the handling of the FTX crisis has sparked an intense discussion within the Effective Altruism (EA) community. Kagan believes that the EA community needs an external, public investigation into its relationship with FTX and its founder, Sam Bankman-Fried (SBF), to address mistakes and prevent future harm. She also calls for clarity on EA leadership and their responsibilities to avoid confusion and indirect harm.
The post generated extensive debate, with many community members echoing the call for a thorough, public investigation and postmortem. They argue that understanding what went wrong, who was responsible, and what structural and cultural factors enabled these mistakes is crucial for learning, rebuilding trust, and preventing future issues. Some point to the concerning perception gap between those who had early concerns about SBF and those who seemingly ignored or downplayed these warnings.
However, others raise concerns about the cost, complexity, and legal risks involved in conducting a comprehensive investigation. They worry about the potential for re-victimizing those negatively impacted by the FTX fallout and argue that the key facts may have already been uncovered through informal discussions.
Alternative suggestions include having multiple individuals with relevant expertise conduct post-mortems, focusing on improving governance and organizational structures, and mitigating the costs of speaking out by waiving legal obligations or providing financial support for whistleblowers.
The thread also highlights concerns about recent leadership changes within EA organizations. Some argue that the departure of individuals known for their integrity and thoughtfulness regarding these issues raises questions about the movement’s priorities and direction. Others suggest that these changes may be less relevant due to factors such as the impending disbanding of EV or reasons unrelated to the FTX situation.
Lastly, the discussion touches on the concept of “naive consequentialism” and its potential role in the FTX situation and other EA decisions. The OpenAI board situation is also mentioned as an example of the challenges facing the EA community beyond the FTX crisis, suggesting that the core issues may lie in the quality of governance rather than a specific blind spot.
Overall, the thread reveals a community grappling with significant trust and accountability issues in the aftermath of the FTX crisis. It underscores the urgent need for the EA community to address questions of transparency, accountability, and leadership to maintain its integrity and continue to positively impact the world.
What are the most surprising things that emerged from the thread?
Based on the summaries, a few surprising or noteworthy things emerged from the “Quick Update on Leaving the Board of EV” thread:
The extent of disagreement and concern within the EA community regarding the handling of the FTX crisis, as highlighted by Rebecca Kagan’s resignation from the EV board and the subsequent discussion.
The revelation of a significant perception gap between those who had early concerns about Sam Bankman-Fried (SBF) and those who seemingly ignored or downplayed these warnings, suggesting a lack of effective communication and information-sharing within the community.
The variety of perspectives on the necessity and feasibility of conducting a public investigation into the EA community’s relationship with FTX and SBF, with some advocating strongly for transparency and accountability, while others raised concerns about cost, complexity, and potential legal risks.
The suggestion that recent leadership changes within EA organizations may have been detrimental to reform efforts, with some individuals known for their integrity and thoughtfulness stepping back from their roles, raising questions about the movement’s priorities and direction.
The mention of the OpenAI board situation as another example of challenges facing the EA community, indicating that the issues extend beyond the FTX crisis and may be rooted in broader governance and decision-making processes.
The discussion of “naive consequentialism” and its potential role in the FTX situation and other EA decisions, suggesting a need for the community to re-examine its philosophical foundations and decision-making frameworks.
The emotional weight and urgency conveyed by many community members regarding the need for transparency, accountability, and reform, underscoring the significance of the FTX crisis and its potential long-term impact on the EA movement’s credibility and effectiveness.
These surprising elements highlight the complex nature of the challenges facing the EA community and the diversity of opinions within the movement regarding the best path forward.
He did mention the head of the FTX Foundation, which was Nick Beckstead. I'm not sure about the others, but it would still seem weird for them to say it like that; maybe one of the younger staff members said something like 'I care more about the far future' or something along the lines of 'GiveDirectly is too risk averse'. But I'd still say he's painting quite the stereotype of EA here.
Pointing to white papers from think tanks that you fund isn’t a good evidentiary basis to support the claim of R&D’s cost effectiveness.
I cite a range of papers from academia, government, and think tanks in the appendix. You don't cite anything either; those are just, like… your opinions, no?
The R&D benefit for advanced nuclear since the 1970s has yielded a net increase in price for that technology
Are you saying that the more we invest in R&D, the higher the costs? I agree that nuclear is getting more expensive on net, but that is still consistent with R&D driving the price down relative to what it would otherwise be.
After that, all the technology gains came from scaling, not R&D.
What about the perovskite fever from the mid ’10s?
Also there’s a long lag with research.
And historic estimates are not necessarily indicative of future gains; we should expect diminishing returns.
Furthermore, most of the money in BIL and IRA were for demonstration projects—advanced nuclear, the hydrogen hubs, DAC credits. Notably NOT research and development. You make a subtle shift in your cost effectiveness table where you use unreviewed historic numbers on cost-effectiveness for research and development, and then apply that to the much larger demonstration and deployment dollars. Apples and oranges. The needs for low TRL tech is very different from high TRL tech.
I've simplified R&D to RD&D here, but I do cite RD&D projections; see my calculation. Do you think these numbers are off? What do you think they are? All models are wrong, as they say.
Lastly, a Bill Gates retweet is not the humble brag you think it is. Bill has a terrible track record of success in energy ventures; he’s uninformed and impulsive. Saying Bill Gates likes your energy startup is like saying Jim Cramer likes your stock. Both indicate a money-making opportunity for those who do the opposite.
That was a straightforward brag, because he has millions of followers on X. I'm quite critical of Gates; I have blogged about this here. But also, maybe we should give more credit to doing high-risk, high-reward stuff even if it doesn't work out… like Solyndra?
Excellent post!