I think this is a good instance of focusing through cause areas, and one I had in mind.
I empathize with the desire behind the request, which is why I'm responding, but yeah, I'm unsure the EA Forum is the right place for the presidential election.
I wonder what it would look like if one were to make an argument for a candidate strictly across causes that are more EA-consensus/Open Phil-funded: candidate X is good for animal welfare, global health and development, and catastrophic/existential risk from pandemics and AI. Here are the policies, and here is the total analysis across these causes of how they differentially direct this many GiveDirectly-rated dollars/QALYs.
But yeah, seems hard. Also open to just being wrong here.
Thank you for making the effort to write this post.
Reading Situational Awareness, I updated pretty hardcore toward national security as probably the most successful future path, and now I find myself a little chastened by your piece, haha [I also just went around looking at other responses, but yours was first and I think it's the most lit/evidence-based]. I think I bought into the "Other" argument for China and authoritarianism, and into the ideal scenario of being ahead in a short-timeline world so that you don't even have to concern yourself with difficult coordination, or even war, if it all happens fast enough.
I appreciated learning about macrosecuritization and Sears' thesis; if I'm a good scholar, I should also look into Sears' historical case studies of national securitization proving inferior to macrosecuritization.
Other notes for me from your article included: Leopold's pretty bad handwaviness around pausing as simply "not the way", his unwillingness to engage with alternative paths, the danger (and the benefit to him) of his narrative dominating, and national security actually being more at risk in the scenario where someone is threatening to escape mutually assured destruction. I appreciated the note that safety researchers were pushed out of/disincentivized in the Manhattan Project early on and disempowered further later, and that a national security program would probably perpetuate itself even with a lead.
FWIW, I think Leopold also comes to the table with a different background and set of assumptions, and I'm confused about this, but charitably: I think he does genuinely believe China is the bigger threat (versus the intelligence explosion); I don't think he intentionally frames China as the Other to diminish macrosecuritization in the face of AI risk. See the next note for more. But yes, again, I agree his piece doesn't have good epistemics when it comes to exploring alternatives, like a pause, and he seems to be doing his darnedest narratively to say the path he describes is The Way (even capitalizing words like this), but...
One additional aspect of Leopold's beliefs that I don't believe is present in your current version of this piece is that he makes a pretty explicit claim that alignment is solvable, and furthermore believes it could be solved in a matter of months. From p. 101 of Situational Awareness:
"Moreover, even if the US squeaks out ahead in the end, the difference between a 1-2 year and 1-2 month lead will really matter for navigating the perils of superintelligence. A 1-2 year lead means at least a reasonable margin to get safety right, and to navigate the extremely volatile period around the intelligence explosion and post-superintelligence." [Footnote 77: "E.g., space to take an extra 6 months during the intelligence explosion for alignment research to make sure superintelligence doesn't go awry, time to stabilize the situation after the invention of some novel WMDs by directing these systems to focus on defensive applications, or simply time for human decision-makers to make the right decisions given an extraordinarily rapid pace of technological change with the advent of superintelligence."]
I think this is genuinely a crux he has with the "doomers", and to a lesser extent with the AI safety community in general. He seems highly confident that AI risk is solvable (and will benefit from government coordination), contingent on there being enough of a lead (which requires us to go faster to produce that lead) and good security (again, to increase the lead).
Finally, I'm sympathetic to Leopold's argument that the government, rather than corporations, should be in charge here (and I think the current rate of AI scaling makes government involvement likely at some point: we may hit proto-natsec-level capability before x-risk-level capability, perhaps playing out on the model-generation release schedule). His emphasis on security itself also seems pretty robustly good (I can thank him for introducing me to the idea of North Korea walking away with AGI weights). Also, the writing is just pretty excellent.
Noting another recent post doing this: https://forum.effectivealtruism.org/posts/RbCnvWyoiDFQccccj/on-the-dwarkesh-chollet-podcast-and-the-cruxes-of-scaling-to
Yeah, really interesting, thanks for sharing. The incentive structure here seems like a pretty nice clean loop, where a better world model actually does predict something that matters more accurately (better financial news benefits readers more directly, whereas the incentive with other news sources is maybe more meta/abstract: agreeing with your community and being up to date).
And one last thought: the incentives news has to be negative seem quite bad. If there were a tractable intervention to mitigate those incentives, maybe it could do a lot of good.
And from the ronghosh article: "… All we have to do is fix the loneliness crisis, the fertility crisis, the housing crisis, the obesity crisis, the opioid crisis, the meaning crisis, the meta crisis, the flawed incentives of the political system, the flawed incentives of social media, the flawed incentives of academia, the externalities leading to climate change, the soft wars with China and Iran, the hot war with Russia, income inequality, status inequality, racism, sexism, and every other form of bigotry."
Of course, as someone who’s steeped in all the AI stuff, I can’t help but just think that A) AI is the most important thing to consider here (ha!), since B) it might allow us (‘alignment’ allowing) to scale the sort of sense-making and problem-solving cognition to help solve all the problems that we’re seemingly increasingly making for ourselves. And yeah this is reductionist and probably naive.
I think the question "is the world getting better?" is important for effective altruists (the soft pitch for why is that it's a crucial consideration for decision-making).
IDK, quick take because I’m just thinking about the following links, and people’s perceptions around this question.
170 years of American news coverage:
https://x.com/DKThomp/status/1803766107532153119 (linked in Marginal Revolution): "We really are living in an era of negativity-poisoned discourse that is (*empirically*) historically unique."
(and this Atlantic article by the same author as the tweet discussing how America may be producing and exporting a lot of anxiety)
And I thought this piece laid out really quite well the points on both sides (things are better; things are worse), and it introduced me to the neat term "the vibecession":
https://ronghosh.substack.com/p/the-stratification-of-gratification
(linked in r/slatestarcodex)
In particular, I thought this quote was funny too, and got me:
“Anecdotally, this is also where a subset of rationalists appear to be inconsistent in their worldview. One moment they claim the majority of people are data illiterate, and are therefore unrealistically pessimistic, and in the next moment they will set p(doom) to 10%.”
And [I had more to say here, but I think I'll just leave it at another excerpt]:
"Like, yes; it's fairly uncontroversial to say that the world and the economy is better than ever. Even the poorest among us have super computers in our pockets now capable of giving us a never-ending stream of high-quality videos, or the power to summon a car ride, some food, or an Amazon delivery at any given moment. And yet, all of this growth and change and innovation and wealth has come at the cost of some underlying stability. For a lot of people, they feel like they're no longer living on land; instead they've set sail on a vessel — and the never-ending swaying, however gentle it might feel, is leaving them seasick."
(Finally, I coincidentally also read recently a listicle of ordinary life improvements from Gwern)
Hi JWS, unsure if you'd have seen this since it's on LW, and I thought you'd be interested (I'm not sure what to think of Chollet's work tbh and haven't been able to spend time on it, so I'm not making much of a claim in sharing this!)
https://www.lesswrong.com/posts/Rdwui3wHxCeKb7feK/getting-50-sota-on-arc-agi-with-gpt-4o
It is an OP grantmaking program now, afaik https://forum.effectivealtruism.org/posts/ziSEnEg4j8nFvhcni/new-open-philanthropy-grantmaking-program-forecasting
FWIW Habryka, I appreciate all that I know you’ve done and expect there’s a lot more I don’t know about that I should be appreciative of too.
I would also appreciate it if you'd write up these concerns. I guess I want to know if I should feel similarly, even though I rather trust your judgment. Sorry to ask, and thanks again.
Editing to note I've now seen some of the comments elsewhere.
Yeah, thank you. I guess I was trying to say that the evidence only seems to grow stronger over time that the Bay Area's "AI is the only game in town" view is accurate.
Insofar as: timelines for various AI capabilities have outpaced both superforecasters' and AI insiders' predictions; transformative AI timelines (at Open Phil, on prediction markets, and among AI experts, I think) have shortened significantly over the past few years; the performance of LLMs has increased at an extraordinary rate across benchmarks; and we expect the next decade to extrapolate this scaling to some extent (with essentially hundreds of billions, if not tens of trillions, of dollars to be invested).
Although, yeah, I think to some extent we can't know whether this continues to scale as prettily as we'd expect, and it's especially hard to predict categorically new futures like sustained exponential growth (10%, 50%, etc. growth per year). Given the forecasting efforts and trends thus far, it feels like there's a decent chance of these wild futures, and people are kinda updating all the way? Maybe not Open Phil entirely (to the point that EA isn't just AI safety), since they're hedging their altruistic bets in the face of some possibility that this decade could be "the precipice", or one of the most important ever.
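To make those "wild future" growth numbers concrete, here's a minimal arithmetic sketch (my own illustration, not from the original discussion; the 3%/10%/50% rates are just example values) of how different annual growth rates compound over a decade:

```python
# Rough compounding illustration with example rates; not a forecast.
def growth_factor(annual_rate: float, years: int = 10) -> float:
    """Total multiple on the starting level after `years` of compound growth."""
    return (1 + annual_rate) ** years

for rate in (0.03, 0.10, 0.50):
    print(f"{rate:.0%}/year for 10 years -> ~{growth_factor(rate):.1f}x")
# Prints roughly 1.3x, 2.6x, and 57.7x, which is why 10-50%/year reads as a "wild future"
# compared with historical ~3%/year growth.
```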
Misuse and AI risk seem like the negative valence of AI’s transformational potential. I personally buy the arguments around transformational technologies needing more reasoned steering and safety, and I also buy that EA has probably been a positive influence, and that alignment research has been at least somewhat tractable. Finally I think that there’s more that could be done to safely navigate this transition.
Also, re: David (Thorstad?), yeah, I haven't engaged with his stuff as much as I probably should, and I really don't know how to reason for or against arguments around the singularity, exponential growth, and the potential of AI without deferring to people more knowledgeable/smarter than me. I do feel like I have seen the start and middle of the trends they predicted, and I predict these will extrapolate, based on my own personal use and some early reports on productivity increases.
I do look forward to your sequence and hope you do really well on it!
Hi, I went to Lessonline after registering for EAG London; my impression of why both events were held on the same weekend is something like:
- Events around the weekend (Manifest being held the weekend after Lessonline) informed Lessonline's dates (but why not the weekend after Manifest, then?)
- People don't travel internationally as much for EAGs (someone cited ~10% of attendees to me, though on reflection that seems like an underestimate).
- I imagine EAG Bay Area: Global Catastrophic Risks in early February also somewhat covered the motivation for an "AI safety/EA conference".
I think you're right that it's not *entirely* a coincidence that Lessonline conflicted with EAG Bay Area, but I'm thinking this was done somewhat more casually and probably reasonably.
I think it's odd, and others have noted this too, that the most significant AI safety conference shares space with things unrelated on an object level. It's further odd to consider that, as I've heard people say, "why bother going to a conference like this when I live in the same city as the people I'd most want to talk with (Berkeley/SF)?"
Finally, I feel weird about AI, since I think insiders are only becoming more convinced of (and confirmed in) the likelihood of extreme events from AI capabilities. I think AI safety has only become more important by virtue of most people updating their timelines earlier, not later, and this includes Open Phil's version of this (Ajeya and Joe Carlsmith's AI timelines). In fact, I've heard arguments that it's actually less important by virtue of "the cat's out of the bag, and not even Open Phil can influence trajectories here." Maybe AI safety feels less neglected because it's being advocated for by large labs, but that may be both a result of EA/EA-adjacent efforts and not really enough to solve a unilateralizing problem.
(feel a little awkward just pushing news but feel some completeness obligation on this subject)
U.S. Commerce Secretary Gina Raimondo Announces Expansion of U.S. AI Safety Institute Leadership Team [and Paul Christiano update]
My initial thoughts around this are that, yeah, good information is hard to find and prioritize, but I would really like better and more accurate information to be more readily available. I actually think AI models like ChatGPT achieve this to some extent, as a sort of not-quite-expert on a number of topics, and I would be quite excited to have these models become even better accumulators of knowledge and communicators. Already it seems like there's been a sort of benefit to productivity (one thing I saw recently: https://arxiv.org/abs/2403.16977). So I guess I somewhat disagree with AI being net negative as an informational source, but I do agree that it's probably enabling the production of a bunch of spurious content, and I have heard arguments that this is going to be disastrous.
But I guess the post is focused more so on news itself? I appreciate the idea of a sort of weekly digest, in that it would somewhat pull people away from the constant news hype cycle; I guess I'm more in favor of longer time horizons for examining what is going on in the world. The debate on covid origins comes to mind, especially considering Rootclaim, as an attempt to create more accurate information accumulation. I guess forecasting is another form of this, whereby taking bets on things before they occur and being measured on your accuracy is an interesting way to consume news that also has a sort of "truth" mechanism to it, and notably a legible operationalization of truth! (Edit: I guess I should also couch this more in what already exists on the EA Forum; LessWrong and rationality pursuits in general seem pretty adjacent here.)
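As an aside on what a "legible operationalization of truth" can look like in forecasting, here's a minimal sketch of the Brier score, one standard accuracy measure (my own illustrative example with made-up numbers, not something from the post):

```python
# Brier score: mean squared error between stated probabilities and actual outcomes.
# 0.0 is perfect; always answering 50% scores 0.25.
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: three resolved questions, where 1 = happened and 0 = didn't.
print(brier_score([0.9, 0.2, 0.6], [1, 0, 1]))  # ~0.07, well better than chance
```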
To some extent, my lame answer is just AI enabling better analysis in the future as probably the most tractable way to address information quality. (Idk, I'm no expert on information, and this seems like a huge problem in a complex world. Maybe there are more legible interventions for improving informational accuracy; I don't know them and don't really have much time, but I would encourage further exploration, and you seem to be checking out a number of examples in another comment!)
Responding to this because I think it discourages a new user from trying to engage and test their ideas against a larger audience, some of whom may have relevant expertise and some of whom may engage; that seems like a decent way to try to learn. Of course, good intentions to solve a "disinformation crisis" like this aren't sufficient; ideally we would be able to perform serious analysis on the problem (scale, neglectedness, tractability, and all that fun stuff, I guess), and in this case tractability seems most relevant. I think your second paragraph is useful in mentioning that this is extremely difficult to implement, but it also just gestures at the problem's existence as evidence.
I share this impression, though, that disinformation is a difficult problem, and I also had a kinda knee-jerk reaction to "high quality content". But idk, I feel like engaging with the piece with more of a yes-and attitude, to encourage entrepreneurial young minds, and/or with more relevant facts of the domain, could be a better contribution.
But I'm doing the same thing and just being meta here, which is easy, so I'll try that myself in another comment.
Yeah, wow, the views-to-engagement ratio is the most unbalanced I've seen (not saying this is a bad or good thing, just noting my surprise).
I sometimes think of the expanding moral circle instead as an abstracting moral, uh, circle: one where I'm able to abstract suffering over distance, over time into the future, onto other species at some rate, into numbers, into probabilities and the meta, into complex understandings of ideas as they interact.
Reading and engaging with the forum seems good for a meta reason: engaging with and encouraging other people to keep making posts, because engagement then visibly exists and they're incentivized to post. Or encouraging even more junior people to try to contribute; idk what the EA Forum felt like ~10 years ago, but the standards for engagement were probably lower.