If this post resonates with you, consider traveling back in time a few days and submitting an application for CEA’s open Head of Communications role! (Remember, EA is first and foremost a do-ocracy, so you really need to be the change you wish to see around here)
I ran into the same issue. It looks like the report is still live at this URL, though! (And here’s an archived version)
On a slight tangent from the above: I think I might have once come across an analysis of EAs’ scores on the Big Five scale, which IIRC found that EAs’ most extreme Big Five trait was high openness. (Perhaps it was Rethink Charity’s annual survey of EAs as e.g. analyzed by ElizabethE here, where [eyeballing these results] on a scale from 1-14, the EA respondents scored an average of 11 for openness, vs. less extreme scores on the other four dimensions?)
If EAs really do have especially high average openness, and high openness is a central driver of high AI xrisk estimates, that could also help explain EAs’ general tendency toward high AI xrisk estimates.
I’d be interested in an investigation and comparison of the participants’ Big Five personality scores. As with the XPT, I think it’s likely that the concerned group is higher on the dimensions of openness and neuroticism, and that these persistent personality differences caused their persistent differences in predictions.
To flesh out this theory a bit more:
Similar to the XPT, this project failed to find much difference between the two groups’ predictions for the medium term (i.e. through 2030), or at least not nearly enough disagreement to explain the divergence in their AI risk estimates through 2100. So to explain the divergence, we’d want a factor that (a) was stable over the course of the study, and (b) would influence estimates of xrisk by 2100 but not nearer-term predictions.
Compared to the other forecast questions, the question about xrisk by 2100 is especially abstract; generating an estimate requires entering far mode to average out possibilities over a huge set of complex possible worlds. As such, I think predictions on this question are uniquely reliant on one’s high-level priors about whether bizarre and horrible things are generally common or are generally rare—beyond those priors, we really don’t have that much concrete to go on.
I think neuroticism and openness might be strong predictors of these priors:
I think one central component of neuroticism is a global prior on danger.[1] In essence: is the world a safe place where things are fundamentally okay? Or is the world vulnerable?
I think a central component of openness to experience is something like “openness to weird ideas”[2]: how willing are you to flirt with weird/unusual ideas, especially those that are potentially hazardous or destabilizing to engage with? (Arguments that “the end is nigh” from AI probably fit this bill, once you consider how many religious, social, and political movements have deployed similar arguments to attract followers throughout history.)
Personality traits are by definition mostly stable over time—so if these traits really are the main drivers of the divergence in the groups’ xrisk estimates, that could explain why participants’ estimates didn’t budge over 8 weeks.
[1] For example, this source identifies “a pervasive perception that the world is a dangerous and threatening place” as a core component of neuroticism.
[2] I think this roughly lines up with scales c (“openness to theoretical or hypothetical ideas”) and e (“openness to unconventional views of reality”) from here.
How does one vote? (Sorry if this is super obvious and I’m just missing it!)
Normally I’d recommend freewill.com for this (which is designed with charitable donation as a central use case), but I see now that it’s only for US-based assets.
One potential reason for the observed difference between expert and superforecaster estimates: even though they’re nominally participating in the same tournament, joining it is a much stranger choice for the experts than for the superforecasters, who have presumably already built up an identity where it makes sense to spend a ton of time and deep thought on a forecasting tournament on top of their day jobs and other life commitments. I think there’s some evidence for this in the dropout rates, which were 19% for the superforecasters but 51% (!) for the experts, suggesting that experts were especially likely to second-guess their decision to participate. (Also, see the discussion in Appendix 1 of the difficulties in recruiting experts—it seems like it was pretty hard to find non-superforecasters who were willing to commit to a project like this.)
So, the subset of experts who take the leap and participate in the study anyway are selected for something like “openness to unorthodox decisions/beliefs,” roughly equivalent to the Big Five personality trait of openness (or other related traits). I’d guess that each participant’s level of openness is a major driver (maybe even the largest driver?) of whether they accept or dismiss arguments for 21st-century x-risk, especially from AI.
Ways you could test this:
Test the Big Five personality traits of all participants. My guess is that the experts would have higher average openness than the superforecasters—but the difference would be even greater if comparing the average openness of the “AI-concerned” group (highest) to the “AI skeptics” (lowest); a rough sketch of such a comparison follows this list. These personality-level differences seem to match well with the groups’ object-level disagreements on AI risk, which mostly didn’t center on timelines and instead centered on disagreements about whether to take the inside or outside view on AI.
I’d also expect the “AI-concerned” to have higher neuroticism than the “AI skeptics,” since I think high/low neuroticism maps closely to something like a strong global prior that the world is/isn’t dangerous. This might explain the otherwise strange finding that “although the biggest area of long-run disagreement was the probability of extinction due to AI, there were surprisingly high levels of agreement on 45 shorter-run indicators when comparing forecasters most and least concerned about AI risk.”
When trying to compare the experts and superforecasters to the general population, don’t rely on a poll of random people, since completing a poll is much less weird than participating in a forecasting tournament. Instead, try to recruit a third group of “normal” people who are neither experts nor superforecasters, but have a similar opportunity cost for their time, to participate in the tournament. For example, you might target faculty and PhD candidates at US universities working on non-x-risk topics. My guess is that the subset of people in this population who decide “sure, why not, I’ll sign up to spend many hours of my life rigorously arguing with strangers about the end of the world” would be pretty high on openness, and thus pretty likely to predict high rates of x-risk.
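To make the first suggestion above concrete, here is a minimal sketch (mine, not from the report) of how you might compare the two groups’ average openness once you had their scores. The group labels and numbers below are entirely hypothetical, using a 1-14 scale like the survey analysis linked in my other comment:

```python
# Illustrative sketch only: hypothetical openness scores for the "AI-concerned"
# and "AI skeptic" groups (made-up numbers on a 1-14 scale).
from scipy import stats

ai_concerned_openness = [11.5, 12.0, 10.8, 11.2, 12.4, 10.9]  # hypothetical
ai_skeptic_openness = [9.2, 8.8, 10.1, 9.5, 8.6, 9.9]  # hypothetical

# Welch's t-test: compares the group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(
    ai_concerned_openness, ai_skeptic_openness, equal_var=False
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```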
I bring all this up in part because, although Appendix 1 includes a caveat that “those who signed up cannot be claimed to be a representative of [x-risk] experts in each of these fields,” I don’t think there was discussion of specific ways they are likely to be non-representative. I expect most people to forget about this caveat when drawing conclusions from this work, and instead conclude there must be generalizable differences between superforecaster and expert views on x-risk.
Also, I think it would be genuinely valuable to learn the extent to which personality differences do or don’t drive differences in long-term x-risk assessments in such a highly analytical environment with strong incentives for accuracy. If personality differences really are a large part of the picture, it might help resolve the questions presented at the end of the abstract:
“The most pressing practical question for future work is: why were superforecasters so unmoved by experts’ much higher estimates of AI extinction risk, and why were experts so unmoved by the superforecasters’ lower estimates? The most puzzling scientific question is: why did rational forecasters, incentivized by the XPT to persuade each other, not converge after months of debate and the exchange of millions of words and thousands of forecasts?”
Is there any more information available about experts? This paragraph below (from page 10) appears to be the only description provided in the report:
“To recruit experts, we contacted organizations working on existential risk, relevant academic departments, and research labs at major universities and within companies operating in these spaces. We also advertised broadly, reaching participants with relevant experience via blogs and Twitter. We received hundreds of expressions of interest in participating in the tournament, and we screened these respondents for expertise, offering slots to respondents with the most expertise after a review of their backgrounds.[1] We selected 80 experts to participate in the tournament. Our final expert sample (N=80) included 32 AI experts, 15 “general” experts studying longrun risks to humanity, 12 biorisk experts, 12 nuclear experts, and 9 climate experts, categorized by the same independent analysts who selected participants. Our expert sample included well-published AI researchers from top-ranked industrial and academic research labs, graduate students with backgrounds in synthetic biology, and generalist existential risk researchers working at think tanks, among others. According to a self-reported survey, 44% of experts spent more than 200 hours working directly on causes related to existential risk in the previous year, compared to 11% of superforecasters. The sample drew heavily from the Effective Altruism (EA) community: about 42% of experts and 9% of superforecasters reported that they had attended an EA meetup. In this report, we separately present forecasts from domain experts and non-domain experts on each question.”
[1] Footnote here from the original text: “Two independent analysts categorized applicants based on publication records and work history. When the analysts disagreed, a third independent rater resolved disagreement after a group discussion.”
As an outsider to technical AI work, I find this piece really persuasive.
Looking at my own field of US politics and policymaking, there are a couple of potentially analogous situations that I think offer some additional indirect evidence in support of your argument. In both of these examples, the ideological preferences of employees and job candidates seem to impact the behavior of important political actors:
The lefty views of software engineers and similar non-journalist employees at the New York Times seem to have strongly contributed to the outlet’s shift toward more socially progressive reporting in recent years. This makes me more confident that candidates for roles in AI labs not explicitly related to safety can still exert meaningful influence on the safety-related actions of these labs (including by expressing safety-related preferences in the hiring process as you suggest—and potentially also through their actions during continued employment, akin to the NYT staffers’ slacktivism, though this strikes me as potentially riskier).
Some commentators like David Shor argue that the progressive rank-and-file staffers for Democratic politicians pull these politicians meaningfully to the left in their policies and messaging. I think this is probably because being very progressive correlates with one’s willingness to jump through all the hoops needed to eventually land a Dem staffer role (e.g. getting relevant degrees, working as an unpaid intern and/or campaign volunteer, and generally accepting lower pay and less job security than feasible alternative careers). I think it’s plausible that analogous situations happen in the Republican party, too, swapping “conservative rank-and-file” staffers for progressive ones.
I think there’s conflicting pieces of evidence on this topic, and most recent studies focus on stimulants as add-ons to antidepressants, rather than as the primary treatment for depression.
So, if you think you might be some level of depressed (without having ADHD), I think it’s sound advice to avoid fixating on stimulants as your most promising option—but know you do have lots of effective options to try that might really improve your wellbeing and productivity, such as those discussed here and described by community members here and here.
If you’re not the right person for the article, I’d instead recommend this post on Sustained effort, potential, and obligation. I’ve found it’s given me a helpful framework for making sense of my own limits on working hours, and you may also find it useful.
I admit that reading this post also stirred up some feelings of inadequacy for me—because, unlike all those CEOs and great men of history, I actually have a pretty low-to-average limit on how much sustained effort my brain will tolerate in a day. If you find yourself with similar feelings (which might be distressing, perhaps even leading you to spiral into self-hatred and/or seek out extreme measures to ‘fix’ yourself), the best antidote I’ve found for myself is The Parable of the Talents by Scott Alexander. (TLDR: variation in innate abilities is widespread, and recognizing and accepting the limits on one’s abilities is both more truthful and more compassionate than denying them.)
I think the Pandemic Prevention Network, formerly No More Pandemics, is active in this space. (Definitely in the UK, maybe some work in the US?) More info on them from:
This EA Forum post by founder Sanjay Joshi
Sanjay’s talk at EAGxOxford in March 2022
On the other hand, as of October 23, 2022, this page on their site states “Our operations are currently on hold,” so maybe they’re not active at the moment.
This piece might have some of what you’re looking for: https://www.washingtonpost.com/opinions/2023/10/31/ai-gina-raimondo-is-steph-curry/