# Forecasting

Core Tag · Last edit: 31 Jan 2024 15:57 UTC

Forecasting is an important tool for improving the future, because good forecasts and estimates can help us appropriately plan interventions and assess risks. Over the past several decades there has been significant research and investment in forecasting and estimation techniques, tools, and organizations. This continues to be an area of investment for improving our ability to make good decisions.

# The State of Forecasting within EA

There are some major branches of forecasting within the EA movement:

• Personal forecasting—individuals forecasting to improve their decision-making or for status and personal enjoyment

• Forecasting consultancies—EA organisations pay groups of top forecasters, or platforms such as Metaculus, to produce forecasts

• Forecasting research—academic research on the accuracy of forecasting and how to do it better (e.g. by the Forecasting Research Institute)

• Institutional forecasting—efforts to get forecasting used inside governments and large institutions

• Forecasting technology—building new tools for quantification (e.g. Squiggle)

# These areas in more depth

## Institutional forecasting

Forecasting in institutions can range from predicting broad metrics to specific outcomes based on specific decisions. There can often be problems with buy-in from key stakeholders, who either see this as an unnecessary step or are concerned for their own status.

# Forecasting Techniques

Forecasting is hard, but many top forecasters use common techniques. This suggests that forecasting is a skill that can be learnt and practised.

## Base rates

Reference Class Forecasting on Wikipedia

Suppose we are trying to find the probability that an event will occur within the next 5 years. One good place to start is by asking “of all similar time periods, what fraction of the time does this event occur?”. This is the base rate.

If we want to know the probability that Joe Biden is President of the United States on Nov. 1st, 2024, we could ask

• What fraction of presidential terms are fully completed (last all 4 years)? The answer to this is 49 out of the 58 total terms, or around 84%.

• On the other hand, we know that Biden has already made it through 288 days of his term. If we remove the 5 presidents who left office before that point, the base rate is 49 out of 53, or around 92%.

• But alternatively, Joe Biden is pretty old (78, to be exact). If we look up the death rate per year in actuarial tables, it’s around 5.1% per year, which leaves him with a ~15% chance of death before the end of the term, or an ~85% chance of surviving it.

These are all examples of using base rates. [These examples are taken from Base Rates and Reference Classes by jsteinhardt.]

Base rates represent the outside view for a given question. They are a good place to start but can often be improved on by updating the probability according to an inside view.

Note that there are often several reference classes we could use, each implying a different base rate. The problem of deciding which class to use is known as the reference class problem.
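The arithmetic behind the three reference-class estimates above can be sketched in a few lines. The counts and the 5.1% actuarial rate come from the bullets; treating roughly three years of the term as remaining is an assumption about the forecast date:

```python
# Reference-class (base rate) estimates for "Biden completes his term".
# The choice of reference class changes the answer.

completed_terms = 49 / 58        # fraction of presidential terms fully completed
print(f"All terms:          {completed_terms:.0%}")    # ~84%

survived_288_days = 49 / 53      # drop the 5 presidents who left before day 288
print(f"Terms past day 288: {survived_288_days:.0%}")  # ~92%

annual_death_rate = 0.051        # actuarial rate for a 78-year-old (approximate)
years_remaining = 3              # assumed years of the term left at forecast time
survives_term = (1 - annual_death_rate) ** years_remaining
print(f"Actuarial survival: {survives_term:.0%}")      # ~85%
```

Each line is a different reference class for the same question, which is exactly the reference class problem discussed below.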

## Calibration training

A forecaster is said to be calibrated if the events they say have an X% chance of happening do, in fact, happen X% of the time.

Most people are overconfident: when they say an event has a 99% chance of happening, such events often happen much less frequently than that.

This natural overconfidence can be corrected with calibration training. In calibration training, you are asked to answer a set of factual questions, assigning a probability to each of your answers.

A list of calibration training exercises can be found here.
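A calibration check is simple to run on your own records. Here is a minimal sketch; the (probability, outcome) pairs below are purely illustrative:

```python
from collections import defaultdict

# Hypothetical track record: (stated probability, whether the event happened).
predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.6, True), (0.6, False), (0.6, True),
    (0.3, False), (0.3, False), (0.3, True),
]

# Group forecasts by stated probability and compare with observed frequency.
buckets = defaultdict(list)
for p, outcome in predictions:
    buckets[p].append(outcome)

for p in sorted(buckets):
    observed = sum(buckets[p]) / len(buckets[p])
    print(f"stated {p:.0%} -> observed {observed:.0%} ({len(buckets[p])} forecasts)")
```

A well-calibrated forecaster's observed frequencies track the stated probabilities across every bucket; in practice you need many forecasts per bucket before the comparison is meaningful.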

## Question decomposition

Much like in Fermi estimation, questions about future events can often be decomposed into several sub-questions; each sub-question can be answered separately, and the answers combined to reconstruct an answer to the original question.

Suppose you are interested in whether AI will cause a catastrophe by 2100. For AI to cause such an event, several things need to be true: (1) it needs to be possible to build advanced AI with agentic planning and strategic awareness by 2100, (2) there need to be strong incentives to deploy such a system, (3) it needs to be difficult to align such a system should it be deployed, (4) a deployed and unaligned AI would act in unintended, high-impact, power-seeking ways, causing trillions of dollars in damage, (5) these consequences would result in the permanent disempowerment of all humanity, and (6) this disempowerment would constitute an existential catastrophe. Eli Lifland assigned probabilities of 80%, 85%, 75%, 90%, 80% and 95% to events 1 through 6 respectively. Since each event is conditional on the ones before it, we can find the probability of the original question by multiplying all the probabilities together. This gives Eli Lifland a probability of existential risk from misaligned AI before 2100 of approximately 35%. For more detail see Eli’s original post here.
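The final multiplication is easy to check. The six probabilities below are Eli Lifland's, taken from the paragraph above:

```python
import math

# Eli Lifland's six conditional probabilities, from the decomposition above.
probabilities = [0.80, 0.85, 0.75, 0.90, 0.80, 0.95]

# Each event is conditional on the ones before it, so the joint probability
# is the product of the chain.
p_catastrophe = math.prod(probabilities)
print(f"{p_catastrophe:.0%}")  # prints "35%"
```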

Decomposing questions into their constituent parts, assigning probabilities to these sub-questions, and combining those probabilities to answer the original question is believed to improve forecasts. This is because, while each individual estimate is noisy, combining estimates from many sub-questions cancels out some of the noise and leaves more of the signal.

Question decomposition is also good at increasing epistemic legibility. It helps forecasters to communicate to others why they’ve made the forecast that they did and it allows them to identify their specific points of disagreement.

## Premortems

Premortems on Wikipedia

A premortem is a strategy to use once you’ve assigned a probability to an event: imagine that your forecast turned out to be wrong, then work backwards to determine what could have caused this.

It is simply a way to reframe the question “in what ways might I be wrong?” but in a way that reduces motivated reasoning caused by attachment to the bottom line.

## Practice

Getting Started on the Forecasting Wiki

While the above techniques are useful, they are no substitute for actually making predictions. Get out there and make predictions! Use the above techniques. Keep track of your predictions. Periodically evaluate questions that have been resolved and review your performance. Assess the degree to which you are calibrated. Look out for systematic mistakes that you might be making. Make more predictions! Over time, like with any skill, your ability can and should improve.
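A common way to review performance on resolved questions is the Brier score. Here is a minimal sketch; the track record below is invented for illustration:

```python
# Brier score: mean squared error between stated probabilities and outcomes.
# Lower is better; always forecasting 50% scores 0.25.
def brier_score(forecasts):
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical track record: (stated probability, outcome as 0 or 1).
my_forecasts = [(0.9, 1), (0.7, 0), (0.2, 0), (0.6, 1)]
print(f"{brier_score(my_forecasts):.3f}")  # prints "0.175"
```

Tracking this number over time (ideally split by topic or question type) is one way to spot the systematic mistakes mentioned above.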

## Other Resources

Other resources include:

• Superforecasting by Philip Tetlock and Dan Gardner

• Intro to Forecasting by Alex Lawson

• Forecasting Newsletter by Nuño Sempere

# State of the Art

For many years there have been calls to apply forecasting techniques to non-academic domains, including journalism, policy, investing and business strategy. Several organisations now operate within these niches.

## Metaculus

Metaculus is a popular and established web platform for forecasting. Their questions mainly focus on geopolitics, the coronavirus pandemic and topics of interest to Effective Altruism.

They host prediction competitions with real money prizes and collect and track public predictions made by various figures.

## Cultivate Labs

Cultivate Labs build tools that companies can use to crowdsource information from among their employees. This helps leadership to understand the consensus of people working on the ground and use this to improve the decisions they make.

## Kalshi

Kalshi provide real money prediction markets on geopolitical events. The financial options they provide are intended to be used as hedges for political risk.

## Manifold.Markets

Manifold.Markets is a prediction market platform that uses play money. It is noteworthy for its ease of use, great UI and the fact that the market creator decides how the market resolves.

## QURI

QURI is a research organisation that builds tools that make it easier to make good forecasts. Their most notable tool is Squiggle—a programming language designed to be used to make legible forecasts in a wide range of contexts.

This is a broad topic group that captures several sub-topics:

# Long list of AI ques­tions

6 Dec 2023 11:12 UTC
124 points

# Two di­rec­tions for re­search on fore­cast­ing and de­ci­sion making

11 Mar 2023 15:33 UTC
48 points

# Cost-effec­tive­ness of stu­dent pro­grams for AI safety research

10 Jul 2023 17:23 UTC
53 points

# Re­think Pri­ori­ties’ Cross-Cause Cost-Effec­tive­ness Model: In­tro­duc­tion and Overview

3 Nov 2023 12:26 UTC
220 points

# Analysing In­di­vi­d­ual Con­tri­bu­tions to the Me­tac­u­lus Com­mu­nity Prediction

8 May 2023 22:58 UTC
28 points

# Model­ing the im­pact of AI safety field-build­ing programs

10 Jul 2023 17:22 UTC
83 points

# Cost-effec­tive­ness of pro­fes­sional field-build­ing pro­grams for AI safety research

10 Jul 2023 17:26 UTC
38 points

# Squig­gle: Why and how to use it

30 Jan 2023 14:14 UTC
44 points

# Fate­book: the fastest way to make and track predictions

11 Jul 2023 15:13 UTC
136 points
(fatebook.io)

# Ex­plor­ing Me­tac­u­lus’s AI Track Record

1 May 2023 21:02 UTC
52 points
(www.metaculus.com)

# The Odyssean Process

24 Nov 2023 13:48 UTC
24 points
(www.odysseaninstitute.org)

# [Question] Why is EA so en­thu­si­as­tic about fore­cast­ing?

9 Jul 2023 16:35 UTC
56 points

# Sum­mary of posts on XPT fore­casts on AI risk and timelines

25 Jul 2023 8:42 UTC
28 points

# Prior prob­a­bil­ity of this be­ing the most im­por­tant century

15 Jul 2023 7:18 UTC
8 points

# Can a ter­ror­ist at­tack cause hu­man ex­tinc­tion? Not on priors

2 Dec 2023 8:20 UTC
43 points

# The Es­ti­ma­tion Game: a monthly Fermi es­ti­ma­tion web app

20 Feb 2023 11:22 UTC
69 points

# Ex­plor­ing Me­tac­u­lus’ com­mu­nity predictions

24 Mar 2023 7:59 UTC
95 points

# Ac­cu­racy Agree­ments: A Flex­ible Alter­na­tive to Pre­dic­tion Markets

20 Apr 2023 3:09 UTC
37 points
(quri.substack.com)

# AI Risk & Policy Fore­casts from Me­tac­u­lus & FLI’s AI Path­ways Workshop

16 May 2023 8:53 UTC
41 points

# The mar­ket plau­si­bly ex­pects AI soft­ware to cre­ate trillions of dol­lars of value by 2027

6 May 2024 5:16 UTC
88 points
(benjamintodd.substack.com)

# Famine deaths due to the cli­matic effects of nu­clear war

14 Oct 2023 12:05 UTC
40 points

# Two con­trast­ing mod­els of “in­tel­li­gence” and fu­ture growth

24 Nov 2022 11:54 UTC
74 points

# “Full Au­toma­tion” is a Slip­pery Metric

11 Jun 2024 19:53 UTC
18 points

# Win­ners of the Squig­gle Ex­per­i­men­ta­tion and 80,000 Hours Quan­tifi­ca­tion Challenges

8 Mar 2023 1:03 UTC
62 points

# Some es­ti­ma­tion work in the horizon

29 Mar 2023 22:18 UTC
25 points
(nunosempere.com)

# Who is Un­com­fortable Cri­tiquing Who, Around EA?

24 Feb 2023 5:55 UTC
150 points

# AGI Catas­tro­phe and Takeover: Some Refer­ence Class-Based Priors

24 May 2023 19:14 UTC
103 points

# Launch­ing the AI Fore­cast­ing Bench­mark Series Q3 | \$30k in Prizes

8 Jul 2024 17:20 UTC
17 points
(www.metaculus.com)

# Thoughts on “The Offense-Defense Balance Rarely Changes”

12 Feb 2024 3:26 UTC
42 points

# Has Rus­sia’s In­va­sion of Ukraine Changed Your Mind?

27 May 2023 18:35 UTC
61 points

# Me­tac­u­lus’ pre­dic­tions are much bet­ter than low-in­for­ma­tion priors

11 Apr 2023 8:36 UTC
53 points

# Bad­ness of eat­ing farmed an­i­mals in terms of smok­ing cigarettes

22 Jul 2023 8:45 UTC
26 points

# Will scal­ing work?

4 Feb 2024 9:29 UTC
19 points
(www.dwarkeshpatel.com)

# Ex­pected value and un­cer­tainty with­out full Monte Carlo simulations

5 Jan 2024 8:57 UTC
12 points

# Disen­tan­gling Some Im­por­tant Fore­cast­ing Con­cepts/​Terms

25 Jun 2023 17:31 UTC
16 points

# Can a war cause hu­man ex­tinc­tion? Once again, not on priors

25 Jan 2024 7:56 UTC
67 points

# A Case for Su­per­hu­man Gover­nance, us­ing AI

7 Jun 2024 0:10 UTC
52 points

# My Cur­rent Claims and Cruxes on LLM Fore­cast­ing & Epistemics

26 Jun 2024 0:40 UTC
40 points

# The Ra­tionale-Shaped Hole At The Heart Of Forecasting

2 Apr 2024 15:51 UTC
149 points

# Rel­a­tive Value Func­tions: A Flex­ible New For­mat for Value Estimation

18 May 2023 16:39 UTC
58 points

# NYT on the Man­i­fest fore­cast­ing conference

9 Oct 2023 21:40 UTC
27 points
(www.nytimes.com)

# Uncer­tainty over time and Bayesian updating

25 Oct 2023 15:51 UTC
63 points

# What do XPT fore­casts tell us about AI risk?

19 Jul 2023 7:43 UTC
97 points

# Me­tac­u­lus Launches Q4 Quar­terly Cup!

9 Oct 2023 21:36 UTC
8 points
(www.metaculus.com)

# Model-Based Policy Anal­y­sis un­der Deep Uncertainty

6 Mar 2023 14:24 UTC
103 points

# A sim­ple way of ex­ploit­ing AI’s com­ing eco­nomic im­pact may be highly-impactful

16 Jul 2023 10:30 UTC
5 points
(www.lesswrong.com)

# A se­lec­tion of cross-cut­ting re­sults from the XPT

26 Sep 2023 23:50 UTC
18 points

# Me­tac­u­lus Launches Fu­ture of AI Series, Based on Re­search Ques­tions by Arb Research

13 Mar 2024 21:14 UTC
29 points
(www.metaculus.com)

# Don’t In­ter­pret Pre­dic­tion Mar­ket Prices as Probabilities

5 May 2023 20:23 UTC
78 points

# Why I don’t trust forecasters

21 Jun 2023 6:19 UTC
−3 points

# Some re­search ideas in forecasting

15 Nov 2022 19:47 UTC
79 points

# Get your tick­ets to Man­i­fest 2024 by May 13th!

3 May 2024 23:57 UTC
5 points

# Epoch is hiring a Product and Data Vi­su­al­iza­tion Designer

25 Nov 2023 0:14 UTC
21 points
(careers.rethinkpriorities.org)

# Wis­dom of the Crowd vs. “the Best of the Best of the Best”

4 Apr 2023 15:32 UTC
101 points

# Iqisa: A Library For Han­dling Fore­cast­ing Datasets

14 Apr 2023 15:15 UTC
46 points

# Diminish­ing Re­turns in Ma­chine Learn­ing Part 1: Hard­ware Devel­op­ment and the Phys­i­cal Frontier

27 May 2023 12:39 UTC
16 points
(www.fromthenew.world)

# Mak­ing bet­ter es­ti­mates with scarce information

22 Mar 2023 16:29 UTC
48 points

# Fore­cast­ing in the Czech pub­lic ad­minis­tra­tion—pre­limi­nary findings

16 Mar 2023 14:47 UTC
45 points

# Me­tac­u­lus: Q2 Cup Kick­off + Q1 Winners

10 Apr 2024 21:30 UTC
9 points
(www.metaculus.com)

# New prob­a­bil­is­tic simu­la­tion tool

19 Aug 2023 14:10 UTC
75 points
(usedagger.com)

# Re­search Sum­mary: Fore­cast­ing with Large Lan­guage Models

2 Apr 2023 10:52 UTC
4 points
(damienlaird.substack.com)

# AI timelines by bio an­chors: the de­bate in one place

30 Jul 2022 23:04 UTC
93 points

# [Me­tac­u­lus Event] April 7 Fore­cast Fri­day: A Pro Fore­caster Pre­sents on Longevity Trends in G7 Coun­tries

7 Apr 2023 2:01 UTC
11 points
(www.metaculus.com)

# I made a news site based on pre­dic­tion markets

5 Jun 2023 18:33 UTC
226 points

# Now THIS is fore­cast­ing: un­der­stand­ing Epoch’s Direct Approach

4 May 2024 12:06 UTC
48 points

# Me­tac­u­lus In­tro­duces New ‘Con­di­tional Pair’ Fore­cast Ques­tions for Mak­ing Con­di­tional Predictions

20 Feb 2023 13:36 UTC
60 points
(www.metaculus.com)

# Why pre­dic­tion mar­kets aren’t popular

20 May 2024 14:21 UTC
67 points
(worksinprogress.co)

# AGI and the EMH: mar­kets are not ex­pect­ing al­igned or un­al­igned AI in the next 30 years

10 Jan 2023 16:05 UTC
333 points

# Scorable Func­tions: A For­mat for Al­gorith­mic Forecasting

21 May 2024 4:09 UTC
46 points

# Fore­casts on Moore v Harper from Samotsvety

20 Mar 2023 4:03 UTC
37 points
(samotsvety.org)

# Why Cost-Effec­tive­ness ≠ Effec­tive­ness/​Cost

17 Dec 2023 8:52 UTC
0 points

# New Open Philan­thropy Grant­mak­ing Pro­gram: Forecasting

19 Feb 2024 23:27 UTC
92 points
(www.openphilanthropy.org)

# Pre­dict­ing the cost-effec­tive­ness of fu­ture R&D pro­jects and aca­demic research

8 May 2023 9:58 UTC
23 points
(observablehq.com)

# Trans­for­ma­tive AGI by 2043 is <1% likely

6 Jun 2023 15:51 UTC
92 points
(arxiv.org)

# De­sign­ing Ar­tifi­cial Wis­dom: De­ci­sion Fore­cast­ing AI & Futarchy

14 Jul 2024 5:10 UTC
5 points

# [Question] Why most peo­ple in EA are con­fi­dent that AI will sur­pass hu­mans?

25 May 2023 13:39 UTC
2 points

# Guessti­mate: Why and how to use it

24 Jan 2023 14:16 UTC
25 points

# The Char­ity En­trepreneur­ship top ideas new char­ity pre­dic­tion market

17 May 2023 14:30 UTC
101 points

# OPTIC [Fore­cast­ing Comp] — Pilot Postmortem

19 May 2023 10:10 UTC
43 points

# A Dou­ble Fea­ture on The Extropians

3 Jun 2023 18:29 UTC
47 points

# Me­tac­u­lus Launches Space Tech­nol­ogy & Cli­mate Fore­cast­ing Ini­ti­a­tive

11 Oct 2023 1:29 UTC
11 points
(www.metaculus.com)

# Pro­ject Idea: Pro­files Ag­gre­gat­ing Fore­cast­ing Perfor­mance Metrics

17 Apr 2023 10:29 UTC
2 points
(damienlaird.substack.com)

# Could Ukraine re­take Crimea?

1 May 2023 1:06 UTC
6 points

# How Long Do Policy Changes Mat­ter? New Paper

2 Nov 2023 20:53 UTC
272 points
(zachfreitasgroff.com)

# An­nounc­ing “Fore­cast­ing Ex­is­ten­tial Risks: Ev­i­dence from a Long-Run Fore­cast­ing Tour­na­ment”

10 Jul 2023 17:04 UTC
160 points

# Me­tac­u­lus An­nounces Fore­cast­ing Tour­na­ment to Eval­u­ate Fo­cused Re­search Or­ga­ni­za­tions, in Part­ner­ship With the Fed­er­a­tion of Amer­i­can Scien­tists

3 Oct 2023 16:44 UTC
21 points
(www.metaculus.com)

# Me­tac­u­lus In­tro­duces New Fore­cast Scores, New Leader­board & Medals

20 Nov 2023 20:33 UTC
13 points
(www.metaculus.com)

# An­nounc­ing the Con­fido app: bring­ing fore­cast­ing to everyone

16 May 2023 10:25 UTC
104 points

# The Top AI Safety Bets for 2023: GiveWiki’s Lat­est Recommendations

11 Nov 2023 9:04 UTC
10 points

# Misha Yagudin and Ozzie Gooen Dis­cuss LLMs and Effec­tive Altruism

6 Jan 2023 22:59 UTC
47 points
(quri.substack.com)

# When you plan ac­cord­ing to your AI timelines, should you put more weight on the me­dian fu­ture, or the me­dian fu­ture | even­tual AI al­ign­ment suc­cess? ⚖️

5 Jan 2023 1:55 UTC
16 points

# En­trepreneur­ship ETG Might Be Bet­ter Than 80k Thought

29 Dec 2022 17:51 UTC
134 points

# Un­jour­nal Evals: “Ad­vance Mar­ket Com­mit­ments: In­sights from The­ory and Ex­pe­rience”

21 Mar 2023 16:59 UTC
27 points
(unjournal.pubpub.org)

# AI risk/​re­ward: A sim­ple model

4 May 2023 19:12 UTC
37 points

# Play Re­grantor: Move up to \$250,000 to Your Top High-Im­pact Pro­jects!

17 May 2023 16:51 UTC
58 points
(impactmarkets.substack.com)

# Con­tin­u­ous doesn’t mean slow

10 May 2023 12:17 UTC
64 points

# How We Think about Ex­pected Im­pact in Cli­mate Philanthropy

28 Nov 2023 19:02 UTC
39 points

# How bad a fu­ture do ML re­searchers ex­pect?

13 Mar 2023 5:47 UTC
165 points

# Fore­cast­ing Newslet­ter for Novem­ber and De­cem­ber 2022

9 Jan 2023 11:16 UTC
24 points
(forecasting.substack.com)

# An­nounc­ing the SPT Model Web App for AI Governance

4 Aug 2022 10:45 UTC
42 points

# Pa­trick Gruban on Effec­tive Altru­ism Ger­many and Non­profit Boards in EA

5 May 2023 17:23 UTC
37 points
(quri.substack.com)

# Me­tac­u­lus Pre­sents: Trans­for­ma­tive Science at Startup Speed

31 Oct 2023 21:12 UTC
5 points

# An­nounc­ing a sub­fo­rum for fore­cast­ing & estimation

26 Dec 2022 20:51 UTC
72 points

# Fore­cast­ing for Policy (FORPOL) - Main take­aways, prac­ti­cal learn­ings & report

18 Sep 2023 12:27 UTC
36 points

# When pool­ing fore­casts, use the ge­o­met­ric mean of odds

3 Sep 2021 9:58 UTC
116 points

# Nu­clear war tail risk has been ex­ag­ger­ated?

25 Feb 2024 9:14 UTC
41 points

# Suggested fore­cast­ing wiki text addition

29 Dec 2022 11:55 UTC
5 points

# Fore­cast­ing With LLMs—An Open and Promis­ing Re­search Direction

12 Mar 2024 4:23 UTC
13 points

# Non-al­ign­ment pro­ject ideas for mak­ing trans­for­ma­tive AI go well

4 Jan 2024 7:23 UTC
64 points
(lukasfinnveden.substack.com)

# A com­pute-based frame­work for think­ing about the fu­ture of AI

31 May 2023 22:00 UTC
96 points

# Con­cepts of ex­is­ten­tial catastrophe

15 Apr 2024 17:16 UTC
11 points
(globalprioritiesinstitute.org)

# Prior knowl­edge elic­i­ta­tion: The past, pre­sent, and fu­ture [re­view pa­per 2023]

10 Jan 2024 9:32 UTC
9 points
(arxiv.org)

# Distinc­tions when Dis­cussing Utility Functions

8 Mar 2024 18:43 UTC
15 points

# Man­i­fund: What we’re fund­ing (weeks 2-4)

4 Aug 2023 16:00 UTC
65 points
(manifund.substack.com)

# Can a pan­demic cause hu­man ex­tinc­tion? Pos­si­bly, at least on priors

15 Jul 2024 17:07 UTC
29 points

# Range and Fore­cast­ing Accuracy

27 May 2022 19:08 UTC
21 points

# Straight­for­wardly elic­it­ing prob­a­bil­ities from GPT-3

9 Feb 2023 19:25 UTC
41 points

# Trends in the dol­lar train­ing cost of ma­chine learn­ing systems

1 Feb 2023 14:48 UTC
63 points

# [Our World in Data] AI timelines: What do ex­perts in ar­tifi­cial in­tel­li­gence ex­pect for the fu­ture? (Roser, 2023)

7 Feb 2023 14:52 UTC
89 points
(ourworldindata.org)

# Fore­cast­ing ac­ci­den­tally-caused pandemics

17 Jan 2024 19:36 UTC
48 points
(blog.joshuablake.co.uk)

# Guessti­mate: Why and How to Use It

23 Jan 2023 19:37 UTC
5 points

# [Question] Have you tried to bring fore­cast­ing tech­niques to your com­pany? How did it work out?

5 Feb 2023 0:42 UTC
24 points

# A new Heuris­tic to Up­date on the Cre­dences of Others

16 Jan 2023 11:35 UTC
22 points


# Higher-Order Forecasts

22 May 2024 21:49 UTC
35 points

# Ideas for Next-Gen­er­a­tion Writ­ing Plat­forms, us­ing LLMs

4 Jun 2024 18:40 UTC
17 points

# How evals might (or might not) pre­vent catas­trophic risks from AI

7 Feb 2023 20:16 UTC
28 points

# My highly per­sonal skep­ti­cism brain­dump on ex­is­ten­tial risk from ar­tifi­cial in­tel­li­gence.

23 Jan 2023 20:08 UTC
435 points
(nunosempere.com)

# Pro­ject idea: AI for epistemics

19 May 2024 19:36 UTC
45 points
(benjamintodd.substack.com)

# Why does Academia+EA pro­duce so few on­line videos?

10 Jan 2023 13:49 UTC
24 points
(quri.substack.com)

# Case study: The Lübeck vaccine

5 Jul 2024 14:57 UTC
43 points
(sentinel-team.org)

# Sur­vey of 2,778 AI au­thors: six parts in pictures

6 Jan 2024 4:43 UTC
176 points

# How many peo­ple are work­ing (di­rectly) on re­duc­ing ex­is­ten­tial risk from AI?

17 Jan 2023 14:03 UTC
116 points
(80000hours.org)

# On the fu­ture of lan­guage models

20 Dec 2023 16:58 UTC
115 points

# AI Im­pacts: His­toric trends in tech­nolog­i­cal progress

12 Feb 2020 0:08 UTC
55 points

# [Draft] The hum­ble cos­mol­o­gist’s P(doom) paradox

16 Mar 2024 11:13 UTC
41 points

# Can a con­flict cause hu­man ex­tinc­tion? Yet again, not on priors

19 Jun 2024 16:59 UTC
19 points

# Pod­cast: Is Fore­cast­ing a Promis­ing EA Cause Area?

25 Mar 2024 20:36 UTC
29 points

# [Question] Es­ti­mates on ex­pected effects of move­ment/​pres­sure group/​field build­ing?

15 Feb 2024 11:35 UTC
39 points

# What Does a Marginal Grant at LTFF Look Like? Fund­ing Pri­ori­ties and Grant­mak­ing Thresh­olds at the Long-Term Fu­ture Fund

10 Aug 2023 20:11 UTC
175 points

# An­nounc­ing the Fore­cast­ing Re­search In­sti­tute (we’re hiring)

13 Dec 2022 12:11 UTC
168 points

# Open Sourc­ing Metaculus

25 Jun 2024 18:40 UTC
75 points
(www.metaculus.com)

# GDP per cap­ita in 2050

6 May 2024 15:14 UTC
129 points
(hauke.substack.com)

# Govern­ments Might Pre­fer Bring­ing Re­sources Back to the So­lar Sys­tem Rather than Space Set­tle­ment in Order to Main­tain Con­trol, Given that Govern­ing In­ter­stel­lar Set­tle­ments Looks Al­most Im­pos­si­ble

29 May 2023 11:16 UTC
36 points

# Oper­a­tional­iz­ing timelines

10 Mar 2023 17:30 UTC
30 points

# Me­tac­u­lus Pre­sents: Does Gen­er­a­tive AI In­fringe Copy­right?

6 Nov 2023 23:41 UTC
5 points

# Us­ing Points to Rate Differ­ent Kinds of Evidence

25 Aug 2023 19:26 UTC
33 points

# An­nounc­ing the Open Philan­thropy AI Wor­ld­views Contest

10 Mar 2023 2:33 UTC
137 points
(www.openphilanthropy.org)

# Quan­tify­ing and in­ter­pret­ing the risks of mountaineering

3 Jun 2023 8:25 UTC
19 points

# A flaw in a sim­ple ver­sion of wor­ld­view diversification

15 May 2023 18:12 UTC
45 points
(nunosempere.com)

# Fore­cast­ing ex­treme outcomes

9 Jan 2023 15:02 UTC
46 points

# 14 Ways ML Could Im­prove In­for­ma­tive Video

10 Jan 2023 13:53 UTC
8 points
(quri.substack.com)

# Open Tech­ni­cal Challenges around Prob­a­bil­is­tic Pro­grams and Javascript

26 Aug 2023 2:04 UTC
39 points

# [Linkpost] Scott Alexan­der re­acts to OpenAI’s lat­est post

11 Mar 2023 22:24 UTC
105 points

# Pre­dic­tion mar­kets cov­ered in the NYT pod­cast “Hard Fork”

13 Oct 2023 18:43 UTC
24 points
(www.nytimes.com)

# Les­sons on pro­ject man­age­ment from “How Big Things Get Done”

17 May 2023 19:15 UTC
29 points

# Welfare ranges per calorie consumption

24 Jun 2023 8:47 UTC
12 points

# FLI pod­cast se­ries, “Imag­ine A World”, about as­pira­tional fu­tures with AGI

13 Oct 2023 16:03 UTC
18 points

# The the­o­ret­i­cal com­pu­ta­tional limit of the So­lar Sys­tem is 1.47x10^49 bits per sec­ond.

17 Oct 2023 2:52 UTC
12 points

# What are peo­ple up to in the world?

13 Jan 2023 23:25 UTC
34 points

# LLM-Se­cured Sys­tems: A Gen­eral-Pur­pose Tool For Struc­tured Transparency

18 Jun 2024 0:20 UTC
20 points

# Wild an­i­mal welfare? Stable to­tal­i­tar­i­anism? Pre­dict which new EA cause area will go main­stream!

11 Mar 2024 14:27 UTC
48 points

# Marginal value (or lack thereof) of voting

11 Mar 2024 9:01 UTC
7 points

# A Gen­tle In­tro­duc­tion to Risk Frame­works Beyond Forecasting

11 Apr 2024 9:15 UTC
81 points

# Pro­ject ideas: Epistemics

4 Jan 2024 7:26 UTC
43 points
(lukasfinnveden.substack.com)

# EA could use bet­ter in­ter­nal com­mu­ni­ca­tions infrastructure

12 Jan 2023 1:07 UTC
67 points
(quri.substack.com)

# Lan­guage mod­els sur­prised us

29 Aug 2023 21:18 UTC
59 points

# How to eval­u­ate rel­a­tive im­pact in high-un­cer­tainty con­texts? An up­date on re­search method­ol­ogy & grant­mak­ing of FP Cli­mate

26 May 2023 17:30 UTC
84 points

# You prob­a­bly want to donate any Man­i­fold cur­rency this week

23 Apr 2024 23:18 UTC
84 points

# De­com­pos­ing Agency — ca­pa­bil­ities with­out desires

11 Jul 2024 9:38 UTC
34 points
(strangecities.substack.com)

# Disper­sion in the ex­tinc­tion risk pre­dic­tions made in the Ex­is­ten­tial Risk Per­sua­sion Tournament

10 May 2024 16:48 UTC
22 points

# AI Safety Im­pact Mar­kets: Your Char­ity Eval­u­a­tor for AI Safety

1 Oct 2023 10:47 UTC
26 points
(impactmarkets.substack.com)

# Take­off speeds pre­sen­ta­tion at Anthropic

4 Jun 2024 22:46 UTC
29 points

# Tech­nolog­i­cal de­vel­op­ments that could in­crease risks from nu­clear weapons: A shal­low review

9 Feb 2023 15:41 UTC
79 points
(bit.ly)

# Un­jour­nal’s 1st eval is up: Re­silient foods pa­per (Denken­berger et al) & AMA ~48 hours

6 Feb 2023 19:18 UTC
77 points
(sciety.org)

# How would you es­ti­mate the value of de­lay­ing AGI by 1 day, in marginal dona­tions to GiveWell?

16 Dec 2022 9:25 UTC
30 points

# Re­place­ment for PONR concept

2 Sep 2022 0:38 UTC
14 points

# An­nounc­ing In­tro­duc­tions for Col­lab­o­ra­tive Truth Seek­ing Tools

23 Jan 2023 16:04 UTC
81 points

# YCom­bi­na­tor fraud rates

25 Dec 2022 18:01 UTC
90 points

# Up­date to Samotsvety AGI timelines

24 Jan 2023 4:27 UTC
120 points

# Pre­dictable up­dat­ing about AI risk

8 May 2023 22:05 UTC
130 points

# Use of “I’d bet” on the EA Fo­rum is mostly metaphorical

7 Mar 2023 23:33 UTC
17 points
(nunosempere.com)

# Clar­ify­ing and pre­dict­ing AGI

4 May 2023 15:56 UTC
69 points

# Im­pact Assess­ment of AI Safety Camp (Arb Re­search)

23 Jan 2024 16:32 UTC
87 points

# AI Views Snapshots

13 Dec 2023 0:45 UTC
25 points

# Economists can help with biose­cu­rity via ROI models

14 May 2023 20:10 UTC
16 points

# How much do mar­kets value Open AI?

14 May 2023 19:28 UTC
39 points

# More global warm­ing might be good to miti­gate the food shocks caused by abrupt sun­light re­duc­tion scenarios

29 Apr 2023 8:24 UTC
46 points

# Shap­ley value, im­por­tance, eas­i­ness and neglectedness

5 May 2023 7:33 UTC
27 points

# [Me­tac­u­lus Event] April 14 Fore­cast Fri­day: A Pro Fore­caster on Shift­ing Ter­ri­to­rial Con­trol in Ukraine

14 Apr 2023 0:40 UTC
5 points

# An­nounc­ing Epoch’s dash­board of key trends and figures in Ma­chine Learning

13 Apr 2023 7:33 UTC
127 points

# Chart­ing the precipice: The time of per­ils and pri­ori­tiz­ing x-risk

24 Oct 2023 16:25 UTC
86 points

# Owain Evans on LLMs, Truth­ful AI, AI Com­po­si­tion, and More

2 May 2023 1:20 UTC
21 points
(quri.substack.com)

# Fo­cus­ing your im­pact on short vs long TAI timelines

30 Sep 2023 19:23 UTC
39 points

# Fore­cast­ing (Shenani­gans Work­shop)

1 Apr 2023 16:50 UTC
13 points

# Can we help in­di­vi­d­ual peo­ple cost-effec­tively? Our trial with three sick kids

20 Feb 2024 9:43 UTC
393 points

# [Question] Can we eval­u­ate the “tool ver­sus agent” AGI pre­dic­tion?

8 Apr 2023 18:35 UTC
63 points

# Sur­vey on in­ter­me­di­ate goals in AI governance

17 Mar 2023 12:44 UTC
155 points

# The AI Boom Mainly Benefits Big Firms, but long-term, mar­kets will concentrate

29 Oct 2023 8:38 UTC
12 points

# Defer­ence on AI timelines: sur­vey results

30 Mar 2023 23:03 UTC
68 points

# One form to help us build a crowd­sourced char­ity evaluator

8 May 2023 21:03 UTC
9 points

# An­nounc­ing Epoch’s newly ex­panded Pa­ram­e­ters, Com­pute and Data Trends in Ma­chine Learn­ing database

25 Oct 2023 3:03 UTC
38 points
(epochai.org)

# Me­tac­u­lus Pre­dicts Weak AGI in 2 Years and AGI in 10

24 Mar 2023 19:43 UTC
27 points

# Are there dis­ec­onomies of scale in the rep­u­ta­tion of com­mu­ni­ties?

27 Jul 2023 18:43 UTC
50 points

# Anki with Uncer­tainty: Turn any flash­card deck into a cal­ibra­tion train­ing tool

22 Mar 2023 17:26 UTC
57 points
(www.quantifiedintuitions.org)

# Ten Com­mand­ments for Aspiring Superforecasters

20 Feb 2024 13:01 UTC
13 points
(goodjudgment.com)

# Why I think it’s im­por­tant to work on AI forecasting

27 Feb 2023 21:24 UTC
179 points

# Some more pro­jects I’d like to see

25 Feb 2023 22:22 UTC
67 points
(finmoorhouse.com)

# Quick pro­posal: De­ci­sion mar­ket re­grantor us­ing man­i­fund (please im­prove)

9 Jul 2023 12:49 UTC
23 points

# Fu­ture Mat­ters #8: Bing Chat, AI labs on safety, and paus­ing Fu­ture Matters

21 Mar 2023 14:50 UTC
81 points

# Tet­lock on low AI xrisk

13 Jul 2023 14:19 UTC
10 points

# Mis­takes in the moral math­e­mat­ics of ex­is­ten­tial risk (Part 2: Ig­nor­ing back­ground risk) - Reflec­tive altruism

3 Jul 2023 6:34 UTC
84 points
(ineffectivealtruismblog.com)

# Pre­dic­tion Mar­kets for Science

2 Jan 2023 17:55 UTC
14 points

# Un­jour­nal: Eval­u­a­tions of “Ar­tifi­cial In­tel­li­gence and Eco­nomic Growth”, and new host­ing space

17 Mar 2023 20:20 UTC
47 points
(unjournal.pubpub.org)

# Mis­takes in the moral math­e­mat­ics of ex­is­ten­tial risk (Part 1: In­tro­duc­tion and cu­mu­la­tive risk) - Reflec­tive altruism

3 Jul 2023 6:33 UTC
74 points
(ineffectivealtruismblog.com)

# In­ter­me­di­ate goals for re­duc­ing risks from nu­clear weapons: A shal­low re­view (part 1/​4)

1 May 2023 15:04 UTC
35 points

# 900+ Fore­cast­ers on Whether Rus­sia Will In­vade Ukraine

19 Feb 2022 13:29 UTC
51 points
(metaculus.medium.com)

# Will protests lead to thou­sands of coro­n­avirus deaths?

3 Jun 2020 19:08 UTC
85 points

# Donor Lot­tery Debrief

4 Aug 2020 20:58 UTC
129 points

# Fore­cast­ing Newslet­ter: April 2021

1 May 2021 15:58 UTC
21 points

# Es­ti­mat­ing the Aver­age Im­pact of an ARPA-E Grantmaker

1 Dec 2022 6:34 UTC
22 points

# COVID-19 in ru­ral Balochis­tan, Pak­istan: Two in­ter­views from May 2020

16 Dec 2022 11:33 UTC
22 points

# Man­i­fold Mar­kets Char­ity pro­gram end­ing March 1st

18 Feb 2023 2:12 UTC
28 points
(manifoldmarkets.notion.site)

# An­nounc­ing Con­fido 2.0: Pro­mot­ing the un­cer­tainty-aware mind­set in orgs

10 Jan 2024 11:45 UTC
20 points

# Pre­dic­tion Bank: A way around cur­rent pre­dic­tion mar­ket reg­u­la­tions?

25 Jan 2022 4:21 UTC
25 points

# Automating reasoning about the future at Ought

9 Nov 2020 22:30 UTC
20 points
(ought.org)

# Announcing the Forecasting Innovation Prize

15 Nov 2020 21:21 UTC
64 points

# Curated blind auction prediction markets and a reputation system as an alternative to editorial review in news publication.

15 Feb 2023 14:26 UTC
10 points

# Creating a database for base rates

12 Dec 2022 10:05 UTC
74 points

# Introduction to Fermi estimates

26 Aug 2022 10:03 UTC
46 points
(nunosempere.com)

# Ten Commandments for Aspiring Superforecasters

25 Apr 2018 5:07 UTC
21 points

# David Rhys Bernard: Estimating long-term effects without long-term data

6 Jul 2020 15:16 UTC
24 points

# Comparing Superforecasting and the Intelligence Community Prediction Market

12 Apr 2022 9:24 UTC
29 points

# [Question] How much will pre-transformative AI speed up R&D?

31 May 2021 20:20 UTC
23 points

# Forecasting Newsletter: August 2022.

10 Sep 2022 8:59 UTC
29 points

# [Podcast] Rob Wiblin on self-improvement and research ethics

15 Jan 2021 7:24 UTC
8 points
(clearerthinkingpodcast.com)

# We’re really bad at guessing the future

13 Aug 2022 9:11 UTC
20 points

# Forecasting Newsletter: April 2020

30 Apr 2020 16:41 UTC
54 points

# Binary prediction database and tournament

17 Nov 2020 18:09 UTC
15 points

# Conclusion and Bibliography for “Understanding the diffusion of large language models”

21 Dec 2022 13:50 UTC
12 points

# Metaculus is building a team dedicated to AI forecasting

18 Oct 2022 16:08 UTC
35 points
(apply.workable.com)

# [Question] Is there any research or forecasts of how likely AI Alignment is going to be a hard vs. easy problem relative to capabilities?

14 Aug 2022 15:58 UTC
8 points

# Predictably Predictable Futures Talk: Using Expected Loss & Prediction Innovation for Long Term Benefits

8 Jan 2020 22:19 UTC
10 points

# [Squiggle Experimentation Challenge] CEA LEEP Malawi

1 Sep 2022 5:13 UTC
23 points
(danwahl.net)

# David Manheim: A Personal (Interim) COVID-19 Postmortem

1 Jul 2020 6:05 UTC
32 points
(www.lesswrong.com)

# Metaculus Biosecurity Tournament Round 1 Launch

10 Jul 2022 14:54 UTC
6 points
(www.metaculus.com)

# Principled extremizing of aggregated forecasts

29 Dec 2021 18:49 UTC
46 points

# Use Normal Predictions

9 Jan 2022 17:52 UTC
12 points
(www.lesswrong.com)

# AGI in sight: our look at the game board

18 Feb 2023 22:17 UTC
25 points

# Use resilience, instead of imprecision, to communicate uncertainty

18 Jul 2020 12:09 UTC
103 points

# Investigating how technology-focused academic fields become self-sustaining

6 Sep 2021 15:04 UTC
43 points

# Doing good while clueless

15 Feb 2018 5:04 UTC
46 points

# Forecast your 2024 with Fatebook

5 Jan 2024 12:40 UTC
20 points
(fatebook.io)

# Some EA Forum Posts I’d like to write

23 Feb 2021 5:27 UTC
100 points

# Can/should we automate most human decisions, pre-AGI?

26 Dec 2021 1:37 UTC
25 points

# Interview with Prof Tetlock on epistemic modesty, predicting catastrophic risks, AI, and more

20 Nov 2017 18:34 UTC
6 points

# Is AI forecasting a waste of effort on the margin?

5 Nov 2022 0:41 UTC
10 points

# When reporting AI timelines, be clear who you’re deferring to

10 Oct 2022 14:24 UTC
120 points

# Reducing Nuclear Risk Through Improved US-China Relations

21 Mar 2022 11:50 UTC
31 points

# Value of Information, an example with GiveDirectly

30 Aug 2022 20:37 UTC
12 points

# [Event] A Metaculus Open Panel Discussion: How Forecasts Inform COVID-19 Policy

4 Oct 2021 18:17 UTC
3 points

# Building Blocks of Utility Maximization

20 Sep 2021 17:23 UTC
21 points

# Disagreeables and Assessors: Two Intellectual Archetypes

5 Nov 2021 9:01 UTC
91 points

# A practical guide to long-term planning – and suggestions for longtermism

10 Oct 2021 15:37 UTC
140 points

# Forecasting Newsletter: May 2021

1 Jun 2021 15:51 UTC
23 points

# Quantifying Uncertainty in GiveWell’s GiveDirectly Cost-Effectiveness Analysis

27 May 2022 3:10 UTC
130 points

# Announcing the first issue of Asterisk

21 Nov 2022 18:51 UTC
275 points

# Forecasting of Priorities: a tool for effective political participation?

31 Dec 2020 15:24 UTC
27 points

# $1,000 Squiggle Experimentation Challenge

4 Aug 2022 14:20 UTC
61 points

# [Question] Questions on databases of AI Risk estimates

2 Oct 2022 9:12 UTC
24 points

# Pandemic Prediction Checklist: H5N1

5 Feb 2023 14:56 UTC
70 points

# [Question] Is now a good time to advocate for prediction market governance experiments in the UK?

21 Oct 2022 11:51 UTC
9 points

# Superforecasting Long-Term Risks and Climate Change

19 Aug 2022 18:05 UTC
48 points

# A concern about the “evolutionary anchor” of Ajeya Cotra’s report on AI timelines.

16 Aug 2022 14:44 UTC
75 points
(nunosempere.com)

# Opportunity Costs of Technical Talent: Intuition and (Simple) Implications

19 Nov 2021 15:04 UTC
53 points

# On AI and Compute

3 Apr 2019 21:26 UTC
39 points

# Philip Tetlock: Fireside chat

4 Feb 2020 21:25 UTC
13 points

# Can You Predict Who Will Win OpenPhil’s Cause Exploration Prize? Bet on it!

2 Sep 2022 0:02 UTC
5 points

# [Linkpost] Dan Luu: Futurist prediction methods and accuracy

15 Sep 2022 21:20 UTC
64 points
(danluu.com)

# Christiano, Cotra, and Yudkowsky on AI progress

25 Nov 2021 16:30 UTC
18 points

# An experiment to evaluate the value of one researcher’s work

1 Dec 2020 9:01 UTC
57 points

# AI strategy nearcasting

26 Aug 2022 16:25 UTC
61 points

# Forecasting Prize Results

19 Feb 2021 19:07 UTC
44 points

# Predicting for Good: Charity Prediction Markets

22 Mar 2022 17:44 UTC
42 points

# Epoch Impact Report 2022

2 Feb 2023 13:09 UTC
81 points
(epochai.org)

# Samotsvety Nuclear Risk Forecasts — March 2022

10 Mar 2022 18:52 UTC
155 points

# Base Rates on United States Regime Collapse

5 Apr 2021 17:14 UTC
14 points

# AI Forecasting Dictionary (Forecasting infrastructure, part 1)

8 Aug 2019 13:16 UTC
18 points

# Forecasting Our World in Data: The Next 100 Years

1 Feb 2023 22:13 UTC
97 points
(www.metaculus.com)

# Announcing the Bentham Prize

21 Jan 2020 22:23 UTC
33 points

# Future Matters #3: digital sentience, AGI ruin, and forecasting track records

4 Jul 2022 17:44 UTC
70 points

# Wits & Wagers: An Engaging Game for Effective Altruists

1 Feb 2023 9:30 UTC
31 points

# Simple estimation examples in Squiggle

2 Sep 2022 9:37 UTC
52 points

# An analysis of Metaculus predictions of future EA resources, 2025 and 2030

22 Sep 2021 10:24 UTC
50 points

# Why I think there’s a one-in-six chance of an imminent global nuclear war

8 Oct 2022 23:25 UTC
53 points

# Eli Lifland on Navigating the AI Alignment Landscape

1 Feb 2023 0:07 UTC
48 points
(quri.substack.com)

# [Opportunity] Synthetic Biology Forecasters

4 Jul 2022 16:15 UTC
7 points

# Grokking “Semi-informative priors over AI timelines”

12 Jun 2022 22:15 UTC
60 points

# More Is Probably More—Forecasting Accuracy and Number of Forecasters on Metaculus

31 Jan 2023 17:20 UTC
36 points

# Biological Anchors external review by Jennifer Lin (linkpost)

30 Nov 2022 13:06 UTC
36 points

# The Pentagon claims China will likely have 1,500 nuclear warheads by 2035

12 Dec 2022 18:12 UTC
34 points
(media.defense.gov)

# Data on forecasting accuracy across different time horizons and levels of forecaster experience

27 May 2021 18:51 UTC
121 points

# Forecasting Newsletter: November 2020.

1 Dec 2020 17:00 UTC
33 points

# Forecasting tools and Prediction Markets: Why and How

31 Jan 2023 12:55 UTC
18 points

# LW4EA: 16 types of useful predictions

24 May 2022 3:19 UTC
14 points
(www.lesswrong.com)

# An examination of Metaculus’ resolved AI predictions and their implications for AI timelines

20 Jul 2021 9:07 UTC
81 points

# Increasing the Accuracy of Our Judgments: More to explore

1 Jan 2021 11:49 UTC
1 point

# Metaculus Announces The Million Predictions Hackathon

10 Nov 2022 20:00 UTC
20 points
(metaculus.medium.com)

# [Question] How can good generalist judgment be differentiated from skill at forecasting?

21 Aug 2020 23:13 UTC
25 points

# EA Uni Group Forecasting Tournament!

18 Sep 2020 16:35 UTC
62 points

# How to make independent research more fun (80k After Hours)

17 Mar 2023 22:25 UTC
28 points
(80000hours.org)

# Red-teaming Holden Karnofsky’s AI timelines

25 Jun 2022 14:24 UTC
58 points

# Narration: Report on Running a Forecasting Tournament at an EA Retreat, part 1

13 Jul 2021 16:21 UTC
8 points
(anchor.fm)

# Literature review of Transformative Artificial Intelligence timelines

27 Jan 2023 20:36 UTC
148 points

# Make your own cost-effectiveness Fermi estimates for one-off problems

11 Dec 2014 11:49 UTC
23 points

# Probability of extinction for various types of catastrophes

9 Oct 2022 15:30 UTC
16 points

# Announcing Metaculus’s ‘Red Lines in Ukraine’ Forecasting Project

21 Oct 2022 22:13 UTC
17 points
(www.metaculus.com)

# Taboo “Outside View”

17 Jun 2021 9:39 UTC
177 points

# Long-Term Future Fund: April 2019 grant recommendations

23 Apr 2019 7:00 UTC
142 points

# Types of specification problems in forecasting

20 Jul 2021 4:17 UTC
35 points

# Metaculus Launches Climate Tipping Points Tournament With The Federation of American Scientists

27 Jan 2023 19:33 UTC
21 points
(www.metaculus.com)

# Predict which posts will win the Criticism and Red Teaming Contest!

27 Sep 2022 22:46 UTC
21 points
(manifold.markets)

# 6 Year Decrease of Metaculus AGI Prediction

12 Apr 2022 5:36 UTC
40 points

# A vision of the future (fictional short-story)

15 Oct 2022 12:38 UTC
12 points

# Quantum computing timelines

15 Sep 2020 14:15 UTC
28 points

# [Question] Please Share Your Perspectives on the Degree of Societal Impact from Transformative AI Outcomes

15 Apr 2022 1:23 UTC
3 points

# Forecasting Newsletter: March 2021

1 Apr 2021 17:01 UTC
22 points

# AI Forecasting Resolution Council (Forecasting infrastructure, part 2)

29 Aug 2019 17:43 UTC
28 points

# Cost-effectiveness of operations management in high-impact organisations

27 Nov 2022 10:33 UTC
48 points

# AI X-Risk: Integrating on the Shoulders of Giants

1 Nov 2022 16:07 UTC
34 points

# Announcing Squigglepy, a Python package for Squiggle

19 Oct 2022 18:34 UTC
90 points
(github.com)

# [Question] Is there a good web app for doing the “equivalent bet test” from “How To Measure Anything”?

10 Nov 2022 14:17 UTC
14 points

# Against GDP as a metric for timelines and takeoff speeds

29 Dec 2020 17:50 UTC
47 points

# Quantifying the probability of existential catastrophe: A reply to Beard et al.

10 Aug 2020 5:56 UTC
21 points
(gcrinstitute.org)

# Judgement as a key need in EA

12 Sep 2020 14:48 UTC
30 points

# Tough enough? Robust satisficing as a decision norm for long-term policy analysis

31 Oct 2020 13:28 UTC
5 points
(globalprioritiesinstitute.org)

# Forecasting Newsletter: July 2022

8 Aug 2022 8:03 UTC
30 points

# Prefer beliefs to credence probabilities

1 Sep 2022 2:04 UTC
3 points

# Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post

15 Feb 2019 19:14 UTC
79 points

# Efforts to Improve the Accuracy of Our Judgments and Forecasts (Open Philanthropy)

25 Oct 2016 10:09 UTC
19 points
(www.openphilanthropy.org)

# Forecasting Newsletter: June 2022

12 Jul 2022 12:35 UTC
49 points

# [Cause Exploration Prizes] Training experts to be forecasters

26 Aug 2022 9:52 UTC
49 points

# On Deference and Yudkowsky’s AI Risk Estimates

19 Jun 2022 14:35 UTC
285 points

# Free money from New York gambling websites

24 Jan 2022 22:50 UTC
74 points

# Guesstimate: An app for making decisions with confidence (intervals)

30 Dec 2015 17:30 UTC
63 points

# Register your predictions for 2023

26 Dec 2022 20:49 UTC
42 points

# An estimate of the value of Metaculus questions

22 Oct 2021 17:45 UTC
47 points

# AI Forecasting Question Database (Forecasting infrastructure, part 3)

3 Sep 2019 14:57 UTC
23 points

# Help, Please: Integrating EA Ideas into Large Research Organization

30 Oct 2021 1:23 UTC
37 points

# Does generality pay? GPT-3 can provide preliminary evidence.

12 Jul 2020 18:53 UTC
21 points

# Forecasting Newsletter: May 2020.

31 May 2020 12:35 UTC
35 points

# [Question] Needed: Volunteer forecasters for Fish Welfare Initiative

21 Nov 2020 19:15 UTC
18 points

# EA Fundraising Through Advantage Sports Betting: A Guide ($500/Hour in Select States)

27 Jan 2022 8:57 UTC
63 points

# [Question] What previous work has been done on factors that affect the pace of technological development?

27 Apr 2021 18:43 UTC
21 points

# Helping future researchers to better understand long-term forecasting

25 Nov 2020 18:55 UTC
2 points

# Pathways to impact for forecasting and evaluation

25 Nov 2021 17:59 UTC
29 points

# Flagging up a ‘prediction market’

12 Jul 2022 12:07 UTC
3 points

# Book Review: The Signal and the Noise

18 Jul 2021 21:32 UTC
30 points

22 Sep 2020 20:51 UTC
24 points
(www.lesswrong.com)

# Forecasting Newsletter: June 2021

1 Jul 2021 20:59 UTC
29 points

# Impactful Forecasting Prize for forecast writeups on curated Metaculus questions

4 Feb 2022 20:06 UTC
91 points

# [Link] “How feasible is long-range forecasting?” (Open Phil)

11 Oct 2019 21:01 UTC
42 points

# Forecasting Newsletter: April 2022

10 May 2022 16:40 UTC
44 points

# Forecasting Newsletter: May 2022

3 Jun 2022 19:32 UTC
31 points

# Database of existential risk estimates

15 Apr 2020 12:43 UTC
130 points

# Updating on the passage of time and conditional prediction curves

11 Aug 2022 18:18 UTC
37 points

# Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain

18 Jan 2021 12:39 UTC
27 points

# Getting GPT-3 to predict Metaculus questions

6 May 2022 12:12 UTC
59 points

# 2020: Forecasting in Review

10 Jan 2021 16:05 UTC
35 points

# Top open Metaculus forecasts

20 Jul 2022 23:00 UTC
11 points
(www.metaculus.com)

# “Before 5 August 2022, will Russia detonate a nuclear device outside of Russian territory or airspace?”

15 Apr 2022 22:07 UTC
3 points

# AGI Predictions

21 Nov 2020 12:02 UTC
36 points
(www.lesswrong.com)

# Announcing the Forecasting Wiki

15 Apr 2022 9:53 UTC
23 points

# [Question] How valuable would more academic research on forecasting be? What questions should be researched?

12 Aug 2020 7:19 UTC
23 points

# [Question] Putting People First in a Culture of Dehumanization

22 Jul 2020 3:31 UTC
16 points

# [Question] Are there superforecasts for existential risk?

7 Jul 2020 7:39 UTC
24 points

# Forecasting Newsletter: September 2021.

1 Oct 2021 17:03 UTC
20 points

# Current Estimates for Likelihood of X-Risk?

6 Aug 2018 18:05 UTC
24 points

# Annual AGI Benchmarking Event

26 Aug 2022 21:31 UTC
20 points
(www.metaculus.com)

# Conversation on technology forecasting and gradualism

9 Dec 2021 19:00 UTC
15 points

# Overreacting to current events can be very costly

4 Oct 2022 21:30 UTC
281 points

# List of past fraudsters similar to SBF

28 Nov 2022 18:31 UTC
114 points

# Forecasting Newsletter: July 2020.

1 Aug 2020 16:56 UTC
31 points

# Proposal for Forecasting Givewell-Charity Impact-Metrics

13 Apr 2022 10:21 UTC
28 points

# Estimation of probabilities to get tenure track in academia: baseline and publications during the PhD.

20 Sep 2020 18:32 UTC
42 points

# Rational predictions often update predictably*

15 May 2022 16:09 UTC
143 points

# Forecasting Newsletter: February 2022

5 Mar 2022 19:16 UTC
25 points

# Metaculus Launches FluSight Challenge 2022/23

24 Oct 2022 17:10 UTC
12 points
(www.metaculus.com)

# What a compute-centric framework says about AI takeoff speeds

23 Jan 2023 4:09 UTC
189 points
(www.lesswrong.com)

# Conversation on forecasting with Vaniver and Ozzie Gooen

30 Jul 2019 11:16 UTC
38 points

# Owen Cotton-Barratt, Robin Hanson, Jason Matheny, and Julia Galef: Forecasting

5 Aug 2016 9:19 UTC
7 points

# How Can Donors Incentivize Good Predictions on Important but Unpopular Topics?

3 Feb 2019 1:11 UTC
27 points

# Metaculus is seeking experienced leaders, researchers & operators for high-impact roles

10 Jul 2022 14:29 UTC
13 points
(apply.workable.com)

# 7 essays on Building a Better Future

24 Jun 2022 14:28 UTC
21 points

# Russia-Ukraine Conflict: Forecasting Nuclear Risk in 2022

24 Mar 2022 21:03 UTC
23 points

# One’s Future Behavior as a Domain of Calibration

31 Dec 2020 15:48 UTC
17 points

# Incentive Problems With Current Forecasting Competitions.

10 Nov 2020 21:40 UTC
56 points

# Forecasting “Climate Change and the Long-term Future”

23 Jul 2022 0:12 UTC
5 points

# Report on Semi-informative Priors for AI timelines (Open Philanthropy)

26 Mar 2021 17:46 UTC
62 points

# BitBets: A Simple Scoring System for Forecaster Training

18 Mar 2021 11:19 UTC
28 points

# Superforecasting in a nutshell

25 Feb 2021 6:11 UTC
51 points
(lukemuehlhauser.com)

# Forum ranking system prototype: Cause Prioritisation Contest posts ranked by prediction markets

5 Sep 2022 15:55 UTC
18 points

# Manifold for Good: Bet on the future, for charity

2 May 2022 18:06 UTC
35 points

# LW4EA: Six economics misconceptions of mine which I’ve resolved over the last few years

30 Aug 2022 15:20 UTC
8 points
(www.lesswrong.com)

# Prediction Markets For Credit?

5 Mar 2022 20:33 UTC
16 points

# Create a prediction market in two minutes on Manifold Markets

9 Feb 2022 17:37 UTC
32 points

# View and Bet in Manifold prediction markets on EA Forum

24 May 2022 17:05 UTC
67 points

# Event-driven mission correlated investing and the 2020 US election

14 Jun 2021 15:06 UTC
48 points

# An attempt to promote prediction markets

10 May 2022 14:19 UTC
7 points

# Project: A web platform for crowdsourcing impact estimates of interventions.

22 Apr 2022 6:54 UTC
41 points

# [Fiction] Improved Governance on the Critical Path to AI Alignment by 2045.

18 May 2022 15:50 UTC
20 points

# Calibrate—New Chrome Extension for hiding numbers so you can guess

7 Oct 2022 11:21 UTC
26 points

# Five steps for quantifying speculative interventions

18 Feb 2022 20:39 UTC
94 points

# Open Communication in the Days of Malicious Online Actors

6 Oct 2020 23:57 UTC
38 points

# Shallow evaluations of longtermist organizations

24 Jun 2021 15:31 UTC
192 points

# Some global catastrophic risk estimates

10 Feb 2021 19:32 UTC
106 points

# Improve delegation abilities today, delegate heavily tomorrow

11 Nov 2021 21:52 UTC
58 points

# Relative Impact of the First 10 EA Forum Prize Winners

16 Mar 2021 17:11 UTC
88 points

# 13 Very Different Stances on AGI

27 Dec 2021 23:30 UTC
84 points

# Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits

17 Nov 2021 18:12 UTC
109 points

# Doing Good Badly? - Michael Plant’s thesis, Chapters 5,6 on Cause Prioritization

4 Mar 2021 16:57 UTC
75 points

# External Evaluation of the EA Wiki

13 Dec 2021 17:09 UTC
78 points

# EA/Rationalist Safety Nets: Promising, but Arduous

29 Dec 2021 18:41 UTC
69 points

# Complex cluelessness as credal fragility

8 Feb 2021 16:59 UTC
61 points

# Flimsy Pet Theories, Enormous Initiatives

9 Dec 2021 15:10 UTC
211 points

# Valuing research works by eliciting comparisons from EA researchers

17 Mar 2022 19:58 UTC
114 points

# Quantifying the Value of Evaluations

10 Jan 2021 22:59 UTC
23 points

# Collection of definitions of “good judgement”

14 Mar 2022 14:14 UTC
31 points

# Simple comparison polling to create utility functions

15 Nov 2021 19:48 UTC
46 points

# Big List of Cause Candidates: January 2021–March 2022 update

30 Apr 2022 17:21 UTC
122 points

# Why don’t governments seem to mind that companies are explicitly trying to make AGIs?

23 Dec 2021 7:08 UTC
82 points

# Forecasting Newsletter: November 2021

2 Dec 2021 21:35 UTC
23 points

# Big List of Cause Candidates

25 Dec 2020 16:34 UTC
270 points

# Contribution-Adjusted Utility Maximization Funds: An Early Proposal

3 Aug 2021 23:01 UTC
14 points

# Brief evaluations of top-10 billionaires

21 Oct 2022 15:29 UTC
80 points

# Forecasting Newsletter: February 2021

1 Mar 2021 20:29 UTC
19 points

# A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

16 Jun 2022 16:40 UTC
302 points

# Participate in the Hybrid Forecasting-Persuasion Tournament (on X-risk topics)

25 Apr 2022 22:13 UTC
53 points

# A Funnel for Cause Candidates

13 Jan 2021 19:45 UTC
34 points

# Prioritization Research for Advancing Wisdom and Intelligence

18 Oct 2021 22:22 UTC
88 points

# [Question] What should the norms around privacy and evaluation in the EA community be?

16 Jun 2021 17:31 UTC
66 points

# What are good rubrics or rubric elements to evaluate and predict impact?

3 Dec 2020 21:52 UTC
24 points

# Announcing the UK Covid-19 Crowd Forecasting Challenge

17 May 2021 19:28 UTC
7 points

# Quantifying Uncertainty in GiveWell Cost-Effectiveness Analyses

31 Oct 2022 14:31 UTC
118 points
(observablehq.com)

# An experiment eliciting relative estimates for Open Philanthropy’s 2018 AI safety grants

12 Sep 2022 11:19 UTC
111 points

# The “feeling of meaning” vs. “objective meaning”

5 Dec 2021 1:51 UTC
21 points

# Success Maximization: An Alternative to Expected Utility Theory and a Generalization of Maxipok to Moral Uncertainty

26 Nov 2022 1:53 UTC
13 points

# An in-progress experiment to test how Laplace’s rule of succession performs in practice.

30 Jan 2023 17:41 UTC
57 points

# 2018-2019 Long-Term Future Fund Grantees: How did they do?

16 Jun 2021 17:31 UTC
194 points

# Samotsvety Nuclear Risk update October 2022

3 Oct 2022 18:10 UTC
262 points

# Nuclear Expert Comment on Samotsvety Nuclear Risk Forecast

26 Mar 2022 9:22 UTC
129 points

# My take on What We Owe the Future

1 Sep 2022 18:07 UTC
353 points

# Introducing Effective Self-Help

6 Jan 2022 13:11 UTC
111 points

# Adding Quantified Uncertainty to GiveWell’s Cost Effectiveness Analysis of the Against Malaria Foundation

31 Aug 2022 12:53 UTC
31 points
(observablehq.com)

# Technical AGI safety research outside AI

18 Oct 2019 15:02 UTC
91 points

# Improving Karma: $8mn of possible value (my estimate)

1 Sep 2022 22:42 UTC
34 points

# What is estimational programming? Squiggle in context

12 Aug 2022 18:01 UTC
22 points

# How many EA billionaires five years from now?

20 Aug 2022 9:57 UTC
61 points
(www.erichgrunewald.com)

# Draft report on existential risk from power-seeking AI

28 Apr 2021 21:41 UTC
87 points

# Predict responses to the “existential risk from AI” survey

28 May 2021 1:38 UTC
36 points

# Disagreement with bio anchors that lead to shorter timelines

16 Nov 2022 14:40 UTC
85 points

# [Question] Is this a good way to bet on short timelines?

28 Nov 2020 14:31 UTC
17 points

# Roodman’s Thoughts on Biological Anchors

14 Sep 2022 12:23 UTC
72 points

# Podcast: Magnus Vinding on reducing suffering, why AI progress is likely to be gradual and distributed and how to reason about politics

21 Nov 2021 15:29 UTC
26 points
(www.utilitarianpodcast.com)

# Discussing how to align Transformative AI if it’s developed very soon

28 Nov 2022 16:17 UTC
36 points

# Drivers of large language model diffusion: incremental research, publicity, and cascades

21 Dec 2022 13:50 UTC
21 points

# Against the weirdness heuristic

5 Oct 2022 14:13 UTC
5 points

# AI Alternative Futures: Exploratory Scenario Mapping for Artificial Intelligence Risk—Request for Participation [Linkpost]

9 May 2022 19:53 UTC
17 points

# Publication decisions for large language models, and their impacts

21 Dec 2022 13:50 UTC
14 points

# AI Timelines via Cumulative Optimization Power: Less Long, More Short

6 Oct 2022 7:06 UTC
27 points

# Background for “Understanding the diffusion of large language models”

21 Dec 2022 13:49 UTC
12 points

# Fun with +12 OOMs of Compute

1 Mar 2021 21:04 UTC
28 points
(www.lesswrong.com)

# Argument Against Impact: EU Is Not an AI Superpower

31 Jan 2022 9:48 UTC
35 points

# [Question] Where is a good place to start learning about Forecasting?

14 Jan 2022 22:26 UTC
11 points

# [Question] Forecasting thread: How does AI risk level vary based on timelines?

14 Sep 2022 23:56 UTC
47 points

# Winners of the EA Criticism and Red Teaming Contest

1 Oct 2022 1:50 UTC
226 points

# What role should evolutionary analogies play in understanding AI takeoff speeds?

11 Dec 2021 1:16 UTC
12 points

# [Question] What important questions are missing from Metaculus?

26 May 2021 14:03 UTC
38 points

# Forecasting Through Fiction

6 Jul 2022 5:23 UTC
8 points
(www.lesswrong.com)

# AGI alignment results from a series of aligned actions

27 Dec 2021 19:33 UTC
15 points

# Phil Trammell on Economic Growth Under Transformative AI

24 Oct 2021 18:10 UTC
10 points
(youtu.be)

# Final Report of the National Security Commission on Artificial Intelligence (NSCAI, 2021)

1 Jun 2021 8:19 UTC
51 points
(www.nscai.gov)

# [linkpost] When does technical work to reduce AGI conflict make a difference?: Introduction

16 Sep 2022 14:35 UTC
31 points
(www.lesswrong.com)

# Persuasion Tools: AI takeover without AGI or agency?

20 Nov 2020 16:56 UTC
15 points

# Mission-correlated investing: Examples of mission hedging and ‘leveraging’

11 Mar 2022 9:33 UTC
25 points

# Asya Bergal: Reasons you might think human-level AI is unlikely to happen soon

26 Aug 2020 16:01 UTC
24 points

# Forecasting transformative AI: the “biological anchors” method in a nutshell

31 Aug 2021 18:17 UTC
50 points

# Implication of AI timelines on planning and solutions

21 Aug 2021 5:11 UTC
15 points

# Forecasting transformative AI: what’s the burden of proof?

17 Aug 2021 17:14 UTC
71 points

# [Question] What work has been done on the post-AGI distribution of wealth?

6 Jul 2022 18:59 UTC
16 points

# Estimating the Current and Future Number of AI Safety Researchers

28 Sep 2022 20:58 UTC
64 points

# What if we don’t need a “Hard Left Turn” to reach AGI?

15 Jul 2022 9:49 UTC
39 points

# Open Philanthropy’s AI grants

30 Jul 2022 17:22 UTC
21 points

# GiveWell should use shorter TAI timelines

27 Oct 2022 6:59 UTC
52 points

# More Christiano, Cotra, and Yudkowsky on AI progress

6 Dec 2021 20:34 UTC
16 points

# A conversation with Rohin Shah

12 Nov 2019 1:31 UTC
27 points
(aiimpacts.org)

# “Existential risk from AI” survey results

1 Jun 2021 20:19 UTC
80 points

# What is Compute? - Transformative AI and Compute [1/4]

23 Sep 2021 13:54 UTC
48 points

# [Question] How does one find out their AGI timelines?

7 Nov 2022 22:34 UTC
19 points

# ‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting

18 Oct 2022 22:54 UTC
111 points

# Have your timelines changed as a result of ChatGPT?

5 Dec 2022 15:03 UTC
30 points

# New report on how much computational power it takes to match the human brain (Open Philanthropy)

15 Sep 2020 1:06 UTC
45 points
(www.openphilanthropy.org)

# AI timelines and theoretical understanding of deep learning

12 Sep 2021 16:26 UTC
4 points

# [Linkpost] The Problem With The Current State of AGI Definitions

29 May 2022 17:01 UTC
7 points

# AGI Isn’t Close—Future Fund Worldview Prize

18 Dec 2022 16:03 UTC
−8 points

# The replication and emulation of GPT-3

21 Dec 2022 13:49 UTC
14 points

# Why AGI Timeline Research/Discourse Might Be Overrated

3 Jul 2022 8:04 UTC
120 points

# [Question] What should I ask Ajeya Cotra — senior researcher at Open Philanthropy, and expert on AI timelines and safety challenges?

28 Oct 2022 15:28 UTC
23 points

# Long-Term Future Fund: May 2021 grant recommendations

27 May 2021 6:44 UTC
110 points

# “Intro to brain-like-AGI safety” series—halfway point!

9 Mar 2022 15:21 UTC
8 points

# [Link post] Parameter counts in Machine Learning

1 Jul 2021 15:44 UTC
15 points

# Cognitive science and failed AI forecasts

18 Nov 2022 14:25 UTC
13 points

# Understanding the diffusion of large language models: summary

21 Dec 2022 13:49 UTC
127 points

# Artificial Intelligence, Morality, and Sentience (AIMS) Survey: 2021

1 Jul 2022 7:47 UTC
36 points
(www.sentienceinstitute.org)

# [Question] What will be some of the most impactful applications of advanced AI in the near term?

3 Mar 2022 15:26 UTC
16 points

# Safety timelines: How long will it take to solve alignment?

19 Sep 2022 12:51 UTC
45 points

# How Do AI Timelines Affect Existential Risk?

29 Aug 2022 17:10 UTC
2 points
(www.lesswrong.com)

# [Link] “The AI Timelines Scam”

11 Jul 2019 3:37 UTC
22 points

# [Question] What are the numbers in mind for the super-short AGI timelines so many long-termists are alarmed about?

19 Apr 2022 21:09 UTC
41 points

# [Link post] Paths To High-Level Machine Intelligence

22 Sep 2021 2:43 UTC
23 points

# [Question] Are AGI labs building up important intangibles?

8 Apr 2022 18:43 UTC
9 points

# Yudkowsky and Christiano discuss “Takeoff Speeds”

22 Nov 2021 19:42 UTC
42 points

# How important are accurate AI timelines for the optimal spending schedule on AI risk interventions?

16 Dec 2022 16:05 UTC
30 points

# Podcast: Bryan Caplan on open borders, UBI, totalitarianism, AI, pandemics, utilitarianism and labor economics

22 Feb 2022 15:04 UTC
22 points
(www.utilitarianpodcast.com)

# Report on Whether AI Could Drive Explosive Economic Growth

25 Jun 2021 23:02 UTC
63 points

# Prediction Markets Speaker Event + Meetup

30 Jun 2022 5:57 UTC
3 points

# Christian Tarsney on future bias and a possible solution to moral fanaticism

6 May 2021 10:39 UTC
26 points
(80000hours.org)

# Long-Term Future Fund: April 2020 grants and recommendations

18 Sep 2020 10:28 UTC
40 points
(app.effectivealtruism.org)

# Metaculus is hiring

9 Dec 2020 20:58 UTC
30 points

# Predicting Polygenic Selection for IQ

28 Mar 2022 18:00 UTC
41 points

# “Two-factor” voting (“two dimensional”: karma, agreement) for EA forum?

25 Jun 2022 11:10 UTC
81 points
(www.lesswrong.com)

# Practical ethics requires metaphysical Free Will

7 Apr 2022 14:47 UTC
2 points

# [Question] What’s the GiveDirectly of longtermism & existential risk?

15 Nov 2021 23:55 UTC
28 points

# EA Infrastructure Fund: May–August 2021 grant recommendations

24 Dec 2021 10:42 UTC
85 points
(funds.effectivealtruism.org)

# A quick and crude comparison of epidemiological expert forecasts versus Metaculus forecasts for COVID-19

2 Apr 2020 19:29 UTC
9 points

# Should you still use the ITN framework? [Red Teaming Contest]

14 Jul 2022 4:02 UTC
25 points

# UVC air purifier design and testing strategy

1 Jun 2022 5:35 UTC
27 points

# “How many people might ever exist, calculated” by Primer [Video]

16 Aug 2022 16:33 UTC
12 points
(youtu.be)

# Zachary Robinson: Using “back of the envelope calculations” (BOTECs) to prioritize interventions

25 Oct 2020 5:48 UTC
7 points

# A peek at pairwise preference estimation in economics, marketing, and statistics

8 Oct 2022 4:56 UTC
31 points
(blog.jonasmoss.com)

# [Question] Is improving the welfare of arthropods and nematodes underrated?

8 Nov 2022 10:26 UTC
37 points

# Are poultry birds really important? Yes...

19 Jun 2022 18:24 UTC
13 points

# Probability estimate for wild animal welfare prioritization

23 Oct 2019 20:47 UTC
9 points

# [Question] Have you ever used a Fermi calculation to make a personal career decision?

9 Nov 2020 9:34 UTC
6 points

# Cost-effectiveness of donating a kidney

23 Apr 2022 21:50 UTC
15 points

# Open Letter Against Reckless Nuclear Escalation and Use

3 Nov 2022 15:08 UTC
10 points
(futureoflife.org)

# How much donations are needed to neutralise the annual x-risk footprint of the mean human?

22 Sep 2022 6:41 UTC
8 points

# Should Effective Altruists Focus More on Movement Building?

30 Dec 2020 3:16 UTC
20 points

# [Question] What is the relationship between impact and EA Forum karma?

6 Dec 2022 10:42 UTC
14 points

# The number of seabirds and sea mammals killed by marine plastic pollution is quite small relative to the catch of fish

19 Apr 2022 11:22 UTC
88 points

# Forecasting Newsletter: September 2022.

12 Oct 2022 16:37 UTC
23 points

# Forecasting Newsletter: December 2021

10 Jan 2022 19:34 UTC
37 points

# Forecasting Newsletter: October 2021.

2 Nov 2021 14:05 UTC
15 points

# Forecasting Newsletter: January 2022

3 Feb 2022 19:10 UTC
16 points

# [Question] What should my research lab focus on in the first week of 2023?

4 Nov 2022 10:16 UTC
3 points

# [Question] What “pivotal” and useful research … would you like to see assessed? (Bounty for suggestions)

28 Apr 2022 15:49 UTC
37 points

# The Epistemic Challenge to Longtermism (Tarsney, 2020)

4 Apr 2021 3:09 UTC
79 points
(globalprioritiesinstitute.org)

# The Case for Strong Longtermism

3 Sep 2019 1:17 UTC
14 points
(globalprioritiesinstitute.org)

# Open Philanthropy’s AI governance grantmaking (so far)

17 Dec 2020 12:00 UTC
63 points
(www.openphilanthropy.org)

# Overview of Rethink Priorities’ work on risks from nuclear weapons

10 Jun 2021 18:48 UTC
43 points

# Estimating long-term treatment effects without long-term outcome data

29 Sep 2020 13:30 UTC
3 points
(globalprioritiesinstitute.org)

# Christian Tarsney: Can we predictably improve the far future?

18 Oct 2019 7:40 UTC
10 points

# Thoughts on “A case against strong longtermism” (Masrani)

3 May 2021 14:22 UTC
39 points

# Mogensen & MacAskill, ‘The paralysis argument’

19 Jul 2021 14:04 UTC
15 points
(quod.lib.umich.edu)

# Christian Tarsney on future bias and a possible solution to moral fanaticism

5 May 2021 19:38 UTC
7 points

# A personal take on longtermist AI governance

16 Jul 2021 22:08 UTC
173 points

# Climate-contingent Finance, and A Generalized Mechanism for X-Risk Reduction Financing

26 Sep 2022 13:23 UTC
6 points

# Forecast Which Psychology Studies Replicate With Metaculus for the Transparent Replications Project

29 Aug 2023 20:24 UTC
21 points
(www.metaculus.com)

# Epoch is hiring an Associate Data Analyst

21 Sep 2023 13:25 UTC
9 points
(careers.rethinkpriorities.org)

# 2023 Open Philanthropy AI Worldviews Contest: Odds of Artificial General Intelligence by 2043

14 Mar 2023 20:32 UTC
19 points

# [Question] Forecasting Questions: What do you want to predict on AI?

1 Nov 2023 13:16 UTC
9 points

# $1,000 bounty for an AI Programme Lead recommendation

14 Aug 2023 13:11 UTC
11 points

# Thought experiment: Trading off risk, intragenerational and intergenerational inequality, and fairness

2 Sep 2023 23:32 UTC
9 points

# Immortality or death by AGI

24 Sep 2023 9:44 UTC
12 points
(www.lesswrong.com)

# Metaculus Launches Conditional Cup to Explore Linked Forecasts

18 Oct 2023 20:41 UTC
11 points
(www.metaculus.com)

# Report on Frontier Model Training

30 Aug 2023 20:04 UTC
19 points

# Superforecasting the premises in “Is power-seeking AI an existential risk?”

18 Oct 2023 20:33 UTC
114 points

# Metaculus’s Climate Tipping Points Tournament Enters Round 2

16 Mar 2023 18:48 UTC
10 points
(www.metaculus.com)

# Transformative AI and Compute—Reading List

4 Sep 2023 6:21 UTC
24 points

# Estimation Is the Best We Have

9 Sep 2014 16:15 UTC
9 points

# [Event] Join Metaculus for Forecast Friday on March 24th!

17 Mar 2023 22:47 UTC
8 points
(www.eventbrite.com)

# [Question] Should Twitter have prediction markets in Community Notes?

20 Oct 2023 12:27 UTC
17 points

# Longtermism and Animal Farming Trajectories

27 Dec 2022 0:58 UTC
51 points
(www.sentienceinstitute.org)

# Red-teaming existential risk from AI

30 Nov 2023 14:35 UTC
30 points

# Estimation for sanity checks

21 Mar 2023 0:13 UTC
64 points
(nunosempere.com)

# Retrospective Metrics: Tools for Collaborative Truth Seeking

15 Aug 2023 17:07 UTC
8 points

# An Overview of the AI Safety Funding Situation

12 Jul 2023 14:54 UTC
128 points

# Operations: We only have two (types of) meetings now.

5 Oct 2023 13:27 UTC
12 points

# [Question] Questions about school shootings

26 Nov 2023 19:15 UTC
5 points

# Largest AI model in 2 years from $10B

24 Oct 2023 15:14 UTC
36 points

# Are education interventions as cost effective as the top health interventions? Five separate lines of evidence for the income effects of better education [Founders Pledge]

13 Jul 2023 13:35 UTC
146 points

# Metaculus’s Series ‘Shared Vision: Pro Forecaster Essays on Predicting the Future Better’

13 Jul 2023 1:24 UTC
16 points
(www.metaculus.com)

# Metaculus Introduces AI-Powered Community Insights to Reveal Factors Driving User Forecasts

10 Nov 2023 17:57 UTC
9 points
(www.metaculus.com)

# How to save a lobster in 1 Hour*

17 Oct 2023 23:13 UTC
1 point

# Research Summary: Prediction Markets

22 Mar 2023 17:07 UTC
3 points
(damienlaird.substack.com)

# An exhaustive list of cosmic threats

4 Dec 2023 17:59 UTC
74 points

# The Emergence of Cyborgs and the Unrest of Transition: Anticipating the Future of Human Rights

13 Jul 2023 17:35 UTC
8 points

# What values will control the Future? Overview, conclusion, and directions for future work

18 Jul 2023 16:11 UTC
25 points

# Why we may expect our successors not to care about suffering

10 Jul 2023 13:54 UTC
62 points

# Metaculus Launches Quarterly Cup Tournament

6 Jul 2023 19:25 UTC
13 points
(www.metaculus.com)

# Epoch is hiring an ML Hardware Researcher

20 Jul 2023 19:08 UTC
29 points
(careers.rethinkpriorities.org)

# An EA Fairy Tale

17 Jul 2023 11:41 UTC
20 points

# What do XPT forecasts tell us about AI timelines?

21 Jul 2023 8:30 UTC
29 points

# Why we should fear any bioengineered fungus and give fungi research attention

18 Aug 2023 3:35 UTC
67 points

# XPT forecasts on (some) Direct Approach model inputs

20 Aug 2023 12:39 UTC
37 points

# XPT forecasts on (some) biological anchors inputs

24 Jul 2023 13:32 UTC
37 points

# How expensive is leaving your org? Squiggle Model

16 Aug 2023 18:01 UTC
40 points

# Future technological progress does NOT correlate with methods that involve less suffering

1 Aug 2023 9:30 UTC
60 points

# [Question] How much might be the counterfactual impact of relocation?

19 Aug 2023 11:34 UTC
1 point

# Last Chance: Get Tickets to Manifest 2023! (Sep 22-24 in Berkeley)

6 Sep 2023 10:41 UTC
8 points

# Carl Shulman on AI takeover mechanisms (& more): Part II of Dwarkesh Patel interview for The Lunar Society

25 Jul 2023 18:31 UTC
28 points
(www.dwarkeshpatel.com)

# AI romantic partners will harm society if they go unregulated

31 Jul 2023 15:55 UTC
16 points

# Who’s right about inputs to the biological anchors model?

24 Jul 2023 14:37 UTC
69 points

# II. Triggering The Race

24 Oct 2023 18:45 UTC
6 points

# How much is reducing catastrophic and extinction risk worth, assuming XPT forecasts?

24 Jul 2023 15:16 UTC
51 points

# Measuring impact — EA bias towards numbers?

26 Jul 2023 16:19 UTC
−1 points
(mirror.xyz)

# There is Little Evidence on Question Decomposition

7 Sep 2023 18:04 UTC
32 points

# Asterisk Magazine Issue 03: AI

24 Jul 2023 15:53 UTC
34 points
(asteriskmag.com)

# Solutions to problems with Bayesianism

4 Nov 2023 12:15 UTC
27 points

# Predicting what future people value: A terse introduction to Axiological Futurism

24 Mar 2023 19:15 UTC
62 points

# What do XPT results tell us about biorisk?

13 Sep 2023 20:05 UTC
23 points

# What do XPT forecasts tell us about nuclear risk?

22 Aug 2023 19:09 UTC
22 points

# Takeaways from the Metaculus AI Progress Tournament

27 Jul 2023 14:37 UTC
85 points

# Comparing Two Forecasters in an Ideal World

9 Oct 2023 20:06 UTC
14 points

# Proposal: Connect Metaculus to the EA Forum to Incentivize Better Research

25 Mar 2023 12:13 UTC
19 points
(damienlaird.substack.com)

# Proposal + Demo: Connect Guesstimate and Metaculus and Turn them into Trees

25 Mar 2023 17:15 UTC
15 points

# When Will We Spend Enough to Train Transformative AI

28 Mar 2023 0:41 UTC
3 points

# [Event] Join Metaculus Tomorrow, March 31st, for Forecast Friday!

30 Mar 2023 20:58 UTC
29 points
(www.metaculus.com)

# [Question] How to persuade a non-CS background person to believe AGI is 50% possible in 2040?

1 Apr 2023 15:27 UTC
1 point

# Marine plastic pollution seems to kill far fewer seabirds and marine mammals than the fish caught (a practical example of Fermi estimation)

31 Dec 2022 3:40 UTC
1 point

# Earth is not running out of resources

3 Apr 2023 10:53 UTC
5 points
(hereticalupdate.substack.com)

# Metaculus’s Keep Virginia Safe II Tournament Enters 2nd Round

4 Apr 2023 21:51 UTC
11 points
(www.metaculus.com)

# 🔰Metaculus Launches New Beginner Forecasting Tournament🔰

5 Apr 2023 20:08 UTC
21 points
(www.metaculus.com)

# You Can’t Prove Aliens Aren’t On Their Way To Destroy The Earth (A Comprehensive Takedown Of The Doomer View Of AI)

7 Apr 2023 13:37 UTC
−31 points

# The New England Cottontail-related controlled fires

19 Nov 2023 3:19 UTC
3 points

# Announcing Squiggle Hub

5 Aug 2023 0:55 UTC
131 points

# What the Moral Truth might be makes no difference to what will happen

9 Apr 2023 17:43 UTC
40 points

# [Question] Who here knows?: Cryptography [Answered]

9 Sep 2023 20:30 UTC
6 points

# Shapley values: an introductory example

12 Nov 2023 13:35 UTC
15 points

# Open-source LLMs may prove Bostrom’s vulnerable world hypothesis

14 Apr 2023 9:25 UTC
14 points

# [linkpost] “What Are Reasonable AI Fears?” by Robin Hanson, 2023-04-23

14 Apr 2023 23:26 UTC
41 points
(quillette.com)

# AI Takeover Scenario with Scaled LLMs

16 Apr 2023 23:28 UTC
29 points

# No, the EMH does not imply that markets have long AGI timelines

24 Apr 2023 8:27 UTC
83 points

# Superforecasting in brief

17 Jan 2023 20:12 UTC
1 point

# [Optional] Most popular open predictions on Metaculus

17 Jan 2023 20:15 UTC
1 point
(www.metaculus.com)

# AI Timelines: the debate and the “experts’” point of view

17 Jan 2023 23:30 UTC
1 point

# Estimating the cost-effectiveness of previous R&D projects

24 Apr 2023 9:48 UTC
25 points

# Power laws in Speedrunning and Machine Learning

24 Apr 2023 10:06 UTC
48 points

# X-Risk Researchers Survey

24 Apr 2023 8:06 UTC
12 points

# Theory: “WAW might be of higher impact than x-risk prevention based on utilitarianism”

12 Sep 2023 13:11 UTC
51 points

# A Guide to Forecasting AI Science Capabilities

29 Apr 2023 6:51 UTC
19 points

# P(doom|AGI) is high: why the default outcome of AGI is doom

2 May 2023 10:40 UTC
13 points

# Manifolio: The tool for making Kelly optimal bets on Manifold Markets

10 Aug 2023 11:26 UTC
81 points
(manifol.io)

# Metaculus Forecast Fridays: May 5th — Peter Wildeford on Biden’s 3rd Veto

4 May 2023 17:14 UTC
4 points

# How to Signal Competence in Your Early-Stage Career (CCW 2023)

12 Sep 2023 15:49 UTC
28 points

# 4 things GiveDirectly got right and wrong sending cash to flood survivors

31 Jul 2023 14:33 UTC
103 points

# The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have?

6 May 2023 19:28 UTC
44 points

# Graphical Representations of Paul Christiano’s Doom Model

7 May 2023 13:03 UTC
48 points

# Thinking of Convenience as an Economic Term

5 May 2023 19:09 UTC
28 points

# Fatebook for Slack: Track your forecasts, right where your team works

11 May 2023 12:58 UTC
77 points
(fatebook.io)

# OpenAI’s new Preparedness team is hiring

26 Oct 2023 20:41 UTC
85 points

# Seeking input on Framework for Unconditional UBI Cost-Effectiveness Analysis

11 Dec 2023 13:13 UTC
3 points

# Metaculus Launches 2023/2024 FluSight Challenge Supporting CDC, $5K in Prizes

27 Sep 2023 21:35 UTC
9 points
(www.metaculus.com)

# A Rising Tide Threatens Barriers to Bioweapons

14 May 2023 14:49 UTC
23 points

# Share Your Feedback and Help Us Refine Metaculus’s Scoring System

7 Aug 2023 23:09 UTC
15 points

# The Hinge of History Hypothesis: Reply to MacAskill (Andreas Mogensen)

8 Aug 2023 11:00 UTC
47 points

# Microdooms averted by working on AI Safety

17 Sep 2023 21:51 UTC
39 points
(www.lesswrong.com)

# A model-based approach to AI Existential Risk

25 Aug 2023 10:44 UTC
17 points
(www.lesswrong.com)

# Quantified Collective Intelligence: Integrating Forecasting into Decision-Making

28 Sep 2023 15:37 UTC
6 points

# [Question] Is the risk of a bioweapons “warning shot” >>50%?

18 Sep 2023 9:45 UTC
9 points

# Taiwan’s military complacency.

4 Dec 2023 9:28 UTC
32 points

# Metaculus Announces Winners of the Alt-Protein Forecasting Tournament

15 Sep 2023 17:59 UTC
25 points

# OPTIC: Announcing Intercollegiate Forecasting Tournaments in SF, DC, Boston

13 Oct 2023 1:26 UTC
19 points

# [Question] Asking for online resources why AI now is near AGI

18 May 2023 0:04 UTC
6 points

# U.S. Regulatory Updates to Benefit-Cost Analysis: Highlights and Encouragement to Submit Public Comments

18 May 2023 6:37 UTC
79 points

# Metaculus Introduces Conditional Continuous Questions to Explore Relationships Between Events

19 May 2023 19:24 UTC
14 points
(www.metaculus.com)

# Rebooting AI Governance: An AI-Driven Approach to AI Governance

20 May 2023 19:06 UTC
38 points

# It’s Time We Pay Interview-Stage Job Applicants For Their Time

28 Nov 2023 19:45 UTC
−6 points

# A progressive AI, not a threatening one

12 Dec 2023 17:19 UTC
−17 points

# Beware of shifting baseline syndrome

12 Dec 2023 19:09 UTC
6 points

# Diagram with Commentary for AGI as an X-Risk

24 May 2023 22:27 UTC
20 points

# Will AI end everything? A guide to guessing | EAG Bay Area 23

25 May 2023 17:01 UTC
74 points

# Without a trajectory change, the development of AGI is likely to go badly

30 May 2023 0:21 UTC
1 point

# Why It Works

26 Aug 2023 5:07 UTC
2 points

# How to calculate the expected value of the best option

26 Aug 2023 5:01 UTC
5 points

# Relative values for animal suffering and ACE Top Charities

30 May 2023 16:37 UTC
33 points
(nunosempere.com)

# Considerations on transformative AI and explosive growth from a semiconductor-industry perspective

31 May 2023 1:11 UTC
23 points
(muireall.space)

# Global Innovation Fund projects its impact to be 3x GiveWell Top Charities

1 Jun 2023 13:00 UTC
70 points

# Precision of Sets of Forecasts

19 Sep 2023 18:20 UTC
8 points

# A moral backlash against AI will probably slow down AGI development

31 May 2023 21:31 UTC
141 points

# Prior X%—<1%: A quantified ‘epistemic status’ of your prediction.

2 Jun 2023 15:51 UTC
11 points

# Intrinsic limitations of GPT-4 and other large language models, and why I’m not (very) worried about GPT-n

3 Jun 2023 13:09 UTC
28 points

# Input sought on next steps for the XPT (also, we’re hiring!)

29 Sep 2023 22:26 UTC
34 points

# Incorporating and visualizing uncertainty in cost effectiveness analyses: A walkthrough using GiveWell’s estimates for StrongMinds

7 Nov 2023 12:50 UTC
69 points

# Why microplastics should matter to EAs

4 Dec 2023 9:27 UTC
3 points

# Quick, High-EV Advantage Sportsbetting Opportunity in 18 US States

4 Jun 2023 3:27 UTC
−1 points

# EA Architect: Dissertation on Improving the Social Dynamics of Confined Spaces & Shelters Precedents Report

6 Jun 2023 11:58 UTC
42 points

# The Offense-Defense Balance Rarely Changes

9 Dec 2023 15:22 UTC
80 points
(maximumprogress.substack.com)

# AI Safety Strategy—A new organization for better timelines

14 Jun 2023 20:41 UTC
8 points

# A Manifold Market “Leaked” the AI Extinction Statement and CAIS Wanted it Deleted

12 Jun 2023 15:57 UTC
24 points
(news.manifold.markets)

# Metaculus Launches Chinese AI Chips Tournament, Supporting Institute for AI Policy and Strategy Research

6 Dec 2023 11:26 UTC
22 points
(www.metaculus.com)

# Chinese and US Semiconductor competition

17 Jan 2024 16:27 UTC
12 points

# Expert trap (Part 2 of 3) – how hindsight, hierarchy, and confirmation biases break conductivity and accuracy of knowledge

9 Jun 2023 22:53 UTC
3 points

# Mitigating Ethical Concerns and Risks in the US Approach to Autonomous Weapons Systems through Effective Altruism

11 Jun 2023 10:37 UTC
5 points

# Epoch and FRI Mentorship Program Summer 2023

13 Jun 2023 14:27 UTC
38 points
(epochai.org)

# [Question] What’s the exact way you predict probability of AI extinction?

13 Jun 2023 15:11 UTC
18 points

# The Long-Term Future Fund is looking for a full-time fund chair

5 Oct 2023 1:49 UTC
101 points

# Mirror, Mirror on the Wall: How Do Forecasters Fare by Their Own Call?

7 Nov 2023 17:37 UTC
20 points

# [Question] What’s the Limit for Cost-Effectiveness?

10 Aug 2023 23:38 UTC
4 points

# Scenario planning for AI x-risk

10 Feb 2024 0:07 UTC
40 points
(www.convergenceanalysis.org)

# [Question] Is there any work on cause prioritization that takes into account timelines being worldview-dependent?

31 Oct 2023 2:25 UTC
13 points

# Metaculus Presents — View From the Enterprise Suite: How Applied AI Governance Works Today

20 Jun 2023 22:24 UTC
4 points

# RP’s AI Governance & Strategy team—June 2023 interim overview

22 Jun 2023 13:45 UTC
68 points

# Thinking-in-limits about TAI from the demand perspective. Demand saturation, resource wars, new debt.

7 Nov 2023 22:44 UTC
2 points

# The Benevolent Ruler’s Handbook (Part 2): Morality Rules

12 Aug 2023 14:25 UTC
3 points

# [Event] Metaculus Presents: Transformative Science at Startup Speed

1 Nov 2023 3:01 UTC
6 points
(www.eventbrite.com)

# Announcing Manifest 2023 (Sep 22-24 in Berkeley)

14 Aug 2023 11:41 UTC
46 points

# Metaculus’s New Sidebar Helps You Find Forecasts Faster

8 Nov 2023 20:56 UTC
8 points
(www.metaculus.com)

# The Type Of Animal Husbandry Is Relevant For Animal Welfare

17 Jun 2024 20:20 UTC
5 points

# Four Futures For Cognitive Labor

13 Jun 2024 12:58 UTC
23 points
(www.maximum-progress.com)

# Fatebook for Chrome: Make and embed forecasts in Google Docs

16 Feb 2024 15:59 UTC
27 points
(fatebook.io)

# [Question] Is there a public tracker depicting at what dates AI has been able to automate x% of cognitive tasks (weighted by 2020 economic value)?

17 Feb 2024 4:52 UTC
12 points

# Predicting the future with the power of the Internet (and pissing off Rob Miles)

15 Dec 2023 17:37 UTC
4 points
(youtu.be)

# My experience at the controversial Manifest 2024

17 Jun 2024 18:07 UTC
49 points

# China-AI forecasting

25 Feb 2024 16:47 UTC
10 points

# AI Policy Insights from the AIMS Survey

22 Feb 2024 19:17 UTC
10 points
(www.sentienceinstitute.org)

# Metaculus Introduces Better Question Discovery

1 Mar 2024 3:24 UTC
5 points
(www.metaculus.com)

# Announcing The Prediction Post

2 Mar 2024 4:58 UTC
17 points
(thepredictionpost.substack.com)

# The World in 2029

2 Mar 2024 18:03 UTC
88 points

# Announcing the AI Forecasting Benchmark Series | July 8, $120k in Prizes

19 Jun 2024 21:37 UTC
50 points
(www.metaculus.com)

# AISN #32: Measuring and Reducing Hazardous Knowledge in LLMs Plus, Forecasting the Future with LLMs, and Regulatory Markets

7 Mar 2024 16:37 UTC
15 points

# Manifold markets isn’t very good

20 Jun 2024 11:24 UTC
11 points

# Results from an Adversarial Collaboration on AI Risk (FRI)

11 Mar 2024 15:54 UTC
193 points
(forecastingresearch.org)

# Probly: a Python-like language for probabilistic modelling

18 Mar 2024 13:19 UTC
13 points
(probly.dev)

# Revisiting the Evolution Anchor in the Biological Anchors Report

18 Mar 2024 3:01 UTC
13 points

# Carlo: uncertainty analysis in Google Sheets

18 Mar 2024 13:06 UTC
42 points
(carlo.app)

# Participate in Manifund Microgrants: an ACX Grants giving game

19 Mar 2024 18:19 UTC
26 points

# China State Shipbuilding Corporation

24 Jun 2024 15:27 UTC
10 points

# Transformative AI and Scenario Planning for AI X-risk

22 Mar 2024 11:44 UTC
14 points

# [Linkpost] Vague Verbiage in Forecasting

22 Mar 2024 18:05 UTC
5 points
(goodjudgment.com)

# Decentralized Historical Data Preservation and Why EA Should Care

22 Mar 2024 10:09 UTC
2 points

# Very Accurate Ancestor Simulation: Practicality and Ethics

25 Mar 2024 9:38 UTC
0 points

# Metaculus Launches Question Series With Bryan Caplan

21 Mar 2024 19:18 UTC
16 points
(www.metaculus.com)

# Timelines to Transformative AI: an investigation

25 Mar 2024 18:11 UTC
71 points

# Metaculus Introduces Multiple Choice Questions

20 Dec 2023 19:00 UTC
8 points
(www.metaculus.com)

# Say how much, not more or less versus someone else

28 Dec 2023 22:24 UTC
100 points

# Metaculus Hosts ACX 2024 Prediction Contest

1 Jan 2024 16:38 UTC
16 points
(www.metaculus.com)

# [Question] Is Walking Really Better Than Driving?

7 Jan 2024 2:49 UTC
−4 points

# Metaculus Launches Q1 2024 Quarterly Cup for Current Events-Focused, Fast-Resolving Questions

9 Jan 2024 4:28 UTC
6 points
(www.metaculus.com)

# 2023: highlights from the year, from the EA Newsletter

5 Jan 2024 21:57 UTC
68 points

# Come to Manifest 2024 (June 7-9 in Berkeley)

27 Mar 2024 21:30 UTC
15 points
(news.manifold.markets)

# AI Benchmarks Series — Metaculus Questions on Evaluations of AI Models Against Technical Benchmarks

27 Mar 2024 23:05 UTC
10 points
(www.metaculus.com)

# Forecast in the Understanding AI Series With Timothy B. Lee

28 Mar 2024 22:27 UTC
12 points
(www.metaculus.com)

# Exploring Ergodicity in the Context of Longtermism

29 Mar 2024 10:14 UTC
36 points

# AI scaling myths

27 Jun 2024 20:29 UTC
30 points
(open.substack.com)

# Contra Acemoglu on AI

28 Jun 2024 13:14 UTC
51 points
(www.maximum-progress.com)

# #190 – On whether the US is conscious (Eric Schwitzgebel on the 80,000 Hours Podcast)

12 Jun 2024 15:14 UTC
7 points

# A Research Agenda for Psychology and AI

28 Jun 2024 12:56 UTC
52 points

# Illuminatea—A Proposal for EA Reform

1 Apr 2024 10:51 UTC
96 points

# Thousands of malicious actors on the future of AI misuse

1 Apr 2024 10:03 UTC
75 points

# Forecasters: What Do They Know? Do They Know Things?? Let’s Find Out!

2 Apr 2024 18:03 UTC
10 points

# Forecast in the 2024 UBS Asset Management Investments Recruitment Challenge on Good Judgment Open

3 Apr 2024 20:31 UTC
2 points

# Futarchy and preferences over variance

29 Jun 2024 2:36 UTC
2 points
(nicholasdecker.substack.com)

# Conscious AI concerns all of us. [Conscious AI & Public Perceptions]

3 Jul 2024 3:12 UTC
24 points

# (4 min read) An intuitive explanation of the AI influence situation

13 Jan 2024 17:34 UTC
1 point

# Conscious AI & Public Perception: Four futures

3 Jul 2024 23:06 UTC
12 points

# Conscious AI: Will we know it when we see it? [Conscious AI & Public Perception]

4 Jul 2024 20:30 UTC
13 points

# How to reduce risks related to conscious AI: A user guide [Conscious AI & Public Perception]

5 Jul 2024 14:19 UTC
9 points

# The case for conscious AI: Clearing the record [AI Consciousness & Public Perception]

5 Jul 2024 20:29 UTC
3 points

# [Question] AI consciousness & moral status: What do the experts think?

6 Jul 2024 15:27 UTC
−1 points

# Demonstrate and evaluate risks from AI to society at the AI x Democracy research hackathon

19 Apr 2024 14:46 UTC
24 points
(www.apartresearch.com)

# An AI Race With China Can Be Better Than Not Racing

2 Jul 2024 17:57 UTC
18 points

# [Question] How bad would AI progress need to be for us to think general technological progress is also bad?

6 Jul 2024 18:44 UTC
10 points

# AI-nuclear integration: evidence of automation bias from humans and LLMs [research summary]

27 Apr 2024 21:59 UTC
17 points

# Epoch AI is Hiring an Economics of AI Researcher

3 May 2024 0:03 UTC
24 points
(careers.rethinkpriorities.org)

# Fluent, Cruxy Predictions

10 Jul 2024 20:34 UTC
15 points

# The Age of EM

9 May 2024 12:17 UTC
0 points
(ageofem.com)

# Report on the Desirability of Science Given New Biotech Risks

17 Jan 2024 19:42 UTC
78 points

# No “Zero-Shot” Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance

14 May 2024 23:57 UTC
36 points
(arxiv.org)

# Digital Agents: The Future of News Consumption

16 May 2024 8:12 UTC
9 points
(echoesandchimes.com)

# Forecasting: the way I think about it

16 May 2024 19:01 UTC
16 points

# Big Picture AI Safety: Introduction

23 May 2024 11:28 UTC
32 points

# What will the first human-level AI look like, and how might things go wrong?

23 May 2024 11:28 UTC
12 points

# What should AI safety be trying to achieve?

23 May 2024 11:28 UTC
13 points

# 2024 State of AI Regulatory Landscape

28 May 2024 12:00 UTC
12 points
(www.convergenceanalysis.org)

# Metaculus World Map Experiment

16 Jul 2024 18:19 UTC
18 points
(www.metaculus.com)

# [Question] PhDs in Data Science and Governmental Resource Allocation

5 Jun 2024 0:48 UTC
1 point

# Underfunding of breakthrough treatments for addiction and overdose—looking for help

31 Jan 2024 10:38 UTC
28 points

# Doing a Basic Life-Focused Cost-Benefit Analysis

26 Jan 2024 18:44 UTC
16 points

# Gaia Network: An Illustrated Primer

26 Jan 2024 11:55 UTC
2 points

# Summary: Maximal Cluelessness (Andreas Mogensen)

6 Feb 2024 14:49 UTC
28 points

# UPDATE: Critical Failures in the World Happiness Report’s Model of National Satisfaction

24 Feb 2024 4:54 UTC
120 points

# xAI raises $6B

5 Jun 2024 15:26 UTC
16 points
(x.ai)

# OpenAI and Anthropic Donate Credits for AI Forecasting Benchmark Tournament

17 Jul 2024 21:50 UTC
2 points

# Launching the Respiratory Outlook 2024/25 Forecasting Series

17 Jul 2024 19:51 UTC
5 points
(www.metaculus.com)

# Probabilities, Prioritization, and ‘Bayesian Mindset’

4 Apr 2023 10:16 UTC
55 points

# Five slightly more hardcore Squiggle models.

10 Oct 2022 14:42 UTC
31 points

# $5k challenge to quantify the impact of 80,000 hours’ top career paths

23 Sep 2022 11:32 UTC
126 points

# [Question] Is there a good web app for doing the “equivalent bet test” from “How To Measure Anything”?

10 Nov 2022 14:17 UTC
14 points

# A concern about the “evolutionary anchor” of Ajeya Cotra’s report on AI timelines.

16 Aug 2022 14:44 UTC
75 points
(nunosempere.com)

# Quantifying the impact of grantmaking career paths

30 Oct 2022 21:00 UTC
32 points

# Beyond Simple Existential Risk: Survival in a Complex Interconnected World

21 Nov 2022 14:35 UTC
84 points

# Estimating value from pairwise comparisons

5 Oct 2022 11:23 UTC
34 points
(blog.jonasmoss.com)

# Metaculus Launches the ‘Forecasting Our World In Data’ Project to Probe the Long-Term Future

14 Oct 2022 17:00 UTC
65 points
(www.metaculus.com)

# Announcing Squigglepy, a Python package for Squiggle

19 Oct 2022 18:34 UTC
90 points
(github.com)

# Superforecasting Long-Term Risks and Climate Change

19 Aug 2022 18:05 UTC
48 points

# Ten Commandments for Aspiring Superforecasters

25 Apr 2018 5:07 UTC
21 points

# The collaborative exploration of alternative futures—a free to use online tool

26 Aug 2022 14:37 UTC
11 points

# Reminder: you can donate your mana to charity!

29 Nov 2022 18:30 UTC
25 points
(manifold.markets)

# Announcing Squiggle: Early Access

3 Aug 2022 0:23 UTC
147 points

# AI Forecasting Research Ideas

17 Nov 2022 17:37 UTC
78 points

# Prediction market does not imply causation

10 Oct 2022 20:37 UTC
29 points
(dynomight.net)

# Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty

3 Aug 2020 16:17 UTC
97 points

# Forecasting Newsletter for October 2022

15 Nov 2022 17:31 UTC
17 points
(forecasting.substack.com)

# Register your predictions for 2023

26 Dec 2022 20:49 UTC
42 points

# AI X-Risk: Integrating on the Shoulders of Giants

1 Nov 2022 16:07 UTC
34 points

# Probability of extinction for various types of catastrophes

9 Oct 2022 15:30 UTC
16 points

# The Pentagon claims China will likely have 1,500 nuclear warheads by 2035

12 Dec 2022 18:12 UTC
34 points
(media.defense.gov)

# [Question] Is now a good time to advocate for prediction market governance experiments in the UK?

21 Oct 2022 11:51 UTC
9 points

# When reporting AI timelines, be clear who you’re deferring to

10 Oct 2022 14:24 UTC
120 points

# Introduction to Fermi estimates

26 Aug 2022 10:03 UTC
46 points
(nunosempere.com)

# COVID-19 in rural Balochistan, Pakistan: Two interviews from May 2020

16 Dec 2022 11:33 UTC
22 points

# Tracking the money flows in forecasting

9 Nov 2022 16:10 UTC
76 points
(nunosempere.com)

# Comparing top forecasters and domain experts

6 Mar 2022 20:43 UTC
205 points

# Samotsvety’s AI risk forecasts

9 Sep 2022 4:01 UTC
175 points

# Metaforecast late 2022 update: GraphQL API, Charts, better infrastructure behind the scenes.

4 Nov 2022 17:56 UTC
39 points

# Guesstimate Algorithm for Medical Research

22 Sep 2022 21:40 UTC
37 points
(acesounderglass.com)

# AI Timelines: Where the Arguments, and the “Experts,” Stand

7 Sep 2021 17:35 UTC
88 points

# [job] Metaculus has new software roles

7 Nov 2022 21:19 UTC
9 points
(apply.workable.com)

# How Many Lives Does X-Risk Work Save From Nonexistence On Average?

8 Dec 2022 21:44 UTC
34 points

# A common failure for foxes

14 Oct 2022 22:51 UTC
22 points

# Predicting Open Phil Grants

23 Jul 2021 14:00 UTC
57 points

# Introducing Metaforecast: A Forecast Aggregator and Search Tool

7 Mar 2021 19:03 UTC
132 points

# Calibrate—New Chrome Extension for hiding numbers so you can guess

7 Oct 2022 11:21 UTC
26 points

# Guesstimate: An app for making decisions with confidence (intervals)

30 Dec 2015 17:30 UTC
63 points

# Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post

15 Feb 2019 19:14 UTC
79 points

# Cost-effectiveness of operations management in high-impact organisations

27 Nov 2022 10:33 UTC
48 points

# Predict which posts will win the Criticism and Red Teaming Contest!

27 Sep 2022 22:46 UTC
21 points
(manifold.markets)

# Make your own cost-effectiveness Fermi estimates for one-off problems

11 Dec 2014 11:49 UTC
23 points

# Metaculus Announces The Million Predictions Hackathon

10 Nov 2022 20:00 UTC
20 points
(metaculus.medium.com)

# Biological Anchors external review by Jennifer Lin (linkpost)

30 Nov 2022 13:06 UTC
36 points

# Simple estimation examples in Squiggle

2 Sep 2022 9:37 UTC
52 points

# Samotsvety Nuclear Risk Forecasts — March 2022

10 Mar 2022 18:52 UTC
155 points

# [Linkpost] Dan Luu: Futurist prediction methods and accuracy

15 Sep 2022 21:20 UTC
64 points
(danluu.com)

# Is AI forecasting a waste of effort on the margin?

5 Nov 2022 0:41 UTC
10 points

# Use resilience, instead of imprecision, to communicate uncertainty

18 Jul 2020 12:09 UTC
103 points

# Creating a database for base rates

12 Dec 2022 10:05 UTC
74 points

# Prediction Bank: A way around current prediction market regulations?

25 Jan 2022 4:21 UTC
25 points

# Estimating the Average Impact of an ARPA-E Grantmaker

1 Dec 2022 6:34 UTC
22 points

# “Technological unemployment” AI vs. “most important century” AI: how far apart?

11 Oct 2022 4:50 UTC
17 points
(www.cold-takes.com)

# Metaculus Beginner Tournament for New Forecasters

6 Jan 2023 2:35 UTC
33 points

# Enter Scott Alexander’s Prediction Competition

5 Jan 2023 20:52 UTC
18 points

# Metaculus Year in Review: 2022

6 Jan 2023 1:23 UTC
25 points
(metaculus.medium.com)

# [Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration

19 Dec 2019 16:36 UTC
32 points

# [Part 1] Amplifying generalist research via forecasting – models of impact and challenges

19 Dec 2019 18:16 UTC
60 points

# Is anyone else also getting more worried about hard takeoff AGI scenarios?

9 Jan 2023 6:04 UTC
19 points

# Against using stock prices to forecast AI timelines

10 Jan 2023 16:04 UTC
18 points

# [Rumour] Microsoft to invest \$10B in OpenAI, will receive 75% of profits until they recoup investment: GPT would be integrated with Office

10 Jan 2023 23:43 UTC
25 points

# Forecasting could use more gender diversity

13 Jan 2023 19:27 UTC
137 points

# Practicing my Handwriting in 1439

3 Feb 2024 13:22 UTC
19 points
(www.maximum-progress.com)

# [Question] Should we have Metaculus questions for when each major EA organization dissolves and if so, how should they be worded?

20 Jan 2023 20:45 UTC
25 points

# What a compute-centric framework says about AI takeoff speeds

23 Jan 2023 4:09 UTC
189 points
(www.lesswrong.com)

# Metaculus Launches Climate Tipping Points Tournament With The Federation of American Scientists

27 Jan 2023 19:33 UTC
21 points
(www.metaculus.com)

# Literature review of Transformative Artificial Intelligence timelines

27 Jan 2023 20:36 UTC
148 points

# How to make independent research more fun (80k After Hours)

17 Mar 2023 22:25 UTC
28 points
(80000hours.org)

# Forecasting tools and Prediction Markets: Why and How

31 Jan 2023 12:55 UTC
18 points

# More Is Probably More—Forecasting Accuracy and Number of Forecasters on Metaculus

31 Jan 2023 17:20 UTC
36 points

# Eli Lifland on Navigating the AI Alignment Landscape

1 Feb 2023 0:07 UTC
48 points
(quri.substack.com)

# Wits & Wagers: An Engaging Game for Effective Altruists

1 Feb 2023 9:30 UTC
31 points

# Forecasting Our World in Data: The Next 100 Years

1 Feb 2023 22:13 UTC
97 points
(www.metaculus.com)

# Epoch Impact Report 2022

2 Feb 2023 13:09 UTC
81 points
(epochai.org)

# Pandemic Prediction Checklist: H5N1

5 Feb 2023 14:56 UTC
70 points

# Forecast your 2024 with Fatebook

5 Jan 2024 12:40 UTC
20 points
(fatebook.io)

# AGI in sight: our look at the game board

18 Feb 2023 22:17 UTC
25 points

# Curated blind auction prediction markets and a reputation system as an alternative to editorial review in news publication.

15 Feb 2023 14:26 UTC
10 points

# Announcing Confido 2.0: Promoting the uncertainty-aware mindset in orgs

10 Jan 2024 11:45 UTC
20 points

# Manifold Markets Charity program ending March 1st

18 Feb 2023 2:12 UTC
28 points
(manifoldmarkets.notion.site)

# Manifund Impact Market / Mini-Grants Round On Forecasting

24 Feb 2023 6:14 UTC
59 points
(astralcodexten.substack.com)

# [Question] Can we estimate the expected value of human’s future life(in 500 years)

25 Feb 2023 15:13 UTC
5 points

# Competition for “Fortified Essays” on nuclear risk

17 Nov 2021 20:55 UTC
35 points
(www.metaculus.com)

# Launching the INFER Forecasting Tournament for EA uni groups

31 Mar 2022 6:25 UTC
46 points

# Some history topics it might be very valuable to investigate

8 Jul 2020 2:40 UTC
91 points

# Announcing the Nuclear Risk Forecasting Tournament

16 Jun 2021 16:12 UTC
38 points

# The chance of accidental nuclear war has been going down

31 May 2022 14:48 UTC
66 points
(www.pasteurscube.com)

# [Question] How can I bet on short timelines?

7 Nov 2020 12:45 UTC
33 points

# Scoring forecasts from the 2016 “Expert Survey on Progress in AI”

1 Mar 2023 14:39 UTC
204 points

# Forecasting Newsletter: August 2020.

1 Sep 2020 11:35 UTC
22 points

# [Question] Predictive Performance on Metaculus vs. Manifold Markets

3 Mar 2023 19:39 UTC
111 points

# Forecasting the cost-effectiveness of trying something new

3 Apr 2023 12:29 UTC
46 points

# Predicting the cost-effectiveness of deploying a new intervention

10 Apr 2023 9:07 UTC
26 points

# How much can we learn from other people’s guesses?

8 Mar 2023 3:29 UTC
5 points