Good note, agreed that it’s better to centralize forecasts on the LW thread!
Oh this looks really interesting, I’ll check it out, thanks for linking!
The probability of an interval is the area under the graph! Currently, it’s set to 0% that any of the disaster scenarios kills fewer than one person. I agree this is probably incorrect, but I didn’t want to make any other assumptions about points they didn’t specify. Here’s a version that explicitly states that.
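To make the “area under the graph” point concrete, here’s a minimal Python sketch of computing an interval’s probability from a density, using a standard normal purely as a stand-in for whatever distribution a snapshot encodes:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of a normal distribution (stand-in for the snapshot's curve)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def interval_probability(pdf, lo, hi, steps=10_000):
    """Approximate the area under the density between lo and hi (trapezoid rule)."""
    h = (hi - lo) / steps
    total = 0.5 * (pdf(lo) + pdf(hi)) + sum(pdf(lo + i * h) for i in range(1, steps))
    return total * h

# For a standard normal, P(-1 <= X <= 1) ≈ 0.6827
p = interval_probability(normal_pdf, -1.0, 1.0)
```

The same integration applies to any density: an interval assigned zero area gets zero probability, which is exactly the “kills fewer than one person” situation above.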
Thank you for putting this spreadsheet database together! This seemed like a non-trivial amount of work, and it’s pretty useful to have it all in one place. Similar to other comments, seeing this spreadsheet really made me want more consistent questions and forecasting formats such that all these people can make comparable predictions, and also to see the breakdowns of how people are thinking about these forecasts (I’m very excited about people structuring their forecasts more into specific assumptions and evidence).
I thought the 2008 GCR questions were really interesting, and plotted the median estimates here. I was surprised by / interested in:
How many more deaths were expected from wars than from other disaster scenarios
For superintelligent AI, most of the probability mass was on < 1M deaths, but extinction was given a relatively high probability (5%)
A natural pandemic was seen as more likely to cause > 1M deaths than an engineered pandemic (although less likely to cause > 1B deaths)
(I can’t figure out how to make the image bigger, but you can click the link here to see the full snapshot)
I just eyeballed the worst to best case for each revenue source (and based on general intuitions about e.g. how hard it is to start a podcast). Yeah, this makes a lot of sense – we’ve thought about showing expected value in the past so this is a nice +1 to that.
Gwern’s comment was really helpful to see the different paradigms, thanks for sharing! This reasoning makes sense to me in terms of increasing compute—I could see this pushing me slightly more towards shorter timelines, although I’d want to spend a lot longer researching this.
Yeah, I mostly focused on the Q1 question so didn’t have time to do a proper growth analysis across 2021 – I just did 10% growth each quarter and summed that for 2021, and it looked reasonable given the EA TAM. This was a bit of a ‘number out of the air,’ and in reality I wouldn’t expect it to be the same growth rate across all quarters. Definitely makes sense that you’re not just focusing on the EA market – the market for general productivity services in the US is quite large! I looked briefly at the subscriptions for top productivity podcasts on Castbox (e.g. Getting Things Done, 5am miracle), which suggests lots of room for growth (although I imagine podcast success is fairly power law distributed).
There isn’t a way to get the expected value, just the median currently (I had a bin in my snapshot indicating a median of $25,000). I’m curious what makes the expected value more useful than the median for you?
Here’s my Q1 2021 prediction, with more detailed notes in a spreadsheet here. I started out estimating the size of the market, to get reference points. Based on very rough estimates of CEA subscriptions, # of people Effective Altruism Coaching has worked with, and # of people who have gone through a CFAR workshop, I estimated the number of EAs who are interested enough in productivity to pay for a service to be ~8000. The low number of people who have done Effective Altruism Coaching (I estimated 100, but this is an important assumption that could be wrong since I don’t think Lynette has published this number anywhere) suggests a range for your course (which is more expensive) of ~10 to 45 people in Q1. Some other estimates, which are in the spreadsheet linked above, gave me a range of $8,000 to $42,000. I didn’t have enough time to properly look into 2021 as a whole, so I just did a flat 10% growth rate across all the numbers and got this prediction. Interestingly, I notice a pressure to err on the side of optimistic when publicly evaluating people’s companies/initiatives.
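The flat-growth extrapolation can be sketched as follows. Note the assumptions: the Q1 range comes from my rough market sizing, and the 10% quarterly growth is a number picked for convenience, not data:

```python
# Hypothetical Q1 2021 revenue range from the rough market sizing above
q1_low, q1_high = 8_000, 42_000
growth = 0.10  # assumed flat quarter-over-quarter growth rate

def year_total(q1_revenue, growth_rate, quarters=4):
    """Sum revenue across the year, growing each quarter by growth_rate."""
    return sum(q1_revenue * (1 + growth_rate) ** q for q in range(quarters))

low_2021 = year_total(q1_low, growth)    # ≈ $37,128
high_2021 = year_total(q1_high, growth)  # ≈ $194,922
```

In reality I’d expect growth to vary quarter to quarter, so this is best read as a rough anchor rather than a model.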
Your detailed notes were very helpful in this. I noticed that I wanted more information on:
The feedback you got from the first course. How many of them would do it again or recommend it to a friend?
More detail on your podcast plans. I didn’t fully understand the $10 lessons – I assumed it was optional $10 lessons attached to each podcast, but this may be wrong.
How much you’re focusing on EAs. The total market for productivity services is a lot bigger (here’s an estimate of a $1B market value for life coaching, which encompasses productivity coaching).
Do these estimates align with what you’re currently thinking? Are there any key assumptions I made that you disagree with? (here are blank distributions for Q1 and 2021 if you want to share what you’re currently projecting).
You should be able to access the doc from the link in my comment now! That’s useful feedback re: selecting a range and seeing the probability. You can currently see the probability of an interval by defining the interval, leaving the prob blank, and hovering over the bin, but I like the solution you described.
Yeah I could definitely see it being sooner, but didn’t find any sources that convinced me it would be more likely in the next 10 years than later – what’s driving your shorter timelines?
Here’s my prediction for this. It’s pretty uncertain, and I expect others have perspectives which could narrow the range on this forecast. Some thoughts:
Although the same algorithms can generalize across games, we’re still at the stage where agents have to be trained on individual games
It’s really hard to quickly get a sense of how this will progress and what the challenges are without knowing more about the technical research
Given that, my prediction is very uncertain over the range, but bounded by AGI timelines
Does this seem in line with what you expected? Do you know of any good ways to estimate how fast this kind of research will progress? If anyone else has insight that would increase the certainty over a range, you can edit my snapshot or create your own here.
This was really interesting to forecast! Here’s my prediction, and my thought process is below. I decomposed this into several questions:
Will OpenAI commercialize the API?
94% – this was the intention behind releasing the API, although the potential backlash adds some uncertainty 
When will OpenAI commercialize the API? (~August/September)
They released the API in June and indicated a 2 month beta, so it would begin generating revenue in August/September 
Will the API reach $100M revenue? (90%)
Eliezer is willing to bet there’ll be $1B in revenue from GPT-like services in 2025. This is broader than just the revenue from the OpenAI API, but it’s also a lot more than $100M
A tiny list of industries OpenAI API will affect, to give reference points:
Journalism: $63B in 2014 in the US ($100M is ~0.16% of this)
Gaming: $159B globally in 2020 ($100M is ~0.06% of this)
Chatbots: $2.6B globally in 2019 ($100M is ~3.8% of this)
It seems absurd to me that OpenAI wouldn’t generate $100M from the API *at some point*, but I’ve adjusted down because it’s easy to be overconfident about exciting tech like this
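The share comparisons above are simple to reproduce (industry figures are the rough ones quoted, from different years):

```python
target = 100e6  # the $100M revenue threshold

# Rough industry sizes quoted above (different years, rounded)
industries = {
    "US journalism (2014)": 63e9,
    "global gaming (2020)": 159e9,
    "global chatbots (2019)": 2.6e9,
}

# $100M as a share of each industry, in percent
shares = {name: 100 * target / size for name, size in industries.items()}
# ≈ 0.16% of journalism, ≈ 0.06% of gaming, ≈ 3.8% of chatbots
```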
If it does reach $100M, when will it?
This article suggests that SaaS companies started in the last 15 years took 8 years to reach $100M ARR
The question asks about total (cumulative) revenue, not ARR, so the time to reach $100M should be a lot shorter
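As a toy illustration of why cumulative revenue crosses $100M before ARR does: if ARR ramped linearly from $0 to $100M over the 8-year benchmark, cumulative revenue would hit $100M in about half that time. This is just a sketch under an assumed linear ramp, not a forecast:

```python
import math

# Toy assumption: ARR grows linearly from $0 to $100M over 8 years
# (the SaaS benchmark quoted above).
years_to_100m_arr = 8
arr_growth_per_year = 100e6 / years_to_100m_arr  # $12.5M of new ARR per year

# Cumulative revenue is the area under the ARR ramp: 0.5 * rate * t^2.
# Solve 0.5 * rate * t^2 = 100e6 for t:
t_cumulative = math.sqrt(2 * 100e6 / arr_growth_per_year)  # ≈ 4 years
```

Real revenue curves are convex rather than linear, which would shift this further, but the direction of the adjustment is the same.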
What do you think? Are you more bullish on it generating $100M sooner? (My median was April 17, 2022 – this seems like it could be a bit late, but ultimately I’m not that certain within the 2021–2024 range). Here’s a blank distribution if you want to make your own!
This was pretty difficult to forecast in a limited amount of time, so you should take my prediction with a large grain of salt. Broadly, I thought about this as:
How likely is the 1000th baby to involve iterated embryo selection?
There’s a lot of controversy around genetic manipulation for ability, and it’s possible that stem cell gamete reproduction is regulated such that you can only use it as an alternative fertility treatment
e.g. controversy around the ethics of the genetic relationship between parents and children (see this series of papers for an overview)
I think 1000 babies is sufficiently small that it could still be a niche fertility treatment (rather than mass iterated embryo selection), but I could be persuaded otherwise
If the 1000th baby does involve iterated embryo selection, what is the gain in IQ we would expect? (IQ seems like the easiest measure of cognitive ability)
This is pretty hard to estimate.
This paper (Shulman & Bostrom, 2014) suggests a cap of 30 SDs (~300 IQ points). Based on their simulation, choosing one embryo in 10 would lead to a gain of 11.5 points (0.8 SD), and running 10 generations of choosing 1 in 10 embryos would lead to an increase of 130 points (8.6 SD). These estimates may be high – this paper by Karavani, Zuk et al. (2019) suggests the gain from choosing one embryo in 10 is closer to 2.5 IQ points (0.2 SD)
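Since IQ is conventionally scaled with a standard deviation of 15 points, converting between the SD and IQ-point figures quoted above is a simple unit conversion (this reproduces the papers’ own numbers, not a re-derivation of their results):

```python
IQ_SD = 15  # IQ is conventionally scaled so one standard deviation = 15 points

def sd_to_iq_points(sd_gain):
    """Convert a gain expressed in standard deviations to IQ points."""
    return sd_gain * IQ_SD

# Figures quoted from the two papers above (their estimates, not mine)
one_in_ten_shulman = sd_to_iq_points(0.8)   # ≈ 12 points (paper reports 11.5)
ten_generations = sd_to_iq_points(8.6)      # ≈ 129 points (paper reports ~130)
one_in_ten_karavani = sd_to_iq_points(0.2)  # ≈ 3 points (paper reports ~2.5)
```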
This was a really interesting question to look into – what motivated you to ask this? Is there anything you think I missed? (here’s a blank distribution if you want to make your own).
Here’s my prediction! My median is October 3, 2020. If you want to keep checking in on this, the Bureau of Consular Affairs is helpfully tracking their passport backlog and how many they’re processing each week here.
Was this in line with what you were expecting?
Here’s my prediction. Based on this timeline, I started out thinking it would be quite a while (10+ years) before all 50 states legalized recreational marijuana. This paper caused a pretty significant update towards thinking that federal legalization was more likely sooner than I had previously thought. I also found this map useful for getting a quick sense of the current status.
Curious what you think—here’s a blank distribution if you want to make your own.
Here’s my prediction for this! Awesome proposal, I enjoyed reading it. I wrote up more of my thought process here, but a brief overview:
It would help a lot to know the base rate of EA initiatives succeeding past the first year. I couldn’t find any information on this, but it possibly does exist
It wasn’t entirely clear to me what the impact you expect from this project is, which made it hard to estimate cost effectiveness.
I suspect a lot of the indirect impact (building EA connections, converting researchers to EA philosophies) will take a while to manifest
I wanted to know more information about the expected time cost of organizing this, as this would make it less cost effective.
I say this in the post above, but since this may be relevant to decisions you make, I want to caveat that I only spent ~1 hour on this! I’d love to hear if you agree/disagree (here’s a blank distribution for the question if you want to create your own).
You didn’t misunderstand! The intention was that you ask any question that’s interesting to you, including personal questions. I’m assuming you’re more interested in the first question you asked, so I’ll answer that unless you feel otherwise :)