We’d like to try using this forum to gather possible questions to ask Cullen. Please use this comment chain to ask and rank questions about the Windfall Clause!
(We will attempt, though we can’t guarantee, to ask him the highest-upvoted questions in this thread that weren’t covered in the talk, in order, along with some live questions!)
Cross-posted from Facebook. (I will check both this thread and the FB event for comments.)
Time: 19:00-21:00 (PDT)
How can we ensure that the gains from Transformative AI benefit humanity collectively, rather than just a lucky few? How can we incentivize AI innovation in a way that’s broadly positive for humanity? Setting aside alignment, what are the distributive issues with the status quo in AI profits? What are the practical issues with different distributive mechanisms?
In this talk, Cullen O’Keefe, Research Scientist at OpenAI and Research Affiliate at FHI, will argue for the “Windfall Clause”: in short, the idea that companies should donate excess windfall profits from Transformative AI to the common good.
You may be interested in reading his paper summarizing the core ideas, or his AMA on the EA Forum (links below).
This will be EA San Francisco’s inaugural online event (and only our second general event).
We’re still looking into the best technology for hosting this talk, but please download the Zoom app and create a Zoom account ahead of time.
As this event will be online, I see little reason to restrict it to people living in the Bay Area. Feel free to invite friends from anywhere in the world (in a compatible time zone).
Tentative schedule:
Talk: 7:00-7:25
Q&A: 7:25-8:10
Structured mingling: 8:10-9:00
(Details of exact schedule TBD)
For the sake of everyone’s mental health, we are banning all discussions of The Disease Which Must Not Be Named.
Paper: https://arxiv.org/pdf/1912.11595.pdf
AMA: https://forum.effectivealtruism.org/posts/9cx8TrLEooaw49cAr/i-m-cullen-o-keefe-a-policy-researcher-at-openai-ama
Is it too late to submit new answers?
During a crisis, people tend to implement the preferred policies of whoever seems to be accurately predicting each phase of the problem.
This seems incredibly optimistic.
I edited that section, let me know if there are remaining points of confusion!
Does “people working specifically on AGI” include people working on AI safety, or just capabilities?
Just capabilities (in other words, people working to create AGI), although I think the safety/capabilities distinction is less clear-cut outside of a few dedicated safety orgs like MIRI.
“bullish” in the sense of “thinking transformative AI (TAI) is coming soon”
What do you mean by “experts not working on AGI”?
AI people who aren’t explicitly thinking of AGI when they do their research (I think this correctly describes well over 90% of ML researchers at Google Brain, for example).
Why say “even”?
Because people asking or reading this question who are imagining long timelines might be surprised to see timelines as short as the ones AI experts believe; the second point is clarifying that AGI experts believe timelines are even shorter.
In general, it looks like my language choice was more ambiguous than desirable, so I’ll edit my answer to be clearer!
I also like this quote:
“I wish it need not have happened in my time,” said Frodo. “So do I,” said Gandalf, “and so do all who live to see such times. But that is not for them to decide. All we have to decide is what to do with the time that is given us.”
J.R.R. Tolkien, The Fellowship of the Ring
I think there’s some evidence that Metaculus users, while fairly smart and well-informed, are nowhere near as knowledgeable as a fairly informed EA (perhaps including a typical user of this forum?) on the specific questions around existential and global catastrophic risks.
One example I can point to: for this question on climate change and GCR before 2100 (which has been open since October 2018), a single not-very-informative comment from me was enough to move the community median from 24% to 10%. This suggests to me that Metaculus users did not previously have strong evidence or careful reasoning on this question, or perhaps on GCR-related thinking in general.
Now, you might think that actual superforecasters are better, but based on the comments given so far on COVID-19, I’m unimpressed. In particular, the selected comments point to reference classes that EAs and avid Metaculus users had known were flawed for over a week before the report came out (e.g., using China’s low death count as evidence that other countries can easily replicate it as the default scenario).
Now, COVID-19 is not an existential risk or a GCR, but it is an “out of distribution” problem with clear, fast exponential growth, which seems unusual among the questions superforecasters are known to excel at.
Currently at 200 million a day, though NPR says they’re facing shortages of the materials used to make masks.
Hmm, if everybody stopped eating honey and wild bees didn’t pick up the slack, then presumably farmers would instead pay for commercial beekeeping to pollinate their fields?
One reason to believe otherwise is if you think existential GCBRs will look so radically different that any broader biosecurity preparatory work won’t be useful.
It’s going to go public! I want people to review it lightly in case this type of question leads to information-hazard territory in the answers.
Do you think it makes sense for EAs to treat global health and economic development as the same cause area, given that they seem to be two somewhat separate fields with different metrics, theories of change, institutions, etc.?
(I may not be formulating this question correctly).
On balance, what do you think is the probability that we are at or close to a hinge of history (either right now, this decade, or this century)?
What’s one book that you think most EAs haven’t yet read but should (other than your own, of course)?
Can you describe a typical day in your life with sufficient granularity that readers can have a sense of what “being a researcher at a place like FHI” is like?
Are there any specific natural existential risks significant enough that more than 1% of EA resources should be devoted to them? 0.1%? 0.01%?
What do you think is the biggest mistake that the EA community is currently making?