I have a doc on my computer with some notes on Metaculus questions that I want to see, but either haven’t gotten around to writing up yet, or am not sure how to operationalize. Feel free to take any of them.
Giving now vs. later parameter values
“In 2030, I personally will either donate at least 10% of my income to an EA cause or will work directly on an EA cause full time”
attempting to measure value drift
or maybe ask about Jeff Kaufman or somebody like that because he’s public about his donations
or make a list of people, and ask how many of them will fulfill the above criteria
“According to the EA Survey, what percent of people who donated at least 10% in 2018 will donate at least 10% in 2023?”
Not sure if it’s possible to derive this info
According to David Moss in Rethink Priorities Slack, it’s probably not feasible to get data on this
“When will Founders Pledge’s long-term investment fund make its last grant?”
https://forum.effectivealtruism.org/posts/8vfadjWWMDaZsqghq/long-term-investment-fund-at-founders-pledge
e.g., because its investments run out, because of value drift, or because of expropriation
Have they actually established this fund yet?
“When the long-term investment fund run by Founders Pledge ceases to make grants, will it happen because the fund is seized by an outside actor?”
by a government, etc.
“When will the longest-lived foundation or DAF owned by an EA make its last grant?”
EA defined as someone who identifies as an EA as of this prediction
the DAF must already exist and contain nonzero dollars
question about Rockefeller/Ford/Gates foundation longevity
best achievable QALYs per dollar in 2030 according to ACE, etc.
“Will the US stock market close by 2120?”
A stock market is considered to have closed if all public exchanges cease trading for at least one year
Could also ask about any developed market, but I think it makes most sense to ask about a single country
Open research questions
“By 2040, there will be a broadly accepted answer on how to construct a rank ordering of possible worlds where some of the worlds have a nonzero probability of containing infinite utility.”
“broadly accepted” doesn’t mean everyone agrees with its prescriptions, but at least people agree that it’s internally consistent and largely aligns with intuitions on finite-utility cases
“In 2121, it will be broadly agreed that, all things considered, donations to GiveDirectly were net positive.”
attempt at addressing cluelessness
“broadly agreed” is hard to define in a useful way. it’s already broadly agreed right now, in spite of cluelessness
maybe “broadly agreed among philosophers who have written about cluelessness” but this might limit your sample to like 4 people
“By 2040, there will be a broadly accepted answer on what prior to use for the lifespan of humanity.” see https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1
alternate formulation: Toby Ord and Will MacAskill both agree (to some level of confidence) on the correct prior
“By 3020, a macroscopic object will be observed traveling faster than the speed of light.”
relevant to Beyond Astronomical Waste
Finance
“What annual real return will be realized by the Good Ventures investment portfolio 2022-2031?”
Can be calculated from Form 990-PF, Schedule B, Part II, which gives the gain on any assets held
Might make more sense to look at Dustin Moskovitz’s net worth
But that doesn’t account for spending
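One way the “annual real return” resolution could be computed, as a sketch (the function name and example numbers are mine, not from the question):

```python
# Hypothetical sketch: annualized (geometric) real return from a series of
# annual nominal returns and the matching annual inflation rates.

def annualized_real_return(nominal_returns, inflation_rates):
    """Geometric mean of (1 + nominal) / (1 + inflation), minus 1."""
    growth = 1.0
    for nominal, inflation in zip(nominal_returns, inflation_rates):
        growth *= (1 + nominal) / (1 + inflation)
    return growth ** (1 / len(nominal_returns)) - 1

# Two illustrative years: 10% then 5% nominal, with 3% inflation each year
print(annualized_real_return([0.10, 0.05], [0.03, 0.03]))
```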
“Will the momentum factor have a positive return in the United States 2022-2031?”
Fama/French 12-2 momentum over a total market index
As measured by “Momentum Factor (Mom)” on Ken French Data Library
Gross of costs
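For concreteness, a 12-2 signal compounds returns over months t-12 through t-2 and skips the most recent month. A sketch with my own function name and toy inputs (the question itself would resolve via Ken French’s published factor series, not this code):

```python
# Sketch of a 12-2 momentum signal: cumulative return over months t-12
# through t-2, skipping the most recent month (t-1). Toy inputs only.

def momentum_12_2(monthly_returns):
    """monthly_returns: oldest first, most recent last; needs >= 12 entries."""
    window = monthly_returns[-12:-1]  # months t-12 .. t-2; drop month t-1
    growth = 1.0
    for r in window:
        growth *= 1 + r
    return growth - 1

# A flat 1% per month over the lookback window compounds to ~11.6%
print(momentum_12_2([0.01] * 12))
```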
“Will the Fama-French value factor (using E/P) be positive in the United States 2022-2031?”
Fama-French value over a total market index (not S&P 500), measured with E/P, not B/P
As given by Ken French’s “Portfolios Formed on Earnings/Price” data
Factor is considered positive if the high-E/P (value) 30% portfolio (equal-weighted) outperforms the low-E/P 30% portfolio
E/P chosen because it is less subject to differences in company structure than B/P
“What annualized real return will be obtained by the top decile of momentum stocks in the United States 2022-2031?”
same definitions as previous question
“What will be the magnitude of the S&P 500’s largest drawdown 2022-2031?”
magnitude = percent decline from peak to trough
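The peak-to-trough definition above can be sketched as follows (the price series is made up for illustration):

```python
# Sketch of the drawdown definition: largest percent decline from a running
# peak to a subsequent trough, given a series of index levels.

def max_drawdown(levels):
    """Return the largest peak-to-trough decline as a fraction (0.25 = 25%)."""
    peak = levels[0]
    worst = 0.0
    for level in levels:
        peak = max(peak, level)
        worst = max(worst, (peak - level) / peak)
    return worst

print(max_drawdown([100, 120, 90, 110, 80, 130]))  # trough of 80 vs. the 120 peak -> 1/3
```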
Thanks for these!
Just to be clear, you specifically mean to exclude not-yet-EAs who set up DAFs in, say, 2025?
It might be interesting to have forecasts on the amount of resources expected to be devoted to EA causes in the future, e.g. via more billionaires getting involved. This could be useful for questions like “how fast should Good Ventures be spending their money?” If we expect to have five more equally big donors in 2030, that might suggest they should be spending down faster than if they are still expected to be the biggest donor by a wide margin.
Yes, the intention is to predict the maximum length of time that foundations and DAFs created now (or before now) can continue to exist.
Agreed.
For this, would you prefer to condition on something like there being no transformative AI, or not? I feel like sometimes these questions end up dominated by considerations like this, and it is plausible you care about this answer only conditional on something like this not happening.
The question is intended to look at tail risk associated with stock markets shutting down. Transformative AI may or may not constitute such a risk; for example, the AI might shut down the stock market because it’s going to do something far better with people’s money, or it might shut down the market because everyone is turned into paperclips. So I think it should be unconditional.
That’s pending now, as are a few other questions you may be interested in, though they’re not identical to the ones you list.
I’ll post a response here with a summary in a few weeks, once most of the questions I intend to write are actually live.