How efficiently could MIRI “burn through” its savings if it considered AGI sufficiently likely to be imminent? In other words, if MIRI decided to spend all its savings in a year, how many normal-spending-years’ worth of progress on AI safety do you think it would achieve?
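One hedged way to formalize this question (the notation here is mine, not MIRI's): let \(B\) be MIRI's normal annual budget, \(S\) its total savings, and \(g(x)\) the AI safety progress bought by spending \(x\) within a single year, where \(g\) is presumably concave (crash spending hits diminishing returns). The question then asks for an estimate of
\[ \frac{g(S)}{g(B)} , \]
the number of normal-budget-years of progress that a one-year spend-down of all savings would buy. A ratio well below \(S/B\) would indicate that savings cannot be burned through efficiently.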
Q1: Has MIRI noticed a significant change in funding following the change in disclosure policy?
Q2: If yes to Q1, what was the direction of the change?
Q3: If yes to Q1, were you surprised by the degree of the change?
ETA (edited to add):
Q4: If yes to Q3, in which direction were you surprised?
Given a “bad” AGI outcome, how likely do you think a long-term worse-than-death fate for at least some people would be, relative to extinction?
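To state the quantity being asked for precisely (this framing is mine, not the asker's), the question requests a ratio of two conditional probabilities:
\[ \frac{P(\text{long-term worse-than-death fate for some people} \mid \text{bad AGI outcome})}{P(\text{extinction} \mid \text{bad AGI outcome})} . \]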
Q1: How closely does MIRI currently coordinate with the Long-Term Future Fund (LTFF)?
Q2: How effective do you currently consider [donations to] the LTFF relative to [donations to] MIRI? A decimal coefficient is preferred if you feel comfortable guessing one (see the illustrative example after Q3).
Q3: Do you expect the LTFF to become more or less effective relative to MIRI as AI capabilities and safety research progress?
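To make Q2's coefficient concrete (the number below is purely illustrative, not an estimate for either organization): a coefficient \(c\) would mean that a marginal dollar donated to the LTFF is judged about as effective as \(c\) dollars donated to MIRI, e.g.
\[ c = 0.8 \;\Rightarrow\; \$1 \text{ to the LTFF} \approx \$0.80 \text{ to MIRI}. \]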
Given an aligned AGI, what is your point estimate of the TOTAL (across all of human history) cost, in USD, of having aligned it?
To hopefully spare you a bit of googling without unduly anchoring your thinking: Wikipedia says the Manhattan Project cost $21–23 billion in 2018 USD, with only about 3.7% (roughly $786M) of that going to research and development.
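As a quick consistency check on those figures:
\[ 0.037 \times \$21\text{B} \approx \$780\text{M}, \qquad 0.037 \times \$23\text{B} \approx \$850\text{M}, \]
which brackets the quoted $786M R&D figure.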