Q1: How closely does MIRI currently coordinate with the Long-Term Future Fund (LTFF)?
Q2: How effective do you currently consider [donations to] the LTFF relative to [donations to] MIRI? Decimal coefficient preferred if you feel comfortable guessing one.
Q3: Do you expect the LTFF to become more or less effective relative to MIRI as AI capability/safety progresses?
(I’ve spent a few hours talking to people about the LTFF, but I’m not sure about things like “what order of magnitude of funding did they allocate last year” (my guess without looking it up is $1M, which turns out to be correct!), so take all this with a grain of salt.)
Re Q1: I don’t know; I don’t think that we coordinate very closely.
Re Q2: I don’t really know. When I look at the list of things the LTFF funded in August or April (excluding regrants to orgs like MIRI, CFAR, and Ought), about 40% look meh (~0.5x MIRI), about 40% look like things I’m reasonably glad someone funded (~1x MIRI), about 7% are things I’m really glad someone funded (~3x MIRI), and about 3% are things I wish they hadn’t funded (-1x MIRI). Note that the mean outcomes of the meh, good, and great categories are much higher than the median outcomes; a lot of them are “I think this is probably useless but seems worth trying for value of information”. Apparently this adds up to thinking that they’re 78% as good as MIRI.
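(If you want to check my arithmetic, the weighted average works out as below; I’m treating the remaining ~10% of grants as the excluded regrants, which don’t enter the calculation.)

$$0.40 \times 0.5 + 0.40 \times 1.0 + 0.07 \times 3 + 0.03 \times (-1) = 0.20 + 0.40 + 0.21 - 0.03 = 0.78$$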
Re Q3: I don’t really know. My median outcome is that they turn out to do less well than my estimate above, but I think there’s a reasonable probability that they turn out to be much better than my estimate above, and I’m excited to see them try to do good. This isn’t really tied up with AI capability or safety progressing, though.