1) I see a trend that seems dangerous in the way new EAs concerned about the far future think about where to donate money. It goes:
I am an EA and care about impact and neglectedness → Existential risk dominates my considerations → AI is the most important risk → Donate to MIRI.
The last step frequently involves very little thought; it borders on a cached thought.
How would you donate your X-risk money right now if MIRI did not exist? Which other researchers or organizations should donors who are concerned about X-risk, and persuaded that AI is the key risk, be scrutinizing?
1) Huh, that hasn’t been my experience. We have a number of potential donors who ring us up and ask who in AI alignment needs money the most at the moment. (In fact, last year, we directed a number of donors to FHI, who had much more of a funding gap than MIRI did at that time.)
2) If MIRI disappeared and everything else was held constant, then I'd be pretty concerned about the lack of people focused on the object-level problems. (I'll talk more about why I think this is so important in a little bit; I'm pretty sure at least one other person asks that question more directly.) There'd still be a few people working on the object-level problems (Stuart Russell, Stuart Armstrong), but I'd want lots more. In fact, that statement is also true in the actual world! We only have three people on the research team right now, remember, with a fourth joining in August.
In other words, if you were to find yourself in a world like this one except without a MIRI, then I would strongly suggest building something like a MIRI :-)