Can you talk more about what convinced you that they’re a good giving opportunity on the margin?
I asked Tobias Pulver about this specifically. He told me about their future plans and how they’d like to use marginal funds. There are things they would have done if they’d had more money, but couldn’t. I don’t know if they’re okay with me speaking about this publicly, but I invite Tobias or anyone else at REG to comment.
I know you know I think this, but I think it’s better for the health of ACE if its supporters divide their money between ACE and its recommended charities, even if the evidence for those charities isn’t currently as strong as I’d like.
If ACE thought this was best, couldn’t it direct some of the funds I donate to its top charities? (This is something I probably should have considered and investigated, although it’s moot since I’m not planning on donating directly to ACE.)
Would I expect even the couple of actual PhDs MIRI hired recently to do anything really groundbreaking? They might, but I don’t see why you’d think it likely.
AI safety is such a new field that I don’t expect you need to be a genius to do anything groundbreaking. MIRI researchers are probably about as intelligent as most FLI grantees. But I expect them to be better at AI safety research because MIRI has been working on it for longer and has a stronger grasp of the technical challenges.
AI safety is such a new field that I don’t expect you need to be a genius to do anything groundbreaking.
They claim to be working on areas like game theory, decision theory, and mathematical logic, which are all well-developed fields of study. I see no reason to think those fields have lots of low-hanging fruit that would allow average researchers to make huge breakthroughs. Sure, they have a new angle on those fields, but does a new angle really overcome their lack of an impressive research track record?
But I expect them to be better at AI safety research because MIRI has been working on it for longer and has a stronger grasp of the technical challenges.
Do they have a stronger grasp of the technical challenges? They’re certainly opinionated about what it will take to make AI safe, but their (public) justifications for those opinions look pretty flimsy.
I asked Tobias Pulver about this specifically. He told me about their future plans and how they’d like to use marginal funds. There are things they would have done if they’d had more money, but couldn’t. I don’t know if they’re okay with me speaking about this publicly, but I invite Tobias or anyone else at REG to comment.
I heard, albeit second-hand and last year, of two people involved, Lukas Gloor and Tobias Pulver, saying that they thought the minimal share of GBS/EAF manpower being invested in REG (1.5 FTEs) was sufficient.