Thanks for writing this, Michael. More people should write up documents like these. I’ve been thinking of doing something similar, but haven’t found the time yet.
I realized reading this that I haven’t thought much about REG. It sounds like they do good things, but I’m a bit skeptical re: their ability to make good use of the marginal donation they get. I don’t think a small budget, by itself, is strong evidence that they could make good use of more money. Can you talk more about what convinced you that they’re a good giving opportunity on the margin? (I’m thinking out loud here, don’t mean this paragraph to be a criticism.)
Re: ACE’s recommended charities. I know you know I think this, but I think it’s better for the health of ACE if their supporters divide their money between ACE and its recommended charities, even if the evidence for its recommended charities isn’t currently as strong as I’d like. But I admit this is based on a fuzzy heuristic, not a knock-down argument.
Re: MIRI. Setting aside what I think of Yudkowsky, I think you may be overlooking the fact that “competence” is relative to what you’re trying to accomplish. Luke Muehlhauser accomplished a lot in terms of getting MIRI to follow nonprofit best practices, and from what I’ve read of his writing, I expect he’ll do very well in his new role as an analyst for GiveWell. But there’s a huge gulf between being competent in that sense and being able to do (or supervise other people doing) groundbreaking math and CS research.
Nate Soares seems as smart as you’d expect a former Google engineer to be, but would I expect him to do anything really groundbreaking? No. Would I expect even the couple of actual PhDs MIRI hired recently to do anything really groundbreaking? They might, but I don’t see why you’d think it likely.
In a way, it was easier to make a case for MIRI back when they did a lot of advocacy work. Now that they’re billing themselves as a research institute, I think they’ve set a much higher bar for themselves, and when it comes to doing research (as opposed to advocacy) they’ve got much less of a track record to go on.
I realized reading this that I haven’t thought much about REG. It sounds like they do good things, but I’m a bit skeptical re: their ability to make good use of the marginal donation they get. I don’t think a small budget, by itself, is strong evidence that they could make good use of more money. Can you talk more about what convinced you that they’re a good giving opportunity on the margin? (I’m thinking out loud here, don’t mean this paragraph to be a criticism.)
Thanks for bringing this up, Topher!
As Michael said, there are various things we would do if we had more funding.
1) REG’s ongoing operations need to be funded. Currently, we have around 6 months of reserves (at the current level of expenses), but ideally we would like to have 12 months. This would enable us to take advantage of more (sometimes unexpected) opportunities and to try new things, because we wouldn’t constantly have to focus on our own funding situation.
2) We could potentially achieve (much) better results with REG by having additional people working on it. The best illustration of this is probably one person we met (by going to poker stops) who has a strong PR & marketing background and has been working in the poker industry for 10 years now (few people have her level of expertise and network in the poker world). She would like to work with us, but we had to decline her for the moment, even though we think it would clearly be worth it to hire her. Another thing we would like to do is hire someone to organise more charity tournaments, establish partnerships with industry-leading organisations or strengthen existing ones, improve member communications, and do social media. There are already several candidates who could do this, but we are hesitant to make this investment since we lack the appropriate funding.
3) Another way we would use additional funds is by working on various REG expansions. We are about to set up two of them, but we won’t have enough resources to make the most of even these two – and there are many more potentially really promising REG expansions that could be done. (The first of the two, which will likely be announced to the respective community in a few days, is “DFS Charity”, a REG for Daily Fantasy Sports – an industry that is currently growing substantially and has a fair share of people with a quantitative mindset similar to that of poker players. The preliminary website can be found at dfscharity.org – please don’t share it widely yet.)
I hope this helped!
In a way, it was easier to make a case for MIRI back when they did a lot of advocacy work. Now that they’re billing themselves as a research institute, I think they’ve set a much higher bar for themselves, and when it comes to doing research (as opposed to advocacy) they’ve got much less of a track record to go on.
To put this in context: the emerging consensus is that publicly advocating for x-risk reduction in the area of AI is counterproductive, and that it is better to network with researchers directly – something that may be best done by performing relevant research.
In a way, it was easier to make a case for MIRI back when they did a lot of advocacy work. Now that they’re billing themselves as a research institute, I think they’ve set a much higher bar for themselves, and when it comes to doing research (as opposed to advocacy) they’ve got much less of a track record to go on.
What are the best groups that are specifically doing advocacy for (against?) AI risk, or existential risks in general?
If I had to guess, I would guess FLI, given their ability to at least theoretically use the money for grant-making. Though after Elon Musk’s $10 million donation, this cause area seems to be short on room for more funding.
Although FLI was only able to grant a very small fraction of the funds that researchers applied for, and many organisations have scope for expansion beyond the grants they received.
Can you talk more about what convinced you that they’re a good giving opportunity on the margin?
I asked Tobias Pulver about this specifically. He told me about their future plans and how they’d like to use marginal funds. There are things they would have done if they’d had more money, but couldn’t. I don’t know if they’re okay with me speaking about this publicly, but I invite Tobias or anyone else at REG to comment on this.
I know you know I think this, but I think it’s better for the health of ACE if their supporters divide their money between ACE and its recommended charities, even if the evidence for its recommended charities isn’t currently as strong as I’d like.
If ACE thought this was best, couldn’t it direct some of the funds I donate to its top charities? (This is something I probably should have considered and investigated, although it’s moot since I’m not planning on donating directly to ACE.)
Would I expect even the couple of actual PhDs MIRI hired recently to do anything really groundbreaking? They might, but I don’t see why you’d think it likely.
AI safety is such a new field that I don’t expect you need to be a genius to do anything groundbreaking. MIRI researchers are probably about as intelligent as most FLI grantees. But I expect them to be better at AI safety research because MIRI has been working on it for longer and has a stronger grasp of the technical challenges.
AI safety is such a new field that I don’t expect you need to be a genius to do anything groundbreaking.
They claim to be working on areas like game theory, decision theory, and mathematical logic, which are all well-developed fields of study. I see no reason to think those fields have lots of low-hanging fruit that would allow average researchers to make huge breakthroughs. Sure, they have a new angle on those fields, but does a new angle really overcome their lack of an impressive research track record?
But I expect them to be better at AI safety research because MIRI has been working on it for longer and has a stronger grasp of the technical challenges.
Do they have a stronger grasp of the technical challenges? They’re certainly opinionated about what it will take to make AI safe, but their (public) justifications for those opinions look pretty flimsy.
Can you talk more about what convinced you that they’re a good giving opportunity on the margin?
I asked Tobias Pulver about this specifically. He told me about their future plans and how they’d like to use marginal funds. There are things they would have done if they’d had more money, but couldn’t. I don’t know if they’re okay with me speaking about this publicly, but I invite Tobias or anyone else at REG to comment on this.
I heard (albeit second-hand, and last year) that two people involved, Lukas Gloor and Tobias Pulver, said they thought the minimal share of GBS/EAF manpower being invested in REG (1.5 FTEs) was sufficient.