Please, what AIA organizations? MIRI? And do not worry about offending me. I do not intend to offend; if I did, through my tone or otherwise, I am sorry.
That being said, I wish you would've examined the actual claims I presented. I did not claim AI researchers are worried about a malevolent AI. I am not against researchers; research in robotics, industrial PLCs, nanotech, and so on are fields in their own right. It is donating my income, as an individual, that I take issue with. People can fund whatever they want: a new planetary wing at a museum, research in robotics, research in CS, research in CS philosophy.
Earning to Give, however, does not follow. Thinking about and discussing the risks of strong AI does make sense, and we both seem to agree it is important. But the CS grad students being supported: what makes them different from any random CS grad? Just because they claim to be researching AIA? Following the money, there is no clear answer as to which CS grad students are receiving it. Low or zero transparency. MIRI or not? Am I missing some public information?
Second, what do you define as advanced AI? Before, I said strong AI. Is that what you mean? Is there some sort of AI in between? I'm not aware of one. This is crucially where I split with AI safety. The theory is a belief about the far future. To claim that we're close to developing strong AI is unfounded to me. What in this century is so close to strong AI? Neural networks do not seem to be (from my light research).
I do not believe climate change is as simple to define as a "before" and "after." Perhaps a large rogue solar flare or the Yellowstone supervolcano would be closer analogies. Or perhaps even a time travel analogy would suffice: time travel safety research. There is no tractability or solvability there. [Blank] cannot be defined because it does not exist; unfounded and unknown phenomena cannot be solved. Climate change exists. It is a very real reality. It has solvability. A belief about the future is a poor reason for claiming some sort of tractability for funding. Strong AI safety (singularity safety) has "solvability" as something to think about and discuss, but, again, it does not follow that one should give monetarily. I feel like I'm beating a dead horse with this point.
For the book recommendation, I looked into it. I'd rather read about morality and ethics directly, or delve further into learning Java, Python, Logix5000, LabVIEW, etc.
Please, what AIA organizations? MIRI?
Yes, MIRI is one. FHI is another.
That being said, I wish you would've examined the actual claims I presented. I did not claim AI researchers are worried about a malevolent AI.
You did, however, say “The theoretical threat of a malevolent strong AI would be immense. But that does not mean one has cause or a valid reason to support CS grad students financially.” I assumed you meant that you believed someone was giving an argument along the lines of “since malevolent AI is possible, then we should support CS grads.” If that is not what you meant, then I don’t see the relevance of mentioning malevolent AI.
Since you also stated that you had an issue with me not being charitable, I would ask the same of you: I agree that we should be charitable to each other's opinions.
Having truthful views is not about winning a debate. It's about making sure that you hold good beliefs for good reasons, end of story. I encourage you to treat this conversation not as a way to convince me that I'm wrong, but as a case study of what the current arguments are and whether they are valid. In the end, you don't get points for winning an argument. You get points for actually holding correct views.
Therefore, it's good to make sure that your beliefs actually hold up under scrutiny. Not in a "you can't find the flaw after 10 minutes of self-sabotaging thinking" sort of way, but in a deep-understanding sort of way.
It is donating my income, as an individual, that I take issue with. People can fund whatever they want: a new planetary wing at a museum, research in robotics, research in CS, research in CS philosophy.
I agree people can fund whatever they want. It's important to make a distinction between normative questions and factual ones. It's true that people can fund whatever project they like; however, it's also true that some projects have high value from an impersonal utilitarian perspective. It is this latter category that I care about, which is why I want to find projects with particularly high value. I believe that existential risk mitigation and AI alignment are among these projects, although I fully admit that I may be mistaken.
Earning to Give, however, does not follow. Thinking about and discussing the risks of strong AI does make sense, and we both seem to agree it is important.
If you agree that thinking about something is valuable, why not also agree that funding that thing is valuable? It seems you think the field should get just enough funding to allow certain people to think about the problem, but not too much. I don't see a reason to believe that the field of AI alignment has reached that critical threshold. On the contrary, I believe the field is far from it at the moment.
Following the money, there is no clear answer as to which CS grad students are receiving it. Low or zero transparency. MIRI or not? Am I missing some public information?
I suppose that when you make a donation to MIRI, it's true that you can't be certain about how they spend that money (although I might be wrong about this; I haven't actually donated to MIRI). Generally, though, funding an organization is about whether you think their mission is neglected, and whether you think further money would make a marginal impact in their cause area. This is no different from any other charity that EA-aligned people endorse.
Second, what do you define as advanced AI? Before, I said strong AI. Is that what you mean? Is there some sort of AI in between? I'm not aware of one.
It might be confusing that there are all these terms for AI. To taboo the words "advanced AI," "strong AI," "AGI," and the rest: what I am worried about is an information-processing system that can achieve broad success in cognitive tasks in a way that rivals or surpasses humans. I hope that makes it clear.
This is crucially where I split with AI safety. The theory is a belief about the far future. To claim that we're close to developing strong AI is unfounded to me.
I'm not quite clear on what you mean here. If you mean we are worried about AI in the far future, fine. But then in the next sentence you say that we're worried about being close to strong AI. How can we simultaneously believe both? If AI is near, then I care about the near-term future. If AI is not near, then I care about the long-term future. I do not claim either, however. I think it is an important consideration even if it's a long way off.
Neural networks do not seem to be (from my light research).
This is what I'm referring to when I talk about how important it is to really, truly understand something before forming an informed opinion about it. If you admit that you have only done light research, how can you be confident that you are right? Doing a bit of research might give you an edge for debate purposes, but we are talking about the future of life on Earth here. We really need to know the answers to these questions.
Perhaps a large rogue solar flare or the Yellowstone supervolcano would be closer analogies. Or perhaps even a time travel analogy would suffice: time travel safety research. There is no tractability or solvability there.
Lumping all existential risks into a single category and then asserting that there's no tractability is an oversimplification. What we need first is the probability of any given existential risk occurring. For instance, if scientists discovered that the Yellowstone supervolcano was probably going to erupt sometime in the next few centuries, I'd definitely agree we should do research in that area, and we should fund that research as well. In fact, some research is being done in that area, and I'm happy that it's being done.
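To make that concrete, here is a rough back-of-envelope sketch in Python. Every number in it is a made-up placeholder rather than a claim about the actual risks; the only point is that once you attach even crude probability and tractability estimates to each risk, they stop looking interchangeable, and "no tractability" has to be argued risk by risk.

```python
# Back-of-envelope expected-value comparison of marginal funding across risks.
# All figures are illustrative placeholders, not estimates I am defending.

risks = {
    # name: (chance of catastrophe this century, fraction of that risk
    #        a marginal $1M of research might plausibly remove)
    "Yellowstone supervolcano": (1e-4, 1e-6),
    "unaligned advanced AI":    (1e-2, 1e-5),
    "time travel accident":     (0.0,  0.0),  # no known mechanism, so nothing to buy
}

def expected_risk_reduction(p_catastrophe, fraction_removed):
    """Expected reduction in catastrophe probability bought by a marginal $1M."""
    return p_catastrophe * fraction_removed

for name, (p, tractability) in risks.items():
    print(f"{name:>25}: expected risk reduction per $1M = "
          f"{expected_risk_reduction(p, tractability):.1e}")
```

Whether AI alignment really deserves larger numbers than supervolcano research is exactly the empirical question worth investigating; the sketch only shows why the comparison needs explicit estimates rather than a blanket dismissal.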
A belief about the future is a poor reason for claiming some sort of tractability for funding.
I'd agree with you if it were an idea asserted without evidence or reason. But there is a whole body of arguments about why this is a tractable field, and about what we can do now, today, to make the future safer. Ignorance of these arguments does not mean they do not exist.
Remember, ask yourself first what is true. Then form your opinion. Do not go the other way.
I am not trying to "win" anything. I am stating why MIRI is not transparent and does not deal in scalable issues. For an individual Earning to Give, it does not follow that one should fund such things under the banner of Effective Altruism. Existential risk is important to think about and discuss as individuals. However, funding CS grad students does not make sense in light of Effective Altruism.
Funding does not increase "thinking." The whole point of EA is not to give blindly. For example, giving food aid, although well-meaning, can have a very negative effect (e.g., crowding out the local market). Nonmaleficence should be one's initial position with regard to funding.
Lastly, no, I rarely accept something as true first. I do not first accept the null hypothesis. "But there is a whole body of arguments about why this is a tractable field." What are they? Again, none of the actual arguments were examined: How is MIRI going about tractable, solvable issues? Who at MIRI is getting the funds? How is time travel safety not as relevant as AI safety?
Thanks for this discussion, which I find quite interesting. I think the effectiveness and efficiency of funding research projects concerning risks of AI is a largely neglected topic. I’ve posted some concerns on this below an older thread on MIRI: http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/dce
The primary problem is the lack of transparency on the side of Open Phil concerning the evaluative criteria used in their decision to award MIRI such a large grant.