OK, as we have communicated, you have filled in what I took to be gaps in your original presentation. It might be useful to review your discussion with the many people who have shown an interest in your work and see whether you can write a final piece that summarizes your position effectively. I, for one, had to interpret your argument and ask for your feedback in order to be sure I was summarizing it correctly.
My position, which stands in contrast to yours, is that current research in AI and robotics could lead to AGI if other circumstances permit it. I don't particularly think it is necessary or a good idea to develop AGI; doing so would only add danger and difficulty to an already difficult world scene (as well as add people to it). But I also think it is important to recognize the implications if it does happen.