I think that OpenAI is not worried about actors like DeepMind misusing AGI, but (a) is worried about actors that might not currently be on most people’s radar misusing AGI, (b) thinks that scaling up capabilities enables better alignment research (though it sees other benefits to scaling up capabilities too), and (c) is earning revenue for reasons other than direct existential risk reduction where it does not see a conflict in doing so.
Thank you.
Thank you for writing this.
Please could you add to the top of the Google doc:
When it was last updated (Edit: I see this is at the bottom of each page but I think the top of the doc might be better)
A little about your expertise and your process for aggregating information, as you have included in the post, but perhaps with more specific details about your work, if you are comfortable sharing that
This would make it easier for people to judge for themselves how much weight to put on your advice.
Thank you for this post. I agree with its central premise and I know that Michelle is already working on an impact evaluation that will contain a lot of this sort of information.
However, your post contains a couple of misleading points that I thought would be worth correcting.
The online-only pledge was launched on 4th April 2014, not in mid-2013, though there are a handful of people who joined online in experiments before this date. (In your Internet Archive link, the form has just been spread over two pages.) For what it’s worth, the increased growth rate from around September 2013 coincided with Ben Clifford starting as Director of Community and encouraging people who had expressed an interest to join.
The cause-neutral pledge wording was added to the website on 5th December 2014. As Greg mentioned, the huge spike in growth over the new year was due to over 100 new members from Ravi Patel’s Facebook campaign. Taking into account the growth rate since mid-January, this appears to be more-or-less unrelated to the pledge wording.
For future reference, it may have been courteous to contact someone at Giving What We Can before posting this. In case that sounds intimidating, I can assure you they are all very friendly :)
(Disclosure: I manage Giving What We Can’s website as a volunteer.)
It’s interesting to me that you refer to (CPU) clock speed. If my understanding is correct, when you change the clock speed of a CPU, you don’t actually change the speed at which signals propagate through the CPU; you just change the length of the delay between consecutive propagations. (Technically, changes in temperature or voltage could have small side-effects on propagation speed, but let’s ignore those for the sake of argument.) It seems to me that the length of the delay is not morally relevant, for the same reason that the length of a period of time during which I am unconscious is not morally relevant, all else being equal. I am curious if you agree, and if so, whether that changes any of your practical conclusions.
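To make the distinction concrete, here is a rough numerical sketch (the frequencies and the 0.25 ns propagation delay are made-up illustrative values, not figures for any real CPU): raising the clock frequency shortens the period between clock edges, while the time for signals to settle through the logic stays the same.

```python
# Illustrative sketch: changing a CPU's clock frequency changes the period
# between clock edges (how often a new step is launched), not how fast
# signals propagate through the logic. All numbers here are made up.

PROPAGATION_DELAY_NS = 0.25  # hypothetical settling time of the logic, fixed

for freq_ghz in (1.0, 2.0, 3.0):
    period_ns = 1.0 / freq_ghz                   # clock period in nanoseconds
    idle_ns = period_ns - PROPAGATION_DELAY_NS   # "dead time" between propagations
    print(f"{freq_ghz:.1f} GHz: period = {period_ns:.3f} ns, "
          f"propagation = {PROPAGATION_DELAY_NS} ns, idle = {idle_ns:.3f} ns")
```

On this picture, a faster clock mostly shrinks the idle time between propagations, which is the part I’m suggesting is not morally relevant.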
For what it’s worth, it seems to me that both digital and biological minds are discrete in an important sense, regardless of whether physics is continuous. Indeed, for a digital simulation of a biological mind to even be possible, it has to rely on a discrete approximation being sufficient. But I think I’d have trouble making that argument precise to your satisfaction, so for now the thought experiment will have to do. Also, thank you for the post; I found it quite thought-provoking!