Some useful context is that I think a software singularity is unlikely to occur; see this blog post for some arguments. Loosely speaking, under the view expressed in the linked blog post, there aren’t extremely large gains from automating software engineering tasks beyond the fact that these tasks represent a significant (and growing) fraction of white-collar labor by wage bill.
Even if I thought a software singularity were likely to happen in the future, I don’t think this type of work would be bad in expectation, as I continue to think that accelerating AI is likely good for the world. My main argument is that speeding up AI development will hasten large medical, technological, and economic benefits to people alive today, without predictably causing long-term harms large enough to outweigh these clear benefits. For anyone curious about my views, I’ve explained my perspective on this issue at length on this forum and elsewhere.
Note: Matthew’s comment was at negative karma just now. Please don’t vote it into the negative; use the disagree button instead. Even though I don’t find Matthew’s defense persuasive, it deserves to be heard.
I wrote a critique of that article here. TLDR: “It has some strong analysis at points, but unfortunately, it’s undermined by some poor choices of framing/focus that mean most readers will probably leave more confused than when they came”.
“A software singularity is unlikely to occur”: unlikely enough that you’re willing to bet the house on it? Feels like you’re picking up pennies in front of a steamroller.
“I continue to think that accelerating AI is likely good for the world”
AI is already going incredibly fast. Why would you want to throw more fuel on the fire?
Is it that you honestly think AI is moving too slowly at the moment (no offense, but that seems crazy to me), or is your worry that current trends are misleading and AI might slow down in the future?
Regarding the latter, I agree that once timelines get sufficiently long, there might actually be an argument for accelerating them (but in order to reach AGI before biotech causes a catastrophe, rather than for the more myopic reasons you’ve provided). But if your worry is stagnation, why not wait until things actually appear to have stalled and only then consider doing something like this?
Or why not just stay at Epoch, which rested on a much more robust and less fragile theory of action? (Okay, I don’t actually think articles like this are high enough quality to be net-positive, but you were 90% of the way towards having written a really good article. The framing/argument just needed to be a little tighter, which could have been achieved with another round of revisions.)
The main reason not to wait is… missing the opportunity to cash in on the current AI boom.
This is a clear strawman. Matthew has given reasons why he thinks acceleration is good which aren’t this.
I bet the strategic analysis for Mechanize being a good choice (net-positive and positive relative to alternatives) is paper-thin, even given his rough worldview.
That might be true, but it doesn’t make the comment above not a strawman. I’m sympathetic to thinking it’s implausible that Mechanize would be the best thing to do on altruistic grounds even if you share views like those of the founders (because there is probably something more leveraged to do, and because cooperativeness considerations deserve some weight).
Sometimes the dollar signs can blind someone and cause them not to consider obvious alternatives. And they will feel that they made the decision for reasons other than the money, but the money nonetheless caused the cognitive distortion that ultimately led to the decision.
I’m not claiming that this happened here. I don’t have any way of really knowing. But it’s certainly suspicious. And I don’t think anything is gained by pretending that it’s not.