A lot of these considerations feel more compelling if AI timelines are long, or at least not short (with capital being the one consideration going the other way).
I’m not sure why you wouldn’t care about standards if timelines are short. I feel like you should care more about people actually using your work, which might be more likely if you are forced to get product-market fit.
Let’s look at what you wrote under this section and not just the headline.
“They are often difficult to evaluate and lack a natural kill function”
That seems to me like a longer-term issue.
Ok, I did also write:
* Nonprofits have significantly weaker feedback mechanisms compared to for-profits
* Few people are going to complain that you provided bad service when it didn’t cost them anything.
* Most nonprofits are not very ambitious, despite having large moral ambitions.
* It’s challenging to find talented people willing to accept a substantial pay cut to work with you.
* For-profits are considerably more likely to create something that people actually want.
Which I think are all differentially more useful in short-timelines worlds than in long-timelines worlds, as you have less time to mess around and instead need to very quickly work on a useful thing. If you disagree with this, maybe we're talking past each other and I misunderstand your perspective.
Fwiw I’m sympathetic to nonprofits being better on net than for-profits in short-timelines worlds, for reasons that aren’t discussed in this post, e.g. greater freedom to focus narrowly on useful work, particularly in cases where there isn’t a viable business model.
Points 1, 2 and 5: These all seem like variants of feedback being good. Seems like if timelines are short, you probably want to take a shot directly at the goal/what needs to be done[1], even if the feedback mechanism isn’t that good. If you don’t take the shot, there’s no guarantee that anyone else will. Whilst if timelines are longer, your risk tolerance will likely be lower, and feedback mechanisms are one key way of reducing that risk.
Point 3: I expect a large proportion of this to be a founder selection effect.
Point 4: Seems to fall more under more capital which I already acknowledged as going the other way.
I suppose this lines up with “greater freedom to focus narrowly on useful work”, which you consider outside the scope of the original article, whilst I see this as directly tied to how much we care about feedback.