I think this type of ML research (i.e. trying to train groundbreaking neural networks) is pretty messy and unpredictable; and money and luck are fungible to some extent. It’s not as though, back in 2017, OpenAI’s researchers could predict with probability 1 which ML experiments would succeed, or how to turn $X of GPU hours into an impressive model that would let them raise >$X in the next round.
For example, suppose OpenAI’s researchers ran some expensive experiment in 2017 and did not get impressive results. They would then need to decide whether to give up on that particular approach, or just tweak some hyperparameters and run the experiment again. The amount of funding remaining at that point may well determine that decision.
Again, why does it have to be X=$1B and probability 1?
It seems that if the $30M mattered, then the counterfactual is that, at the end of their runway, they needed to be able to raise $30M at any valuation (rather than $1B) in order to bridge to the more impressive model. There should be a sizeable gap between what constitutes a sufficiently impressive model in those two scenarios. In theory they also had “up to $1B” in grants from their original funders, including Elon, that should have been possible to draw on if needed.
How did you come to the conclusion that funding ML research is “pretty messy and unpredictable”? I’ve seen many ML companies funded over the years as straightforwardly as other tech startups, especially when the founders had great professional backgrounds, as was clearly the case with OAI. Seems like an unnecessary assumption on top of other unnecessary assumptions.
How did you come to the conclusion that funding ML research is “pretty messy and unpredictable”? I’ve seen many ML companies funded over the years as straightforwardly as other tech startups, […]
I think it’s important to distinguish here between companies that intend to use existing state-of-the-art ML approaches (where the innovation is in the product side of things) and companies that intend to advance the state-of-the-art in ML. I’m only claiming that research that aims to advance the state-of-the-art in ML is messy and unpredictable.
To illustrate my point: if we take an extreme version of the messy-and-unpredictable view, we can imagine that OpenAI’s research was like repeatedly drawing balls from an urn, where each draw costs $1M and there is a 1% chance (or whatever) of drawing a Winning Ball (analogous to getting a super impressive ML model). The more funding OpenAI has, the more balls they can draw, and thus the more likely they are to draw a Winning Ball. Giving OpenAI $30M increases their chance of drawing a Winning Ball, though that increase must be small if they already have access to much more funding than $30M (without a super impressive ML model).
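To make that arithmetic concrete, here is a minimal Python sketch of the urn model. The per-draw cost, the 1% success probability, and the $300M baseline budget are all made-up illustrative numbers, not claims about OpenAI’s actual finances:

```python
# Illustrative urn model of the argument above: each draw costs $1M
# and independently yields a Winning Ball with probability 1%.
# All numbers are hypothetical, chosen only to illustrate the point.

P_WIN = 0.01          # assumed per-draw chance of a Winning Ball
COST_PER_DRAW = 1.0   # assumed cost per draw, in $M

def p_at_least_one_win(budget_millions: float) -> float:
    """Probability of drawing at least one Winning Ball with a given budget (in $M)."""
    draws = int(budget_millions // COST_PER_DRAW)
    return 1 - (1 - P_WIN) ** draws

base = 300   # assumed funding already available, in $M (hypothetical)
extra = 30   # the marginal grant, in $M

p_base = p_at_least_one_win(base)
p_more = p_at_least_one_win(base + extra)
print(f"P(win | ${base}M budget): {p_base:.3f}")            # ~0.951
print(f"P(win | ${base + extra}M budget): {p_more:.3f}")    # ~0.964
print(f"marginal gain from the extra ${extra}M: {p_more - p_base:.3f}")  # ~0.013
```

Under these made-up numbers, the extra $30M moves the overall success probability by only about one percentage point, which is the sense in which the marginal grant matters little when much larger funding is already available.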
I understood what you meant before, but still see it as a bad analogy.
For context: I saw many rounds of funding as a board member at Vicarious, which was a pure research lab for most of its life (it later attempted robotics, but that small revenue actually devalued it in the eyes of investors). There, what it took to raise was someone getting excited about the story, plus smaller performance milestones along the way.