No worries, I’m glad you find these critiques helpful!
I think the identical clone thing is an interesting thought experiment, and one that perhaps reveals some differences in worldview. I think duplicating Ava a couple of times would lead to a roughly linear increase in output, sure: but if you kept duplicating you’d run into diminishing returns. A large software company whose engineers were entirely replaced with Avas would be a literal groupthink factory: all of Ava’s blind spots and biases would be completely entrenched, making the whole enterprise brittle.
I think the push and pull of different personalities is essential to creative production in science. If you look at the history of scientific developments, progress is rarely the work of a single genius: more typically it is driven by collaborations and fierce disagreements.
With regard to comment 1: yeah, “accuracy” is an imperfect proxy, but I think it makes more sense than “number of tasks done” as a measure of algorithmic progress. This seems like an area where quality matters more than quantity. If I’m using ChatGPT to generate ideas for a research project, will running five different instances make the final ideas five times as good?
I feel like there’s a hidden assumption here that AI will at some point switch from acting like LLMs act in reality to acting like a “little guy in the computer”. I don’t think this is the case; I think AI may end up having different advantages and disadvantages compared to human researchers.