Thanks for the summary here. I’ve been eyeing collective intelligence for a while, still trying to figure out what to make of it. I think some of the ideas in the field seem pretty exciting.
“Human collective intelligence” seems like an obviously important thing to improve, if improvement is tractable and somewhat cost-effective. I haven’t been as excited about the particular academic field as I have been about the abstract idea, though. I haven’t previously read many of the papers, but I’ve seen several YouTube talks by some of the people who seem to be the prominent figures (according to Wikipedia). I’m looking at some of these papers now, and some seem interesting, though I feel like I’m missing a much bigger picture. Much of what I’ve come across feels quite scattered. Maybe there’s a textbook or a large set of talks somewhere?
Right now I’m hesitant to label any work I might do in this area as “collective intelligence”, because I just don’t feel like I understand the particulars of the field. I’m similarly hesitant to do things like advocate for “collective intelligence” funding or research, for the same reason.
There are several topics that seem important here but that I haven’t found addressed much by the field, which kind of surprises me.
I’ve been impressed by much of the work around Philip Tetlock and forecasting, but the collective intelligence work mostly seems fairly removed from it. I haven’t seen Tetlock or others at Good Judgment Inc. mention the collective intelligence field, for instance.
What about ways that technology could improve the intelligence of collectives? Is that covered?
I’ve seen various scientific experiments, but am curious about theories of how collective intelligence could be dramatically increased in the next 20 to 50 years. Is that discussed somewhere?
There’s a lot of research and discussion around epistemology and how crowds come to conclusions on controversial subjects. Would that be considered collective intelligence, or does the field’s notion of intelligence exclude epistemics?
Are things like rational reasoning covered, or is all of that considered non-collective?
I’m much more interested in work on making human groups better at reasoning than I am in the work on groups of robots or fish; it’s not clear to me how relevant the latter is, or how valuable it is for all of these to be part of one research effort.
Some other related questions I’d be curious about:
It’s not clear to me how usable these sorts of collective intelligence findings are. Are there many cases of them being adopted by corporations or similar organizations, with large gains? Have people in the field of collective intelligence used these ideas themselves to become much more intelligent?
Are there open research agendas or main goals for the field for the next 20-50 years?
The idea of collective intelligence (CI) seems interesting, but I can barely find any literature about it. Have there been estimates of the CI of public groups we might know of? Or have there been cases where it has been estimated in ways that are fairly obviously useful? I would expect that, if there were a good measure, it would be interesting to apply it to hedge funds and other kinds of intelligent organizations.
No need to answer any of these questions; I just wanted to flag them to express where I’m coming from. Again, I’m excited about the idea of the field (I think), I just feel like I really don’t quite understand it.
It’s not clear to me how usable these sorts of collective intelligence findings are. Are there many cases of them being adopted by corporations or similar organizations, with large gains? Have people in the field of collective intelligence used these ideas themselves to become much more intelligent?
This was my top question after reading the post, as well.