What you describe there is probably one of the most similar concepts I’ve seen thus far, but I think a potentially important difference is that I am particularly interested in a system that allows/emphasizes semantically-richer relationships between concepts and things. From what I saw in that post, it looks like the relationships in the project you describe are largely just “X influences Y” or “X relates to/informs us about Y”, whereas the system I have in mind would allow identifying relationships like “X and Y are inconsistent claims,” “Z study had conclusion/finding X,” “X supports Y”, etc.
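To give a rough sense of what I mean, here is a minimal sketch in Python (purely illustrative; the claims and relation names are made up) of statements connected by typed relationships rather than a single generic "relates to" link:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    id: str
    text: str

# Relations are (subject, relation_type, object) triples; the relation type
# carries the semantics ("supports", "inconsistent_with", "had_finding", ...).
claims = {
    "A": Claim("A", "Compute is the main driver of AI progress"),
    "B": Claim("B", "Algorithmic insight, not compute, drives most progress"),
    "S1": Claim("S1", "Study 1 finds compute scaling explains most benchmark gains"),
}

relations = [
    ("A", "inconsistent_with", "B"),   # two claims that cannot both hold
    ("S1", "supports", "A"),           # a study's finding supporting a claim
]

for subj, rel, obj in relations:
    print(f"{claims[subj].text!r} --{rel}--> {claims[obj].text!r}")
```

The point is that "supports," "inconsistent_with," and "had_finding" each carry different implications for how you should update, which a generic edge throws away.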
I used a free/lite/outdated version of Semantica Pro which I was able to get from someone working on it many years ago.
(I’m also working on the project.)
We definitely like the idea of doing semantically richer representation, but several components of the debate seem less about arguments and more about prediction, even though the two are interrelated.
For example,
Argument 1: Analogies to the brain predict that we have sufficient computation to run an AI already.
Argument 2: Training AI systems (or at least hyperparameter search) is more akin to evolving the brain than to running it. (contra 1)
Argument 2a: The compute needed to do this is 30 years away.
Argument 2b (contra 2a): Optimizing directly for our goal will be more efficient.
Argument 2c (contra 2b): We don’t know what we are optimizing for, exactly.
Argument 2d (supporting 2b): We still manage to do things like computer vision.
Each of these has implications for timelines until AI; we don't just want to look at the strength of the arguments, we also want to look at their actual implications for timelines.
Semantica Pro doesn’t do quantitative relationships that allow for simulation of outcomes and uncertainty, like “argument X predicts progress will be normal(50%, 5%) faster.” On the other hand, Analytica doesn’t really do the other half, representing conflicting models. We’re not wedded to it as the only approach, and something like what you suggest is definitely valuable. (But if we didn’t pick something, we could spend the entire time until ASI debating preliminaries or building something perfect for what we want.)
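To make that concrete, here is a rough sketch of what having both halves might look like: typed contra/supports edges between the arguments above, plus a quantitative implication you can actually sample from. Everything in it is invented for illustration (the distribution, the 30-year baseline, the node names), and it is not meant to reflect how Analytica or Semantica Pro actually represent things.

```python
import random

# Arguments from the list above, with typed edges between them.
arguments = {
    "1":  "Brain analogies predict we already have enough compute to run an AI",
    "2":  "Training is more like evolving the brain than running it",
    "2a": "The compute needed for that is ~30 years away",
    "2b": "Optimizing directly for our goal will be more efficient",
}
edges = [
    ("2",  "contra",   "1"),
    ("2a", "supports", "2"),
    ("2b", "contra",   "2a"),
]

# A quantitative implication attached to one argument, e.g.
# "argument 2b predicts progress will be normal(50%, 5%) faster".
def sample_speedup():
    return random.gauss(0.50, 0.05)  # mean 50%, sd 5% (illustrative numbers)

BASELINE_YEARS = 30.0  # hypothetical baseline timeline if 2a holds

def sample_timeline():
    # Conditioning on argument 2b, shorten the baseline accordingly.
    return BASELINE_YEARS / (1.0 + sample_speedup())

samples = sorted(sample_timeline() for _ in range(10_000))
print(f"median ~{samples[len(samples) // 2]:.1f} years, "
      f"90% interval ~({samples[500]:.1f}, {samples[9500]:.1f}) years")
```

The point isn't the numbers; it's that the same structure can carry both the conflicting-models half and a distribution you can push through to a timeline.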
It seems like what we should do is have different parts of the issue represented in different/multiple ways, and given that we’ve been working on cataloging the questions, we’d potentially be interested in collaborating.
Yeah, I definitely felt that one of the downsides of Semantica Pro (or at least my version of it) was the lack of quantitative or even logical (if-then) functionality, which in my mind would be a crucial feature. For example, I would want some kind of logical system that flags claims that depend on an assumption/claim/study that is shown to be flawed (and thus may need to be reevaluated). In my most recent research project, for example, I found a study that used a (seemingly/arguably) flawed experimental design for testing prediction market incentive structures and produced a finding that seemed counterintuitive, at least before taking the design flaws into account. It then went on to be cited by ~50-100 other studies, some of which even referenced it as the basis for their own experimental design.
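Something like the following is what I have in mind: a toy sketch (the claim and study names are invented) where marking one study as flawed automatically flags everything downstream that depends on it for reevaluation.

```python
from collections import deque

# depends_on[X] = list of things X relies on (an assumption, study, or claim).
depends_on = {
    "study_B": ["study_A"],             # B bases its experimental design on A
    "claim_1": ["study_A"],
    "claim_2": ["study_B", "claim_1"],
    "claim_3": ["claim_2"],
}

def flag_downstream(flawed: str) -> set[str]:
    """Return everything that transitively depends on a flawed node."""
    flagged, queue = set(), deque([flawed])
    while queue:
        current = queue.popleft()
        for node, deps in depends_on.items():
            if current in deps and node not in flagged:
                flagged.add(node)
                queue.append(node)
    return flagged

# Mark the original study's design as flawed and see what needs review.
print(flag_downstream("study_A"))   # {'study_B', 'claim_1', 'claim_2', 'claim_3'}
```

In the prediction market case I mentioned, something along these lines would have surfaced the downstream studies for reevaluation as soon as the design flaw was recorded against the original.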
(Venting aside) I’m definitely interested in exploring the idea further.