Agreed on the narrow point: anchoring on real data is better than pure vibes, when there is real data.
First, my main complaint about AI 2027 is that they extrapolate from METR data to fit a model while mostly ignoring the heavy caveats that the METR people put on their graph. (This is not unique to AI 2027; Situational Awareness did something similar, and many people extrapolate a lot from benchmarks when this is not warranted or endorsed by the creators of those benchmarks.)
This is an example of what I see as a broad problem in EA/rationality circles, where someone says "bad model better than no model" and uses numbers that are not "empirical with huge error bars" but completely made up.
More on made-up numbers: psychological anchoring makes people say 1% instead of 10^-5 for implausible claims, just because percentages are the typical way of expressing probabilities.
More generally, on community epistemics and why I'm picking on this particular example.
80k made a dramatized video out of AI 2027 for a mass audience. I showed this video to some people in my circle, and their reaction was to dismiss 80k's channel as one more piece of AI hype/doom content. This is similar to what I remember my first reaction being when I encountered 80k, well before learning anything about EA.
They even admitted that they chose AI 2027 in part because "it's a story, so people are compelled to keep watching".
They also said they received criticism for being "too speculative", but I haven't seen them engage with the substance of it, at least in their retrospective. Please correct me if I'm wrong on this last part.
Apologies for my previous claim that 80k admitted a more argument-based video would have depended on preexisting trust; that claim was AI-generated and I was sloppy in checking it (it came from a comment on their retrospective, not from 80k themselves). My trust in AI as a search engine has gone down accordingly.
Hey @Clara Torres Latorre, your point isn't bad, but some of this feels heavily AI-written to me and I don't love it. I could be wrong again (would not be the first time).
The second and last paragraphs were AI-written. For the rest, I used AI to search but double-checked the sources (though not well enough, because it hallucinated a bunch of stuff); the writing itself was mine.
Now it's 100% written by me. I don't know if it was worth my time, but I hate AI slop, so be the change that you want to see in the world, etc.
What seems AI-written about it? (I'm conscious I received a similar flag from you a while back too, hah!)
Read above, she changed it.
excessive colons
"it's not x, it's y"
some language which was technically correct but seemed hollow.
but I'll find it hard to describe exactly why sometimes