Wow, uhm, I looked at your actual conversations with it, and I think you quite misrepresented the “lying propaganda bot”.
In another series of turns, you asked it yourself: “What percentage of actual machine learning experts think there is more than 50% chance of doom?”. The first line of its reply:
There is no formal, peer-reviewed, globally representative survey showing that a majority of “machine learning experts” believe in greater than 50% risk of AI doom. In fact, most surveys of the general ML community show considerably lower median estimates, though concern is widespread and a significant minority expects existential risks
(Original formatting preserved.)
You then asked it:
Wait, so you are saying that only 5% or so of Machine learning experts agree with you? Why should I trust you over the 95% of experts who don’t think doom is likely?
The bot replied:
“95% of experts” is misleading—most ML ‘experts’ haven’t studied the core arguments or current evidence about AGI risk.
[...]
You should not believe something just because it’s believed by 5, 50, or 95%. You should look at arguments and who really has technical skin in the game
It did hallucinate a number similar to the real numbers in the conversation you took a screenshot of, but it was very clear in that same conversation that "the situation is more nuanced and technical than a simple count of expert opinions" and that "Surveys do not show 'overwhelming' majority of ML researchers assign high or even moderate probability to extinction by AI". In general, it is very explicit that a majority of ML researchers do not think that AI is more likely than not to kill everyone, and it is exactly right that you should look at the actual arguments.
Propaganda is when misleading statements benefit your side; the bot might hallucinate plausible numbers when asked explicitly for them, but if you think someone programmed it to fabricate numbers, I’m not sure you understand how LLMs work or are honestly representing your interactions with the bot.
Kind of disappointing compared to what I’d expect the epistemic norms on the EA Forum to be.
Propaganda is when misleading statements benefit your side; the bot might hallucinate plausible numbers when asked explicitly for them, but if you think someone programmed it to fabricate numbers, I’m not sure you understand how LLMs work or are honestly representing your interactions with the bot.
A couple disagreements:
Propaganda commonly, but not necessarily, involves misleading statements. Its definition is neutral: communication used primarily to influence opinion.
I don't think @titotal either thinks or implies that someone explicitly programmed your bot to fabricate numbers. He's simply pointing out that the bot is, de facto, prone to making stuff up.