Gaah, sorry, I keep forgetting to put links in—APS-AI means Advanced, Planning, Strategically Aware AI—the thing the Carlsmith report talks about. I’ll edit to put links in retroactively.
I’m currently at something like 20% that AI-PONR will be crossed in the next 5 years, so if that doesn’t seem to have happened 5 years from now, that’ll be a 20%-sized blow to my timelines in the usual Bayesian way. It’s important to note that this won’t necessarily lengthen my timelines all things considered, because what happens in those 5 years might be more than a 20% blow to 20+-year timelines. (For example, and this is what I actually think is most likely, 5 years from now the world could look like it does at the end of my short story, in which case I’d have become more confident, not less, that the point of no return will come sometime between 2026 and 2036, because things would be more on track towards that outcome than they currently seem to be.)
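For concreteness, the "20%-sized blow" works out as a standard conditioning step. The bucket boundaries and the masses on the later buckets below are hypothetical illustrations; the only number taken from the text is the 20% on the first bucket:

```python
# Toy version of the Bayesian update described above. The buckets and
# prior masses are hypothetical, except the 0.20 on the first bucket,
# which is the 20% figure from the text.
prior = {
    "PONR in 0-5 years":  0.20,
    "PONR in 5-15 years": 0.40,
    "PONR in 15+ years":  0.40,
}

# Observing "no PONR in the first 5 years" deletes that bucket's mass
# and renormalizes the rest.
evidence_mass = 1.0 - prior["PONR in 0-5 years"]  # 0.80
posterior = {
    bucket: (0.0 if bucket == "PONR in 0-5 years" else mass / evidence_mass)
    for bucket, mass in prior.items()
}
print(posterior)  # the two surviving buckets each go 0.40 -> 0.50
```

The parenthetical's point is that the same 5 years also bring other evidence, which can shift mass between the surviving buckets, so the all-things-considered update need not lengthen timelines.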
Re: persuasion tools: You seem to have a different model of how persuasion tools cause PONR than I do. What I have in mind is mundane, not exotic—I’m not imagining AIs building QAnon-like cult followings, I’m imagining the cost of censorship/propaganda* continuing to drop rapidly and the effectiveness continuing to increase rapidly, and (given a few years for society to catch up) ideological strife to intensify in general. This in turn isn’t an x-risk by itself but it’s certainly a risk factor, and insofar as our impact comes from convincing key parts of society (e.g. government, tech companies) to recognize and navigate a tricky novel problem (AI risk) it seems plausible to me that our probability of success diminishes rapidly as ideological strife in those parts of society intensifies. So when you say “there’s already a lot of ideological strife and public confusion” my response is “yeah exactly, and isn’t it already causing big problems and e.g. making our collective handling of COVID worse? Now imagine that said strife and confusion gets a lot worse in the next five years, and worse still in the five years after that.”
*I mean these terms in a broad sense. I’m talking about the main ways in which ideologies strengthen their hold on existing hosts and spread themselves to new ones. For more on this see the aforementioned story, this post, and this comment.
Re: EfficientZero: Fair, I need to think about that more… I guess it would be really helpful to have examples of EfficientZero being applied to more complex environments than Atari, such as real-world robot control, Starcraft, or text prediction.
Sorry for the long delay; I let a lot of comments that I needed to respond to pile up!
APS seems like a category of systems that includes some of the others you listed (“Advanced capability: they outperform the best humans on some set of tasks which when performed at advanced levels grant significant power in today’s world (tasks like scientific research, business/military/political strategy, engineering, and persuasion/manipulation) … “). I still don’t feel clear on what you have in mind here in terms of specific transformative capabilities. If we condition on not having extreme capabilities for persuasion or research/engineering, I’m quite skeptical that something in the “business/military/political strategy” category is a great candidate to have transformative impact on its own.
Thanks for the links re: persuasion! This seems like a major theme for you and a big place where we currently disagree. I’m not sure what to make of your take, and I think I’d have to think a lot more to have stable views on it, but here are quick reactions:
If we made a chart of some number capturing “how easy it is to convince key parts of society to recognize and navigate a tricky novel problem” (which I’ll abbreviate as “epistemic responsiveness”) since the dawn of civilization, what would that chart look like? My guess is that it would be pretty chaotic; that it would sometimes go quite low and sometimes go quite high; and that it would be very hard to predict the impact of a given technology or other development on epistemic responsiveness. Maybe there have been one-off points in history when epistemic responsiveness was very high; maybe it is much lower today compared to peak, such that someone could already claim we have passed the “point of no return”; maybe “persuasion AI” will drive it lower or higher, depending partly on who you think will have access to the biggest and best persuasion AIs and how they will use them. So I think even if we grant a lot of your views about how much AI could change the “memetic environment,” it’s not clear how this relates to the “point of no return.”
I think I feel a lot less impressed/scared than you with respect to today’s “persuasion techniques.”
I’d be interested in seeing literature on how big an effect size you can get out of things like focus groups and A/B testing. My guess is that going from completely incompetent at persuasion (e.g., basically modeling your audience as yourself, which is where most people start) to “empirically understanding and incorporating your audience’s different-from-you characteristics” causes a big jump from a very low level of effectiveness, but that things flatten out quickly after that, and that pouring more effort into focus groups and testing leads to only moderate effects, such that “doubling effectiveness” on the margin shouldn’t be a very impressive/scary idea.
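For what it's worth, the "effect size" at issue is the kind of thing a standard two-proportion A/B comparison measures. The sketch below uses entirely made-up numbers, chosen only to illustrate the scale of a typical win; it isn't drawn from any literature:

```python
from math import sqrt

def ab_test_effect(conv_a, n_a, conv_b, n_b):
    """Absolute lift, relative lift, and z-score for a two-proportion
    A/B test (pooled normal approximation). A is control, B is treatment."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return p_b - p_a, (p_b - p_a) / p_a, (p_b - p_a) / se

# Hypothetical numbers: a 5.0% -> 5.5% conversion bump, i.e. a 10%
# relative lift over the control.
abs_lift, rel_lift, z = ab_test_effect(conv_a=500, n_a=10_000, conv_b=550, n_b=10_000)
print(f"absolute lift {abs_lift:.4f}, relative lift {rel_lift:.0%}, z = {z:.2f}")
```

With these invented numbers the lift is real but modest (z is around 1.6, short of conventional significance even at this sample size), which is the flavor of "moderate effects on the margin" described above.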
I think most media is optimizing for engagement rather than persuasion, and that it’s natural for things to continue this way as AI advances. Engagement is dramatically easier to measure than persuasion, so data-hungry AI should help more with engagement than persuasion; targeting engagement is in some sense “self-reinforcing” and “self-funding” in a way that targeting persuasion isn’t (so persuasion targeters need some sort of subsidy to compete with engagement targeters); and there are norms against targeting persuasion as well. I do expect some people and institutions to invest a lot in persuasion targeting (as they do today), but my modal expectation does not involve it becoming pervasive on nearly all websites, the way yours seems to.
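The "easier to measure" argument can be made concrete with a deliberately crude toy model: two learners run the same bandit algorithm, but one observes its outcome on every impression (an engagement-like signal) while the other observes it only rarely (a persuasion-like signal, where opinion change is sparse and delayed). All numbers here (rates, feedback probabilities, epsilon) are invented for illustration, not a model of any real system:

```python
import random

def share_on_better_arm(true_rates, feedback_prob, rounds=20_000, seed=0):
    """Epsilon-greedy two-armed bandit where each round's outcome is
    only observed with probability `feedback_prob`: 1.0 mimics a dense
    engagement signal, a small value mimics a sparse persuasion signal.
    Returns the fraction of observed pulls on the truly better arm."""
    rng = random.Random(seed)
    counts, wins = [0, 0], [0, 0]
    for _ in range(rounds):
        if 0 in counts or rng.random() < 0.1:             # explore
            arm = rng.randrange(2)
        elif wins[0] / counts[0] >= wins[1] / counts[1]:  # exploit best estimate
            arm = 0
        else:
            arm = 1
        outcome = rng.random() < true_rates[arm]
        if rng.random() < feedback_prob:                  # was the signal observed?
            counts[arm] += 1
            wins[arm] += outcome
    return counts[1] / sum(counts)  # arm 1 is the better arm below

def averaged(feedback_prob, seeds=10):
    return sum(share_on_better_arm([0.05, 0.10], feedback_prob, seed=s)
               for s in range(seeds)) / seeds

dense, sparse = averaged(1.0), averaged(0.02)
print(f"share of pulls on the better arm: dense={dense:.2f}, sparse={sparse:.2f}")
```

The dense-feedback learner concentrates on the better option much sooner, which is one way of cashing out why a data-hungry optimizer helps more with engagement than with persuasion.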
I feel like a lot of today’s “persuasion” is either (a) extremely immersive (someone is raised in a social setting that is very committed to some set of views or practices); or (b) involves persuading previously-close-to-indifferent people to believe things that call for low-cost actions (in many cases this means voting and social media posting; in some cases it can mean more consequential, but still ultimately not-super-high-personal-cost, actions). (b) can lead over time to shifting coalitions and identities, but the transition from (b) to (a) seems long.
I particularly don’t feel that today’s “persuaders” have much ability to accomplish the things that you’re pointing to with “chatbots,” “coaches,” “Imperius curses” and “drugs.” (Are there cases of drugs being used to systematically cause people to make durable, sustained, action-relevant changes to their views, especially when not accompanied by broader social immersion?)
I’m not really all that sure what the special role of AI is here, if we assume (for the sake of your argument that AI need not do other things to be transformative or PONR-y) a lack of scientific/engineering ability. What has/had higher ex ante probability of leading to a dramatic change in the memetic environment: further development of AI language models that could be used to write more propaganda, or the recent (last 20 years) explosion in communication channels and data, or many other changes over the last few hundred years such as the advent of radio and television, or the change in business models for media that we’re living through now? This comparison is intended to be an argument both that “your kind of reasoning would’ve led us to expect many previous persuasion-related PONRs without needing special AI advances” and that “if we condition on persuasion-related PONRs being the big thing to think about, we shouldn’t necessarily be all that focused on AI.”
I liked the story you wrote! A lot of it seems reasonably likely to be reasonably on point to me—I especially liked your bits about AIs confusing people when asked about their internal lives. However:
I think the story is missing a kind of quantification or “quantified attitude” that seems important if we want to be talking about whether this story playing out “would mean we’re probably looking at transformative/PONR-AI in the following five years.” For example, I do expect progress in digital assistants, but it matters an awful lot how much progress and economic impact there is. Same goes for just how effective the “pervasive persuasion targeting” is. I think this story could be consistent with worlds in which I’ve updated a lot toward shorter transformative AI timelines, and with worlds in which I haven’t at all (or have updated toward longer ones).
As my comments probably indicate, I’m not sold on this section.
I’ll be pretty surprised if e.g. the NYT is using a lot of persuasion targeting, as opposed to engagement targeting.
I do expect “People who still remember 2021 think of it as the golden days, when conformism and censorship and polarization were noticeably less than they are now” will be true, but that’s primarily because (a) I think people are just really quick to hallucinate declinist dynamics and call past times “golden ages”; (b) 2021 does seem to have extremely little conformism and censorship (and basically normal polarization) by historical standards, and actually does kinda seem like a sort of epistemic golden age to me.
For people who are strongly and genuinely interested in understanding the world, I think we are in the midst of an explosion in useful websites, tools, and blogs that will someday be seen nostalgically;* a number of these websites/tools/blogs are remarkably influential among powerful people; and while most people are taking a lot less advantage than they could and seem to have pretty poorly epistemically grounded views, I’m extremely unconvinced that things looked better on this front in the past—here’s one post on that topic.
I do generally think that persuasion is an underexplored topic, and could have many implications for transformative AI strategy. Such implications could include something like “Today’s data explosion is already causing dramatic improvements in the ability of websites and other media to convince people of arbitrary things; we should assign a reasonably high probability that language models will further speed this in a way that transforms the world.” That just isn’t my guess at the moment.
*To be clear, I don’t think this will be because websites/tools/blogs will be less useful in the future. I just think people will be more impressed with those of our time, which are picking a lot of low-hanging fruit in terms of improving on the status quo, so they’ll feel impressive to readers who know that the points they were making were novel at the time.
I’m a fan of lengthy asynchronous intellectual exchanges like this one, so no need to apologize for the delay. I hope you don’t mind my delay either? As usual, no need to reply to this message.
If we condition on not having extreme capabilities for persuasion or research/engineering, I’m quite skeptical that something in the “business/military/political strategy” category is a great candidate to have transformative impact on its own.
I think I agree with this.
Re: quantification: I agree; currently I don’t have good metrics to forecast on, much less good forecasts, for persuasion stuff and AI-PONR stuff. I am working on fixing that problem. :)
Re persuasion: For the past two years I have agreed with the claims made in “The misinformation problem seems like misinformation.”(!!!) The problem isn’t lack of access to information; information is more available than it ever was before. Nor is the problem “fake news” or other falsehoods. (Most propaganda is true.) Being politically polarized and extremist correlates positively with being well-informed, not negatively! (Anecdotally, my grad school friends with the craziest/most-extreme/most-dangerous/least-epistemically-virtuous political beliefs were generally the people best informed about politics. Analogous to how 9/11 truthers will probably know a lot more about 9/11 than you or me.) This is indeed an epistemic golden age… for people who are able to resist the temptations of various filter bubbles and the propaganda of various ideologies. (And everyone thinks themself one such person, so everyone thinks this is an epistemic golden age for them.)
I do disagree with your claim that this is currently an epistemic golden age. I think it’s important to distinguish between ways in which it is and isn’t. I mentioned above a way that it is.
If we made a chart of some number capturing “how easy it is to convince key parts of society to recognize and navigate a tricky novel problem” … since the dawn of civilization, what would that chart look like? My guess is that it would be pretty chaotic; that it would sometimes go quite low and sometimes go quite high
Agreed. I argued this, in fact.
and that it would be very hard to predict the impact of a given technology or other development on epistemic responsiveness.
Disagree. I mean, I don’t know, maybe this is true. But I feel like we shouldn’t just throw our hands up in the air here; we haven’t even tried! I’ve sketched an argument for why we should expect epistemic responsiveness to decrease in the near future (propaganda and censorship are bad for epistemic responsiveness & they are getting a lot cheaper and more effective & no pro-epistemic-responsiveness force seems to be rising to counter them).
Maybe there have been one-off points in history when epistemic responsiveness was very high; maybe it is much lower today compared to peak, such that someone could already claim we have passed the “point of no return”; maybe “persuasion AI” will drive it lower or higher, depending partly on who you think will have access to the biggest and best persuasion AIs and how they will use them.
Agreed. I argued this, in fact. (Note: “point of no return” is a relative notion; it may be that relative to us in 2010 the point of no return was e.g. the founding of OpenAI, and nevertheless relative to us now the point of no return is still years in the future.)
So I think even if we grant a lot of your views about how much AI could change the “memetic environment,” it’s not clear how this relates to the “point of no return.”
The conclusion I drew was “We should direct more research effort at understanding and forecasting this stuff because it seems important.” I think that conclusion is supported by the above claims about the possible effects of persuasion tools.
What has/had higher ex ante probability of leading to a dramatic change in the memetic environment: further development of AI language models that could be used to write more propaganda, or the recent (last 20 years) explosion in communication channels and data, or many other changes over the last few hundred years such as the advent of radio and television, or the change in business models for media that we’re living through now? This comparison is intended to be an argument both that “your kind of reasoning would’ve led us to expect many previous persuasion-related PONRs without needing special AI advances” and that “if we condition on persuasion-related PONRs being the big thing to think about, we shouldn’t necessarily be all that focused on AI.”
Good argument. To hazard a guess, in descending order of ex ante probability:
1. Explosion in communication channels and data (i.e. the Internet + Big Data)
2. AI language models useful for propaganda and censorship
3. Advent of radio and television
4. Change in business models for media
However, I’m pretty uncertain about this; I could easily see the order being different. Note that from what I’ve heard, the advent of radio and television DID have a big effect on public epistemology; e.g. it partly enabled totalitarianism. Prior to that, the printing press is argued to have had similarly disruptive effects.
This is why I emphasized elsewhere that I’m not arguing for anything unprecedented. Public epistemology / epistemic responsiveness has waxed and waned over time, and has occasionally gotten extremely bad (e.g. in totalitarian regimes and the freer societies that went totalitarian), so we shouldn’t be surprised if it happens again. If someone has an argument that it might be about to happen again, that argument should be taken seriously and investigated. (I’m not saying you yourself need to investigate this; you probably have better things to do.) Also, I totally agree that we shouldn’t just be focused on AI; in fact I’d go further and say that most of the improvements in propaganda+censorship will come from non-AI stuff like Big Data. But AI will help too; it seems to make censorship a lot cheaper, for example.
I’d be interested in seeing literature on how big an effect size you can get out of things like focus groups and A/B testing. My guess is that going from completely incompetent at persuasion (e.g., basically modeling your audience as yourself, which is where most people start) to “empirically understanding and incorporating your audience’s different-from-you characteristics” causes a big jump from a very low level of effectiveness, but that things flatten out quickly after that, and that pouring more effort into focus groups and testing leads to only moderate effects, such that “doubling effectiveness” on the margin shouldn’t be a very impressive/scary idea.
I think most media is optimizing for engagement rather than persuasion, and that it’s natural for things to continue this way as AI advances. Engagement is dramatically easier to measure than persuasion, so data-hungry AI should help more with engagement than persuasion; targeting engagement is in some sense “self-reinforcing” and “self-funding” in a way that targeting persuasion isn’t (so persuasion targeters need some sort of subsidy to compete with engagement targeters); and there are norms against targeting persuasion as well. I do expect some people and institutions to invest a lot in persuasion targeting (as they do today), but my modal expectation does not involve it becoming pervasive on nearly all websites, the way yours seems to.
I feel like a lot of today’s “persuasion” is either (a) extremely immersive (someone is raised in a social setting that is very committed to some set of views or practices); or (b) involves persuading previously-close-to-indifferent people to believe things that call for low-cost actions (in many cases this means voting and social media posting; in some cases it can mean more consequential, but still ultimately not-super-high-personal-cost, actions). (b) can lead over time to shifting coalitions and identities, but the transition from (b) to (a) seems long.
I particularly don’t feel that today’s “persuaders” have much ability to accomplish the things that you’re pointing to with “chatbots,” “coaches,” “Imperius curses” and “drugs.” (Are there cases of drugs being used to systematically cause people to make durable, sustained, action-relevant changes to their views, especially when not accompanied by broader social immersion?)
These are all good points. This is exactly the sort of thing I wish there was more research into, and that I’m considering doing more research on myself.
Re: pervasiveness on almost all websites: Currently propaganda and censorship both seem pretty widespread and also seem to be on a trend of becoming more so. (The list of things that get censored is growing, not shrinking, for example.) This is despite the fact that censorship is costly, so theoretically platforms that do it should be outcompeted by platforms that just maximize engagement. Also, IIRC Facebook uses large language models to do the censoring more efficiently and cheaply, and I assume the other companies do too. As far as I know they aren’t measuring user opinions and directly using that as a feedback signal, thank goodness, but… is it that much of a stretch to think that they might? It’s only been two years since GPT-3.
I’ve written a short story about what I expect the next 5 years to look like. Insofar as AI progress is systematically slower and less impressive than what is depicted in that story, I’ll update towards longer timelines, yeah.
I’m currently at something like 20% that AI-PONR will be crossed in the next 5 years, and so insofar as that doesn’t seem to have happened 5 years from now then that’ll be a 20%-sized blow to my timelines in the usual Bayesian way. It’s important to note that this won’t necessarily lengthen my timelines all things considered, because what happens in those 5 years might be more than a 20% blow to 20+year timelines. (For example, and this is what I actually think is most likely, 5 years from now the world could look like it does at the end of my short story, in which case I’d have become more confident that the point of no return will come sometime between 2026 and 2036 than I am now, not less, because things would be more on track towards that outcome than they currently seem to be.)
Re: persuasion tools: You seem to have a different model of how persuasion tools cause PONR than I do. What I have in mind is mundane, not exotic—I’m not imagining AIs building QAnon-like cult followings, I’m imagining the cost of censorship/propaganda* continuing to drop rapidly and the effectiveness continuing to increase rapidly, and (given a few years for society to catch up) ideological strife to intensify in general. This in turn isn’t an x-risk by itself but it’s certainly a risk factor, and insofar as our impact comes from convincing key parts of society (e.g. government, tech companies) to recognize and navigate a tricky novel problem (AI risk) it seems plausible to me that our probability of success diminishes rapidly as ideological strife in those parts of society intensifies. So when you say “there’s already a lot of ideological strife and public confusion” my response is “yeah exactly, and isn’t it already causing big problems and e.g. making our collective handling of COVID worse? Now imagine that said strife and confusion gets a lot worse in the next five years, and worse still in the five years after that.”
*I mean these terms in a broad sense. I’m talking about the main ways in which ideologies strengthen their hold on existing hosts and spread themselves to new ones. For more on this see the aforementioned story, this post, and this comment.
Re: EfficientZero: Fair, I need to think about that more… I guess it would be really helpful to have examples of EfficientZero being done on more complex environments than Atari, such as e.g. real-world robot control or Starcraft or text prediction.
Sorry for the long delay, I let a lot of comments to respond to pile up!
APS seems like a category of systems that includes some of the others you listed (“Advanced capability: they outperform the best humans on some set of tasks which when performed at advanced levels grant significant power in today’s world (tasks like scientific research, business/military/political strategy, engineering, and persuasion/manipulation) … “). I still don’t feel clear on what you have in mind here in terms of specific transformative capabilities. If we condition on not having extreme capabilities for persuasion or research/engineering, I’m quite skeptical that something in the “business/military/political strategy” category is a great candidate to have transformative impact on its own.
Thanks for the links re: persuasion! This seems like a major theme for you and a big place where we currently disagree. I’m not sure what to make of your take, and I think I’d have to think a lot more to have stable views on it, but here are quick reactions:
If we made a chart of some number capturing “how easy it is to convince key parts of society to recognize and navigate a tricky novel problem” (which I’ll abbreviate as “epistemic responsiveness”) since the dawn of civilization, what would that chart look like? My guess is that it would be pretty chaotic; that it would sometimes go quite low and sometime sgo quite high; and that it would be very hard to predict the impact of a given technology or other development on epistemic responsiveness. Maybe there have been one-off points in history when epistemic responsiveness was very high; maybe it is much lower today compared to peak, such that someone could already claim we have passed the “point of no return”; maybe “persuasion AI” will drive it lower or higher, depending partly on who you think will have access to the biggest and best persuasion AIs and how they will use them. So I think even if we grant a lot of your views about how much AI could change the “memetic environment,” it’s not clear how this relates to the “point of no return.”
I think I feel a lot less impressed/scared than you with respect to today’s “persuasion techniques.”
I’d be interested in seeing literature on how big an effect size you can get out of things like focus groups and A/B testing. My guess is that going from completely incompetent at persuasion (e.g., basically modeling your audience as yourself, which is where most people start) to “empirically understanding and incorporating your audience’s different-from-you characteristics” causes a big jump from a very low level of effectiveness, but that things flatten out quickly after that, and that pouring more effort into focus groups and testing leads to only moderate effects, such that “doubling effectiveness” on the margin shouldn’t be a very impressive/scary idea.
I think most media is optimizing for engagement rather than persuasion, and that it’s natural for things to continue this way as AI advances. Engagement is dramatically easier to measure than persuasion, so data-hungry AI should help more with engagement than persuasion; targeting engagement is in some sense “self-reinforcing” and “self-funding” in a way that targeting persuasion isn’t (so persuasion targeters need some sort of subsidy to compete with engagement targeters); and there are norms against targeting persuasion as well. I do expect some people and institutions to invest a lot in persuasion targeting (as they do today), but my modal expectation does not involve it becoming pervasive on nearly all websites, the way yours seems to.
I feel like a lot of today’s “persuasion” is either (a) extremely immersive (someone is raised in a social setting that is very committed to some set of views or practices); or (b) involves persuading previously-close-to-indifferent people to believe things that call for low-cost actions (in many cases this means voting and social media posting; in some cases it can mean more consequential, but still ultimately not-super-high-personal-cost, actions). (b) can lead over time to shifting coalitions and identities, but the transition from (b) to (a) seems long.
I particularly don’t feel that today’s “persuaders” have much ability to accomplish the things that you’re pointing to with “chatbots,” “coaches,” “Imperius curses” and “drugs.” (Are there cases of drugs being used to systematically cause people to make durable, sustained, action-relevant changes to their views, especially when not accompanied by broader social immersion?)
I’m not really all that sure what the special role of AI is here, if we assume (for the sake of your argument that AI need not do other things to be transformative or PONR-y) a lack of scientific/engineering ability. What has/had higher ex ante probability of leading to a dramatic change in the memetic environment: further development of AI language models that could be used to write more propaganda, or the recent (last 20 years) explosion in communication channels and data, or many other changes over the last few hundred years such as the advent of radio and television, or the change in business models for media that we’re living through now? This comparison is intended to be an argument both that “your kind of reasoning would’ve led us to expect many previous persuasion-related PONRs without needing special AI advances” and that “if we condition on persuasion-related PONRs being the big thing to think about, we shouldn’t necessarily be all that focused on AI.”
I liked the story you wrote! A lot of it seems reasonably likely to be reasonably on point to me—I especially liked your bits about AIs confusing people when asked about their internal lives. However:
I think the story is missing a kind of quantification or “quantified attitude” that seems important if we want to be talking about whether this story playing out “would mean we’re probably looking at transformative/PONR-AI in the following five years.” For example, I do expect progress in digital assistants, but it matters an awful lot how much progress and economic impact there is. Same goes for just how effective the “pervasive persuasion targeting” is. I think this story could be consistent with worlds in which I’ve updated a lot toward shorter transformative AI timelines, and with worlds in which I haven’t at all (or have updated toward longer ones.)
As my comments probably indicate, I’m not sold on this section.
I’ll be pretty surprised if e.g. the NYT is using a lot of persuasion targeting, as opposed to engagement targeting.
I do expect “People who still remember 2021 think of it as the golden days, when conformism and censorship and polarization were noticeably less than they are now” will be true, but that’s primarily because (a) I think people are just really quick to hallucinate declinist dynamics and call past times “golden ages”; (b) 2021 does seem to have extremely little conformism and censorship (and basically normal polarization) by historical standards, and actually does kinda seem like a sort of epistemic golden age to me.
For people who are strongly and genuinely interested in understanding the world, I think we are in the midst of an explosion in useful websites, tools, and blogs that will someday be seen nostalgically;* a number of these websites/tools/blogs are remarkably influential among powerful people; and while most people are taking a lot less advantage than they could and seem to have pretty poorly epistemically grounded views, I’m extremely unconvinced that things looked better on this front in the past—here’s one post on that topic.
I do generally think that persuasion is an underexplored topic, and could have many implications for transformative AI strategy. Such implications could include something like “Today’s data explosion is already causing dramatic improvements in the ability of websites and other media to convince people of arbitrary things; we should assign a reasonably high probability that language models will further speed this in a way that transforms the world.” That just isn’t my guess at the moment.
*To be clear, I don’t think this will be because websites/tools/blogs will be less useful in the future. I just think people will be more impressed with those of our time, which are picking a lot of low-hanging fruit in terms of improving on the status quo, so they’ll feel impressive to read while knowing that the points they were making were novel at the time.
I’m a fan of lengthy asynchronous intellectual exchanges like this one, so no need to apologize for the delay. I hope you don’t mind my delay either? As usual, no need to reply to this message.
I think I agree with this.
Re: quantification: I agree; currently I don’t have good metrics to forecast on, much less good forecasts, for persuasion stuff and AI-PONR stuff. I am working on fixing that problem. :)
Re: persuasion: For the past two years I have agreed with the claims made in “The misinformation problem seems like misinformation.”(!!!) The problem isn’t lack of access to information; information is more available than it ever was before. Nor is the problem “fake news” or other falsehoods. (Most propaganda is true.) Being politically polarized and extremist correlates positively with being well-informed, not negatively! (Anecdotally, my grad school friends with the craziest/most-extreme/most-dangerous/least-epistemically-virtuous political beliefs were generally the people best informed about politics. Analogous to how 9/11 truthers probably know a lot more about 9/11 than you or me.) This is indeed an epistemic golden age… for people who are able to resist the temptations of various filter bubbles and the propaganda of various ideologies. (And everyone thinks themself one such person, so everyone thinks this is an epistemic golden age for them.)
I do disagree with your claim that this is straightforwardly an epistemic golden age; I think it’s important to distinguish between the ways in which it is and the ways in which it isn’t. I mentioned above one way in which it is.
Agreed. I argued this, in fact.
Disagree. I mean, I don’t know, maybe this is true. But I feel like we shouldn’t just throw our hands up in the air here; we haven’t even tried! I’ve sketched an argument for why we should expect epistemic responsiveness to decrease in the near future (propaganda and censorship are bad for epistemic responsiveness & they are getting a lot cheaper and more effective & no pro-epistemic-responsiveness force seems to be rising to counter them).
Agreed. I argued this, in fact. (Note: “point of no return” is a relative notion; it may be that relative to us in 2010 the point of no return was e.g. the founding of OpenAI, and nevertheless relative to us now the point of no return is still years in the future.)
The conclusion I drew was “We should direct more research effort at understanding and forecasting this stuff because it seems important.” I think that conclusion is supported by the above claims about the possible effects of persuasion tools.
Good argument. To hazard a guess:
1. Explosion in communication channels and data (i.e. the Internet + Big Data)
2. AI language models useful for propaganda and censorship
3. Advent of radio and television
4. Change in business models for media
However, I’m pretty uncertain about this; I could easily see the order being different. Note that from what I’ve heard, the advent of radio and television DID have a big effect on public epistemology; e.g. it partly enabled totalitarianism. Prior to that, the printing press is argued to have also had disruptive effects.
This is why I emphasized elsewhere that I’m not arguing for anything unprecedented. Public epistemology / epistemic responsiveness has waxed and waned over time and has occasionally gotten extremely bad (e.g. in totalitarian regimes and in the freer societies that went totalitarian), so we shouldn’t be surprised if it happens again; and if someone has an argument that it might be about to happen again, that argument should be taken seriously and investigated. (I’m not saying you yourself need to investigate this; you probably have better things to do.) Also, I totally agree that we shouldn’t just be focused on AI; in fact I’d go further and say that most of the improvements in propaganda+censorship will come from non-AI stuff like Big Data. But AI will help too; it seems to make censorship a lot cheaper, for example.
I’d be interested in seeing literature on how big an effect size you can get out of things like focus groups and A/B testing. My guess is that going from completely incompetent at persuasion (e.g., basically modeling your audience as yourself, which is where most people start) to “empirically understanding and incorporating your audience’s different-from-you characteristics” causes a big jump from a very low level of effectiveness, but that things flatten out quickly after that, and that pouring more effort into focus groups and testing leads to only moderate effects, such that “doubling effectiveness” on the margin shouldn’t be a very impressive/scary idea.
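The flattening-out guess above can be made concrete with a toy saturating-returns model. To be clear, the functional form and the rate constant here are purely illustrative assumptions of mine, not empirical claims about persuasion research:

```python
import math

def effectiveness(effort, k=1.0):
    # Toy saturating curve: fraction of the achievable persuasion gain
    # captured at a given level of audience-testing effort.
    # The exponential form and the constant k are assumptions chosen
    # only to illustrate "big jump early, flat later" -- not data.
    return 1.0 - math.exp(-k * effort)

# Going from no audience modeling (effort 0) to a little empirical
# testing (effort 1) captures most of the achievable gain:
first_step = effectiveness(1) - effectiveness(0)

# Doubling an already-substantial effort adds almost nothing:
doubling_later = effectiveness(16) - effectiveness(8)

print(f"0 -> 1 unit of effort:  +{first_step:.3f}")
print(f"8 -> 16 units of effort: +{doubling_later:.6f}")
```

Under this (assumed) curve, the first unit of effort yields a gain of roughly 0.63 while doubling from 8 to 16 units yields well under 0.001, which is the shape of the claim being made: marginal “doubling effectiveness” of testing infrastructure need not be scary if returns saturate quickly.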
These are all good points. This is exactly the sort of thing I wish there was more research into, and that I’m considering doing more research on myself.
Re: pervasiveness on almost all websites: Currently, propaganda and censorship both seem pretty widespread and also seem to be on a trend of becoming more so. (The list of things that get censored is growing, not shrinking, for example.) This is despite the fact that censorship is costly, so theoretically platforms that do it should be outcompeted by platforms that just maximize engagement. Also, IIRC Facebook uses large language models to do the censoring more efficiently and cheaply, and I assume the other companies do too. As far as I know they aren’t measuring user opinions and directly using that as a feedback signal, thank goodness, but… is it that much of a stretch to think that they might? It’s only been two years since GPT-3.