Quick[1] thoughts on the Silicon Valley "Vibe-Shift"
I wanted to get this idea out of my head and into a quick-take. I think there's something here, but a lot more to say, and I really haven't done the in-depth research for it. There was a longer post idea I had for this, but honestly, diving into it more than I have here is not a good use of my life, I think.
The political outlook in Silicon Valley has changed.
Since the attempted assassination of President Trump, the mood in Silicon Valley has changed. There have been open endorsements, e/acc has claimed political victory, and lots of people have noticed the "vibe shift".[2] I think that, rather than this being a change in opinions, it's more an event allowing for the beginning of a preference cascade; at least in Silicon Valley (if not yet reflected in national polling), it has happened.
So it seems that a large section of Silicon Valley is now openly and confidently supporting Trump, and is to a greater or lesser extent aligned with the a16z/e-acc worldview.[3] We know this worldview has already reached the ears of VP candidate JD Vance.
How did we get here?
You could probably write a book on this, so this is a highly opinionated take. But I think this is somewhat, though not exclusively, an own goal of the AI Safety movement.
As ChatGPT starts to bring AI, and AI Safety, into the mainstream discourse, the e/acc countermovement begins. It positions itself as the opposite of effective altruism, especially in the wake of SBF.
Guillaume Verdon, under the alias "Beff Jezos", realises the memetic weakness of the AI Safety movement and launches a full memetic war against it. Regardless of his rightness or wrongness, you do, to some extent, have to hand it to him. He's like a right-wing Émile Torres: ambitious, relentless, and driven by ideological zeal against a hated foe.
Memetic war is total war. This means nuance dies so the message can spread. I don't know if, for example, Marc Andreessen actually thinks antimalarial bednets are a "triple threat" of badness, but it's a war and you don't take prisoners. Does Beff think that people running a uni-group session on Animal Welfare are "basically terrorists"? I don't know. But EA is the enemy, and the enemy must be defeated, and the war is total.
The OpenAI board fiasco is, I think, a critical moment here. It doesn't matter what reasoning we've come out with at the end of the day; I think it was perceived as "a doomer coup", and it did radicalize the valley. In his recent post, Richard Ngo called on the AI Safety movement to show more legitimacy and competence. The board fiasco torpedoed my trust in the legitimacy and competence of many senior AI safety people, so god knows how strong the update was for Silicon Valley as a whole.
As some evidence that this is known in EA circles: I think this is exactly what Dwarkesh is alluding to when asked "what happened to the EA brand". For many people in Silicon Valley, I think the answer is that it got thrown in the dustbin of history.
This new movement became increasingly right-wing coded. Partly as a response to the culture wars in America and the increasing vitriol thrown by the left against "tech bros"; partly as a response to the California Ideology being threatened by any sense of AI oversight or regulation; and partly because EA is the enemy, and EA was increasingly seen by this group as left-wing, woke, or part of the Democratic Party due to the funding patterns of SBF and Moskovitz. I think this has led, fairly predictably, to the rightward shift in SV and direct political affiliation with a (prospective) second Trump presidency.
Across all of this, my impression is that, just like with Torres, there was little to no direct pushback. I can understand not wanting to be dragged into a memetic war, or to be involved in the darker parts of Twitter discourse. But the e-acc/techno-optimist/RW-Silicon-Valley movement was being driven by something, and I don't think AI Safety ever really argued against it convincingly, and definitely not in a convincing enough way to "win" the memetic war. Like, the a16z cluster literally lied to Congress and to Parliament, but nothing much came of that fact.
I think this is very much linked to playing a strong "inside game" to access the halls of power and no "outside game" to gain legitimacy for that use of power. It's also, I think, due to EA not wanting to use social media to make its case, whereas the e-acc cluster was born and lives on social media.
Where are we now?
I'm not a part of the Bay Area scene and culture,[4] but it seems to me that the AI Safety movement has lost the "mandate of heaven", to whatever extent it did have it. SB-1047 is a push to change policy that has resulted in backlash, and may result in further polarisation and counter-attempts to fight back in a zero-sum political game. I don't know if it's constitutional for a Trump/Vance administration to use the Supremacy Clause to void SB-1047, but I don't doubt that they might try. Biden's executive order seems certain for the chopping block. I expect a Trump administration to be a lot less sympathetic to the Bay Area/DC AI Safety movements, and the right-wing part of Silicon Valley will be at the very least energised to fight back harder.
One concerning thing for both Silicon Valley and the AI Safety movement is what happens as a result of the ideological consequences of SV accepting this trend. Already a strong fault-line is the extreme social conservatism and incipient nationalism brought about by this. In the recent a16z podcast, Ben Horowitz literally accuses the Biden administration of breaking the rule of law, and says nothing about Trump literally refusing to concede the 2020 election and declaring that there was electoral fraud. Mike Solana seems to think that all risks of democratic backsliding under a Trump administration were/are overblown (or at least that people in the Bay agreeing was preference falsification). On the Moments-of-Zen podcast (which has also hosted Curtis Yarvin twice), Balaji Srinivasan accused the "Blue Tribe" of ethnically cleansing him out of SF[5] and called on the grey tribe to push all the blues out of SF. e/acc-sympathetic people are noting that anti-trans ideas are bubbling up in the new movement. You cannot seriously engage with ideas and shape them without those ideas changing you.[6] This right-wing shift will have further consequences, especially under a second Trump presidency.
What next for the AI Safety field?
I think this is a bad sign for the field of AI Safety. AI has escaped political polarisation for a while, but that may be ending. Current polls may lean in support, but polls and political support are fickle, especially in the age of hyper-polarisation.[7] I feel like my fears around the perception of Open Philanthropy are recurring here, but for the AI Safety movement at large.
I think the consistent defeats to the e-acc school, and the fact that the tech sector as a whole seems very much unconvinced by the arguments for AI Safety, should at some point lead to reflection from the movement. Where you stand on this very much depends on your object-level beliefs. While there is a lot of e-acc discourse around transhumanism, replacing humanity, and the AI eschaton, I don't really buy it. I think they simply don't believe ASI is possible soon, and thus consider all arguments for AI Safety bunk. Now, while the tech sector as a whole might not be as hostile, they don't seem at all convinced of the "ASI-soon" idea.
A key point I want to emphasise is that one cannot expect to wield power successfully without also having legitimacy.[8] And to the extent that the AI Safety movement's strategy is trying to thread this needle, it will fail.
Anyway, long ramble over; given this was basically a one-shot ramble, it will have many inaccuracies and flaws. Nevertheless, I hope that it can be directionally useful and lead to productive discussion.
lol, lmao
See here, here, and here. These examples are from Twitter because, for better or for worse, it seems much of SV/tech opinion is formed by Twitter discourse.
Would be very interested to hear the thoughts of people in the Bay on this.
And if invited to be, I would almost certainly decline.
He literally used the phrase "ethnically cleanse". This is extraordinarily dangerous language in a political context.
A good example in fiction is in Warhammer 40K, where Horus originally accepts the power of Chaos to fight against Imperial tyranny, but ends up becoming its slave.
Due to polarisation, views can dramatically shift even on major topics such as the economy and national security (I know these are messy examples!). Current poll leads for AI regulation should not, in any way, be considered secure.
I guess you could also have overwhelming might and force, but even that requires legitimacy. Caesar needed to be seen as legitimate by Mark Antony; Alexander didn't have the legitimacy to get his army to cross the Hyphasis, etc.
I've often found it hard to tell whether an ideology/movement/view has just found a few advocates among a group, or whether it has totally permeated that group. For example, I'm not sure that Srinivasan's politics have really changed recently, or that it would be fair to generalize from his beliefs to all of the valley. How much of this is actually Silicon Valley's political center shifting to e/acc and the right, as opposed to people just having the usual distribution of political beliefs (in addition to a valley-unspecific decline of the EA brand)?
An NYT article I read a couple of days ago claimed Silicon Valley remains liberal overall.
Folding in Responses here
@thoth hermes (or https://x.com/thoth_iv; if someone can get this to them, i.e. if you're Twitter friends, then please go ahead).[1] I'm responding to this thread here. I am not saying "that EA is losing the memetic war because of its high epistemic standards"; in fact, quite the opposite re: AI Safety, and maybe because of a misunderstanding of how politics works, or not caring about the social perception of the movement. My reply to Iyngkarran below fleshes it out a bit more, but if there's a way for you to get in touch directly, I'd love to clarify what I think, and also hear your thoughts more. I think I was trying to come from a similar place to Richard Ngo, and many of his comments on the LessWrong thread here very much chime with my own point of view. What I am trying to push for is the AI Safety movement reflecting on losing ground memetically and then asking "why is that? What are we getting wrong?" rather than doubling down into lowest-common-denominator communication. I think we actually agree here? Maybe I didn't make that clear enough in my OP, though.
@Iyngkarran Kumar - Thanks for sharing your thoughts, but I must say that I disagree. I don't think that the epistemic standards are working against us by being too polite; quite the opposite. I think the epistemic standards in AI Safety have been too low relative to the attempts to wield power. If you are potentially going to criminalise existing open-source models,[2] you had better bring the epistemic goods. And for many people in the AI Safety field, the goods have not been brought (which is why I see people like Jeremy Howard, Sara Hooker, Rohit Krishnan, etc. getting increasingly frustrated by the AI Safety field). This is on the field of AI Safety, imo, for not being more persuasive. If the AI Safety field was right, the arguments would have been more convincing. I think that, while it's good for Eliezer to say what he thinks accurately, the "bomb the datacenters"[3] piece has probably been harmful for AI Safety's cause, and things like it are very liable to turn people away from supporting AI Safety. I also don't think it's good to frame it as a claim of "what we believe", as I don't really agree with Eliezer on much.
(re: inside vs outside game, see this post from Holly Elmore)
@anormative / @David Mathers - Yeah, it's difficult to pin down the exact hypothesis here, especially given preference falsification. I'm pretty sure SV is "liberal" overall, but I wouldn't be surprised if the Trump % is greater than in '16 and '20, and support definitely seems to be a lot more open this time, e.g. a16z and Musk openly endorsing Trump, or Sequoia Capital partners claiming that Biden dropping out was worse than the Jan 6th riot. Things seem very different this time around, different enough to be paid attention to.
- - - - - - - - - - - -
Once again, if you disagree, I'd love to actually hear why. Up/down voting is a crude feedback tool, and discussion of ideas leads to much quicker sharing of knowledge. If you want to respond but don't want to do so publicly, then by all means please send a DM :)
I don't have Twitter, and think it'd be harmful for my epistemic & mental health if I did get an account and become immersed in "The Discourse".
This piece from @1a3orn is excellent, and the absence of evidence of good arguments against it is evidence of the absence of said arguments. (tl;dr: AI Safety people, engage with 1a3orn more!)
I know that's not what it literally says, but it's what people know it as.
I think you're reading into twitter way too much.
I disagree FWIW. I think that the political activation of Silicon Valley is the sort of thing which could reshape American politics, and that twitter is a leading indicator.
I don't disagree with this statement, but also think the original comment is reading into twitter way too much.
absence of evidence of good arguments against it is evidence of the absence of said arguments

There are many (edit: 2) comments responding and offering to talk. 1a3orn doesn't appear to have replied to any of these comments. (To be clear, I'm not saying they're under any obligation here, just that there isn't an absence of attempted engagement, and thus you shouldn't update in the direction you seem to be updating here.)
a) re: Twitter - almost tautologically true, I'm sure. I think it is a bit of signal, though, just very noisy. And it's one of the few ways for non-Bay people such as myself to try to get a sense of the pulse of the Bay, though obviously very prone to error, and perhaps not worth doing at all.
b) I haven't seen those comments;[1] could you point me to them or where they happened? I know there was a bunch of discussion around their concerns about the Biorisk paper, but I'm particularly concerned with the "Many AI Safety Orgs Have Tried to Criminalize Currently-Existing Open-Source AI" article, which I haven't seen good pushback to. Again, happy to be wrong on this.
Ok, I've seen Ladish and Kokotajlo offer to talk, which is good; I would have liked 1a3orn to take them up on that offer, for sure.
Scroll down to see comments.
Nit: Beff Jezos was doxxed, and repeating his name seems uncool, even if you don't like him.
I think in this case it's ok (but happy to change my mind) - afaict he owns the connection now, and the two names are a bit like separate personas. He's gone on podcasts under his true name, for instance.
Ok thanks, I didn't know that.
Across all of this my impression is that, just like with Torres, there was little to no direct pushback

Strongly agree. I think the TESCREAL/e-acc movements badly mischaracterise the EA community with extremely poor, unsubstantiated arguments, but there doesn't seem to be much response to this from the EA side.
I think this is very much linked to playing a strong "inside game" to access the halls of power and no "outside game" to gain legitimacy for that use of power

What does this refer to? I'm not familiar.
Other thoughts on this:
Publicly, the quietness from the EA side in response to TESCREAL/e-acc/etc. allegations is harming the community's image and what it stands for. But "winning" the memetic war is important. If we don't, then the world outside EA (which has many smart, influential people) ends up seeing the community as a doomer cult (in the case of AI safety), or assigns some equally damaging label that lets them quickly dismiss many of the arguments being made.
I think this is a case where the epistemic standards of the EA community work against it. Rigorous analysis, expressing second/third-order considerations, etc. are seen as the norm for most writing on the forum. However, in places such as Twitter, these sorts of analyses aren't "memetically fit".[1]
So, I think we're in need of more pieces like the Time essay on pausing AI: a no-punches-pulled sort of piece that gets across the seriousness of what we're claiming. I'd like to see more Twitter threads and op-eds that dismantle claims like "advancements in AI have solved its black-box nature", ones that don't let clearly false claims like this see the light of day in serious public discourse.
Don't get me wrong: epistemically rigorous work is great. But when responding to TESCREAL/e-acc "critiques" that continuously hit below the belt, other tactics may be better.