Non-EA interests include chess and TikTok (@benthamite). We are probably hiring: https://metr.org/hiring
Ben_West🔸
I understand "scapegoating" to be a specific sociological process, not just a generic term for blaming someone. I'm not sure if Thiel wants us to be scapegoated, but if it does happen it would look more like "mob violence" than "guy mentions offhand in a podcast that you're the Antichrist."[1]
[1] Does Thiel have some 12D chess plan to inspire mob violence against EAs? Seems unlikely, but who knows.
I think it's important to note that Thiel's worldview is pretty bleak: he literally describes his goal as steering us towards a mythical Greek monster! He just thinks that the alternatives are even worse.
In EA lingo, I would say he has complex cluelessness: he sees very strong reasons for both doing a thing and doing the opposite of that thing.
I expect that there are Straussian readings of him that I am not understanding, but for the most part he seems to just sincerely have very unusual views. E.g. I think Trump/Vance have done more than most presidents to dismantle the global order (e.g. through tariffs), and it doesn't seem surprising to me that Thiel supports them (even though I suspect he dislikes many other things they do).
I think he is using "totalitarian" to refer to any situation where the government is less economically libertarian than he would like, or where "woke" ideas are popular amongst elite tastemakers, even if the polity this is all occurring in is clearly a liberal democracy, not a totalitarian state.
It seems true that he thinks governments (including liberal democracies) are satanic. I am unclear how much of this is because he thinks they are a slippery slope towards what you would call "totalitarianism" vs. being bad per se, but I think he is fairly consistent in his anarchism.
Thanks! Do you know if there is anywhere he has engaged more seriously with the possibility that AI could actually be transformative? His "maybe heterodox thinking matters" statement I quoted above feels like relatively superficial engagement with the topic.
"There's always a risk that the katechon [thing holding back the Antichrist] becomes the Antichrist." (Peter Thiel)
EA as Antichrist: Understanding Peter Thiel
This is great, congrats and thank you to everyone working on ensuring that companies keep their pledges!
I came here to make a similar comment: a lot of my p(doom) hinges on things like "how hard is alignment" and "how likely is a software intelligence explosion," which seem to be largely orthogonal to questions of how likely we are to get flourishing. (And maybe even run contrary to it, as you point out.)
I think this is an important point, but my experience is that when you try to put it into practice, things become substantially more complex. E.g. in the podcast Will talks about how it might be important to give digital beings rights to protect them from being harmed, but the downside of doing so is that humans would effectively become immediately disempowered because we would be so dramatically outnumbered by digital beings.
It generally seems hard to find interventions which are robustly likely to create flourishing (indeed, "cause humanity to not go extinct" often seems like one of the most robust interventions!).
I think his use of "ceiling" is maybe somewhat confusing: he's not saying that survival is near 100% (in the article he uses 80% as his example, and my sense is that this is near his actual belief). I interpret him to just mean that we are notably higher on the vertical axis than on the horizontal one.
I think we might be talking past each other. I'm just trying to make the point in the above table: the delaying tactic is not the most effective in a long-timelines world, but it is the most effective in a short-timelines world. (I think you agree?)
Fair point. Here is a notebook showing that, under 20-year timelines, saving 4 animals/year indefinitely is better than saving 5/year for 10 years, but that the order is reversed under 10-year timelines. Does this make sense now?
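For concreteness, here is a minimal sketch of that comparison (a stand-in for the linked notebook, assuming impact stops at the timeline cutoff and using an illustrative 2% annual discount rate of my own choosing):

```python
# Minimal sketch (not the linked notebook): compare the two interventions under
# different timelines, assuming impact stops once farming becomes irrelevant
# and using an illustrative 2% annual discount rate.

def discounted_total(animals_per_year: float, years: int, discount_rate: float = 0.02) -> float:
    """Discounted sum of animals saved over `years`."""
    return sum(animals_per_year / (1 + discount_rate) ** t for t in range(years))

for timeline in (10, 20):
    save_4_indefinitely = discounted_total(4, timeline)           # runs until the cutoff
    save_5_for_10_years = discounted_total(5, min(10, timeline))  # capped at 10 years
    better = "4/year indefinitely" if save_4_indefinitely > save_5_for_10_years else "5/year for 10 years"
    print(f"{timeline}-year timeline: {better} wins "
          f"({save_4_indefinitely:.1f} vs {save_5_for_10_years:.1f})")
```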
Yes, but the earlier 5 years are more valuable!
Given some discount rate $r$, the value of preventing 5 years of farming starting at year $t$ is $\int_t^{t+5} e^{-r s}\,ds = e^{-r t}\,\frac{1 - e^{-5 r}}{r}$. Here is a plot of this value against $t$ for a fixed $r > 0$:
You can see that increasing values of $t$ (horizontal axis) result in less valuable outcomes.
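A minimal sketch that generates this kind of plot, assuming continuous exponential discounting and an illustrative $r = 0.05$ (the specific rate is my choice):

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of the plot described above, assuming continuous exponential
# discounting and an illustrative r = 0.05.
r = 0.05
t = np.linspace(0, 30, 300)
value = np.exp(-r * t) * (1 - np.exp(-5 * r)) / r  # integral of e^{-r s} from t to t+5

plt.plot(t, value)
plt.xlabel("t (years until the prevented 5-year span starts)")
plt.ylabel("discounted value of preventing those 5 years")
plt.show()
```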
Thanks, good question. I am assuming here that you have some positive discount rate, such that you care more about reducing farming in 10 years than you do in 15.
Thanks for looking into this, I would be excited for more people to investigate interventions which make sense on short timelines.
Each of these construction projects can take months or years to complete.
One implication of this is: if we are (say) ten years away from farming becoming irrelevant and it takes five years to construct a farm, then delaying construction for five years is equivalent to preventing the farmâs construction. So delaying tactics may be (relatively) more valuable in short-timelines worlds. (Thanks to @Jakub Stencel for this point.)
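To make the arithmetic explicit, here is a minimal sketch (the 30-year case is my own illustrative addition):

```python
# Sketch of the delay-vs-prevention point: a farm only causes harm between its
# completion and the (assumed) year farming becomes irrelevant.

def harmful_years(timeline: int, build_years: int, delay: int) -> int:
    """Years the farm operates before the timeline cutoff."""
    completion = build_years + delay
    return max(0, timeline - completion)

print(harmful_years(timeline=10, build_years=5, delay=0))  # 5 harmful years
print(harmful_years(timeline=10, build_years=5, delay=5))  # 0 -> same as preventing construction
print(harmful_years(timeline=30, build_years=5, delay=5))  # 20 -> delay matters much less here
```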
These are good questions, unfortunately I don't feel very qualified to answer. One thing I did want to say though is that your comment made me realize that we were incorrectly (?) focused on a harm reduction frame. I'm not sure that our suggestions are very good if you want to do something like "maximize the expected number of rats on heroin".
My sense is that most AIxAnimals people are actually mostly focused on the harm reduction stuff, so maybe it's fine that we didn't consider upside scenarios very much, but, to the extent that you do want to consider upside for animals, I'm not sure our suggestions hold. (Speaking for myself, not Lizka.)
I thought this was pretty interesting, I didn't realize it had gone up so much. Thanks for posting!
I actually want to make both claims! I agree that it's true that, if the future looks basically like the present, then probably you don't need to care much about the paradigm shift (i.e. AI). But I also think the future will not look like the present, so you should heavily discount interventions targeted at pre-paradigm-shift worlds unless they pay off soon.
Good question and thanks for the concrete scenarios! I think my tl;dr here is something like "even when you imagine 'normalish' futures, they are probably weirder than you are imagining."
Even if McDonald's fires all its staff, it's not clear to me why it would drop its cage-free policy
I don't think we want to make the claim that McDonald's will definitely drop its cage-free policy, but rather the weaker claim that you should not assume that the value of a cage-free commitment will remain ~constant by default.
If I'm assuming that we are in a world where all of the human labor at McDonald's has been automated away, I think that is a pretty weird world. As you note, even the existence of something like McDonald's (much less a specific corporate entity which feels bound by the agreements of current-day McDonald's) is speculative.
But even if we grant its existence: a ~40% egg price increase currently gives companies enough cover to feel justified in abandoning their cage-free pledges. Surely "the entire global order has been upended and the new corporate management is robots" is an even better excuse?
And even if we somehow hold McDonald's to their pledge, I find it hard to believe that a world where McDonald's can be run without humans does not quickly lead to a world where something more profitable than battery-cage farming can be found. And, as a result, the cage-free pledge is irrelevant because McDonald's isn't going to use cages anyway. (Of course, this new farming method may be even more cruel than battery cages, illustrating one of the downsides of trying to lock in a specific policy change before we understand what the future will be like.)
To be clear: this is just me randomly spouting, I don't believe strongly in any of the above. I think it's possible someone could come up with a strong argument why present-day corporate pledges will continue post-paradigm-shift. But my point is that you shouldn't assume that this argument exists by default.
AGI is more like the Internet. The cage-free McMuffins endure, just with some cool LLM-generated images on them.
Yeah, I think this world is (by assumption) one where cage-free pledges should not receive a massive discount.
No AGI
Note that some worlds where we wouldn't get AGI soon (e.g. large-scale nuclear war setting science back 200 years) are also probably not great for the expected value of cage-free pledges.
(It is good to hear though that even in the maximally dystopian world of universal Taco Bell there will be some upside for animals 🙂.)
In a similar article on LessWrong, Ben Pace says the following, which resonates with me: