My family and I are bootstrapping and operating "Everything is Learning", a Kenya-based program to implement and mainstream "pre-speech reading and numeracy", starting in local daycares.
Website: EL.africa
Email: steven@EL.africa
bhrdwj
[Question] Should we include our beneficiaries within ourselves?
iCAPES: Inhumane Competitivist AI-economics-accelerated Political Economy Spiral
I put links to the theory there. I have read this theory, and the conclusions are obvious once you read it. I can't do a better job than CGP Grey's amazing 20-minute explanatory cartoon.
If you watch that video, then I will fully engage in human chat. But I can't have that discourse with someone who hasn't done the basic prerequisite of understanding selectorate theory, which is a near-canonical game-theoretic model of how politics works.
Are we trying to get at truth here, or are we trying to engage ideas?
Or is this an athletic exercise in performing discourse?
Wow. Sorry, no, this entire post was from me. For humans, by humans. The monospace text box is a prompt that you're supposed to paste into one of the chatbots, to see the results.
Y'all going to try this out? Why the downvotes? (Removed a comment about "cowards".)
Political Economy Semmelweis
100%. This is what is happening on the ground in LICs and LMICs.
Here's a more straightforward presentation; hope it helps. https://forum.effectivealtruism.org/posts/PWYQh6uhxKCswrJLy/on-selectorate-theory-and-the-narrowing-window
On Selectorate Theory and the Narrowing Window
What I mean is that it would be super nice to be able to enjoy these human learning techniques, and to have decades of life in which to enjoy them.
But, because of the concerns about human political economy in the footnote, which Will MacAskill mentions super obliquely and quietly in his latest post, I don't think that ASI is going to get the chance to kill off the first 4 billion of humanity. ASI might overrun the globe and finish off the next 4 billion, but we're going to get in the first punch!
Please upload this humble cultivator; this one so totally upvoted your comment!
Can haz futureburger?
Authoritarian rule means you've gambled. You're crossing your fingers and hoping that you get something more on the Singapore side of things, and something less on the Myanmar/North Korea side of things. Mao was better before he got worse.
The only thing worse than authoritarian rule is entrenched, futile feudal conflict: structural feuding whose proxy wars spill over into all the low-income countries, and then the messes start spilling back, if that's what you care about.
The problem with our democracies right now is that they're likely to skip past the possible stable states and zoom straight to the dark ages.
Those are great links, and a key part of the logic behind this point.
I also appreciated your journalistic (judgment-reserved, Wikipedia-NPOV) summary of Peter Thiel's ideas about EA being the literal antichrist. I actually agree with much of his logic behind those ideas... but I feel that his conclusion is quite degenerate.
I think there's such a Western-centered groupthink that "global reconciliatory governance would be so easily corrupted into a scary global totalitarian dystopia"... that we're steering right into a much more real and present conflict-dystopia of a modern dark ages or warring kingdoms.
Political economy & Atrocity risk
bhrdwj's Quick takes
Political economy and atrocity risk.
EA is neglecting the important middle ground between existential risk and public health: Atrocity risk.
We're now observing governance-automation trends driving governments' increasing apathy toward constituents outside of their minimally viable winning coalitions. See "Selectorate Theory". This will continue unless/until we ban thinking machines, like the Landsraad in Dune.
Absent such a ban, the atrocity risk from escalating neo-feudal proxy conflicts is legion.
This is a 3-3 on the ITN.
As long as you're moving things in a good direction, use your judgment. Working at a less safe lab and then whistleblowing could be a path, for instance.
We absolutely should slow AI down at least somewhat, versus the "ai.gov" policy. The challenge is how to coordinate it. My maxed-out agree-vote is not to emphasize total shutdown, but to emphasize the criticality of enough slowdown and good-enough coordination.
Shouldn't we include and recruit the beneficiaries into EA? Why are we constraining EA to be an arcane movement? Are the ideas of EA really untranslatable and unpersuasive to laypersons?