I (and my family) are bootstrapping and operating “Everything is Learning”, a Kenya-based program to implement and mainstream “pre-speech reading and numeracy”, starting in local daycares.
Website: EL.africa
Email: steven@EL.africa
Here’s a more straightforward presentation; hope it helps. https://forum.effectivealtruism.org/posts/PWYQh6uhxKCswrJLy/on-selectorate-theory-and-the-narrowing-window
What I mean is that it would be super nice to be able to enjoy these human learning techniques, and to have decades of life in which to enjoy them.
But because of the concerns about human political economy in the footnote, which Will MacAskill mentions super obliquely and quietly in his latest post, I don’t think that ASI is going to get the chance to kill off the first 4 billion of humanity. ASI might overrun the globe and finish off the next 4 billion, but we’re going to get in the first punch 👊!
Please upload this humble cultivator; this one so totally upvoted your comment! 🙇‍♂️😅
Can haz futureburger?
Authoritarian rule means you’ve gambled. You’re crossing your fingers and hoping that you get something more on the Singapore side of things, and something less on the Myanmar/North Korea side of things. Mao was better before he got worse.
The only thing worse than authoritarian rule is entrenched, futile feudal conflict: structural feuding, with proxy wars spilling over into all the low-income countries and the messes then spilling back, if that’s what you care about.
The problem with our democracies right now is that they’re likely to skip past the possible stable states and zoom straight to the dark ages.
Those are great links, and a key part of the logic behind this point. 👍
I also appreciated your journalistic (judgement-reserved, Wikipedia-NPOV) summary of Peter Thiel’s ideas about EA being the literal antichrist. I actually agree with much of his logic behind those ideas… but I feel that his conclusion is quite degenerate.
I think there’s such a Western-centered groupthink that “global reconciliatory governance would be so easily corrupted into a scary global totalitarian dystopia”… that we’re steering right into a much more real and present conflict-dystopia: a modern dark age of warring kingdoms.
Political economy & Atrocity risk
bhrdwj’s Quick takes
Political economy and atrocity risk.
EA is neglecting the important middle ground between existential risk and public health: Atrocity risk.
We’re now observing governance-automation trends driving governments’ increasing apathy toward constituents outside their minimally viable winning coalitions. See “Selectorate Theory”. This will continue unless/until we ban thinking machines, like the Landsraad in Dune.
Absent such a ban, the atrocity risk from escalating neo-feudal proxy conflicts is legion.
This is a 3⁄3 on the ITN framework (important, tractable, neglected).
As long as you’re moving things in a good direction, use your judgement. Working at a less-safe lab and then whistleblowing could be a path, for instance.
We absolutely should slow AI down at least some, versus the “ai.gov” policy. The challenge is how to coordinate it. My maxed-out agree-vote is not to emphasize total shutdown, but to emphasize the criticality of enough slowdown and good-enough coordination.
Hi Sorin, congrats on your project and on relocating to Kampala, it sounds like!
My project is based in Nairobi, and I spent the last ~3 years there about 70% of the time. I may go through a more-remote phase, locating myself back here in the USA for a year. You’re welcome to contact me directly at steven@[website]. I’m running a project in Kenya on early child development; check out our website, EL.africa.
Sounds like you may be extending EA ideas and invitations-to-inclusion to a more economic-median, grassroots kind of community in Uganda? Kudos for that! Surely EA is lacking this kind of grassroots outreach in its chapters worldwide! 💯😃
Let me amend that. Personally, I would have no problem with an AI having its own forum account. But then it would also have to stand on its own merits of conciseness, relevance, etc., and earn its own upvotes.
“For Humans By Humans” is a 💯 appropriate rule of thumb for posting, I agree.
My comment was FHBH, ofc; I wouldn’t be so hypocritical as to post #3 and then violate it in the same moment! 🙏
I see the reputational danger! As soon as someone sees a speaker has mixed generated text into their speech once, the speaker may be marked as “sus” evermore...
Thanks for the response!
I take your points well. Let me see if I can extrapolate from your enigmatic criticism in more depth:
I should have kept the introduction more to-the-point, especially given the point is probably not a consensus one.
Any “poll of AIs” methodology needs to be consistently accompanied by thorough red-teaming before it can be considered reliable.
Another concern about posts heavy on generative AI is the danger of frivolous “cheap talk”. If I’m going to survey AIs, maybe I should relegate all of their generated text to reference-linked PDFs, and keep the main post text carefully FHBH (for humans, by humans)!
I agree with these points, and I will address all of them in an upgraded re-try soon.
(Recovering from Rejection https://forum.effectivealtruism.org/posts/NDRBZNc2sBy5MC8Fw/recovering-from-rejection)
Wow, I’m getting downvoted! 🎉 Care to explain, please?! 🙏
USA/China Reconciliation a Necessity Because of AI/Tech Acceleration
I think there’s an intersection between the PauseAI kind of stuff, and a great-powers reconciliation movement.
Most of my scenario-forecast likelihood-mass for scenarios featuring near-term mass-death exists at this intersection of great-power cold wars, proxy wars in the Global South, AI brinkmanship, and asymmetrical biowarfare.
Maybe combining PauseAI with a 🇺🇸/🇨🇳 reconciliation-and-collaboration movement would be a more credible orientation.
Moral alignment of AIs is great. But we need moral alignment of all intelligences: humans, literal whales, and AIs. Confusion, trauma, misalignment, and/or extinction of some intelligences by others negatively affects the whole Jungian system.
We urgently need great-power alignment, and prevention of the coming escalating proxy warfare. “AI-driven urgency for great-power reconciliation” actually ticks all the ITN framework boxes, IMHO.
100%. This is what is happening on the ground in LICs and LMICs.