I see the reasoning here for images/video, but I’m not sure it applies to audio—long-form podcasting is a great medium for serious discourse.
On specifically getting them interested: presumably, new-Steve-Jobs doesn’t want to come in and run someone else’s company; they want to start their own. You could pay them a lot of money, but if they really are the next Jobs, the opportunity cost of not starting their own company is extremely high!
A little more to the point, I think Yudkowsky is seriously underestimating the information/coordination costs associated with finding the next Steve Jobs. Maybe one does exist—in fact, I would say I am pretty sure one does—but how do you find them? How do you get them interested? How do you verify they can do what Steve did, without handing them control of a trillion dollar company? How can you convince everyone else that they should trust and support new-Steve’s decisions?
These seem like more significant obstacles than the mere existence of a new Steve.
(I also think the relevance is kind of strained.)
i think Tim Cook is doing a good job
Four layers come to mind for me (rough code sketch below):
1. Have strong theoretical reasons to think your method of creating the system cannot result in something motivated to take dangerous actions
2. Inspect the system thoroughly after creation, before deployment, to make sure it looks as expected and appears incapable of making dangerous decisions
3. Deploy the system in an environment where it is physically incapable of doing anything dangerous
4. Monitor the internals of the system closely during deployment to ensure operation is as expected and that no dangerous actions are attempted
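To make the layering concrete, here is a minimal sketch of how the four checks might compose as sequential gates around deployment. This is my own illustration, not a description of any real system: every name in it is a hypothetical placeholder.

```python
# Hypothetical sketch of the four safety layers as sequential gates.
# All names here are illustrative placeholders, not real libraries.

from dataclasses import dataclass


@dataclass
class SystemUnderTest:
    """Stand-in for the AI system being evaluated."""
    name: str


def has_theoretical_safety_case(system: SystemUnderTest) -> bool:
    # Layer 1: strong theoretical reasons the creation method cannot
    # yield something motivated to take dangerous actions.
    return True  # stubbed for illustration


def passes_pre_deployment_inspection(system: SystemUnderTest) -> bool:
    # Layer 2: inspect the system after creation, before deployment.
    return True  # stubbed for illustration


def run_in_sandbox(system: SystemUnderTest) -> None:
    # Layer 3: deploy only where the system is physically incapable
    # of doing anything dangerous.
    print(f"{system.name}: running inside a restricted sandbox")


def monitor_internals(system: SystemUnderTest) -> None:
    # Layer 4: watch internals during deployment; halt on anomalies.
    print(f"{system.name}: internals nominal, no dangerous actions attempted")


def gated_deployment(system: SystemUnderTest) -> None:
    # Each layer must pass before the next is reached; any failure
    # stops deployment entirely.
    if not has_theoretical_safety_case(system):
        raise RuntimeError("Layer 1 failed: no theoretical safety case")
    if not passes_pre_deployment_inspection(system):
        raise RuntimeError("Layer 2 failed: inspection found anomalies")
    run_in_sandbox(system)     # Layer 3
    monitor_internals(system)  # Layer 4


gated_deployment(SystemUnderTest("example-model"))
```

The point of this structure is defense in depth: the layers are independent, so a failure of any one check (a flawed theory, a missed anomaly in inspection) still leaves the later gates standing.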
Cool!!
You know, my take on this is that instead of resisting comparisons to Terminator and The Matrix, they should just be embraced (mostly). “Yeah, like that! We’re trying to prevent those things from happening. More or less.”
The thing is, when you’re talking about something that sounds kind of far out, you can take one of two approaches: you can try to engineer the concept and your language around it so that it sounds more normal/ordinary, or you can just embrace the fact that it is kind of crazy, and use language that makes it clear you understand that perception.
So like, “AI Apocalypse Prevention”?
I like this! UI suggestion: instead of stating "The first option is 5x as valuable as the second option" separately, I would place the comparison between the two options, in the middle: "...is 5x as valuable as...". Or, if you're willing to mess up marginal/total utility, you could format it as "One [X] is worth as much as five [Y]", which I think would make it more concrete to most people.
Clickhole is in fact no longer owned by The Onion! It was bought by the Cards Against Humanity team in early 2020. (link)
I also consider their famous article "Heartbreaking: The Worst Person You Know Just Made A Great Point" an enormous contribution to the epistemic habits of the internet.
hi, i’m skluug! i’ve been consuming EA-sphere content for a long time and have some friends who are heavily involved, but i so far haven’t had much formal engagement myself. i graduated college this year and have given a few thousand dollars to the AMF (i signed up for the GWWC Pledge back in college and enjoy finally making good on it!). i’m interested in upping my engagement with the community and hopefully working towards a career with direct impact per 80k recommendations (i’m a religious 80k podcast listener).
It can seem strange to focus on the wellbeing of future people who don’t even exist yet, when there is plenty of suffering that could be alleviated today. Shouldn’t we aid the people who need help now and let future generations worry about themselves?
We can see the problems with near-sighted moral concern if we imagine that past generations had felt similarly. If prior generations hadn’t cared for the future of their world, we might today find ourselves without many of the innovations we take for granted, suffering from far worse degradation of the environment, or even devastated by nuclear war. If we always prioritize the present, we risk falling into a trap of recurring moral procrastination, where each successive generation struggles against problems that could have been addressed much more effectively by the generations before.
This is not to say there are no practical reasons why it might be better to help people today. We know much more about what today’s problems are, and people in the future may have much better technology that makes fixing their own problems much easier. But acknowledging these practical considerations needn’t lead us to believe that helping future people is inherently less worthwhile than helping the people of the present. Just as impartial moral concern leads us to weigh the lives of individuals equally regardless of race or nationality, so too should we place everyone on equal footing regardless of when they exist in time.