cool ty
Hamish McDoodles
re: the biosecurity map
did you realise that the AIS map is just pulling all the coordinates, descriptions, etc. from a Google Sheet?
if you've already got a list of orgs and stuff, it's not hard to turn it into a map like the AIS one by copying the code, drawing a new background, and swapping out the URL of the spreadsheet
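Something in this spirit is all it takes. This is a rough sketch, not the actual AIS map code; the sheet ID, column names, and background file below are made up for illustration:

```python
# Sketch: pull rows from a published Google Sheet's CSV export and
# scatter them over a hand-drawn background image.
# YOUR_SHEET_ID, the column names, and background.png are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

SHEET_CSV_URL = (
    "https://docs.google.com/spreadsheets/d/YOUR_SHEET_ID/export?format=csv"
)

df = pd.read_csv(SHEET_CSV_URL)  # assumes columns like: name, x, y, description

fig, ax = plt.subplots(figsize=(12, 8))
ax.imshow(plt.imread("background.png"), extent=[0, 100, 0, 100])  # your new drawing
ax.scatter(df["x"], df["y"])
for _, row in df.iterrows():
    ax.annotate(row["name"], (row["x"], row["y"]), fontsize=8)
ax.set_axis_off()
plt.savefig("map.png", dpi=200)
```

Then whenever someone edits the spreadsheet, regenerating the map picks up the changes automatically.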
oh this is a cool and useful resource
ty for the mention
GPT-powered EA/LW weekly summary
could be could be
This is what I meant, yeah.
There’s also an issue of “low probability” meaning fundamentally different things in the case of AI doom vs supervolcanoes.
P(supervolcano doom) > 0 is a frequentist statement. “We know from past observations that supervolcano doom happens with some (low) frequency.” This is a fact about the territory.
P(AI doom) > 0 is a Bayesian statement. “Given our current state of knowledge, it’s possible we live in a world where AI doom happens.” This is a fact about our map. Maybe some proportion of technological civilisations do in fact get exterminated by AI. But maybe we’re just confused and there’s no way this could ever actually happen.
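To make the contrast concrete, here's a toy calculation with made-up numbers: the supervolcano figure is just an observed rate read off the geological record, while the AI figure has to mix over whether doom is even a coherent possibility in our world.

```python
# Frequentist-flavoured: a rate estimated from the territory (numbers invented).
supereruptions_per_million_years = 1.5
p_supervolcano_doom_this_century = supereruptions_per_million_years / 10_000

# Bayesian-flavoured: credence that AI doom is even possible, times credence
# in doom conditional on that. Both factors are facts about our map.
p_doom_is_possible = 0.5      # maybe we're just confused and it can't happen
p_doom_given_possible = 0.1
p_ai_doom = p_doom_is_possible * p_doom_given_possible
```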
I have a master’s degree in machine learning and I’ve been thinking a lot about this for like 6 years, and here’s how it looks to me:
AI is playing out in a totally different way to the doomy scenarios Bostrom and Yudkowsky warned about
AI doomers tend to hang out together and reinforce each other’s extreme views
I think rationalists and EAs can easily have their whole lives nerd-sniped by plausible but ultimately specious ideas
I don’t expect any radical discontinuities in the near-term future. The world will broadly continue as normal, only faster.
Some problems will get worse as they get faster. Some good things will get better as they get faster. Some things will get weirder in a way where it’s not clear if they’re better or worse.
Some bad stuff will probably happen. Bad stuff has always happened. So it goes.
It’s plausible humans will go extinct from AI. It’s also plausible humans will go extinct from supervolcanoes. So it goes.
I’m paralysed by the thought that I really can’t do anything about it.
IMO, a lot of people in the AI safety world are making a lot of preventable mistakes, and there’s a lot of value in making the scene more legible. If you’re a content writer, then honestly trying to understand what’s going on and communicating your evolving understanding is actually pretty valuable. Just write more posts like this.
What’s the theory of change?
For clarity, being 2x better than cash transfers would still give it good reason to be on GWWC’s top charity list, right? Since GiveDirectly is?
I think GiveDirectly gets special privilege because “just give the money to the poorest people” is such a safe bet for how to spend money altruistically.
Like if a billionaire wanted to spend a million dollars making your life better, they could either:
just give you the million dollars directly, or
spend the money on something that they personally think would be best for you
You’d want them to set a pretty high bar of “I have high confidence that the thing I chose to spend the money on will be much better than whatever you would spend the money on yourself.”
EffectiveAltruismData.com is now a spreadsheet
You’re welcome!
I’ve added the communities page to the main text.
ty Lara!
AISafety.world is a map of the AIS ecosystem
Cool, yep. This checks out.
Thanks!
PS, I’m asking because I’m working on a “history of EA” video. The idea is that this is a nice narrative-driven way to explain the EA memeplex.
[Question] Why isn’t there a jump in funding when OP was founded?
I gather that you think it’s an issue worth correcting? Feel free to suggest a more correct phrasing for Semafor and I’ll pass it on.
If you interpret “the Effective Altruism Forum” as a metonym for “the people who use the forum”, then it is true (like how you can say “Twitter is going nuts over this”).
It’s weird, but I don’t see any reason to make a fuss about it.
Semafor has corrected the article.
I’ve identified the source of the problem and fixed it, thanks!