I think this is one of the things that distinguishes EAs and rationalists from randomly selected smart people. I like to say that EAs have a taste for biting bullets.
Randomized, Controlled
A Keynesian/Hayekian model of community building
Areas in the US Election that *might* be higher leverage to work on
Thanks for this report! I 100% agree with Ben Stewart that this is really, really cool. One minor gripe, though: I do wish this had been edited for clarity of language. Even by EA Forum standards, the prose here is about as twisty as a pissed-off octopus’s tentacles.
I am the symbiotic sentient lichen responsible for https://worldbuild.ai/W-0000000335/.
Please DM if you’d like to discuss the possibility of having one of my moieties colonize your lungs or other moist crevices.
Inspired by this, I just created the Effective Altruism Toronto meetup. I’m already in touch with the organizers of LW Toronto. Please spread the word to anyone who might be in the GTA!
Hi Seth. I’m just finishing up work and am going to dump a bunch of questions here, then run home. Sorry for the firehose, and thank you for your time and work!
If I wanted to work at GCRI or a similar think-tank/institution, what skills would make me most valuable?
What are your suggestions for someone who’s technically inclined and interested in directly working on existential risk issues?
I’m particularly worried about the risks of totalitarianism, potentially leading to what, IIRC, Bostrom calls a ‘whimper’: a generally shitty future in which most people don’t have a chance to achieve their potential. To me this seems at least as likely as AI risk. What are your thoughts?
Over the twentieth century we sort of systematically deconstructed a lot of our grand narratives, like ‘progress’. Throwing out the narratives that supported colonialism was probably a net win, but it seems like we’re now at a point where we really need some new stories for thinking about the dangerous place we are in, and the actions that we might need to take. Do you have any thoughts on narratives as a tool for dealing with x-risks?
How can we make our societies generally resilient to threats? Once we have some idea of how to make ourselves more resilient, how can we enact these ideas?
I think that a really robust space program could be very important for x-risk mitigation. What are your thoughts? Do you see space-policy advocacy as an x-risk related activity?
+1 this. Hate FB. EA is the only reason I semi-regularly think about returning.
Introducing Canada’s first political advocacy group on AI Safety and Technological Unemployment
Oh, also:
I was confused by references to amputation until I understood that amputated tentacles can act autonomously for some amount of time. A brief, direct description of this would be useful.
Your 0.025 and 0.035 are extremely specific; it would be interesting to get a brief description of how you ended up with those numbers without having to delve into the full report.
Does EA need [a] reputation system[s]?
Reputation systems are typically used by on-line platforms to help enable higher levels of trust between users.
1) My sense is that within EA there is a norm that we Do Favors For Each Other; i.e., EAs often seem to have the subgoal ‘try to help other EAs, within reason’. This is both correct and lovely.
2) This norm may come under significant pressure as the community continues to scale. Will it be sustainable when the community has grown 10x? 100x? 1000x?
If both of these propositions are correct, then an EA reputation system may be worth thinking about. EA presents some interesting challenges as a big-tent social movement, spread across many different on- and off-line platforms. Some initial ideas of what a reputational system could look like:
Yet Another Webpage: eahub.org already supports profile pages for EAs, with links to FB, this forum, LessWrong, etc. If most EAs have a page on eahub, with up-to-date links to their other on-line personas, maybe that’s enough?
A Score: something like karma, or the reputation systems Reddit/Stack Exchange use, but able to deal with the multi-platform nature of EA. There are significant technical and social challenges with scoring systems even when they run on a single platform.
A web of trust: something like the PGP web-of-trust, where EAs could essentially vouch for each other.
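To make the web-of-trust idea concrete, here is a minimal illustrative sketch (the names, the hop limit, and the "trusted = reachable within a few vouches" rule are all hypothetical choices for illustration, not a worked-out proposal): vouches form a directed graph, and one person trusts another if a short chain of vouches connects them.

```python
from collections import deque

def trusted(vouches, me, target, max_hops=3):
    """Breadth-first search over vouch edges: `target` counts as trusted
    by `me` if reachable within `max_hops` directed vouches."""
    frontier = deque([(me, 0)])
    seen = {me}
    while frontier:
        person, hops = frontier.popleft()
        if person == target:
            return True
        if hops == max_hops:
            continue  # don't extend chains beyond the hop limit
        for nxt in vouches.get(person, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return False

# Hypothetical vouch graph: alice vouches for bob, bob vouches for carol.
vouches = {"alice": ["bob"], "bob": ["carol"]}
print(trusted(vouches, "alice", "carol"))  # True: a two-hop chain of vouches
print(trusted(vouches, "carol", "alice"))  # False: vouches are directed
```

Even this toy version surfaces the real design questions: how far trust should propagate, whether vouches should decay or be revocable, and how to stitch identities together across platforms.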
Normally I would not double-post an item, but I’d like to increase the chance people see this, and I don’t know if it warrants a front-page posting.
EA Toronto
I just created the Effective Altruism Toronto meetup. I’m already in touch with the organizers of LW Toronto. My goal is to reach a monthly meeting tempo over the next 3–6 months with a small core of regulars, and then reach a twice-monthly tempo.
Please spread the word to anyone who might be in the GTA!
Promotion help
If anybody has suggestions about how to best promote/spread the word, that would be super-great. I’m one of those tin-foil hat Facebook holdouts, but I’m willing to blow the dust off my account to do some promo for this. Pointers to FB groups/highly-connected-individuals/whatever, as well as non-FB related ideas would be really appreciated.
Has the date for the 2015 EA Summit been set yet?
Ugh… something smells fishy here… : ) The numbers seem completely outlandish. 1–10 billion for recreational fishing in the US? There are, what, 300–500 million people total in the US, I believe? Even assuming 10% are into fishing, would they consume 1 billion bait fish?
I’m extremely skeptical of this and strongly inclined to bet against this info being accurate. I’m currently considering exactly what I’d be willing to put money down against. My intuition is that these figures might be off by a factor of ten or more.
Nope, the Moon has none of the resources required for sustaining a spacefaring civilization, except sunlight and water.
Well, this might be a bit of an overstatement: we don’t really have a good idea of what’s up there. There is good evidence for titanium, and there may be platinum-group metals up there as well. Who knows what else?
The Moon, Mars, or colonies inside hollowed-out asteroids certainly don’t make sense as x-risk mitigation in the near or medium term, but at some point they’re going to be necessary.
I’ve been thinking lately that nuclear non-proliferation is probably a more pressing x-risk than AI at the moment and for the near term. We have nuclear weapons and the American/Russian situation has been slowly deteriorating for years. We are (likely) decades away from needing to solve AI race global coordination problems.
I am not asserting that AI coordination isn’t critically important. I am asserting that if we nuke ourselves first, it probably won’t matter.
For those who have down-voted or disagreed: I’m happy to hear (and potentially engage with) substantive counterarguments. But I don’t think the Forum is a good place for posturing, which the original post sometimes descends into.
How come this was only posted with five days notice?
Oh, that’s interesting. Did you folks come up with that methodology?
Thank you for the snippets.
EAG was, by the end, very emotional for me. I found some of my personal failures being juxtaposed with some of my civilization’s failings. I was put in very direct touch with the yearning at my core. I talked with people who I like and respect and feel wary around. Some of them are spooked and worried about the shape of things to come. I felt my own anxieties about my place in the world and my value rear up. It was fun and challenging and exhausting.