Views expressed here do not represent the views of any organizations I am affiliated with, unless mentioned otherwise.
Prabhat Soni
I think this excerpt from the 80,000 Hours podcast episode “Ben Todd on the core of effective altruism” sort of answers your question:
Ben Todd: Well yeah, just quickly on the definition, my definition didn’t have “Using evidence and reason” actually as part of the fundamental definition. I’m just saying we should seek the best ways of helping others through whatever means are best to find those things. And obviously, I’m pretty keen on using evidence and reason, but I wouldn’t foreground it.
Arden Koehler: If it turns out that we should consult a crystal ball in order to find out if that’s the best way, then we should do that?
Ben Todd: Yeah.
Arden Koehler: Okay. Yeah. So again, very abstract: whatever it is that turns out to be the best way of figuring out how to do the most good.
Ben Todd: Yeah. I mean, in general, you have this just big question of how narrow or broad to make the definition of effective altruism and it is a difficult thing to say.
I don’t think this is an “official definition” (for example, one endorsed by CEA), but I think (or at least hope!) that CEA is working out a more complete definition of EA.
Task Y candidate: Fellowship facilitator for EA Virtual Programs
EA Virtual Programs runs intro fellowships, in-depth fellowships, and The Precipice reading groups (plus occasional other programs). The time commitment for facilitators is generally 2-5 hours per week (depending on the particular program).
EA intro fellowships (and similar programs) have been successful at minting engaged EAs. Even accepting applicants with not-so-strong applications brings only modestly diminished returns, since the application process does not predict future engagement well (see this and this). Thus, if a fellowship/reading group has to reject people, that’s significant value lost. Rejected applicants generally re-apply at low rates (despite being encouraged to!).
Uncertainties:
Is EA Virtual Programs short on facilitators? I don’t know. The answer to this question would presumably change post-COVID (IMO the answer could shift in either direction), and so in the interest of future-proofing this answer, I will not bother to find the current demand for facilitators.
Will EA Virtual Programs exist post-COVID? An organizer at EA Virtual Programs informally said that nothing concrete has been decided yet, but that the project was probably leaning towards continuing in some capacity. It is not clear to me whether there will even be significantly fewer applicants post-COVID (since most(?) university groups are running their fellowships independently right now).
I know of at least a few non-student working professionals who are facilitators for EA Virtual Programs, which I take as evidence that this can be a Task Y.
Thanks for explaining your views further! This seems about right to me, and I think this is an interesting direction that should be explored further.
I think rationality should not be considered a separate cause area, but it perhaps deserves to be a sub-cause area of EA movement building and AI safety.
It seems very unlikely that promoting rationality (and hoping some of those folks would be attracted to EA) is more effective than promoting EA in the first place.
I am unsure whether it is more effective to grow the number of people interested in AI safety by promoting rationality or by directly reaching out to AI researchers (or by other things one might do to grow the AI safety community).
Also, the post title is misleading, since it could be interpreted as claiming that making people more rational is intrinsically valuable (or that increased rationality would make them live happier lives). While this is likely true, it would probably be an ineffective intervention.
Strong upvote. This post caused me to deprioritize longtermism and shift my focus to presently alive beings.
The hyperlink is incorrect :P
Do you have a preference on whether to contact you or contact JP Addison (the programmer of the EA Forum) for technical bugs?
What is the minimum threshold of expected attendees required for GWWC/OFTW to be interested in collaborating?
What, if anything, changes in this mechanism/strategy post-COVID?
I was looking for books on rationality. My top 4 shortlist was:
Rationality: From AI to Zombies by Eliezer Yudkowsky
Predictably Irrational by Dan Ariely
Decisive by Chip Heath and Dan Heath (This covers a lot of concepts EAs are familiar with such as confirmation bias and overconfidence, so I didn’t feel it would add much to my knowledge base)
Thinking, Fast and Slow by Daniel Kahneman (More focused on cognitive biases than on rationality in general.)
I ended up going with Rationality: From AI to Zombies.
Hey, I know this post is very old, but in case someone stumbles across it, the best presentation for introducing EA in my opinion is:
this presentation by Ajeya Cotra, or a slightly modified (and IMO better) set of slides by Kuhan Jeyapragasan.
Yep, that’s what comes to my mind at least :P
Apparently existential risk does not have its own Wikipedia article.
Some related concepts like human extinction, global catastrophic risks, existential risk from AGI, and biotechnology risk do have their own Wikipedia articles. On closer inspection, hyperlinks for “existential risk” on Wikipedia redirect to the global catastrophic risk Wiki page. A lot of Wiki articles have started using the term “existential risk”. Should there be a separate article for existential risk?
Another awesome (and low-effort for organizers) way to socialise is the EA Fellowship Weekend (which probably didn’t exist when Kuhan wrote this post).
BTW Jessica, the $75K figure from Kahneman’s paper that you mentioned is from 2010. After adjusting for inflation, that’s ~$90K in 2021 dollars (the exact number depends on the inflation calculator you use).
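As a rough sanity check of that adjustment (using approximate US CPI-U values of about 218 for 2010 and about 262 for early 2021, which are my own approximations and not from the paper): $75,000 × (262 / 218) ≈ $75,000 × 1.20 ≈ $90,000. Using a full-year 2021 CPI figure would push the result a few thousand dollars higher, hence the dependence on the calculator used.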
Socrates’ case against democracy
https://bigthink.com/scotty-hendricks/why-socrates-hated-democracy-and-what-we-can-do-about-it
Socrates makes the following argument:
Just as we only allow skilled pilots to fly airplanes, licensed doctors to operate on patients, and trained firefighters to use fire engines, we should only allow informed voters to vote in elections.
“The best argument against democracy is a five minute conversation with the average voter”. Half of American adults don’t know that each state gets two senators and two thirds don’t know what the FDA does.
(Whether a voter is informed can be evaluated by a short test on the basics of elections, for example.)
Pros: better quality of candidates elected; would give uninformed voters a strong incentive to learn about elections.
Cons: would be crazy unpopular; possibility of the small group of informed voters acting in self-interest, which would worsen inequality.
(I did a shallow search and couldn’t find something like this on the EA Forum or Center for Election Science.)
A cause candidate suggestion: atomically precise manufacturing / molecular nanotechnology. Relevant EA Forum posts on this topic:
Sorry, you’re right; the link I provided earlier isn’t very relevant (it was the only EA Forum article on WBE I could find). I was thinking of something along the lines of what Hanson wrote, especially the economic and legal issues (this and the last 3 paragraphs of this; there are other issues raised in the same Wiki article as well). Also, in Superintelligence, Ch. 2, Bostrom raised significant concerns that if WBE were the path to the first AGI, there would be a significant risk of unfriendly AGI being created (see the last set of bullet points in this).
Yeah, I agree. I don’t have anything in mind as such. I think only Ben can answer this :P