Definitely!!!! A lot of journalists seem to cover topics they don’t really understand (mainstream media coverage of things like nuclear power or cryptocurrency can be particularly painful), so it was awesome to read something written by a person who gets the basic philosophy.
I think this is a really comprehensive report on this space! Nothing against the report itself — you did a great job. As somebody who has spent the last ~10 years studying neuroscience, I’m pretty cynical about current brain imaging/BCI methods, and I plan to pivot out of neuro into higher-impact fields once I graduate. I just wanted to add my 2 cents as somebody who has done EEG, CT, MRI, fMRI, TMS, and tDCS research (in addition to being pretty familiar with MEG and fNIRS):
+ I don’t think getting high-quality structural images of the brain is useful from an EA perspective, though it has substantial medical benefits for the people who need brain scans and can afford them. This just doesn’t strike me as one of the most effective cause areas, in the same way a cure for Huntington’s disease would be a wonderful thing but might not qualify as a top EA cause area.
+ I don’t think measuring brain activity via EEG or fMRI has yet produced results I would consider worth funding from an EA perspective. Again, I’m not saying some results aren’t useful (I’m especially impressed with how EEG helped us understand sleep). But I don’t think any of this research is substantially relevant to preventing civilizational or existential risks.
+ I don’t think our current brain stimulation methods (e.g., TMS, tDCS) have any EA relevance. The stimulation provided by these procedures (in healthy subjects) just doesn’t seem to have large cognitive effects compared to more robust interventions (education, diet, exercise, sleep, etc.). Brain stimulation might have much bigger impacts for chronically depressed patients and Parkinson’s patients via DBS. But again, I don’t think this is relevant to civilizational or existential risks, and there are probably much more cost-effective ways of improving welfare.

There may still be useful neurotechnology research to be done. But I think the highest impact will be in computational/algorithmic work rather than things that directly probe the human brain.
I thought this was a surprisingly good article! Many journalists get unreasonably snarky about EA topics (e.g., insinuate that people who work in technology are out of touch awkward nerds who could never improve the world; suggest EA is cult-like; make fun of people for caring about literally anything besides climate change and poverty). This journalist took EA ideas seriously, talked about the personal psychological impact of being an EA, and correctly (imo) portrayed the ideas and mindsets of a bunch of central people in the EA movement.
Voted, it was surprisingly painless. Fingers crossed for Will, although he was buried in the middle of the pack of names due to unfortunate lack of alphabetical prominence. New cause area: renaming our thought leaders Aaron Aaronson.
Spicy takes, but I think these are good points people should consider! I’m also doing a PhD in Cognitive Neuroscience, and I would strongly agree with your footnote that:
“Final note: cellular/molecular neuroscience, circuit-level neuroscience, cognitive neuroscience, and computational neuroscience are some of the divisions within neuroscience, and the skills in each of these subfields have different levels of applicability to AI. My main point is I don’t think any of these without an AI / computational background will help you contribute much to AI safety, though I expect that most computational neuroscientists and a good subset of cognitive neuroscientists will indeed have AI-relevant computational backgrounds.”

A bunch of people in my program have gone into research at DeepMind. But these were all people who specifically focused on ML and algorithm development in their research. There’s a wide swath of cognitive neuroscience, and of the other neuro sub-disciplines you list, where you can avoid serious ML research. I’ve spoken to about a dozen EA neuroscientists who didn’t focus on ML and have become pretty pessimistic about how useful their research is to AI development/alignment. This is a bummer for EAs who want to use their PhDs to help with AI safety. So please take this into consideration if you’re an early-stage student weighing different career paths!
This is cool; I often think about how much better the UK system is than the US when it comes to educating doctors. My biggest quibble with your post is: “I assume the odds of a successful campaign are 50%.” I would revise that down to maybe 5%. Professional organizations like the American Medical Association have their professions in a stranglehold; they have financial incentives to keep their profession difficult to access (e.g., it allows them to demand higher wages), and they can easily manipulate the public by saying things like “Don’t you want a FULLY trained doctor? Not somebody who skipped undergraduate and went straight to medical school?”

A substantially more skeptical campaign success probability obviously lowers the expected ROI of this effort. But I wonder if other people who know more about politics are as skeptical as me. All that being said, I would vote for your campaign if it came up on my state’s ballot!
Thanks! I was just curious, didn’t expect a super in depth analysis. Although that would be super cool to see too :)
Cool report! Thanks for sharing.
Was there anything in the report that you or the Happier Lives Institute were particularly surprised by?
This is cool! Thanks for compiling. I really love Focusmate, glad to see it included.
I wonder if it would be possible to allow people to vote for different recommendations so you could sort by # of endorsements? Just as a quick way to see which tools have been useful to the most people.
Great talk, and thanks for including the slides and the transcript!
Which directions in global priorities research seem most promising?
Has Andreas ever tried communicating deep philosophical research to politicians/CEOs/powerful non-academics? If so, how did they react to ideas like deontic long-termism? Does he think any of them made a big behavior change after hearing about these kinds of ideas?
I’m a little surprised by your perspective. My impression is that Open Phil, EA Infrastructure, FTX, Future Flourishing, etc. are all eagerly funding AI safety work. Who else are you imagining funding this space who isn’t already? Also, a bunch of EA community organizers are pushing AI risks substantially harder as a cause area now than they did 5 years ago (e.g., 80k, many university groups).

If you’re worried about short timelines, shouldn’t the push be to transition people from meta work on community building to object-level work directly on alignment?

Thanks for sharing your thoughts! Let me know if I misunderstood something.
I’m not optimistic about this. I do a version of street outreach every year at Princeton’s club fairs, where I pitch EA to young, smart, analytical people, aka a pretty EA-friendly demographic. Our conversion rate of outreach/pitches to big life changes is TINY. I would believe TLYCS’s numbers if they think online advertising is more cost-effective than in-person outreach.
NYC needs a bigger EA community! Super happy you are working on this :)

Is there any chance you could share more information about the coworking space and community center?
This is a great list of scientists! Thanks for compiling :)

I would also add Tom Griffiths to this list.
Sam Gershman, who you mention, actually just published a really accessible book, What Makes Us Smart, which I recommend to people new to the field of human intelligence. He’s probably one of the most productive and brilliant scientists in this field. https://press.princeton.edu/books/paperback/9780691205717/what-makes-us-smart

For people who are interested in the intersection of AI/psychology, I strongly encourage you to focus on high-level/computational questions and stay away from very low-level biological/chemical neuroscience questions. The EA neuroscientists I’ve spoken to are all pretty disillusioned about the progress we can make via brain-computer interfaces or low-level cellular research. But people are excited about computational models of human cognition!
Thanks for writing this! I appreciate your points about how EA grantmakers are 1. part-time, 2. extremely busy, and 3. should spend more time getting grants out the door instead of writing feedback. I hope nobody has interpreted your lack of feedback as a personal affront! It just seems like the correct way to allocate your (and other grantmakers’) time. I think the EA community as a whole is biased too far towards spending resources on transparency at the expense of actually doing ~the thing~. Hopefully this post makes some people update!
Really cool survey, and great write-up of the results! I especially liked the multilevel regression and post-stratification method of estimating distributions. Peter Singer seems to be higher profile than the other EAs on your list. How much of this do you think is from popular media, like The Good Place, versus from just being around for longer? Peter Singer is also well known because of his controversial disability/abortion views. I wonder if people who indicated they had only heard of Peter Singer (as opposed to only hearing of MacAskill, Ord, Alexander, etc.) scored lower on ratings of understanding EA? I’ve had conversations with people who refused to engage with the EA community because we were “led by a eugenicist”, but that’s clearly not what EA believes in.

Also kinda sad EA is being absolutely crushed by taffeta.
Great question! We need more research ;)
Sounds really cool! Would love to hear more when you’re ready :)
This is such cool research! Thanks to everybody who contributed :)

I’ve found that the majority of EA university club members drift out of the EA community and into fairly low-impact careers. These people presumably agree with all the basic EA premises, and many of them have done in-depth EA fellowships, so they aren’t just agreeing to ideas in a quick survey due to demand characteristics, acquiescence bias, etc. Yet exposure to, and agreement with, EA philosophy doesn’t seem sufficient to convince people to actually make high-impact career choices. I would say the conversion rate is shockingly low. Maybe CEA has more information on this, but I would be surprised if more than 5% of people who do introductory EA fellowships make a high-impact career change. So I would be super excited to see more research into your first future direction: “Beyond agreement with basic EA principles, what other (e.g., motivational or cognitive) predictors are essential to becoming more engaged and making valuable contributions?”