Similarly: what domains do you wish someone would take a deep dive into and write about their learnings?
A few months ago, I was chatting with Richard Ngo when we concluded that perhaps more EAs should learn things that no one else in the EA community is learning. In other words, it would be good if EAs’ knowledge were more uncorrelated.
I then asked him this question, and one of his answers, corporate governance, made me consider leaning into curiosity about it, with the aim of writing a post about my learnings and/or mentioning my findings to him.
And so I figured I’d ask everyone—perhaps someone will look into it for you and give you some answers.
I’d be excited if there’s an EA who’s an expert in industrial and organizational psychology, especially the psychometrics side. To the extent that the research and methodology are valid, this has pretty clear applicability to community-building efforts, especially if we’re focused on attracting, selecting for, and deploying exceptional talent.
It may also have a bunch of value in running organizations well.
TikTok (like, seriously. Lots of young people there care about the world ending, and it doesn’t seem like it should be that complicated, right?)
A communications specialist who could think of 100 different maximally-effective ways to explain AI safety to a general audience, many of them in less than two minutes and some in a couple sentences.
I’m about to start as Head of Communications at CEA, and think this would be a very useful brainstorming exercise — thanks for the suggestion!
Have you seen the results of the AI Safety Arguments contest? It’s the best resource I know of (and more time-efficient than most), although it would be great if someone could set up an even more time-efficient, general-purpose persuasion-assistance rhetoric resource.
I had not seen this, thanks for sharing!
Wow! If you’d like to share drafts of things like that in a place that I could see them, I’m interested!
I believe I could do this. My background is just writing, argument, and constitution of community, I guess.
An idea that was floated recently was an interactive site that asks the user a few questions about themselves and their worldview, then tailors an introduction to them.
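As a very rough sketch of what that targeting logic might look like (every question, answer option, and intro snippet here is a hypothetical placeholder, not part of the original proposal):

```python
# Minimal sketch of the intro-targeting logic for the hypothetical site.
# All questions, answer options, and intro texts below are illustrative
# placeholders, not content from any real EA introduction.

INTROS = {
    "global_health": "EA began with a simple question: which charities do the most good per dollar?",
    "animal_welfare": "EA asks how we can reduce the most suffering, including that of animals on farms.",
    "existential_risk": "EA asks how we can protect humanity's long-term future from catastrophic risks.",
}

QUESTION = (
    "Which of these concerns resonates with you most?\n"
    "  1) Poverty and preventable disease\n"
    "  2) How animals are treated\n"
    "  3) Risks that could derail humanity's future\n"
    "> "
)

def pick_intro(answer: str) -> str:
    """Map the user's answer to the introduction most likely to resonate."""
    mapping = {"1": "global_health", "2": "animal_welfare", "3": "existential_risk"}
    return INTROS[mapping.get(answer.strip(), "global_health")]

if __name__ == "__main__":
    print(pick_intro(input(QUESTION)))
```

A real version would presumably ask several questions and weight them, but the core is just this mapping from stated worldview to framing.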
I’m not sure how strong the need actually is, though. I get the impression that EA is such a simple concept (reasoned, evidence-based moral dialogue; earnest consequentialist optimization of our shared values) that most misunderstandings of what EA is are the result of deliberate misunderstanding, and having better explanations won’t actually help much. It’s as if people don’t want to believe that EA is what it claims to be.
It’s been a long time since I was outside of the rationality community, but I definitely remember having some sort of negative feeling about the suggestion that I could be better at foundational capacities like reasoning, or, in EA’s case, at knowing right from wrong.
I guess a solution there is to convince the reader that rationality/practical ethics isn’t just a tool for showing off to others (which is zero-sum, so we wouldn’t collectively benefit from improvements in the state of the art), and that being trained in it would make their life better in some way. I don’t think LW actually developed the ability to sell itself as self-help (I think it just became a very good analytic philosophy school). I think that’s where the work needs to be done.
What bad things will happen to you if you reject a VNM axiom or tell yourself pleasant lies? What choking cloud of regret will descend around you if you aren’t doing good effectively?
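For the transitivity axiom in particular, there is a standard concrete answer: the classic money-pump argument. A minimal sketch, assuming only an agent willing to pay a small fee to trade up to anything it strictly prefers (textbook material, not a claim from this thread):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Money pump: cyclic (intransitive) preferences are exploitable.
Suppose an agent's preferences are cyclic, $A \succ B \succ C \succ A$,
and the agent will pay $\varepsilon > 0$ to swap a held option for a
strictly preferred one. Starting from $A$, a trader can offer:
\[
  A \xrightarrow{\,-\varepsilon\,} C
    \xrightarrow{\,-\varepsilon\,} B
    \xrightarrow{\,-\varepsilon\,} A
\]
% Each arrow is a trade the agent accepts, since each target is
% strictly preferred to the option currently held.
After one loop the agent holds $A$ again but is $3\varepsilon$ poorer,
and the loop can be repeated indefinitely.
\end{document}
```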
Please make sure to enter this contest before the deadline!
Oh thank you, I might. Initially I Had Criticisms, but as with the FLI worldbuilding contest, my criticisms turned into outlines of solutions and now I have ideas.
Computational linguistics, and evolutionary biology focused on hominids in the last few million years. (Relevant to AI forecasting, and maybe to language-model comparisons?)
Psychology related to dark triad/tetrad traits. (Relevant to reducing the influence of malevolent actors.)
The risk of hacking into nuclear weapons systems. This was how Gwern’s AI takeover story ended, and the topic has received mainstream interest unrelated to AI risk. Here’s a short brainstorm I did.
I know a fair few folk in this space, at least within the UK.
Paul Ingram at CSER (https://www.cser.ac.uk/team/paul-ingram/) and other staff at BASIC (https://basicint.org/) know about cyber security and nuclear weapons (see https://basicint.org/publications/stanislav-abaimov-paul-ingram-executive-director/2017/hacking-uk-trident-growing-threat)
Also Caroline Baylon at the Future Generations APPG has expertise on cyber security at nuclear power plants (https://www.appgfuturegenerations.com/secretariat)
How various prominent ideologies view the world, e.g. based on in-depth conversations with their adherents.
Is it possible to get this just by finding people who reside in, or used to reside in, those ideologies and consulting them? Either ones already in EA, or by making a deliberate effort to reach out to people from those groups? I feel pretty well positioned to do those things.
Yeah, that’s pretty much what I was imagining. Though I think the best insight is going to come from a more deliberate effort to seek out a different worldview since e.g. the people with the most different worldviews aren’t going to be in EA (probably far from it).
What are you considering as an ideology?
For instance political ideologies
Hiring
Flow and distribution of information (inside EA, and in general)
how to structure and present information to make it as easily digestible as possible (e.g. in blog posts or talks/presentations)
A bit less pressing maybe, but I’d also be interested in seeing some (empirical) research on polyamory and how it affects people. It appears to be rather prevalent in rationality & EA, and I know many people who like it, and also people who find it very difficult and complicated.
Love the question! Within AI policy:
How/whether R&D funding can be leveraged to fund AI safety research.
How/whether governments should regulate AI safety. We have people writing high-level academic papers on this, but potentially nobody, at least in the US, who’s aiming to become an expert in the details of implementation.
Antitrust as it relates to AI. There’s a chance AI governance initiatives could run afoul of antitrust law by default, so it seems good to have people with deep experience at places like the FTC who could advise on how to navigate this.
There is expertise in how to regulate AI, and on antitrust and AI, at least in the UK, e.g. at GovAI and CSER. I’d consider myself an expert in the details of regulation (having led on a large review of a relevantly similar topic for the UK government). If you have things I can help with, feel free to reach out.
Someone extremely on top of the cutting-edge multi-omics game. EA has a lot of bio talent but nobody with years-long metagenomics experience and access to the (often unpublished) knowledge on current best practices, pitfalls, and promising upcoming research.
Various practical skillsets:
A “Scribe”, someone who is good at eliciting and presenting the ideas of other people who aren’t good at presenting their own ideas. There are a lot of people in the intellectual movement who’re brilliant, but don’t write enough, or don’t have an intuitive sense of what needs to be written (what’s not obvious to others, what others are ready for), or don’t have an impulse to do it unless someone is in front of them and asking them questions. A scribe is someone who knows what needs to be written and can facilitate the writing.
A very good job for a Scribe is podcast interviewing. There are currently no brilliant scribes doing podcasting as far as I’m aware (potentially due to limited true competition as a result of… incumbency bias, the temptation not to be a scribe and to instead follow your own interests, and the fact that no one had formalized or named the skillset until we did).
Matchmaking, facilitating networking. The easiest way to start getting to know another person is for someone who already knows both of you to tell the two of you what you need to talk about. I feel like every EA group should be doing a lot of this, but it’s hard, and it’s not something I’m naturally skilled at. If someone who had a deep natural interest in people told me what I needed to learn, though, I think I would learn it.
Doubt: There’s a possibility that an online profile system is always better for introductions than anything we can do in person.
The psychology of aesthetics in interior design, and potentially 3D modelling. We will need someone to build venues for EA in VR at some point over the next decade.
Doubt: It’s possible that the most reasonable approach will just be to find works that have been made outside the community, pay for some minor adjustments, and put “effective altruism” in the description to make them findable. In that case this is less important, but it might still be useful to have someone who knows how spaces create moods and shape conversations to tell us which spaces are best. Then again, we can figure that out pretty well ourselves.
Inside-view understanding of policymaking in major / emerging bioeconomies outside the US/Europe. I’m thinking BRICS (Brazil, Russia, India, China and South Africa) but also countries that will have huge economies/populations this century, like Nigeria, Indonesia, Pakistan, and the DRC, countries with BSL-4 labs, and places with regulatory environments that allow broader biotechnology experimentation (e.g. Israel, Singapore).
Education
If there is already an expert, or a substantial body of research addressing this from an EA perspective, let me know!
Data poisoning, especially the kind relevant to US-China cybersecurity.
People working in producing and moving physical goods, to be able to have more impact on the physical world. I would guess that this kind of work, and starting companies to provide this kind of work, is the most likely activity to bring low-income countries to middle income.
I am not saying that this doesn’t exist within EA (I have not spoken to every EA, anyway!), just that there is less emphasis on it than I think there should be:
- Modelling indirect impact: people often seem to speak about direct risks, e.g. from AI or bio, with other areas such as climate change being less discussed and global health and poverty being seen as ‘neartermist’. But, for example, more inequality can lead to greater instability, which increases the chance of conflict, which in turn increases the chance of other x-risks. Or climate change increases the chance of pandemics when, e.g., more people are climate refugees (just two examples, not intended to put numbers on them but to illustrate a point).
- Applied psychology/communication (if that’s the right phrasing): when I first joined EA, I mentioned how useful it would be to gain a deeper understanding of why people do what they do. If they currently aren’t doing something that it would be very beneficial for them to do (e.g. working on a high-impact career), why not? And if you want to convince them to do something like that and communicate with them effectively, how would you go about it? It’s difficult to answer either of those questions without understanding why people do what they do, and more broadly why groups do what they do, or governments do what they do.
(1) Effects of cybersecurity on geopolitics, or on individual privacy. These are two different areas, but in both it seems to me that one bad actor could cause a lot of suffering or lead to suboptimal futures, and I don’t know of any EAs who have looked deeply into them.
(2) Reproductive health and the costs of childbearing, possibly from a policy angle. I think as a community we decided to bite the bullet and become total utilitarians, and I see some discussions on how that should play out in terms of contraception and choosing to have more children, but all of these come across to me as not very well informed. The only two posts I found about it are by isabel, and they touch on very specific topics. So I think an analysis of why people are having fewer children, what policies would help people choose to have more children, and a thorough analysis to settle the discussions around contraception and abortion, including an attempt to quantify the suffering and counterfactual impact involved in childbearing and childcare, would be appreciated.
Diet and nutrition.
Eating better and more efficiently seems important. It may be good for both physical and mental health, and may improve cognitive capabilities...
I recently stumbled upon the WFPB (Whole Food Plant-Based) diet promoted by Dr. T. Colin Campbell and have started experimenting with it for a month.
When I was evaluating the claims of WFPB, I wanted to get some perspective from a trusted source, but found that nutrition isn’t a popular topic in EA.
The energy system (Vaclav Smil style) and the implications for transitioning out of fossil fuels.
It seems to me that the EA community is very naive about the importance of energy and the degree to which a timely transition out of fossil fuels is difficult. As a community, it mostly takes energy for granted.
I think we should do a bit more research into avant la lettre EA movements. MacAskill mentions the Mohists several times in his new book, but never really gets into the details of the movement. A few recent (link)posts have mentioned the Charity Organisation Society and Scientific Philanthropy movements, and we have similarities with the original British utilitarian philosophers as well.
I would love to read an in-depth analysis of why/how/if these movements failed, how they originated, and how they compare to EA.
In case you haven’t already seen it, this might be helpful: https://forum.effectivealtruism.org/posts/aWyFsYZuxkrraQdYk/bibliography-of-ea-writings-about-fields-and-movements-of
The knowledge explosion, our relationship with it, our ability (or not) to control it.
In my view, downvoting this comment is a little harsh (the karma is −1 with 4 votes at the time I am writing this). I understand it could be more detailed, so it might not be worth upvoting (depending on one’s bar). However, downvoting on this basis discourages quick comments, and I think they could still be useful as long as they are not counterproductive or ill-intentioned. The above comment has seemingly been made in good faith, so I would not downvote.
Instead of downvoting, I think it is more productive to point out, as MakoYass did above, that it would be worth expanding the answer a bit. Writing something like “Interesting, would you please share a link expanding on that idea?” would only take about 10 seconds.
Thanks Vasco,
To address your entirely reasonable concern: I’ve been shifting somewhat to short blurb comments because every time I share a longer, more thoughtful article, it almost immediately gets downvoted into oblivion and receives close to no feedback of any kind.
Members are clearly in no way obligated to engage with my longer pieces, and if they don’t wish to read and engage with those articles, OK, fair enough; but then it’s pretty much a waste of my time to write such articles.
If it were up to me, I’d contribute the most thoughtful posts I’m capable of, my fellow members would engage and challenge, and through an extensive process of challenge and counter challenge we would hopefully inch our way towards some more useful version of the truth. That’s what I came here to do, but you know, it takes two to tango.
Ah, that is unfortunate. I have just read this recent post of yours, and think that it may have been downvoted for the same reasons as your comment above (the post is more informative than the comment, but the typical post is also more informative than the typical comment, so the bar for not downvoting may be higher for posts).
This sounds like a great plan! I would encourage you to continue posting, and to ask the EA Forum team for feedback before publishing (there is a feedback button at the bottom of the editing page). It may even be worth going through some of your previous comments and posts together with them to assess what could be improved. I do not mean to take a position on whether your posts are being fairly or unfairly downvoted (I would not have downvoted the only one I checked, but that is a small sample size, so I do not know). I just think the EA Forum team can help either way.
Thanks for your kind words, appreciated. How about this?
If anyone on the forum wishes to present themselves as being qualified to judge the quality of my posts, they can make a credible case as follows:
- read the post
- analyze the post
- try to rip the post to shreds
- and we’ll see what happens
Meaning no disrespect to anybody, just trying to respond honestly...
I don’t intend to ask the EA Forum team for feedback because they have not yet demonstrated (as above) that they are qualified to evaluate my posts, and it is they who implemented the silly voting system. And they have already threatened to ban me over points I very explicitly did not make, as can be proven just by actually reading the post in question.
In addition, while I have no data to back this up, my sense from 27 years of doing this almost daily is that many or most members here are somewhere around a half to a third of my age. If true, I don’t see why I should automatically judge them qualified to generate useful reputation data on my participation here.
All that said, I am having some good exchanges such as this in the comment section, which I appreciate. So for now I’ll stick to that, and let others write the posts.
I believe it is possible to request various types of feedback from the EAF team. I am confident they would be able to provide feedback, e.g. on the tone and clarity of the post. For instance, the wording “silly voting system” in the sentence above feels unfriendly.
I would say age is a poor predictor of insightful feedback, and that focussing on the content (of the posts and comments) is much more productive. Other options (besides the number of votes) include asking specific people (who may share an interest in the topics being discussed) or the EAF team for feedback.
I think this could benefit from being expanded. I can only assume you’re referring to the democratization of access to knowledge. It’s not at all obvious why this is something we need to prepare for or why it would introduce any non-obvious qualitative changes in the world rather than just generally making it go a bit faster.
Hi Mako,
This article explains what I’m referring to:
https://forum.effectivealtruism.org/posts/kbfdeZbdoFXT8nuM6/our-relationship-with-knowledge