> For example, in AI safety (my area of expertise) we're very far from having a thorough understanding of the problems we might face. I expect the same is true for most of the other priority areas on 80,000 Hours' list. This is natural, given that we haven't worked on most of them for very long; but it seems important not to underestimate how far there is to go, as I did.
Thanks for this post!
This very much resonates with me in relation to the couple of problem areas I've had a relatively deep, focused look into myself, and I share your guess that it'd be true of most other problem areas on 80k's list as well.
It also seems notable that the "relatively deep, focused look" has in both cases consisted of something like 2 months of research (having started from something like the equivalent of 1 undergrad unit, if we count all the somewhat relevant podcasts, blog posts, books, etc. that I happened to have consumed beforehand). In both cases, I'd guess that that alone was enough to place me among the 5-20 highly engaged EAs who are most knowledgeable in the area. (It's harder to say where I'd rank among less engaged EAs, since I'm less likely to know about them and how much they know.)
Two related messages that I think it'd be good for EAs to continue to hear (both of which have been roughly said by other people before):

1. Be careful not to assume "EA" has stronger opinions and more reasoning behind them than it actually does.
    - It's important to hold beliefs despite this, rather than just shrugging and saying we can't know anything at all.
    - But we should be very unsure about many of these beliefs.
    - And individuals should be careful not to assume some others (e.g., staff at some EA org) have more confidence and expertise on a topic than they really do.
    - Relatedly, people should avoid deferring too strongly, and should form their own independent impressions on many topics.
        - See also posts on epistemic humility.
2. It may be easier to get to the frontier of EA's knowledge on a topic, and to contribute new ideas, insights, sources, etc., than you might think.
    - E.g., even just having a semi-relevant undergrad degree and then spending 1 day looking into a topic may be enough to allow one to write a Forum post that's genuinely useful to many people.
This also seems to dovetail with you saying: "I'm now most proactive is in trying to explore foundational intellectual assumptions that EA is making. I didn't do this during undergrad; the big shift for me came during my masters degree, when [I started writing](http://thinkingcomplete.blogspot.com/) about issues I was interested in rather than just reading about them. I wish I'd started doing so sooner. Although at first I wasn't able to contribute much, this built up a mindset and skillset which have become vital to my career. In general it taught me that the frontiers of our knowledge are often much closer than I'd thought – the key issue is picking the right frontiers to investigate."
That all really resonates with my own experience of moving from just reading about EA-related ideas to also writing about them. I think that helped prompt me to actually form my own views on things, realise how uncertain many things are, recognise some gaps in "our" understanding, etc. (Though I was already doing these things to some extent beforehand.)