Thank you for this. I am a community-builder, and I’ve definitely started emphasizing the importance of developing inside views to my group members. However, it seems like there may be domains where developing an inside view is relatively less important (e.g., algebraic geometry vs. moral philosophy), because experts in those domains appear to have better feedback loops. Given this, I’m curious whether you think community-builders might want to form inside views* on which areas to emphasize inside-view formation for, to help us communicate more accurately to our members?
*I’m not confident I’m describing an ‘inside view.’ Maybe this is something like, ‘getting a sense of outside views across an array of domains?’
I found your post doubly useful because I’ve recently been exploring how I can form inside views, which I’ve found both practically and emotionally difficult. Not being familiar with the rationality or AI safety communities, I was surprised by how much emphasis was placed on inside views and started feeling a bit like an imposter in the EA community. I definitely felt like it was “low-status” to not have inside views on the causes I prioritized, though I expect at least some of this was due to my own anxiety.
Being able to see how you tackled this is really useful, as it gives me another model for how I could develop inside views (particularly on AI risk, which is the first thing I’m working on). It also reinforces that a lot of people have more career flexibility than they think—and so, perhaps, it’s okay if I haven’t figured out whether I should switch from community building into AI safety research in the three months before I graduate!
Hey! I have been thinking about this a lot from the perspective of a confused community builder / pipeline strategist, too. I didn’t get as far as Neel did, so it’s been great to read this post before getting anywhere near finishing my own thoughts; it captures a lot of the same things better than I had. Thanks for your comment too—definitely a lot of overlap here!
I have got as far as some ideas here, and would love any initial thoughts before I try to write them up with more certainty.
First a distinction which I think you’re pointing at—an inside view on what? The thing I can actually have an excellent inside view about as a (full-time) community builder is how community building works. Like, how to design a programme, how people respond to certain initiatives, how likely certain things are to work, etc.
Next, programmes that lead to working in industry, academic field building, independent research, etc, look different. How do I decide which to prioritise? This might require some inside view on how each direction changes the world (and interacts with the others), and lead to an answer about which direction I’m most optimistic about supporting. There is nobody to defer to here, as practitioners are all (rightly) quite bullish about their choice. Having an inside view on which approach I find most valuable will lead to quite concrete differences in the ultimate strategy I’m working towards or direction I’m pointing people in, I think.
When it comes to what to think about object-level work (i.e. how does alignment happen, technically), I get hazier on what I should aim for. By statistical arguments, I reckon most inside views that exist on what work is going to be valuable are probably wrong. Why would mine be different? Alternatively, they might all be valuable, so why support just one? Or something in between. Either way, if I am doing meta work, it will probably be wrong to be bullish about my single inside view on ‘what will go wrong’. I think I should aim to support a number of research agendas if I don’t have strong reasons to believe some are wrong. I think this is where I will be doing most of my deferral, ultimately (and as the field shifts from where I left it).
However, understanding how valuable the object-level work is does seem important for deciding which directions to support (e.g. academia vs industry), so I’m a bit stuck on where to draw the line. As Neel says, I might hope to get as far as understanding what other people believe about their agenda and why—I always took this as “can I model the response person X would give, when considering an unseen question”, rather than memorising person X’s responses to a number of questions.
I think where I am landing on this is that it might be possible to assume a uniform prior over the directions I could take, and adjust my posterior by ‘learning things’ and properly understanding practitioners’ models at both the direction level and the object level. Another thought I want to explore—is this something like worldview diversification over directions? It feels similar, as we’re in a world where it ‘might turn out’ some agenda or direction was correct, but there’s no way of knowing that right now.
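(To make the mechanics I’m imagining concrete, here is a minimal sketch in Python: the direction names and likelihood numbers are entirely made up, and in practice the ‘evidence’ would be the understanding I build of practitioners’ models rather than tidy numbers.)

```python
# Minimal sketch only: a uniform prior over hypothetical "directions",
# updated with made-up likelihoods via Bayes' rule.

directions = ["industry", "academic field building", "independent research"]

# Uniform prior: no reason (yet) to favour any direction.
prior = {d: 1 / len(directions) for d in directions}

# Invented numbers standing in for P(what I learned | this direction is most valuable).
likelihood = {
    "industry": 0.6,
    "academic field building": 0.3,
    "independent research": 0.4,
}

# Bayes' rule, then normalise so the posterior sums to 1.
unnormalised = {d: prior[d] * likelihood[d] for d in directions}
total = sum(unnormalised.values())
posterior = {d: p / total for d, p in unnormalised.items()}

for d, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{d}: {p:.2f}")
```

The numbers are not the point; the point is that starting uniform forces me to be explicit about which ‘learned things’ actually moved the posterior, rather than smuggling in deferral.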
To confirm—I believe people doing the object-level work (i.e. alignment research) should be bullish about their inside view. Let them fight it out, and let expert discourse decide what is “right” or “most promising”. I think this amounts to Neel’s “truth-seeking” point.
Hey Jamie, thanks for this! Seems like you’ve thought about it quite a bit—probably more than I have—but here are my initial thoughts. Hope this is helpful to you; if so, maybe we should chat more!
First a distinction which I think you’re pointing at—an inside view on what? [...] How do I decide which to prioritise? This might require some inside view on how each direction changes the world (and interacts with the others), and lead to an answer about which direction I’m most optimistic about supporting. There is nobody to defer to here, as practitioners are all (rightly) quite bullish about their choice. Having an inside view on which approach I find most valuable will lead to quite concrete differences in the ultimate strategy I’m working towards or direction I’m pointing people in, I think.
Agree! When I first wrote my comment, I labelled this a ‘meta-inside view’: an inside view on what somebody (probably you, but possibly others like your group members) needs to form inside views on. But this might be too confusing compared to less jargon-y phrases like ‘prioritizing what you form an inside view on first’ or something.
Regardless, I think we are capturing the same issue here—although I don’t use ‘issue’ in a negative sense. In my ideal world, community-builders would form pretty different views on causes to prioritize because this would help increase intellectual diversity and the discovery of the ‘next-best’ thing to work on. That doesn’t mean, however, that there couldn’t be some sort of guidance for how community-builders might go about figuring out what to prioritize.
I think this is where I will be doing most of my deferral, ultimately (and as the field shifts from where I left it).
Yeah, I think this is the status quo for any field that one isn’t an expert on. Community-builders may be experts on community-building, but that doesn’t extend to other domains, hence the need for deferral. Perhaps the key difference here is that community-builders need to be extra aware of the ever-shifting landscape and stay plugged-in, since their advice may directly impact the ‘next generation’ of EAs.
However, understanding how valuable the object-level work is does seem important for deciding which directions to support (e.g. academia vs industry), so I’m a bit stuck on where to draw the line. As Neel says, I might hope to get as far as understanding what other people believe about their agenda and why—I always took this as “can I model the response person X would give, when considering an unseen question”, rather than memorising person X’s responses to a number of questions.
Hmm, I think you’re right that developing an inside view on a specific cause would influence the levers that you think are most important (which has effects on your CB efforts, etc.) - but I’m not sure this has many implications for what CBs should do. My prior is that it is very unlikely that there are any causes where only a handful of levers and skillsets would be relevant, such that I would feel comfortable suggesting that people rely more on personal fit to figure out their careers once they’ve chosen a cause area. However, I acknowledge that there is definitely more need in certain causes (e.g., software engineers for AI safety): I just don’t think that the CB level is the right level to apply this knowledge. I would feel more comfortable having cause-specific recruiters (cf. ‘University community building seems like the wrong model for AI safety’).
I definitely agree on the latter point. I see community-builders as both building and embodying pipelines to the EA community! As the ‘point-of-entry’ for many potential EAs, I think it is sufficient for CBs to be able to model the mainstream views for core cause areas. I expect that the most talented CBs will probably have developed inside views for a specific cause outside of CB, but that doesn’t seem necessary to me for good CB work.
I think where I am landing on this is that it might be possible to assume a uniform prior over the directions I could take, and adjust my posterior by ‘learning things’ and properly understanding practitioners’ models at both the direction level and the object level. Another thought I want to explore—is this something like worldview diversification over directions? It feels similar, as we’re in a world where it ‘might turn out’ some agenda or direction was correct, but there’s no way of knowing that right now.
Oh, I’m a huge fan of worldview diversification! I don’t currently have thoughts on starting with a uniform vs. non-uniform prior … I am, honestly, somewhat inclined to suggest that CBs ‘adapt’ a bit to the communities in which they are working. That is, perhaps what should partly inform a CB’s prioritization re: inside-view development is the existing interests of their group. For example, given the Bay Area’s current status as a tech hub, it seems pretty important for CBs in the Bay Area to develop inside views on, say, AI safety—even if AI safety may not be what they consider the most pressing issue in the entire world. What do you think?
To confirm—I believe people doing the object-level work (i.e. alignment research) should be bullish about their inside view. Let them fight it out, and let expert discourse decide what is “right” or “most promising”.
Also completely agree here. : )