Have you thought about the possibility that EA may have resonated in a particular social context that no longer exists?
But a community that took twenty years to develop its particular structure of norms and mutual knowledge cannot be regrown in another twenty, because the conditions that shaped it no longer exist. The people are older, the context has changed, and the specific convergence of circumstances that brought those particular individuals together, in that particular configuration, at that particular time is gone. Communities are path-dependent in the strongest possible sense: their current state is a function of their entire history, and you can’t rerun the history.
The main challenge I see at the moment is that for half the potential audience AI is clearly the biggest thing going on, while the other half sees it as clearly overhyped. And it’s quite hard to construct a program or run events that will really hit it out of the park for both sides at once.
I would be keen to hear whether you think you have any solutions to this bifurcation.
If they think it’s overhyped, that’s OK; they can just join us over here helping boring-old-right-now people in GHD. We can care about different things ;).
That’s orthogonal to the point that I raised about it being hard to run a course that simultaneously manages to be a strong fit for different groups of people.
Yes, it is; I was just responding to the “overhyped” comment.
The framing of your question suggests EA’s role is to prescribe actions. I think EA is centrally a question and a set of abstract tools for understanding the world’s needs. Using those tools will take different people in different directions. I want to support people using the tools well and I resolve not to judge how well people use the tools based on the specific conclusions they draw.
In particular, I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis. Likewise, I think that many people who wish to do good in the world reject the TAI hypothesis for bad reasons that they would regret on reflection. Where do people go to correct these errors and build better models of which actions are best to take all things considered?
I don’t know of a better venue than the best pockets of the current EA community. I want to make those pockets bigger!
> The framing of your question suggests EA’s role is to prescribe actions.
Was I presuming this? I didn’t think I was. I was just talking about how it is hard to simultaneously meet the needs of folks with very different worldviews.
> In particular, I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis. Likewise, I think that many people who wish to do good in the world reject the TAI hypothesis for bad reasons that they would regret on reflection. Where do people go to correct these errors and build better models of which actions are best to take all things considered?
You can definitely run a session on this. The challenge is where do you go from there? If you take the conversation to more concrete issues you risk losing half your audience. I’m not claiming this is impossible, just that it’s tricky.
> I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis.
I’m curious what your explanation would be. Mine is that the media landscape is filled with hype; there are all these philosophical arguments you can make that are hard to evaluate; and even if you know that some predicted crises will come true, most people aren’t confident they could predict which ones. Even if they could, it would take a massive amount of time, people’s lives are pretty busy, and what would they do with that knowledge anyway?
We’re looking at this more differently than I thought. The question “how does EA meet the needs of people with different worldviews” is strange to me. EA should be the place you go to *form* your worldview, by learning about and comparing different perspectives. Whatever has caused this framing to seem tricky/unnatural is the thing I’m pushing back against.
I have a similar take on TAI skepticism, with some added (perhaps excessively charitable) concerns around how economic value gets created in the first place and what hurdles there are between current AI systems and creating that value.
I expect people to update somewhat. My split was more about where people end up falling after initial exposure to arguments on both sides.
In the past, AI didn’t feel so pressing to the AI crowd, so they had more space to explore, and the discussion of animals and global poverty didn’t feel like dead weight.
> I would be keen to hear whether you think you have any solutions to this bifurcation.
Huh, this feels like prime EA territory to me. We need disagreement so that people can engage in key EA activities like “making persnickety critiques of footnote #237 on someone’s 10k word forum post.”
The case for EA feels much weaker to me if we are all confident that X is the best thing to do—then you should just do X and not worry about cause prio etc.