The framing of your question suggests EA’s role is to prescribe actions. I think EA is centrally a question and a set of abstract tools for understanding the world’s needs. Using those tools will take different people in different directions. I want to support people using the tools well and I resolve not to judge how well people use the tools based on the specific conclusions they draw.
In particular, I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis. Likewise, I think that many people who wish to do good in the world reject the TAI hypothesis for bad reasons that they would regret on reflection. Where do people go to correct these errors and build better models of which actions are best to take all things considered?
I don’t know of a better venue than the best pockets of the current EA community. I want to make those pockets bigger!
> The framing of your question suggests EA’s role is to prescribe actions
Was I presuming this? I didn’t think I was. I was just talking about how it is hard to simultaneously meet the needs of folks with very different worldviews.
> In particular, I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis. Likewise, I think that many people who wish to do good in the world reject the TAI hypothesis for bad reasons that they would regret on reflection. Where do people go to correct these errors and build better models of which actions are best to take all things considered?
You can definitely run a session on this. The challenge is where do you go from there? If you take the conversation to more concrete issues you risk losing half your audience. I’m not claiming this is impossible, just that it’s tricky.
> I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis
I’m curious what your explanation would be. Mine would be that the media landscape is filled with hype; that there are all these philosophical arguments you can make that are hard to evaluate; and that even if you know some number of predicted crises will come true, most people aren’t confident they could predict which ones. Even if they could, it would take a massive amount of time, people’s lives are pretty busy, and what would they do with that knowledge anyway?
We’re looking at this more differently than I thought. The question “how does EA meet the needs of people with different worldviews” is strange to me. EA should be the place you go to *form* your worldview, by learning about and comparing different perspectives. Whatever has caused this framing to seem tricky/unnatural is the thing I’m pushing back against.
I have a similar take on TAI skepticism, with some added (perhaps excessively charitable) concerns around how economic value gets created in the first place and what hurdles there are between current AI systems and creating that value.
I expect people to update somewhat. My split was more about where people end up falling after initial exposure to arguments on both sides.
In the past, AI didn’t feel so pressing to the AI crowd, so they had more space to explore, and the discussion of animals and global poverty didn’t feel like dead weight.