The writing on epistemic erosion reminds me of a weird feeling I left an EAGx conference with. I’ve benefitted tremendously from conversations with experienced safety people at previous EAGs. I think of EAGx as a way to connect with people, but with more of an emphasis on giving advice and paying forward the experience I’ve accumulated to people interested in AI Safety who are even more junior than myself. I had a lot of great meetings, but a surprising number left a bitter taste in my mouth.
Possibly I came in with the wrong expectations, but I expected a lot of discussion around project and research ideas, and general ways to get involved with AI-Safety-specific research (through programs, funding opportunities, etc.) if that was difficult to do at their current university. Instead, in many of my meetings, the questions overlapped heavily with the reference class of questions someone motivated to reduce x-risks from AI would ask, but felt distinctly different: there were undertones of prestige/status-seeking for its own sake, rather than as something instrumentally useful on the path to reducing x-risks from AI. Thoughts on how to get hired at OpenAI or DeepMind, or on master’s programs in subject X at prestigious university Y, where X wasn’t even related to my own background and I just happened to be at university Y… It felt kind of bad, because a lot of questions were things that could have been googled, or seemed strongly driven by an eagerness to pursue high-prestige opportunities rather than by any attempt to refine ideas on how to contribute to AIS.
It’s totally possible I’m overreading the vibes of those conversations, but when I imagine what I’d be curious about (and had been curious about a few years ago) when I wanted to be helpful to AI Safety but was unsure how, the kinds of questions and the direction I’d expect the conversations to be steered in were very different from what I experienced. Just another light anecdote (admittedly highly speculative) on epistemic erosion.
Edit: I had lots of really great conversations that I was really happy to have had. The surprise was mostly about the percentage of conversations that gave me that ^ feeling.