Great question! I wish I knew the answer. Of all the Chinese surveys we looked through, to my knowledge, that question was never asked. (I think it might be a bit of a taboo.)
Nick Corvino
Survey: How Do Elite Chinese Students Feel About the Risks of AI?
Was there a motive behind the font change?
It’s hard to put into words, but there were cocktails and nice background music, and all the events transitioned super smoothly. It’s like when you watch the Oscars or something and everything seems rehearsed; that’s how this felt. EA conferences, on the other hand, usually seem more hectic and improvisational.
Did not hear animal welfare mentioned once, and they had lots of meat options for lunch. That’s all I got lol.
You know, that’s what I thought as well, but I’ve found the community to be more open to caution than I initially expected. Derek Thompson in particular (the main organizer for the event) harped on safety quite a bit. And if more EAs got involved (assuming they don’t get amnesia), I assume they could carry over some of these concerns and shift the culture.
I went to the Progress Summit. Here’s What I Learned.
The “Inside-Out” Model for pitching EA
Idea: Exchange Program for Uni Groups
A much-needed post
Midwest EA’s Next Steps Retreat Post-Mortem
Strong upvote.
To me, this seems more relevant for more established groups. Perhaps thinking about operational tasks vs. skilling up shouldn’t be framed in terms of percentages, but in terms of necessary vs. supplemental tasks. I would imagine things like sending emails, doing 1:1s, buying food for events, etc. are necessary for any group to stay alive. So if you are the only HEA for your uni group, you might have to spend 90% of your time doing these (and tbh I think this would be the right call). But when it comes to things like doing an excessive amount of marketing, or anything else that doesn’t seem necessary, perhaps skilling up should be prioritized.
Also, I didn’t see the multiplier effect come up anywhere, and I’m interested to hear how heavily you weight it.
(Generally) how much counterfactual suffering comes from buying cage-free eggs vs. factory-farmed eggs? I couldn’t find any straightforward posts/research on the topic, but I’m sure it’s out there somewhere.
Nick Corvino’s Quick takes
The problem here is that it’s still overtly utilitarian, just with a bit more wiggle room. It still forces people to weigh one thing against another, which is what I think they might be uncomfortable doing. Buck Shlegeris says “everything is triage,” and I think you’d agree with this sentiment. However, I don’t think everyone likes to think this way, and I don’t want that hiccup to be the reason they don’t investigate EA further.
I agree, and that is essentially the rationale I employ. I personally think I could put a value on every aspect of my life, thereby subverting the notion that implicit values can’t be made explicit.
However, I think the problem is that for some people your answer will be a non-starter. They might not want to assign an explicit value to their implicit values (and your response would therefore shoo them away). So what I’m proposing is letting them keep their implicit values implicit while showing them that you can still be an EA if you accept that other people have implicit values as well. Honestly, it’s barely a meta-ethical claim, and more so an explication of how EA can jibe with various ethical frameworks.
Good point. Yeah, I think the question was worded better in Chinese. The question, literally translated, is: “Do you agree or disagree that the safe development of artificial intelligence requires cooperation between China and the United States?” (你是否赞同人工智能的安全发展需要中美两国的合作?) This is essentially your translation, but a bit different in that it frames the question as “requires” rather than “does not require,” which anchors the respondent a bit differently.