gabriel_wagner
China x AI Reference List
Can I ask whether there is a specific reason that you do not put a summary of the findings in this post, but instead only let people request access to a Google Drive folder?
The second point you bring up, on sociology in Germany, is also interesting! I agree that collaborations between researchers who come with slightly different types of expertise could be super valuable.
Do you have any ideas on how to promote it in practice, though? As you say, various incentive structures are not really made for that. I also find that, surprisingly often, researchers would really rather prove why "their" approach is better than try to understand how another approach could help them better understand the world.
All this makes me feel slightly pessimistic^^ But I would be super glad to hear ideas on how to overcome these difficulties.
Hi Anton, glad to hear that you found this post valuable!
On your first question, I think you could check out the Sinica Podcast. I believe it is one of the sources on China that is quite accessible but still tries really hard to go below the surface of the issues it covers. Of course, this is just my personal recommendation.
We interviewed 15 China-focused researchers on how to do good research
“EA outreach funding has likely generated substantially >>$1B in value”
Would be curious how you came up with that number.
EA Forum Plugin for My Favorite Note Taking App Logseq
Thanks a lot for writing this down with so much clarity and honesty!
I think I share many of those feelings, but would not have been able to write this.
Something seems a little bit off in this cost-benefit analysis to me. You seem to compare the tiny cost of delaying one breath to the sizable accumulated benefit of 1 billion people doing this for a year. But that comparison is not really helpful for building an intuition: the tiny cost of delaying one breath will also accumulate if 1 billion people do this for a year.
Of course, it is still possible that the accumulated cost is lower than the accumulated benefit. But in a way, the accumulation does not matter at all. All that matters is whether the per-person cost is higher than the per-person benefit.
Nice post!
Do you think a person working on this should also have some basic knowledge of ML? Or might it be better to NOT have that, to have a more “pure” outsider view on the behaviour of the models?
I personally think the risks of these videos are relatively low because they do not mention EA. People who are convinced by the ideas in the jokes might start a Google search and eventually find EA. Those who feel disgusted by the jokes might just think "what an idiot" and stop there. I doubt they would go on to search for what this is all about, find EA, and then try to act against it.
Just wanted to let you know that this was super amusing to read (including the hyper-linked content)! Some nostalgia for my time in high school, when I was translating this stuff in Ancient Greek class :D
(I have completely no expertise in AI, but this is what I always felt personally confused about)
How are we going to know, measure, or judge whether our efforts to prevent AI risks are actually helping? Or how much they are helping?
Hi Michael, great to hear you are interested in the intersection of EA work and China and have expertise to bring in!
You may be interested in our Slack community; the interest form is here: https://airtable.com/shr4E1GeNid3qEjuZ
This number sounds suspiciously high to me. Do you have any further details? How long did these effects last? Have you done any comparisons to other interventions with similar people, such as mental health apps?