Do you think you could linkpost your article to LessWrong too?
I know this article mainly focuses on EA values, but it also overlaps with a bunch of stuff that LW users like to research and think about (e.g. in order to better understand the current socio-political and geopolitical situation with AI safety).
There are a lot of people on LW who mainly spend their days deep in quantitative technical alignment research, but who are surprisingly insightful and helpful when given a fair chance to weigh in on the sociological and geopolitical environment that EA and AI safety operate in, e.g. johnswentworth’s participation in this dialogue.
Normally the barriers to entry are quite high, which discourages involvement from AI safety’s most insightful and quantitative thinkers. Non-experts typically start out with really bad takes on US politics or China (e.g. believing that the US military just hands over the entire nuclear arsenal to a new president every 4-8 years), and people have to call them out on that in order to preserve community epistemics.
But those barriers also keep alignment researchers and other quant people separated from the people thinking about the global and societal environment that EA and AI safety take place in, which currently needs as many people as possible understanding the problems and thinking through viable solutions.
You’re welcome to re-post it there, if you think it might be of interest to the LW crowd! :-)
The Nuclear football is a lie?!! TIL