Strong upvoted because I think it’s important to preserve whatever embers of weirdness and anti-professionalism we have left in EA, and safeguard it as if it were our last bastion of hope against the forces of bureaucratic stagnation. (Though I’d be happy to discuss this.)
I’d be curious to know why people downvoted this. I don’t think we can claim to be good at inclusive diversity unless we support the kind of diversity that doesn’t immediately feel like our ingroup. If you can tolerate anything other than your outgroup, you aren’t actually tolerating anything.[1]
Although if the group itself is pernicious in some important way, then I’d change my mind about upvoting. Right now, however, all I know is that they have a weird niche and a corner where EAs can keep in touch.
Strengthening the association between “rationalist” and “furry” decreases the probability that AI research organizations will adopt AI safety proposals proposed by “rationalists”.
The poster is currently a resident at OpenAI on the reinforcement learning team.
And?
..
..
..
Just joking! I’m joking, sorry!
*pulls on rainbow dash costume*
Strengthening the association may enable a larger slice of the rationalists to think and communicate clearly without being bogged down by professional constraints. I suspect professionalism is much more lethal than most people think, so that might be a crux. If we lighten the pressure towards professionalism, people have more slack and are less likely to end up optimising for proxies such as impressiveness, technicality, relevancy-to-other-literature, “comprehensiveness”, “hard work”, etc.