Sorry, it is so confusing to refer to AIM as ‘A.I.’, particularly in this context...
David M
AIM simply doesn’t rate AI safety as a priority cause area. It’s not any particular organisation’s job to work on your favourite cause area. They are allowed to have a different prioritisation from you.
To contextualize the final point I made, it seems that in fact there is a lot of criminality among the ultra rich. https://forum.effectivealtruism.org/posts/d8nW46LrTkCWdjiYd/rates-of-criminality-amongst-giving-pledge-signatories (No comment on how malicious it is)
I don’t think it’s productive to name just one or two of the very many biases one could bring up. I would need some reason to think this bias is more worth mentioning than other biases (such as Ben’s payment to Alice and Chloe, or commenters’ friendships, etc.).
Edit: I misread what you were saying. I thought you were saying ‘Kat has dodged questions about whether it was true’, and ‘It’s not clear the anecdotes are being presented as real’.
Actually, Kat said it was true.
I just mean one shouldn’t end up claiming nobody should do X right after having done X oneself. That would be a deeply weird position to be in.
I phrased that poorly, please see my reply to Vlad’s reply for an explanation.
I weakly think Ben’s decision to search for bad information rather than good was a good policy, but that the investigation was lacking in some other aspects.
When I said ‘overall character’ I was trying to draw a contrast between, on the one hand, categorising people into ‘evil’ vs ‘normal’ in a binary way, and, on the other hand, a kind of evaluation that allows for gradations of being a bad actor. My lazy phrasing implied that I was interested in the good behaviour of Nonlinear staff as well as the bad, but I actually think it’s more worth paying one’s limited attention to the bad side in particular, in the same way that it makes more sense to launch an investigation when someone has potentially done something bad, than when someone has potentially done something good.
Can you point out where the poem is in the very long post?
I read the author’s intention, when she makes the case for ‘forgiveness as a virtue’, as a bid to (1) seem more virtuous herself, and (2) make others more likely to forgive her (since she was so generous to her accusers—at least in that section—and we want to reciprocate generosity). I think this is an effective persuasive writing technique, but is not relevant to the questions at issue (who did what).
Another related ‘persuasive writing’ technique I spotted is that, in general, Kat is keen to phrase the hypothesis that Nonlinear did bad things in an extreme way—effectively challenging skeptics: “so, you’re saying we’re completely evil, moustache-twirling vagabonds out of a children’s fairytale?”. That’s a straw person, because what’s at issue is the overall character of Nonlinear staff, not whether they’re cartoon villains. The word ‘witch’ is used 7 times in this post, and ‘evil’ half a dozen times too. Quote:
> 2 EAs are Secretly Evil Hypothesis: 2 (of 21) Nonlinear employees felt bad because while Kat/Emerson seem like kind, uplifting charity workers publicly, behind closed doors they are ill-intentioned ne’er do wells.
Retaliation is bad. If you think doing X is bad, then you shouldn’t do X, even if you’re ‘only doing it to make the point that doing X is bad’.
Thanks, it’s really helpful to have this overview. It makes me more likely to read the sequence itself (partly by directing me to which parts cover what).
Pause For Thought: The AI Pause Debate (Astral Codex Ten)
On the wiki:
It seems like ‘topics’ are trying to serve at least two purposes: linking to wiki articles with info to orient people, and classifying/tagging forum posts. These purposes don’t need to be so tied together as they currently are. One could want to have e.g. 3 classification labels to help subdivide a topic (I think we currently have ‘AI safety’, ‘AI risks’, and ‘AI alignment’), but that seems like a bad reason to write 3 separate similar articles, which duplicates effort in cases where the topics have a lot of overlap.
A lot of writing time could be saved if tags and wiki articles were split out such that closely related tags could point to the same wiki article.
My hard-workingness really depends on my work context (e.g., whether I have a job or not). A graph of my hard-workingness over the past year peaks sharply from Jan–March, when I was working on EAGxCambridge, because of the imminent and immovable deadlines and because I was the main person responsible for it. I tracked 70 hrs/wk of work in the final month (unsustainable). Since then I’ve been far less hard-working (which I prefer). I think if I had a baby, I’d also become really hard-working, because I’d be one of the people most responsible for the ‘project’.
Does Open Phil accept donations? I would have guessed not.
One can submit new features here: https://www.swapcard.com/product-roadmap
I just submitted what you said.
Maybe I misunderstood you.
I think AIM doesn’t constitute evidence for this. Your top hypothesis should be that they don’t think AI safety is that good of a cause area, before positing the more complicated explanation. I say this partly based on interacting with people who have worked at AIM.