Many of the questions ask you to pick among ‘Strongly disagree’ through ‘Strongly agree’, and most questions are optional. For those Likert/select-an-option questions, I guess the survey analysers would do more aggregation across survey-takers, so quantity would matter there.
David M
For context, Kirsten has long worked for UK government departments.
No. It just has a question ‘Where are you based in the UK?’, with an option to say ‘not UK-based’ and specify where you are.
Grayden comments:
I think generally they are looking for issues to consider rather than doing a straw poll of public opinion, hence quality over quantity.
Yes; that Google doc is linked to as the call to action of the linked post :)
An impactful opportunity to take action for animals, by today (May 7th): Reply to UK government’s open consultation on food welfare labelling
Unfortunately, that’s not a viable strategy. Émile is often the source for articles on EA in the media. Here are three examples from the Guardian.
I get a ‘comment not found’ response to your link.
I think you should speak to Naming What We Can: https://forum.effectivealtruism.org/posts/54R2Masg3C9g2GxHq/announcing-naming-what-we-can-1
Though I think these days they go by ‘CETACEANS’ (the Centre for Effectively, Transparently, Accurately, Clearly, Effectively, and Accurately Naming Stuff).
Maybe I misunderstood you.
I think AIM doesn’t constitute evidence for this. Your top hypothesis should be that they don’t think AI safety is that good of a cause area, before positing the more complicated explanation. I say this partly based on interacting with people who have worked at AIM.
Sorry, it is so confusing to refer to AIM as ‘A.I.’, particularly in this context...
AIM simply doesn’t rate AI safety as a priority cause area. It’s not any particular organisation’s job to work on your favourite cause area. They are allowed to have a different prioritisation from you.
To contextualise the final point I made: it seems that in fact there is a lot of criminality among the ultra-rich. https://forum.effectivealtruism.org/posts/d8nW46LrTkCWdjiYd/rates-of-criminality-amongst-giving-pledge-signatories (No comment on how malicious it is.)
I don’t think it’s productive to name just one or two of the very many biases one could bring up. I would need some reason to think this bias is more worth mentioning than other biases (such as Ben’s payment to Alice and Chloe, or commenters’ friendships, etc.).
Edit: I misread what you were saying. I thought you were saying ‘Kat has dodged questions about whether it was true’, and ‘It’s not clear the anecdotes are being presented as real’.
Actually, Kat said it was true.
I just mean you shouldn’t end up in a situation where you’re claiming nobody should do X, having just done X. That would be deeply weird.
I phrased that poorly; please see my reply to Vlad’s reply for an explanation.
I weakly think Ben’s decision to search for negative information rather than positive was a good policy, but that the investigation was lacking in some other respects.
I’ve never been affiliated with a university group. I’m sad to hear that at least some university groups seem to be trying to appeal to ambitious prestige-chasers, and I hope it’s not something the CEA Groups team has applied generally. I wonder if it comes from a short-sighted strategy of trying to recruit those who are most likely to end up in powerful positions in the future, which would be in line with the reasons for the focus on the most prestigious universities. I call it short-sighted because filling the next generation of your movement with people who are light on values and strong on politics seems like a certain way to kill what’s valuable about EA (such as commitments to altruism and truth-seeking).