sorry, I got your name wrong in my reply (changed now)! I'm going to look into my question further, and read some of https://reducing-suffering.org/, which you linked to. That's as a result of this post :)
Matt Goodman
Linkpost: Italy introduces bill to ban lab-grown meat
I went through these experiences voluntarily and with the knowledge that I have the freedom to stop whenever I want. People suffering from painful disease, children dying of hunger, chickens being electrocuted to death, fish being asphyxiated to death—for these individuals, such experiences are a horrific reality, not an experiment
I think this is a very important distinction that should be given more emphasis. When I’ve experienced severe pain, the no.1 thought in my mind was “oh god make it stop”. This makes complete sense if you think of pain as your body’s way of saying, “ok, whatever it is you’re doing, you need to stop doing it now.” And I think a lot of the psychological suffering I experienced was due to the stress of not being able to stop the thing that was causing pain, and not knowing how long the pain would go on for. I add the word ‘psychological’ for clarity here, but in reality I don’t think there’s a clear difference between ‘psychological’ and ‘physical’ sources of pain. All pain in a sense is psychological—all of it happens ‘in your mind’, and factors such as knowing the pain will end soon can have a big effect on the experience of pain.
This distinction could also have a big effect on how people rate their pain on the pain-track framework. The framework seems to define pain a lot in terms of 'how long could a person endure this?' And that answer probably varies a lot depending on whether or not you know the pain will go away soon. For 'disabling' pain, it could literally be less disabling if you know it's going to end soon. You might think something like, "ok, I know this will end in 5 minutes, for now I'm going to do this other job to distract myself". And looking back at the experience, and your behaviour at the time, you might read the scale and think "ok, it wasn't that disabling, I could still do stuff".
Hey Ren, this is a great post!
I share your intuition that reducing extreme suffering is the no.1 moral imperative for humankind.
What charities do you recommend, if that's what you value most? GiveWell recommends charities based on its own moral weights, which I don't think weight reducing extreme suffering as highly as I do.
Then there are many animal welfare charities. And there's OPIS, which is the only charity I know of that explicitly targets extreme human suffering. Are there any others that I'm missing?
My guess is that it wouldn’t change much
Maybe not for most people reading the EA Forum. I think if you take a serious look at the issues of animal suffering and farmed animal conditions, you'll probably arrive at a number similar to existing statistics on the number of factory-farmed animals.
But I think there are plenty of people whose motivated reasoning leads them to doubt those statistics, or to minimise the badness/factory-ness of a farm or farming practice. For example, my extended family run a dairy farm. I remember thinking, when first reading about factory farms, 'well, the family farm isn't like these factory farms… right?'
I also think it’s possible animal agriculturists will seize on uncertainty around the term ‘Factory Farm’ to sow confusion and whitewash animal welfare issues. Suppose that in the future, the concept of ‘Factory Farms’ gains widespread public vilification, in the same way that ‘Fossil Fuels’ does now. Now imagine a pan-European animal agriculture lobby group seizes on the looseness of the term ‘Factory Farm’ to ensure European farms aren’t associated with it:
European farms aren’t Factory Farms! We have better animal welfare standards here. There are cage-free policies here! Animal welfare laws! Standards and checks! It’s only farms outside of Europe that are factory farms, those are the ones that should be counted in the statistics, not European farms!
I don’t see this as “economic or moral incentive to sit on the borderline” but rather ‘if forced to adhere to higher welfare standards, there’s an incentive to maximise the reputational gain from this’.
edit: added last paragraph
[Question] What is the formal definition of a ‘Factory Farm’?
Why aren’t we protesting AI acceleration in the street?
I'm not super up to date with the latest EA thinking on current AI capabilities. The takes I read on social media from Yudkowsky and the like are something along the lines of 'We're at a really dangerous time; various companies are engaged in an arms race to make more and more powerful AIs with little regard for safety, and this will directly lead to humanity being wiped out by AGI in the near future'. For people who really believe this to be true (especially if you live in San Francisco): why aren't you protesting in the street?
Some reasons this might work:
There are lots of precedents of public pressure leading to laws being passed or procedures being changed, increasing safety standards across many industries
The companies working on AI alignment are based in San Francisco. There’s a big EA and rationalist community in SF. Protests could happen outside the HQ of AI companies.
Stories about silicon valley tech companies get lots of press coverage in mainstream media
There's a prevailing anti-big-tech feeling in parts of society that could be tapped into
Specifically, there are criticisms of the newest AIs for things like 'training AI models on artists' work, then putting artists out of a job' (DALL-E) or 'making it much easier to cheat at university' (ChatGPT). Whilst this isn't directly related to AGI safety, it's the kind of feeling that could be tapped into for the purpose of this protest
If an AI safety researcher could be interviewed on camera at the march, it would add credibility to the march: experts are concerned
It adds credibility to the voices of experts warning about AI risk, if they’re so worried they’re willing to get out on the street to protest about it
Matt Goodman’s Quick takes
I feel uncomfortable with this kind of public character judgement of an alleged victim. Especially when it's presented without a source or evidence backing up the claim that she's 'hella scary'.
maybe ‘social-justice-caring left’ is a better term
I think using the term ‘woke left’ will be counter-productive to your aim of reaching out to politically left people. While ‘woke’ started as a term used by the left, I now see it being used almost exclusively by the right as a pejorative term for the left, and most politically left people I know would be annoyed at being called ‘woke’.
What would that add? I think that would add speculation on to what is already speculation, and I’d think only the passing of time would be able to give feedback on whether the predictions turn out to be true.
I guess it could give more information if you sought out different people for the meta-predictions than those who made the original predictions. But then I'm not sure why you wouldn't just have these new people answer the original prediction questions directly.
I think this might be partly due to the complex structure (and subsequent re-structure) of CEA. ‘CEA’ used to be a dual name for both a legal entity and the community building organisation.
I think this led me, in the past, to have a vague idea of what 'CEA' was, and to think that the public-facing Community Health Team represented all of it and was responsible for more than it was.
This is kind of a separate issue though, here I’d just like to say I’m grateful for the work the Community Health Team does, and don’t want to distract from the discussion of the accusations made here.
Can you expand on what you dislike about the marketing? When looking at their website I was just dazzled at all the animations, my developer brain was trying to figure out how they worked ;)
Thanks, I didn’t know that one!
Assuming that this is both useful and time- or funding-constrained, you could be selective in how you roll it out. Images of world leaders and high-profile public figures seem most likely to be manipulated, and would have the highest negative impact if many people were fooled. You could start there.
Props for taking the time to explain, even though you don’t like it!
I’d like to be able to hide the amount of karma and agreement points a comment or post has. I think seeing how many people have upvoted a statement affects how likely I am to agree with or upvote that statement. I think it makes me more likely to vote in accordance with social agreement, rather than whether or not I think a statement is true or well written. I’d like to be able to turn this off from time to time. Strongly downvoted comments should probably still be hidden.
I think the UI for voting could be improved in the following ways:
The arrows for voting on Karma point sideways, not up and down. It’s not immediately clear which one is upvote and which one is downvote.
The explanation text about voting (the one that explains Karma, agree/disagree and strong votes) only appears when you hover your mouse over the arrows. This means you never see it on mobile, where there’s no mouse.
The hit boxes for the arrows could be bigger on mobile.
Your comment made me realise I’m actually talking about two different things:
When you can choose to end the pain at any point e.g. exercise, the hand-in-cold-water experiment.
When you can’t choose to end the pain, but you know that it will end soon with some degree of certainty. e.g. “medics will be here with morphine in 10 minutes”, or “we can see the head, the baby’s almost out”.
I agree with you that having some kind of peer pressure or social credit for ‘doing well’ can help a person withstand pain. I’d imagine this has an effect on the hand-in-cold-water experiment, if you’re doing it on your own vs as part of a trial with onlookers.