The vote in the European Parliament took place on October 8. You can see the Procedure File here; the press release, with links to further information, is here; and this is the result of the roll-call votes. A graphic representation of the vote on the entire legislative proposal, including EP amendments, can be found, for example, here.
Achim
Just for your information, in case you want to read the legislative proposal: the FAQ links to a document with amendments to proposal 2024/0319(COD), which seems wrong. The actual proposal seems to be COM(2025) 553 final.
Online Meetup
The Type Of Animal Husbandry Is Relevant For Animal Welfare
This is an interesting question. Even if the conditions were not fulfilled in almost all cases, I have not yet seen an answer to this question concerning ethical judgements in those cases where they are fulfilled.
When considering this question, the more general point is that the way that different animals are farmed should make some difference in ethical judgement. This post is about quantitative comparisons of suffering, but the differences in farming seem to be neglected. In particular, Brian Tomasik’s table on which this post is based ranks different animals by “Equivalent days of suffering caused per kg demanded”, but this comparison is strongly driven by column 5, “Suffering per day of life (beef cows = 1)”:
“Column 5 represents my best-guess estimates for how bad life is per day for each type of farm animal, relative to that animal’s intrinsic ability to suffer. That is, differences in cognitive sophistication aren’t part of these numbers because they’re already counted in Column 4. Rather, Column 5 represents the “badness of quality of life” of the animals. For instance, since I think the suffering of hens in battery cages is perhaps 4 times as intense per day as the suffering of beef cows, I put a “1” in the beef-cow cell and “4” in the egg cell.”
I don’t mind using subjective estimates in such calculations, but note that this assumes that an average day in the life of each of these animals is a day of suffering. This may be the case in factory farming, but I doubt that it is a necessary assumption for alpine pasture. However, if an average day in the life of a cow on alpine pasture is good, we would need a negative sign.
You can enter a negative sign in the table. However, you’ll get an error message, because the whole table is built on the assumption that “Suffering per day of life” is positive. Under this assumption, raising the “Average lifespan (days)” (Column 2) increases the “Equivalent days of suffering caused per kg demanded”. If that assumption holds, then it is good that farmed animals are “killed at a fraction of their natural lifespans”.
Moreover, Tomasik writes, “Column 6 is a best-guess estimate of the average pain of slaughter for each animal, expressed in terms of an equivalent number of days of regular life for that animal. For instance, I used “10” as an estimate for broiler chickens, which means I assume that on average, slaughter is as painful as 10 days of pre-slaughter life.”
If the animals actually enjoy their life (a negative number in column 5), you can still make the table work by entering a negative number in column 6 as well: these are the days of good life an animal forgoes by being slaughtered. So if we take the numbers in the table for beef and assume that column 5 is −1 (though I don’t know how to interpret this, as everything is relative to beef-cow suffering), we need to enter −395 in column 6 to get zero in column 7.
I’d be interested to hear if someone has a more general calculator.
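In the meantime, here is a minimal sketch (in Python) of what I have in mind. The formula is my reconstruction of the table from the column descriptions quoted above, namely column 7 = column 4 × column 5 × (column 2 + column 6) / kg per animal, which at least reproduces the −395 beef example; the function and parameter names, as well as the kg-per-animal figure in the usage example, are purely illustrative.

```python
# Minimal sketch of a more general calculator. Assumed formula
# (my reconstruction from the column descriptions, not the official spreadsheet logic):
#   equivalent days of suffering per kg
#     = sentience_factor * welfare_per_day * (lifespan_days + slaughter_days) / kg_per_animal
# Unlike the original table, negative values are allowed throughout, so a
# net-positive life (e.g. on alpine pasture) simply yields a negative result.

def equivalent_days_per_kg(
    lifespan_days: float,     # Column 2: average lifespan in days
    kg_per_animal: float,     # kg of food one animal yields over its lifetime
    sentience_factor: float,  # Column 4: ability to suffer relative to beef cows
    welfare_per_day: float,   # Column 5: positive = suffering, negative = life worth living
    slaughter_days: float,    # Column 6: slaughter as equivalent days of regular life
) -> float:
    """Equivalent days of beef-cow-level suffering caused per kg demanded."""
    return sentience_factor * welfare_per_day * (lifespan_days + slaughter_days) / kg_per_animal


# The beef example from above: with -1 in column 5, entering -395 in column 6
# cancels the 395-day lifespan, so column 7 becomes zero (and no error message).
# The 200 kg per animal is purely illustrative.
print(equivalent_days_per_kg(395, 200, 1.0, -1.0, -395.0))  # prints -0.0, i.e. zero
```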
“While from an emotional perspective, I care a ton about our kids’ wellbeing, from a utilitarian standpoint this is a relatively minor consideration given I hope to positively impact many beings’ lives through my career.”
I find this distinction a bit confusing. After all, every hour spent with your kid is probably “relatively minor” compared to the counterfactual impact of that hour on “many beings’ lives”. So it seems to me that evaluating personal costs, expected experiences, and so on makes sense at all only if the kid’s wellbeing is very important to you. Or do I misunderstand that?
“Thus, we looked at the child’s wellbeing on a very high level—guessing that our children have good chances at a net positive life because they will likely grow up with lots of resources and a good social environment.”
Out of interest: Do you consider catastrophic risks to be small enough not to matter, and is there a point at which that would change?
Thanks for the article.
Did aspects of the child’s wellbeing, expected life satisfaction, life expectancy, etc. enter your considerations?
Which of David’s posts would you recommend as a particularly good example and starting point?
“Also—I’m using scare quotes here because I am very confused who these proposals mean when they say EA community. Is it a matter of having read certain books, or attending EAGs, hanging around for a certain amount of time, working at an org, donating a set amount of money, or being in the right Slacks?”
It is of course a relevant question who this community is supposed to consist of, but at the same time, this question could be asked whenever someone refers to the community as a collective agent that does something, holds a certain opinion, benefits from something, etc. For example, you write “They may be interested in community input for their funding, via regranting for example, or invest in the Community”. If you can’t define the community, you cannot clearly say that someone has invested in it. You later speak of “managing the relationship between the community and its most generous funder”, but it seems hard to say how this relationship is currently managed if the community is so hard to define.
Which global, technological, political etc developments do you currently find most relevant with regards to parenting choices?
If you don’t want to justify your claims, that’s perfectly fine; no one is forcing you to discuss in this forum. But if you do, please don’t act as if it’s my “homework” to back up your claims with sources and examples. I also find it inappropriate that you throw around accusations like “quasi religious”, “I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs”, or “just prone to conspiracy theories like QAnon”, while at the same time being unwilling or unable to name any examples of “what experts in the field think about what AI can actually do”.
There have been loads of arguments offered on the forum and through other sources like books, articles on other websites, podcasts, interviews, papers etc. So I don’t think that what’s lacking are arguments or evidence.
I’d still be grateful if you could post a link to the best argument (by your own impression) from some well-respected scholar against AGI risk. If there are “loads of arguments”, this shouldn’t be hard. Somebody asked for something like that here, and there aren’t many convincing answers, and none that would comprehensively and authoritatively put the cause area to rest.
I think the issue is the mentality some people in EA have when it comes to AI. Are people who are waiting for people to bring them arguments to convince them of something really interested in getting different perspectives?
I think so—see footnote 2 of the LessWrong post linked above.
Why not just go look for differing perspectives yourself?
Asking people for arguments is often one of the best ways to look for differing perspectives, in particular if these people have strongly implied that plenty of such arguments exist.
This is a known human characteristic: if someone really wants to believe in something, they can believe it even to their own detriment and will not seek out information that may contradict their beliefs.
That this “known human characteristic” strongly applies to people working on AI safety is, up to now, nothing more than a claim.
(I was fascinated by the tales of COVID patients denying that COVID exists even when dying from it in an ICU).
I share that fascination. In my impression, such COVID patients have often previously dismissed COVID as a kind of quasi-religious death cult, implied that worrying about catastrophic risks such as pandemics is nonsense, and claimed that no arguments would convince the devout adherents of the ‘pandemic ideology’ of the incredulity of their beliefs.
Therefore, debating in this style only seems helpful once you have already formed a strong opinion as to which side is right, because you can always just claim that the other side’s reasoning is motivated by religion/ideology/etc. Without such a prior opinion, the arguments seem like Bulverism.
I witnessed this lack of curiosity in my own cohort that completed AGISF. … They are all very nice amicable people and despite all the conversations I’ve had with them they don’t seem open to the idea of changing their beliefs even when there are a lot of holes in the positions they have and you directly point out those holes to them. In what other contexts are people not open to the idea of changing their beliefs other than in religious or other superstitious contexts? Well the other case I can think of is when having a certain belief is tied to having an income, reputation or something else that is valuable to a person.
I don’t work in AI Safety, I am not active in that area, and I am happy when I get arguments that tell me I don’t have to worry about things. So I can guarantee that I’d be quite open to such arguments. And given that you imply that the only reason these nice people still want to work in AI Safety is that they are quasi-religious or otherwise biased, I am looking forward to your object-level arguments against the field of AI Safety.
Given this looks very much like a religious belief I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs.
I’d be interested in whether you actually tried that, and whether it’s possible to read your arguments somewhere, or whether you just saw a superficial similarity between religious beliefs and the AI risk community and therefore decided that you don’t want to discuss your counterarguments with anybody.
Talking is a great idea in general, but it seems there are some opinions in this survey suggesting that there are barriers to talking openly?
I think most democratic systems don’t work that way—it’s not that people vote on every single decision. Democratic systems are usually representative democracies in which people can try to convince others that they would be responsible policymakers, and in which these policymakers are then subject to accountability and checks and balances. Of course, in an unrestricted democracy you could also elect people who would then become dictators, but that just means that a democracy also needs democrats, and that fundamental decisions about structures may have to come first.
While I am also worried by Will MacAskill’s view as cited by Erik Hoel in the podcast, I think that Erik Hoel does not really give evidence for his claim that “this influences EA funding to go more towards alignment rather than trying to prevent/delay AGI (such as through regulation)”.
In my impression, the most influential argument of the camp against the initiative was that factory farming just doesn’t exist in Switzerland. Even if it was only one of the arguments, and not the most influential one, I think this speaks volumes about both the (current) debate culture and the limits of how hopeful we should be that relevantly similar EA-inspired policies will soon see widespread implementation.
Is there any empirical research on the motivation of voters (and non-voters) in this referendum? The swissinfo article you mention does not directly use this argument, it just cites something somewhat similar:
Interior Minister Alain Berset, responsible for the government’s stance on the initiative, said on Sunday that citizens had “judged that the dignity of animals is respected in our country, and that their well-being is sufficiently protected by current legislation”.
and:
Opponents of the ban, including government and a majority of parliament, had warned that the change would have led to higher prices, reduced consumer choice, and floods of foreign products arriving to fill the gap – despite the initiative stipulating that imports would also have to conform to the new standards.
Over the past months, a majority of farmers, led by the Farmers’ Federation, fought vehemently against what they saw as an unfair attack on them as a means to reduce meat consumption in society more broadly.
“If organizations have bad aims, should we seek to worsen their decision-making?”
That depends on the concrete case you have in mind. Consider supplying your enemy with false but seemingly credible information during a war: this is a case where you actively try to worsen their decision-making. But even in a war, there may be some information you want the enemy to have (for example, the location of a hospital that should not be targeted). In general, you do not simply want to “worsen” an opponent’s decision-making, but to influence it in a direction that is favorable from your own point of view.
Conversely, if a decision-maker is only somewhat biased from your point of view and has to decide based on uncertain information, you may want her to understand that information precisely, because misinterpretation can cut both ways: it may be good if misinterpreting the situation makes her choose in your favor, but it is often much worse if misinterpretation pushes her decision in the other direction.
Link collection for today’s workshop
Topic block: Forecasting
Calibration in the Forecasting Wiki
Calibration Exercise
Fatebook: Predict your year
Confido for personal forecasting and calibration
One’s Future Behavior as a Domain of Calibration
Metaculus Movers
Other
Adam Mastroianni: So you wanna de-bog yourself (“Some problems are like getting a diploma: you work at it for a while, and then you’re done forever. Learning how to ride a bike is a classic diploma problem. But most problems aren’t like that. They’re more like toothbrushing problems: you have to work at them forever until you die. You can’t, as far as I know, just brush your teeth really really well and then let ’em ride forever.”)