Thanks for the article.
Did aspects of the child’s wellbeing, such as expected life satisfaction and life expectancy, enter your considerations?
If you don’t want to justify your claims, that’s perfectly fine; no one is forcing you to discuss in this forum. But if you do, please don’t act as if it’s my “homework” to back up your claims with sources and examples. I also find it inappropriate that you throw around accusations like “quasi religious”, “I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs”, or “just prone to conspiracy theories like QAnon”, while at the same time being unwilling or unable to name any examples of “what experts in the field think about what AI can actually do”.
“Given this looks very much like a religious belief I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs.”
I’d be interested in whether you actually tried that, and whether it’s possible to read your arguments somewhere, or whether you just saw superficial similarity between religious beliefs and the AI risk community and therefore decided that you don’t want to discuss your counterarguments with anybody.
This is an interesting question. Even if the conditions are not fulfilled in almost all cases, I have not yet seen an answer to this question concerning ethical judgements in the cases where they are fulfilled.
When considering this question, the more general point is that the way that different animals are farmed should make some difference in ethical judgement. This post is about quantitative comparisons of suffering, but the differences in farming seem to be neglected. In particular, Brian Tomasik’s table on which this post is based ranks different animals by “Equivalent days of suffering caused per kg demanded”, but this comparison is strongly driven by column 5, “Suffering per day of life (beef cows = 1)”:
“Column 5 represents my best-guess estimates for how bad life is per day for each of type of farm animal, relative to that animal’s intrinsic ability to suffer. That is, differences in cognitive sophistication aren’t part of these numbers because they’re already counted in Column 4. Rather, Column 5 represents the “badness of quality of life” of the animals. For instance, since I think the suffering of hens in battery cages is perhaps 4 times as intense per day as the suffering of beef cows, I put a “1“ in the beef-cow cell and “4” in the egg cell.”
I don’t mind using subjective estimates in such calculations, but note that this assumes that an average day in the life of each of these animals is a day of suffering. That may be the case in factory farming, but I doubt it is a necessary assumption for alpine pasture. If an average day in the life of a cow on alpine pasture is good, we would need a negative sign.
You can enter a negative sign in the table, but you’ll get an error message, because the whole table is based on the assumption that “Suffering per day of life” is positive. Under this assumption, raising the “Average lifespan (days)” (Column 2) increases the “Equivalent days of suffering caused per kg demanded”, and it is then “good” that farmed animals are “killed at a fraction of their natural lifespans”.
Moreover, Tomasik writes, “Column 6 is a best-guess estimate of the average pain of slaughter for each animal, expressed in terms of an equivalent number of days of regular life for that animal. For instance, I used “10″ as an estimate for broiler chickens, which means I assume that on average, slaughter is as painful as 10 days of pre-slaughter life.”
If the animals actually enjoy their life (a negative number in column 5), you can still use that column by entering a negative number in column 6; this then represents the good days of life the animal forgoes by being slaughtered. So if we take the numbers in the table for beef and assume that column 5 is −1 (though I don’t know how to interpret this, as everything is relative to beef-cow suffering), we need to enter −395 in column 6 to get zero in column 7.
I’d be interested if someone has a more general calculator, along the lines of the sketch below.
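To sketch what I mean, here is a minimal calculator in Python. The formula (per-day welfare multiplied by the sum of lifespan and slaughter-equivalent days, weighted by sentience and divided by output per animal) is my own reading of how the table’s columns combine, and the kg figures are placeholders, not Tomasik’s numbers:

```python
# Minimal sketch of a more general calculator that also allows net-positive
# lives (negative "suffering per day"). Column mapping and formula are my
# interpretation of Tomasik's table, not taken from it directly.

def equivalent_suffering_days_per_kg(
    lifespan_days: float,         # Column 2: average lifespan in days
    sentience_factor: float,      # Column 4: relative capacity to suffer
    welfare_per_day: float,       # Column 5: positive = suffering, negative = good life
    slaughter_equiv_days: float,  # Column 6: slaughter pain in days of regular life
    kg_per_animal: float,         # edible output per animal (placeholder values below)
) -> float:
    """Net equivalent days of suffering caused per kg demanded.

    Slaughter pain is expressed in days of regular life, so it is weighted
    by the same per-day welfare term. A negative result would mean that
    demand buys net-positive animal life.
    """
    total_welfare_days = welfare_per_day * (lifespan_days + slaughter_equiv_days)
    return sentience_factor * total_welfare_days / kg_per_animal

# Reproduces the beef example above: welfare -1 and -395 in column 6
# (the good days forgone by slaughter) cancel to zero.
print(equivalent_suffering_days_per_kg(395, 1.0, -1.0, -395, 200.0))  # 0.0

# A factory-farming-style case with purely hypothetical numbers: positive
# per-day suffering plus a painful slaughter yields positive suffering per kg.
print(equivalent_suffering_days_per_kg(42, 1.0, 3.0, 10.0, 2.0))      # 78.0
```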
Your Richland-Poorland example is indeed illustrative, thanks. However, it seems the problem caused by immigration does not only occur when incomes in Richland were equalized before the immigration; it also occurs when people care about the degree of income inequality in their own country. So if Richlanders are free-market fans but do not like domestic inequality, they will want to keep the Poorlanders out.
“However, socialism and open borders don’t mix well, because once you turn a society into a giant workers’ co-op, adding new members always comes at the expense of the current members.”
Why should that be the case? Wealth and income of this giant workers’ co-op are not fixed, and why shouldn’t they scale with the number of members?
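To make the disagreement precise, here is a stylized model of my own (not anything from the original post). Whether new members dilute per-member income depends entirely on whether output scales with membership:

```latex
% Stylized co-op model (my own assumption): output Y with L members,
% capital K, productivity A, and labor share \alpha.
Y = A\,K^{1-\alpha}L^{\alpha}, \qquad 0 < \alpha \le 1,
\qquad y \equiv \frac{Y}{L} = A\,K^{1-\alpha}L^{\alpha-1}.
% Per-member income y falls in L only under diminishing returns:
\frac{\partial y}{\partial L} = (\alpha-1)\,A\,K^{1-\alpha}L^{\alpha-2} \le 0,
\qquad \text{with equality iff } \alpha = 1.
% If capital scales with membership (K proportional to L), then
% y = A\,(K/L)^{1-\alpha} is constant as L grows.
```

So the quoted claim seems to assume fixed co-op assets or diminishing returns; if the co-op’s wealth and capital scale with membership, adding members need not come at the expense of current members.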
“While from an emotional perspective, I care a ton about our kids’ wellbeing, from a utilitarian standpoint this is a relatively minor consideration given I hope to positively impact many beings’ lives through my career.”
I find this distinction a bit confusing. After all, every hour spent with your kid is probably “relatively minor” compared to the counterfactual impact of that hour on “many beings’ lives”. So it seems to me that evaluating personal costs, expected experiences and so on at all only makes sense if the kid’s wellbeing is very important to you, or do I misunderstand that?
“Thus, we looked at the child’s wellbeing on a very high level—guessing that our children have good chances at a net positive life because they will likely grow up with lots of resources and a good social environment.”
Out of interest: Do you consider catastrophic risks to be small enough not to matter, and is there a point at which that would change?
Which of David’s posts would you recommend as a particularly good example and starting point?
“There have been loads of arguments offered on the forum and through other sources like books, articles on other websites, podcasts, interviews, papers etc. So I don’t think that what’s lacking are arguments or evidence.”
I’d still be grateful if you could post a link to the best argument (by your own lights) from some well-respected scholar against AGI risk. If there are “loads of arguments”, this shouldn’t be hard. Somebody asked for something like that here, and there aren’t many convincing answers, and none that would basically put the cause area to rest comprehensively and authoritatively.
“I think the issue is the mentality some people in EA have when it comes to AI. Are people who are waiting for people to bring them arguments to convince them of something really interested in getting different perspectives?”
I think so—see footnote 2 of the LessWrong post linked above.
“Why not just go look for differing perspectives yourself?”
Asking people for arguments is often one of the best ways to look for differing perspectives, in particular if these people have strongly implied that plenty of such arguments exist.
“This is a known human characteristic, if someone really wants to believe in something they can believe it even to their own detriment and will not seek out information that may contradict with their beliefs.”
That this “known human characteristic” strongly applies to people working on AI safety is, up to now, nothing more than a claim.
“(I was fascinated by the tales of COVID patients denying that COVID exists even when dying from it in an ICU.)”
I share that fascination. In my impression, such COVID patients have often previously dismissed COVID as a kind of quasi-religious death cult, implied that worrying about catastrophic risks such as pandemics is nonsense, and claimed that no arguments would convince the devout adherents of the ‘pandemic ideology’ of the incredulity of their beliefs.
Therefore, debating in this style only seems helpful when you have already formed a strong opinion as to which side is right; otherwise, you can always just claim that the other side’s reasoning is motivated by religion/ideology/etc., and the arguments amount to Bulverism.
“I witnessed this lack of curiosity in my own cohort that completed AGISF. … They are all very nice amicable people and despite all the conversations I’ve had with them they don’t seem open to the idea of changing their beliefs even when there are a lot of holes in the positions they have and you directly point out those holes to them. In what other contexts are people not open to the idea of changing their beliefs other than in religious or other superstitious contexts? Well the other case I can think of is when having a certain belief is tied to having an income, reputation or something else that is valuable to a person.”
I don’t work in AI Safety, I am not active in that area, and I am happy when I get arguments that tell me I don’t have to worry about things. So I can guarantee that I’d be quite open to such arguments. And given that you imply that the only reason these nice people still want to work in AI Safety is that they are quasi-religious or otherwise biased, I am looking forward to your object-level arguments against the field of AI Safety.
While I am also worried by Will MacAskill’s view as cited by Erik Hoel in the podcast, I think that Erik Hoel does not really give evidence for his claim that “this influences EA funding to go more towards alignment rather than trying to prevent/delay AGI (such as through regulation)”.
The following quotes from the current 80,000 Hours podcast episode with Will MacAskill seem a weird combination to me:
“I really don’t know the point at which the arguments for longtermism just stop working because we’ve just used up all of the best targeted opportunities for making the long term go well, such that there’s just no difference between a longtermist argument and just an argument that’s about building a flourishing society in general. Maybe you hit that at 50%, maybe it’s 10%, maybe it’s even 1%. I don’t really know. But given what the world currently prioritises, should we care more about our grandkids and their grandkids and how the course of the next few millennia and millions of years go? Yes. And that’s the claim.”
“One important thing is to distinguish between is something a good thing to do, and is it the best thing to do? The core idea of effective altruism is we want to focus on the very best thing. And I entirely buy that even if you’re just concerned about what happens over the next century, reducing the risks of extinction and other sorts of catastrophes, like reducing the risk of misaligned AI takeover, are just extremely good things to do. And even concerned about the next century, society should be investing a lot more in making sure they don’t happen. Effective altruism is about doing the best we can. And certainly on its face, it would seem extremely suspicious and surprising if the best thing we could do for the very, very long term is also the very best thing we can do for the very short term.”
“So my last donation was to the Lead Exposure Elimination Project …, a new organisation incubated within the effective altruism community, which tries to eliminate lead paint and ultimately lead exposure from all sorts of sources. Lead exposures are really bad. It’s really bad from a health perspective, also lowers people’s IQ, lowers their general cognitive functioning. Some evidence that it kind of increases violence and social dysfunction. … So it seems like they’re really making traction. This is an example of very broad longtermist action, where I think this sort of intervention is maybe kind of different from certain other sorts of global health and development programmes. If I imagine a world where people are a bit smarter, they don’t have mild brain damage from lead exposure that has lowered their IQ and made them more impulsive, more violent, it just broadly seems like a much better society. That was the first argument. And then the second was just I think it’s really good for EAs to be doing things in the world — making it better, achieving concrete wins. … And then the final thing is just that they actually seem to me to be in real need of money and further funding, in a way that lots of the maybe more core, narrowly targeted longtermist work is not currently. So my sense is that a lot of the best giving opportunities are more in the stuff that’s a bit broader, because that really hasn’t been as much of a focus of grantmakers.”
As this is (probably) central to coordination: is there something like a clear decision-making structure to decide what “the community” actually wants (i.e., what “pursuing EA goals” concretely means in a given situation if there are trade-offs)? Is there an overview/explanation of this structure?
“Also—I’m using scare quotes here because I am very confused who these proposals mean when they say EA community. Is it a matter of having read certain books, or attending EAGs, hanging around for a certain amount of time, working at an org, donating a set amount of money, or being in the right Slacks?”
It is of course a relevant question who this community is supposed to consist of, but at the same time, this question could be asked whenever someone refers to the community as a collective agent doing something, having a certain opinion, benefitting from something etc. For example, you write “They may be interested in community input for their funding, via regranting for example, or invest in the Community”. If you can’t define the community, you cannot clearly say that someone invested in it. You later speak of “managing the relationship between the community and its most generous funder”, but it seems hard to say how this relationship is currently managed if the community is so hard to define.
Which global, technological, political etc developments do you currently find most relevant with regards to parenting choices?
Talking is a great idea in general, but some opinions in this survey seem to suggest that there are barriers to talking openly?
I think most democratic systems don’t work that way—it’s not that people vote on every single decision; democratic systems are usually representative democracies where people can try to convince others that they would be responsible policymakers, and where these policymakers then are subject to accountability and checks and balances. Of course, in an unrestricted democracy you could also elect people who would then become dictators, but that just says that you also need democrats for a democracy, and that you may first need fundamental decisions about structures.
In my impression, the most influential argument of the camp against the initiative was that factory farming just doesn’t exist in Switzerland. Even if it was only one of the influential arguments rather than the most influential one, I think this speaks volumes about both the (current) debate culture and the limits of how hopeful we should be that relevantly similar EA-inspired policies will soon see widespread implementation.
Is there any empirical research on the motivation of voters (and non-voters) in this referendum? The swissinfo article you mention does not directly use this argument; it just cites something somewhat similar:
Interior Minister Alain Berset, responsible for the government’s stance on the initiative, said on Sunday that citizens had “judged that the dignity of animals is respected in our country, and that their well-being is sufficiently protected by current legislation”.
and:
Opponents of the ban, including government and a majority of parliament, had warned that the change would have led to higher prices, reduced consumer choice, and floods of foreign products arriving to fill the gap – despite the initiative stipulating that imports would also have to conform to the new standards.
Over the past months, a majority of farmers, led by the Farmers’ Federation, fought vehemently against what they saw as an unfair attack on them as a means to reduce meat consumption in society more broadly.
“If organizations have bad aims, should we seek to worsen their decision-making?”
That depends on the concrete case you have in mind. Consider supplying your enemy with false but seemingly correct information during a war: a case where you actively try to worsen their decision-making. But even in a war there may be some information you want the enemy to have (such as the location of a hospital that should not be targeted). In general, you do not just want to “worsen” an opponent’s decision-making, but to influence it in a direction that is favorable from your own point of view.
Conversely, if a decision-maker is only somewhat biased from your point of view and has to decide based on uncertain information, you may want her to understand that information precisely, because misinterpretation can cut both ways: a misreading might happen to make her choose in your favor, but a deviation in the other direction is often much worse.
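A toy calculation, with my own illustrative numbers rather than anything from the post, of how noise can hurt you in expectation even though errors sometimes work in your favor:

```python
# Toy numbers (my own, purely illustrative): your payoff from the
# decision-maker's choice under accurate vs. garbled information.

payoff_with_accurate_info = -1.0   # she reads the situation correctly

# With garbled information, suppose she errs in either direction equally often:
payoff_if_error_favors_you = 0.0   # the misreading happens to help you
payoff_if_error_hurts_you = -10.0  # the misreading hurts you badly

expected_payoff_with_noise = (0.5 * payoff_if_error_favors_you
                              + 0.5 * payoff_if_error_hurts_you)
print(expected_payoff_with_noise)  # -5.0, worse than -1.0 with accurate info
```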
Timeboxing and to-do lists
Tim Harford is not convinced that it is a good idea to plan activities in advance and allocate them to blocks of calendar time, so-called “timeboxing”. Instead, he suggests, you should prioritize everything and, so as not to let work expand beyond all limits, set deadlines. He refers to a study in which students were supposed to plan their time daily instead of setting rough monthly goals. The daily “plans backfired disastrously: day after day, the daily planners would fall short of their intentions and soon became demotivated, spending less time on studying and falling behind over the course of the academic year. The more amorphous monthly planners proved far more successful, presumably because they had more flexibility to adapt to events, as well as wasting less time fiddling around with their calendars. A plan that is too specific soon lies in tatters.” Harford himself is convinced of flexibility: “It is clear that some people have made timeboxing work for them. … For me, however, my To Do list is long, and my diary is as clear as I can keep it.”
It is fantastic if you can work in such a goal-oriented way without requiring inner nudges, but that is exactly what timeboxing can provide. Allocating activities to (more or less) fixed blocks of time creates, at least that is presumably the hope here, an inner positive attitude towards the planned work.
What about the time-wasting “fiddling around with their calendars” that Harford mentions? Whoever is able to simply do whatever is currently important will, of course, not need it. But it is often difficult to say what is really important in a given moment, so the inclination to procrastinate on unpleasant tasks is joined by the inclination to play down those tasks’ importance in the moment. The solution may be to accept in advance that some things have to be done. Some people can accept that for whatever is on their to-do lists. Others will have to accept that whatever is planned for (possibly every) Wednesday, 14:00 is important.
“Timeboxing” in the sense of planning your whole life in advance does indeed seem unrealistic and creates the lack of flexibility that Harford mentions. However, acknowledging that you have to do certain things periodically is already part of, e.g., David Allen’s Getting Things Done, which Harford refers to, because GTD strictly includes Weekly Reviews. If you have to cope with a long to-do list in your Weekly Review, that is of course work-intensive, and it will be demotivating if a lot from the previous week, month etc. has not yet been completed (or not even started).
The pragmatic solution, for people who do not feel able to just execute a to-do list, is probably to fix certain time blocks in advance at least for certain high-priority activities (Weekly Review, weekend family time, gym, etc.), based on your experience. This keeps planning from becoming too detailed while still giving your life a structure and your mind a feeling of commitment. To reduce the planning workload, it is useful to reuse many of the same time boxes every week. As a side effect, this may create both ritual and conscious leisure time.
Also note that timeboxing is basically unavoidable if you want to coordinate with other people. Every appointment is timeboxing, and only an extremely important person can completely drop it. Harford’s example is Arnold Schwarzenegger:
This seems not only impolite but also unrealistic (when you act in a movie, you have to show up at certain times), though it may become more feasible the more powerful you are. For your personal life, however: if you want to go running with a friend once a week, coordinated timeboxing strongly reduces coordination costs and also creates commitment (and, again, ritual).
So, rules of thumb: Planning something like 60–80% of your time while keeping the rest as a flexibility buffer seems sensible. If you can do the same activity at the same time each week, that is often a good idea. If you feel you need to plan more or less, do so. To avoid the bad feeling of unaccomplished to-do list items, regularly delete those you won’t do anyway (I think that’s done in Complice) or put them on a someday/maybe list.