Maybe worth noting this is by Max Tegmark?
Larks
Thanks for sharing both the original and this version of the argument!
I realize this is basically an aside and doesn’t really affect your bottom line, but I don’t think you can draw this inference:
Humanity-caused climate change and land use have contributed to a loss of 69% of wildlife since 1970.
Quoting Our World In Data:
the LPI does not tell us the number of species, populations or individuals lost; the number of extinctions that have occurred; or even the share of species that are declining. It tells us that between 1970 and 2018, on average, there was a 69% decline in population size across the 31,821 studied populations.
This paper also argued the methodology is systematically biased downwards, but I haven’t evaluated it.
The LPI indicates that vertebrate populations have decreased by almost 70% over the last 50 years. This is in striking contrast with current studies based on the same population time series data that show that increasing and decreasing populations are balanced on average. Here, we examine the methodological pipeline of calculating the LPI to search for the source of this discrepancy. We find that the calculation of the LPI is biased by several mathematical issues which impose an imbalance between detected increasing and decreasing trends and overestimate population declines.
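The Our World In Data point can be made concrete with a toy example (purely illustrative numbers, not the LPI's actual dataset or its exact weighting scheme): an average of per-population change ratios can diverge wildly from the change in total individuals.

```python
# Two hypothetical populations tracked over the study period:
#   A: large and stable, B: small and collapsing.
start = {"A": 10_000, "B": 100}
end   = {"A": 10_000, "B": 1}

# LPI-style summary: geometric mean of per-population change ratios.
ratios = [end[k] / start[k] for k in start]       # [1.0, 0.01]
geo_mean = (ratios[0] * ratios[1]) ** 0.5          # 0.1 -> "90% decline"

# By contrast, the change in the total number of individuals is tiny.
total_change = sum(end.values()) / sum(start.values())

print(f"Index decline: {1 - geo_mean:.0%}")         # 90%
print(f"Individuals lost: {1 - total_change:.1%}")  # ~1.0%
```

So a headline like "90% decline" is compatible with almost no loss of individuals, and says nothing about extinctions or the share of species declining.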
It seems like the vast majority of the people who attended the conference do meta EA work and/or work at a large EA org (e.g. OP, GWWC, CEA).
Isn’t that what you’d expect from a Meta Coordination Forum? It’s the forum for meta people to coordinate at. There are other forums for people doing object-level work.
I don’t understand what the title has to do with the body of the text. ‘Meme’ either means a unit of cultural information or a funny picture with text; EA is definitely not ‘just’ the latter, and it is the former to the same extent that environmentalism or any other movement is.
It’s probably to do with the fact that being in finance (banker, consultant, whatever) is pretty much something any jackass can do. Pushing money around, dealing with people, risk assessment, you can pretty much just turn your brain off really, especially compared to technical fields. But banking is not only not a very useful job, it’s also incredibly morally dubious to work for companies that do fuck all for the world aside from scam customers and invest the money in fossil fuel industries and terrorist organizations. Like OK, yeah, better you have the job than some schmuck who wouldn’t donate anything and would spend the money on cars and luxury homes, but there are other jobs you can get that are not only useful, but comparative in their income.
This is probably the single most ignorant paragraph I have ever read about the financial industry.[1] The sorts of finance jobs EAs do for EtG are not easy; they are some of the most competitive and challenging jobs in the world. Nor is it the case that scamming, fossil fuel investment and investing in terrorist organizations (how would that even work? Does Al-Qaeda pay dividends?) is all they do. This guy is apparently aware that financial companies invest in fossil fuel companies—does he think there is some other industry that handles investment in all other types of firms, including green infrastructure? Finance plays a number of important roles in society, from facilitating transactions to matching savers and borrowers to allowing people to adjust their risk to vetting and due diligence to forecasting the future. These are valuable services and people voluntarily pay to use them. There are some valid criticisms of the industry, but this guy is just so ill-informed I seriously think your “friend” should start by reading a basic Wikipedia article on the subject.
… and medicine and STEM are only comparable in income to the extent that you can compare them and observe them to be lower.
- ^ Or possibly the management consulting industry? Who knows; not the author, that’s for sure.
Thanks very much for this detailed analysis and write-up, I really appreciate when people exhibit this level of self-evaluation.
For the far future to be a significant factor in moral decisions, it must lead to different decisions compared to those made when only considering the near future. If the same decisions are made in both cases, there is no need to consider the far future.
Given the vastness of the future compared to the present, focusing on the far future risks harming the present. Resources spent on the far future could instead be used to address immediate problems like health crises, hunger, and conflict.
This seems a very strange view. If we knew the future would not last long—perhaps a black hole would swallow up humanity in 200 years—then the future would not be very vast, it would have less moral weight, and aiding it would be less demanding. Would this really leave longtermism more palatable to the critics?
One new thing to me in that thread was that the California Legislature apparently never overrides the governor’s vetoes. I wonder why this is the case there and not elsewhere.
My understanding is a lot of that is just that consumers didn’t want them. From the first source I found on this:
Safety-conscious car buyers could seek out—and pay extra for—a Ford with seatbelts and a padded dashboard, but very few did: only 2 percent of Ford buyers took the $27 seatbelt option.
This is not surprising to me given that, even after the installation of seatbelts became mandatory, it was decades until most Americans actually used them. Competition encouraged manufacturers to ‘cut corners’ on safety in this instance precisely because that was what consumers wanted them to do.
I think you probably want:
( CO2 averted * Social Cost of Carbon ) - Economic Costs
Your equation will give an infinite score to a policy which could avert 1 gram of CO2 for zero cost, even if it was totally non-scalable.
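The difference is easy to see in a quick sketch (assuming, as your infinite-score example suggests, the original formula was a benefit/cost ratio; the SCC figure and policy numbers below are made up for illustration):

```python
# Assumed social cost of carbon, $/tonne (illustrative, not a real estimate).
SCC = 200.0

def ratio_score(tonnes_averted, cost):
    # Benefit/cost ratio: explodes as cost -> 0, regardless of scale.
    return float("inf") if cost == 0 else (tonnes_averted * SCC) / cost

def net_benefit(tonnes_averted, cost):
    # (CO2 averted * Social Cost of Carbon) - Economic Costs
    return tonnes_averted * SCC - cost

tiny = (1e-6, 0.0)        # averts 1 gram at zero cost, non-scalable
big = (1_000_000, 50e6)   # averts 1M tonnes at $50M cost

print(ratio_score(*tiny), ratio_score(*big))   # inf vs 4.0
print(net_benefit(*tiny), net_benefit(*big))   # ~$0.0002 vs $150,000,000
```

The ratio ranks the gram-scale policy above the one producing $150M of net benefit; the net-benefit formula does not.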
It seems strange to me to call a perfectly normal work week ‘neglecting leisure’. Typically when people are thinking about the argument that altruistic considerations mean they should work more than normal people they are talking about unusually long work weeks.
Gavin Newsom vetoes SB 1047
I think experimentation with new approaches is good, so for that reason I’m a fan of this.
When I evaluate your actual arguments for this particular mechanism design though, they seem quite weak. This makes me worry that, if this mechanism turns out to be good, it will only be by chance, rather than because it was well designed to address a real problem.
To motivate the idea you set up a scenario with three donors, varying dramatically in their level of generosity:
Donors 1 and 3 both think animals matter a lot, but Donor 3 is skeptical of the existing charities. Donor 1 doesn’t have access to the information that makes Donor 3 skeptical. It’s unclear if Donor 3 is right, but aggregating their beliefs might better capture an accurate view of the animal welfare space.
Donor 2 knows a lot about their specific research area, but not other areas, so they just give within GCRs and not outside it. They’d be happy to get the expertise of Donors 1 and 3 to inform their giving.
All three are motivated by making the world better, and believe strongly that other people have good views about the world, access to different information, etc.
I struggle to see how this setup really justifies the introduction of your complicated donation pooling and voting system. The sort of situation you described already occurs in many places in the global economy—and within the EA movement—and we have standard methods of addressing it, for example:
Donor 3 could write an article or an email about their doubts.
Donor 1 could hire Donor 3 as a consultant.
Donor 1 could delegate decisions to Donor 3.
Donor 2 can just give to GCR, this seems fine, they are a small donor anyway.
They could all give to professionally managed donation funds like the EA funds.
What all of these have in common is they attempt to directly access the information people have, rather than just introducing it in a dilute form into a global average. The traditional approach can take a single expert with very unusual knowledge and give them major influence over large donors; your approach gives this expert no more influence than any other person.
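The dilution effect can be made concrete with a toy calculation (purely illustrative numbers, not taken from your post):

```python
# Toy model: 1 informed expert and 9 uninformed donors direct a shared
# pool between charity A and charity B. The expert knows B is better
# and gives 100% to B; the uninformed donors split 50/50 on average.

n_donors = 10
pool = 10_000  # total dollars, divided equally under an equal-vote scheme

# Equal-weight pooling: each donor controls pool / n_donors.
expert_to_b = 1.0 * (pool / n_donors)               # expert's full share
others_to_b = 0.5 * (pool * (n_donors - 1) / n_donors)
share_b_pooled = (expert_to_b + others_to_b) / pool
print(f"Pooled: {share_b_pooled:.0%} of funds reach B")       # 55%

# Delegation: a large donor defers to (or hires) the expert, so the
# expert effectively directs the whole pool.
share_b_delegated = 1.0
print(f"Delegated: {share_b_delegated:.0%} of funds reach B")  # 100%
```

Under equal-weight pooling the expert's information moves the allocation only 5 percentage points; delegation lets it move the whole pool.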
This also comes up in your democracy point:
Equal Hands functions similarly to tax systems in democracies — we don’t expect people who pay more in taxes to have better views about who should be elected to spend that tax money.
The way modern democratic states work is decidedly not that everyone can determine where a fraction of the taxes go if they pay a minimum of tax. Rather, voters elect politicians, who then choose where the money is spent. Ideally voters choose good politicians, and these politicians consult good experts.
One of the reasons for this is that it would be incredibly time consuming for individual voters to make all these determinations. And this seems to be an issue with your proposal also—it simply is not a good use of people’s time to be making donation decisions and filling in donation forms every month for very small amounts of money. Aggregation, whether through large donors (e.g. the donation lottery) or professional delegation (e.g. the EA funds), is the key to efficiency.
The most bizarre thing to me however is this argument (emphasis added):
Donating inherently has huge power differentials — the beliefs of donors who are wealthier inevitably exerts greater force on charities than those with fewer funds. But it seems unlikely that having more money would be correlated with having more accurate views about the world.
Perhaps I am misunderstanding, or you intended to make some weaker argument. But as it stands your premise here, which seems important to the entire endeavor, seems overwhelmingly likely to be false.
There are many factors which are correlated both with having more money and having accurate views about the world, because they help with both: intelligence, education, diligence, emotional control, strong social networks, low levels of chronic stress, low levels of lead poisoning, low levels of childhood disease… And there are direct causal connections between money and accurate views, in both directions, because having accurate views about the world directly helps you make money (recognizing good opportunities for income, avoiding unnecessary costs, etc.) and having money helps you gain more accurate views about the world (access to information, a more well-educated social circle, etc.).
Even absent these general considerations, you can see it just by looking at the major donors we have in EA: they are generally not lottery winners or football players, they tend to be people who succeeded in entrepreneurship or investment, two fields which require accurate views about the world.
Moral Trade, Impact Distributions and Large Worlds
Maybe ask how he chooses which issues to focus on?
Good analysis, thanks for writing this up! It does seem that in general our political/regulatory system has little to no sensitivity to the dollar cost of fulfilling requirements and avoiding identifiable but small harms.
And I think drawing the line at we’re not going to allow hypotheticals about murdering discernible people
Do you think it is acceptable to discuss the death penalty on the forum? Intuitively this seems within scope—historically we have discussed criminal justice reform on the forum, and capital punishment is definitely part of that.
If so, is the distinction state violence vs individual violence? This seems not totally implausible to me, though it does suggest that the offending poster could simply re-word their post to be about state-sanctioned executions and leave the rest of the content untouched.
The baby analogy seems a bit forced to me because babies do not drink blood (and babies in utero do not choose to be there). But if an adult came along and started biting me hard enough to break the skin, potentially infecting me with some disease, I’d consider myself justified in whacking them as hard as it takes to get them off. I guess to your point I’d try to hit them non-lethally though, unlike with a mosquito.
I have no qualms about killing something that literally chose to try to steal my blood.
In a Nov 2023 speech Harris mentioned she’s concerned about x-risk and risks from cyber & bio. She has generally put more emphasis on current harms but so far without dismissing the longer-term threats.
This seems like a very generous interpretation of her speech to me. I feel like you are seeing what you want to see.
For context, this was a speech given when she came to the UK for the AI Safety Summit, which was explicitly about existential safety. She didn’t really have a choice but to mention them unless she wanted to give a major snub to an important US ally, so she did:
But just as AI has the potential to do profound good, it also has the potential to cause profound harm. From AI-enabled cyberattacks at a scale beyond anything we have seen before to AI-formulated bio-weapons that could endanger the lives of millions, these threats are often referred to as the “existential threats of AI” because, of course, they could endanger the very existence of humanity. (Pause)
These threats, without question, are profound, and they demand global action.

… and that’s it. That’s all she said about existential risks. She then immediately derails the conversation by offering a series of non-sequiturs:
But let us be clear. There are additional threats that also demand our action — threats that are currently causing harm and which, to many people, also feel existential.
Consider, for example: When a senior is kicked off his healthcare plan because of a faulty AI algorithm, is that not existential for him?
When a woman is threatened by an abusive partner with explicit, deep-fake photographs, is that not existential for her?
When a young father is wrongfully imprisoned because of biased AI facial recognition, is that not existential for his family?

I think it’s pretty clear that these are not the sorts of things you say if you are actually concerned about existential risks. No-one genuinely motivated by fear of the deaths of every human on earth, and all future generations, goes around saying “oh yeah, and a single person’s health insurance admin problems, that is basically the same thing”.
I won’t quote the speech in full, but I think it is worth looking at. She repeatedly returns to potential harms of AI, but never—once the bare necessities of diplomatic politeness have been met—does she bother to return to catastrophic risks. Instead we have:
… make sure that the benefits of AI are shared equitably and to address predictable threats, including deep fakes, data privacy violations, and algorithmic discrimination.
and
… establish a national safety reporting program on the unsafe use of AI in hospitals and medical facilities. Tech companies will create new tools to help consumers discern if audio and visual content is AI-generated. And AI developers will be required to submit the results of AI safety testing to the United States government for review.
and
… protect workers’ rights, advanced transparency, prevent discrimination, drive innovation in the public interest, and help build international rules and norms for the responsible use of AI.
and
the wellbeing of their customers, the safety of our communities, and the stability of our democracies.
and
… the principles of privacy, transparency, accountability, and consumer protection.
My interpretation here, that she is basically rejecting AI safety, is not unusual. You can see for example Politico here calling it a ‘rebuke’ to Sunak and the focus on existential risks, and making clear that it was very deliberate.
Overall this actually makes me more pessimistic about Kamala. You clearly wrote this post with a soldier mindset and looked for the best evidence you could find to show that Kamala cared about existential risks, so if this speech, which I think basically suggests the opposite, is the best you could find, then that seems like a pretty big negative update. In particular it seems worse than Trump, who gave a fairly clear explanation of one causal risk pathway (deepfakes causing a war), and he did this without being explicitly asked about existential risks and without a teleprompter. Are there any examples of Kamala, unprompted, bringing up in an interview the risk of AI causing a nuclear war, or taking over the human race?
I agree with your point that the record of the Biden Administration seems fairly good here, and she might continue out of status quo bias, continuity of staff, and so on. But in terms of her specific views she seems significantly less well aligned than Biden or Rishi were, and maybe less than Trump.
(I previously wrote about this here)
Notably bargaining between worldviews, if done behind a veil of ignorance, can lead to quite extreme outcomes if some worldviews care a lot more about large worlds.