Assuming two interventions are roughly equally effective in life-years saved, the intervention saving old lives must (necessarily) save more lives in the short run. E.g. saving 4 lives granting 10 life-years each vs. saving 1 life granting 40 life-years.
Some fraction of people who don’t work on AI risk cite “wanting to have more certainty of impact” as their main reason. But I think many of them are running the same risk anyway: namely, that what they do won’t matter because transformative AI will make their work irrelevant or dramatically lower-value.
This is especially obvious if they work on anything that primarily returns value after a number of years. E.g. building an academic career or any career where most impact is realized later, working toward policy changes, some movement-building things, etc.
But it also applies somewhat to things like nutrition or vaccination or even preventing deaths, where most value is realized later (through better life outcomes, or living an extra 50 years). This category does still have certainty of some impact, it’s just that the amount might be cut by whatever fraction of worlds are upended in some way by AI. And this might affect what they should prioritize… e.g. they should prefer saving old lives over young ones, if the interventions are pretty close on naive effectiveness measures.
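As a toy version of that adjustment (the symbols are my own, not from any standard model): if a fraction $p$ of worlds are upended at roughly year $T$, and in those worlds only benefits realized before $T$ count, then an intervention’s expected value is roughly

$$\mathbb{E}[V] \approx (1-p)\,V_{\text{total}} + p\,V_{\text{before }T},$$

so interventions whose life-years are front-loaded (older recipients) lose less to the second term than ones whose life-years mostly arrive decades out (younger recipients).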
Feels like there’s some line where your numbers are getting so tiny and speculative that many other considerations start dominating, like “are your numbers actually right?” E.g. I’d be pretty skeptical of many proposed “.000001% of a huge number” interventions (especially skeptical on the .000001% side).
In practice, the line could be where “are your numbers actually right” starts becoming the dominant consideration. At that point, proving your numbers are plausible is the main challenge that needs to be overcome—and is honestly where I suspect most people’s anti-low-probabilities intuitions come from in the first place.
Very cool!
Random thought: could include some of Yoshua Bengio’s or Geoffrey Hinton’s writings/talks on AI risk concerns in week 10 (& could include LeCun as a counterpoint to get all 3), since they’re very well-cited academics & Turing Award winners for deep learning.
I haven’t looked through their writings/talks to find the most directly relevant, but some examples: https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/ https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks/
My experience is that it’s more that group leaders & other students in EA groups might reward poor epistemics in this way.
And that when people are being more casual, it ‘fits in’ to say AI risk, & people won’t press for reasons as much in those contexts, but would push back if you said something unusual.
Agree, my experience with senior EAs in the SF Bay was often the opposite: I was pressed to explain why I’m concerned about AI risk & to respond to various counterarguments.
No, though maybe you’re using the word “intrinsically” differently? For the (majority) consequentialist part of my moral portfolio: The main intrinsic bad is suffering, and wellbeing (somewhat broader) is intrinsically good.
I think any argument about creating people/etc is instrumental—will they or won’t they increase wellbeing? They can both potentially contain suffering/wellbeing themselves, and affect the world in ways that affect wellbeing/suffering now & in the future. This includes effects before they are born (e.g. on women’s lives). TBH given your above arguments, I’m confused about the focus on abortion—it seems like you should be just as opposed to people choosing not to have children, and focus on encouraging/supporting people having kids.
For now, I think the ~main thing that matters from a total-view longtermist perspective is making it through “the technological precipice”, where permanent loss of sentient life/our values is somewhat likely, so other total-view longtermist arguments arguably flow through effects on this + on influencing the trajectory for good. And abortion access seems good for civilization trajectory: women can have children when they want and don’t have their lives & health derailed, more women involved in the development of powerful technology probably makes those fields more cautious/less rash, there are fewer ‘unwanted children’ [who probably have worse life outcomes], etc. So abortion access seems good.
Maybe related: in general when maximizing, I think it’s probably best to find the most important 1-3 things, then focus on those. (e.g. for the temp of my house, focus on the thermostat temp + outside temp + insulation quality, ignore body heat & similar small things)
I don’t think near-term population is helpful for long-term population or wellbeing, e.g. >10,000 years from now. More likely a negative effect than a positive one imo, especially if the mechanism for trying to increase near-term population is restricting abortion (this is not a random sample of lives!)
I also think it seems bad for the general civilization trajectory (partly norm-damaging, but mostly just the direct effects on women & children), and probably bad for our ability to make investments in resilience & be careful with powerful new technology. These seem like the most important effects from a longtermist perspective, so I think abortion restriction is bad from a total-view longtermist perspective.
I guess I did mean aggregate in the ‘total’ well-being sense. I just feel pretty far from neutral about creating people who will live wonderful lives, and also pretty strongly disagree with the belief that restricting abortion will create more total well-being in the long run (or short tbh).
For total-view longtermism, I think the most important things are roughly: civilization is on a good trajectory, people are prudent/careful with powerful new technology, the world is lower-conflict, investments are made to improve resilience to large catastrophes, etc. Restricting abortion seems kinda bad for several of those things, and positive for none. So it seems like total-view longtermism, even ignoring all other reasons to think this, says abortion restriction is bad.
I guess part of this is a belief that in the long run, the number of morally-valuable lives & total wellbeing (e.g. 10 million years from now) are largely uncorrelated or anti-correlated with near-term world population. (Though I also think restricting abortion is one of the worst ways to go about increasing near-term population, even for those who do think near-term & very-long-term population are pretty positively correlated.)
Abortion being morally wrong is a direct logical extension of a longtermist view that highly values maximizing the number of people, on the assumption that the average existing person’s life will have positive value.
I’m a bit confused by this statement. Is a world where people don’t have access to abortion likely to have more aggregate well-being in the very long run? Naively, it feels like the opposite to me
To be clear I don’t think it’s worth discussing abortion at length, especially considering bruce’s comment. But I really don’t think the number of people currently existing says much about well-being in the very long run (arguably it’s negatively correlated). And even if you wanted to increase near-term population, reducing access to abortion is a very bad way to do that, with lots of negative knock-on effects.
Agree that was a weird example.
Other people around the group (e.g. many of the non-Stanford people who sometimes came by & worked at tech companies) are better examples. Several weren’t obviously promising at the time, but are doing good work now.
typo, imo. (in my opinion)
I’m somewhat more pessimistic that disillusioned people have useful critiques, at least on average. EA asks people to swallow a hard pill “set X is probably the most important stuff by a lot”, where X doesn’t include that many things. I think this is correct (i.e. the set will be somewhat small), but it means that a lot of people’s talents & interests probably aren’t as [relatively] valuable as they previously assumed.
That sucks, and creates some obvious & strong motivated reasons to lean into not-great criticisms of set X. I don’t even think this is conscious, just a vague ‘feels like this is wrong’ reaction when people say [thing I’m not the best at / dislike] is the most important. This is not to say set X doesn’t have major problems.
They might more often have useful community critiques imo, e.g. more likely to notice social blind spots that community leaders are oblivious to.
Also, I am concerned about motivated reasoning within the community, but don’t really know how to correct for this. I expect the most-upvoted critiques will be the easy-to-understand, plausible-sounding ones that assuage the problem above or soothe social feelings, rather than the correct ones about our core priorities. See some points here: https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism
I’d add a much more boring cause of disillusionment: social stuff
It’s not all that uncommon for someone to get involved with EA, make a bunch of friends, and then have those friendships gradually get filtered by who gets accepted to prestigious jobs or does ‘more impactful’ things in the community’s estimation (often genuinely more impactful!)
Then sometimes they just start hanging out with cooler people they meet at their jobs, or just get genuinely busy with work, while their old EA friends are left on the periphery (+ gender imbalance piles on relationship stuff). This happens in normal society too, but there seem to be more norms/taboos there that blunt the impact.
Your second question, “Will the potential negative press and association with Democrats be too harmful to the EA movement to be worth it?”, seems to ignore that a major group EAs will be running against is Democrats in primaries.
So it’s not only that you’re creating large incentives for Republicans to attack EA, you’re also creating them for e.g. progressive Democrats. See: Warren endorsing Flynn’s opponent & somewhat attacking Flynn over crypto-billionaire-sellout stuff.
That seems potentially pretty harmful too. It’d be much harder to be an active group at top universities if progressive groups strongly disliked EA.
Which I think they would, if EAs ran against progressives enough that Warren or Bernie or AOC criticized EA more strongly. That would be in line with the incentives we’re creating & the general vibe [pretty skeptical of a bunch of white men, crypto billionaires, etc.].
Random aside, but does the St. Petersburg paradox not just make total sense if you believe Everett & do a quantum coin flip? i.e. in 1⁄2 universes you die, & in 1⁄2 you more than double. From the perspective of all things I might care about in the multiverse, this is just “make more stuff that I care about exist in the multiverse, with certainty”
Or more intuitively, “with certainty, move your civilization to a different universe alongside another prospering civilization you value, and make both more prosperous”.
Or if you repeat it, you get “move all civilizations into a few giant universes, and make them dramatically more prosperous.”
Which is clearly good under most views, right?
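Rough arithmetic behind that intuition (my own toy framing, not a careful treatment of Everettian decision theory): suppose each flip kills the branch with probability 1/2 and multiplies value by some $k > 2$ in the surviving branch. Then after $n$ flips,

$$\text{measure-weighted total} = \left(\tfrac{1}{2}\right)^{n} k^{n} V_0 = \left(\tfrac{k}{2}\right)^{n} V_0,$$

which grows with every flip even though the surviving measure shrinks to $2^{-n}$. So if what you care about is the measure-weighted total across branches, repeating the bet looks straightforwardly good, which seems to be the “few giant universes” picture above.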
Another complication: we want to select for people who are good fits for our problems, e.g. math kids, philosophy research kids, etc. To some degree, we’re selecting for people with personal-fun functions that match the shape of the problems we’re trying to solve (where what we’d want them to do is pretty aligned with their fun)
I think your point applies to cause selection, “intervention strategy”, or decisions like “moving to Berkeley”. I’m confused more generally.
I’m confused about how to square this with specific counterexamples. Say theoretical alignment work: P(important safety progress) probably scales with time invested, but not 100x by doubling your work hours. Any explanations here?
Idk if this is because uncertainty/probabilistic stuff muddles the log picture. E.g. we really don’t know where the hits are, so many things are ‘decent shots’. Maybe after we know the outcomes, the outlier-good things would look quite bad on the personal-liking front. But that doesn’t sound exactly right either.
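One toy model of that counterexample (the functional form and numbers are pure assumptions, just to make the scaling concrete):

```python
import math

def p_progress(hours, scale=2000.0):
    """Toy diminishing-returns curve for P(important safety progress)
    as a function of hours invested; 'scale' is an arbitrary constant."""
    return 1 - math.exp(-hours / scale)

base = p_progress(1000)    # roughly half a work-year
double = p_progress(2000)  # doubling the hours
print(double / base)       # ~1.6x, nowhere near 100x
```

Under any curve like this, doubling hours buys maybe 1.5-2x, so if there really are 100x-style differences they presumably come from which problem/approach you pick rather than from hours worked, which might be part of the answer.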
Curious if you disagree with Jessica’s key claim, which is “McKinsey << EA for impact”? I agree Jessica is overstating the case for “McKinsey ≤ 0”, but it seems like the best case for McKinsey is still order(s) of magnitude less impact than EA.
Subpoints:
Current market incentives don’t address large risk-externalities well, or appropriately weight the well-being of very poor people, animals, or the entire future.
McKinsey for earn-to-learn/give could theoretically be justified, but that doesn’t contradict Jessica’s point about spending money to get EAs.
Most students require a justification for any charitable actor spending significant amounts of money on movement building, & ‘competing with McKinsey’ reads favorably as one.
Agree we should usually avoid saying poorly-justified things when it’s not a necessary feature of the argument, as it could turn off smart people who would otherwise agree.
There were tons of covid cases coming out of EAGx Boston (an area with lower covid case counts). I’m one of them. Idk the exact numbers, but >100 if I extrapolate from my EA friends.
Not sure whether this is good or bad tho, as IFR is a lot lower now. Presumably lower long covid too, but hard to say
I don’t know what you mean? You can look at existing interventions that primarily help very young people (neonatal or childhood vitamin supplementation) vs. comparably effective interventions that target adults or older people (e.g. cash grants, schistosomiasis treatment).
There are multiple GiveWell charities in both categories, so this is just saying you should weight the ones that target older folks more heavily, by maybe a factor of 2x or more, vs. what GiveWell says (they assume the world won’t change much).
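A minimal sketch of where a ~2x adjustment could come from (all inputs below are assumptions for illustration, not GiveWell’s numbers):

```python
def adjusted_life_years(years_per_person, people_saved,
                        p_upheaval=0.5, years_until_upheaval=20):
    """Discount life-years that would only be realized after a possible
    AI-driven upheaval; assumes each saved person's remaining life-years
    accrue one per calendar year."""
    full = years_per_person * people_saved
    early = min(years_per_person, years_until_upheaval) * people_saved
    return (1 - p_upheaval) * full + p_upheaval * early

young = adjusted_life_years(60, 1)  # e.g. one infant death averted
old = adjusted_life_years(10, 6)    # same 60 naive life-years, older recipients
print(old / young)                  # 1.5x in favor of the older-targeting one
```

Shorter assumed timelines or a higher p_upheaval push the factor toward 2x or beyond.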