Nick Bostrom’s website now lists him as “Principal Researcher, Macrostrategy Research Initiative.”
Doesn’t seem like they have a website yet.
This seems relevant to any intervention premised on “it’s good to reduce the amount of net-negative lives lived.”
If factory-farmed chickens have lives that aren’t worth living, then one might support an intervention that reduces the number of factory-farmed chickens, even if it doesn’t improve the lives of any chickens that do come to exist. (It seems to me this would be the primary effect of boycotts, for instance, although I don’t know empirically how true that is.)
I agree that this is irrelevant to interventions that just seek to improve conditions for animals, rather than changing the number of animals that exist. Those seem equally good regardless of where the zero point is.
I wholeheartedly agree, and think we need to look elsewhere to apply this model.
Donor lotteries unhealthily exhibit winner-take-all dynamics, centralizing rather than distributing power. If the winner makes a bad decision, then the impact of that money evaporates, which makes it a very risky proposition.
A more robust solution would be to distribute the funds proportionally among everyone who joins, based on the amount they put in. This would democratize funding ability throughout EA and lead to a much healthier funding ecosystem.
The concrete suggestions here seem pretty wild, but I think the possible tension between computationalism and shrimp welfare is interesting. I don’t think it’s crazy to conclude “given x% credence on computationalism (plus these moral implications), I should reduce my prioritization of shrimp welfare by nearly x%.”
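(A minimal sketch of the arithmetic, under the assumption that the computationalist argument, if it held, would reduce the marginal value of the work to roughly zero:)

$$\mathbb{E}[\text{value}] = x \cdot 0 + (1 - x)\,V = (1 - x)\,V$$

where $x$ is the credence in computationalism plus its moral implications, and $V$ is the value of shrimp welfare work conditional on that argument failing, so the expected value falls by roughly the fraction $x$.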
That said, the moral implications are still quite wild. To paraphrase Parfit, “research in [ancient Egyptian shrimp-keeping practices] cannot be relevant to our decision whether to [donate to SWP today].” The Moral Law keeping a running tally of previously-done computations and giving you a freebie to do a bit of torture if it’s already on the list sounds like a reductio.
A hazy guess is that something like “respecting boundaries” is a missing component here? Maybe there is something wrong with messing around with a water computer that’s instantiating a mind, because that mind has a right to control its own physical substrate. Seems hard to fit with utilitarianism though.
Thanks for posting, these look super interesting!
I’m hoping to read (and possibly respond to) more, but I ~randomly started with the final article “Saving the World Starts at Home.”
My thoughts on this one are mostly critical: I think it fundamentally misunderstands what EA is about (due to relying too heavily on a single book for its conception of EA), and will not be persuasive to many EAs. But it raises a few interesting critiques of EA prioritization at the end.
The Most Good You Can Do has a list (referred to as “The List”) of some prototypical EA projects; roughly: “earn to give, community building, working in government, research, organizing, organ donation.”
Thesis of the piece: “Building a good home” should be on The List.
Some reasons it’s good to build a good home: having a refuge (physical and psychological safety), showing hospitality to others, raising a family.
I was expecting to see discussion of externalities here; perhaps focusing on how creating a good home can boost effectiveness in other altruistic endeavors, or how there are more spillovers to society than might be expected. The latter shows up a bit, but this mostly discusses benefits to the people who physically enter your home.
Traditional EA priorities have been critiqued on the following grounds:
Demandingness
Motivational obstacles / they’re psychologically difficult
Epistemic limits: the world is very complicated
Ineffectiveness
Grift
Building a good home is not subject to these criticisms: it’s not overly demanding, it’s intrinsically motivating (or at least more so than traditional EA interventions), and it clearly produces direct good outcomes without difficult-to-determine nth-order effects.
According to Singer, EAs don’t need to maximize the good at all times, and don’t have to be perfectly impartial. So it’s not necessary to discuss whether this is among the most effective interventions in order to argue that this should be an EA priority—effectively creating some good is enough.
(IMO this is simply a misunderstanding of EA, and undermines much of the article.)
Why do EAs ignore this issue? Some suggestions:
It’s not effective enough to count as an EA priority. (The rest of the article is arguing against this point.)
Status: It’s lower-status than other EA priorities, like donating lots of money to charity or producing interesting research
It’s less amenable to calculation
EAs have a bias toward “direct” rather than “indirect” forms of benevolence
(This seems in tension with the point from earlier about how reading to your kid produces clear, direct value, in contrast to the unclear and more-prone-to-backfire approach of donating to Oxfam. I also think EAs are super willing to consider indirect benevolence, but I digress.)
Politics: “Building a home” is conservative-coded in the US, and EA is left-leaning.
I think the “status” and “politics” critiques of EA prioritization are useful and probably under-discussed.
Certain fields (e.g. AI safety research) are often critiqued for being suspiciously interesting / high-status / high-paying, but this makes the case that even donating to GiveWell is a little suspicious in how much status it can buy. (But I think there are likely much more efficient ways to buy status; donating 1% of your income probably buys much more than 1/10 the status you’d get from donating 10%.)
I also think it’s reasonably likely that there are some conservative-coded causes that EAs undervalue for purely political reasons (but I don’t have any concrete examples at hand).
There are a few fundamental issues with the analysis that cause this to fail to connect for me.
(this is a bit scattershot; I tried to narrow it down to a few points to prevent this from being 3x longer)
It’s too anchored on Singer’s description of EA in The Most Good You Can Do, rather than the current priorities of the community.
A recurring example is “should you work an extra hour at Starbucks to donate $10 to Oxfam, or spend that hour hosting friends or reading a story to your kid?”
Oxfam is not currently a frequently-recommended charity in EA circles (it’s not recommended by GWWC, although Singer’s org The Life You Can Save does recommend it).
I’ve never heard “work a low-wage job to give” advocated as a top EA recommendation, so this isn’t a strong point of comparison.
It doesn’t engage with the typical criteria for EA causes (e.g. the ITN framework), and especially fails to engage with on-the-margin thinking.
“Are we to believe that effective altruists think that if we had more bad homes, this would not affect how much people care about the global poor or give to charities? Surely not.”
The question of how big this impact is, or how much a marginal increase in “good homes” creates a marginal increase in charitable giving (and how that compares to other approaches to increasing donations) is not discussed.
“If large numbers of people were regularly giving much of their income to charity and donating their kidneys, these activities would not thereby cease being acts of effective altruism. So, home life cannot be excluded from the List simply because many people already do it.”
Neglectedness is a key consideration for determining EA priorities: if there were no shortage of kidney donors, the argument for kidney-donation-as-effective-altruism would indeed be much weaker.
Rather than arguing directly that “building a good home” has positive externalities on par with the good done by other EA priorities, the main argument seems to be something like “this is technically compatible with the definition of effective altruism in TMGYCD.”
From the conclusion of section VII: “Assuming home life is an effective way of [creating] great good for the world, then effective altruists should have no complaint about recommending it as one potential expression of effective altruism. … [Otherwise,] the effective altruist commits to a very demanding view, one they should state and defend.”
I think this conflates “demandingness” (asking people to sacrifice a lot) with “having a high bar for declaring something an EA intervention.” For instance, you can recommend only the top 0.01% of charities, but still only ask people to give 10%.
EAs do state and defend the view that there should be a very high bar for what counts as an EA intervention.
Two more nitpicky points:
“hosts and guests of the 80k podcast laughing at the ‘wokeness’ of this or that when civil rights/feminism are being brought in a conversation”
A Google search turned up one instance of a guest discussing wokeness, which was Bryan Caplan explaining why not to read the news:
(15:45) But the main thing is they’re just giving this overwhelmingly skewed view of the world. And what’s the skew exactly? The obvious one, which I will definitely defend, is an overwhelming left-wing view of the world. Basically, the woke Western view is what you get out of almost all media. Even if you’re reading media in other countries, it’s quite common: the journalists in those other countries are the most Westernised, in the sense of they are part of the woke cult. So there’s that. That’s the one that people complain about the most, and I think those complaints are reasonable.
But long before anyone was using the word “woke,” there’s just a bunch of other big problems with the news. The negativity bias: bad, bad, bad, sad, sad, sad, angry, angry, angry.
This wasn’t in the context of civil rights or feminism being discussed, and I couldn’t find any other instances where that was the case. Rob doesn’t comment on the “woke” bit here one way or another, and doesn’t laugh during these paragraphs. So unless there’s an example I missed, I think this characterization is incorrect.
“posts on LessWrong talking about foetus’s sentience without mentioning ONCE reproductive rights”
This is probably an example of decoupling vs contextualizing norms clashing, but I don’t think I see anything wrong here. Whether or not a fetus is sentient is a question about the world with some correct answer. Reproductive rights also concern fetuses, but don’t have any direct bearing on the factual question; they also tend to provoke heated discussion. So separating out the scientific question and discussing it on its own seems fine.
Some reactions I have to this:
In my (limited) personal experience, AI safety / longtermism isn’t diverse along racial or gender lines, which probably indicates talented people aren’t being picked up. Seems worth figuring out how to do a better job here. Similarly for EA as a whole, although this varies between cause areas (iirc EA animal advocacy has a higher % of women than EA as a whole?)
I’m genuinely unsure how accurate / fair the statement “EA has an issue of sexism” is. But certainly there is a nonzero amount, which is more than there should be, and the accounts of sexism and related unwelcome-attitudes-toward-women in the community make me very sad.
The optimal amount of “cultural offputtingness” is not zero. It should be possible to “keep EA weird” without compromising on racial/gender/etc inclusion, and there are a lot of contingent weird things about EA that aren’t core to its goodness. But there are also a lot of ways I can see a less-weird EA being a less-good EA overall.
The link between increased diversity / decreased tech-bro reputation and passing AI safety regulations seems tenuous to me.
I have a general, vague sense that “do this for PR reasons” is not a good way to get something done well
It doesn’t seem like public perception updates very frequently (to take one example, here’s Fast Company two days ago saying ETG is the “core premise” of EA). I don’t think we should completely give up here, but unfortunately the “EA = techbro” perception is pretty baked in and I expect it to only change very gradually, if at all.
EA is also not very politically diverse—there are very few Republicans, and even the ones that are around tend to be committed libertarians rather than mainstream GOP voters. If we’re just considering the impact on passing AI safety regulations, having a less left-leaning public image could be more useful. (For the reasons in the two bullet points above though, I’m also skeptical of this idea; I just think it has a similar a priori plausibility.)
On reflection, I think the somewhat combative tone (framing disagreement as “refusal to admit” and being “in denial”) is fine here, but it did negatively color my initial reading, and probably contributed to some downvotes / disagree votes.
When you say a full hiring round is misdirected, what is this compared to? (Maybe pitching the position to individuals the org knows and trusts, and giving them more time to consider it and overcome their hesitations?) I don’t have any real experience in hiring so I’m not sure whether I see the drawback being implicitly pointed at here.
On their page explaining their definition of positive impact in more depth, footnote 1 clarifies:
“We often say “helping people” here for simplicity and brevity, but we don’t mean just humans — we mean anyone with experience that matters morally — e.g. nonhuman animals that can suffer or feel happiness, even conscious machines if they ever exist.”
I think it would be better to make it clearer that animals are included. But it’s not the case that they exclude animals from moral consideration.
https://80000hours.org/articles/what-is-social-impact-definition/
In case anyone else was wondering about pricing: most of the features described require the premium plan, which is $139 / year.
(There is a free version, but it only includes some low-quality voices and doesn’t allow changing speed, so it’s not very useful.)
I’ll explain my downvote.
I think the thing you’re expressing is fine, and reasonable to be worried about. I think Anthropic should be clear about their strategy. The Google investment does give me pause, and my biggest worry about Anthropic (as with many people, I think) has always been that their strategy could ultimately lead to accelerating capabilities more than alignment.
I just don’t think this post expressed that thing particularly well, or in a way I’d expect or want Anthropic to feel compelled to respond to. My preferred version of this would engage with reasons in favor of Anthropic’s actions, and how recent actions have concretely differed from what they’ve stated in the past.
My understanding of (part of) their strategy has always been that they want to work with the largest models, and sometimes release products with the possibility of profiting off of them (hence the PBC structure rather than a nonprofit). These ideas also sound reasonable (but not bulletproof) to me, so I consequently didn’t see the Google deal as a sudden change of direction or backstab—it’s easily explainable (although possibly concerning) in my preexisting model of what Anthropic’s doing.
So my objection is jumping to a “demand answers” framing, FTX comparisons, and accusations of Machiavellian scheming, rather than an “I’d really like Anthropic to comment on why they think this is good, and I’m worried they’re not adequately considering the downsides” framing. The former, to me, requires significantly more evidence of wrongdoing than I’m aware of or you’ve provided.
Here’s an attempt at a meta-level diagnosis of the conversation. My goal is to explain how the EA Forum got filled with race-and-IQ conversations that nobody really wants to be having, with everyone feeling like the other side is at fault for this.
First, the two main characters.
Alice from Group A is:
High-contextualizing
Tends to bring up diversity as a value in conversations
Finds Bostrom’s apology highly inadequate
Absolutely does not want there to be object-level discussions of group IQ differences on the EA forum
Thinks statements that look racist should be strongly challenged, especially since they seem very likely to alienate people from EA.
Bob from Group B is:
High-decoupling
Tends to bring up epistemics as a value in conversations.[1]
Probably thinks Bostrom’s apology was at least fine, if not great (since they can’t find a sentence in it which, when read literally, expresses something they think is wrong)
Thinks conversations on a topic can only be improved by additional true information on that topic
So statements that seem false should be strongly challenged.
I’m naturally more of a Group B, but as the discussion has evolved, I think I’ve moved toward understanding and agreeing with the concerns of Group A.[2] Hopefully this allows me to be moderately objective here—but I expect I’m still biased in the B direction, so I welcome those who are naturally more A to tear this to shreds.
With the groundwork laid, here’s my potted conversation between Alice from A and Bob from B.
Alice: Bostrom’s apology is inadequate. He should completely renounce the position in the old email. Saying there’s a racial IQ gap is completely unacceptable, and he should renounce this too.
Bob: I understand criticizing Bostrom’s apology, but as far as I can tell he was correct about the existence of an IQ gap. Here, look at these sources I found. You can’t ask him to say something false.
Alice: I absolutely do not want to discuss the question of whether or not there is an IQ gap. Please don’t bring up this question, it will be extremely alienating to tons of people for no benefit.
Bob: Hold up, it seems to me like you made a factual claim about race and IQ before I did. I’m just continuing the conversation you started. Am I not allowed to point out your mistake?
Alice: If you go around discussing questions of race and IQ, people will assume that you’re a racist. It could be ok to discuss this question in narrow contexts in academia, but it’s not ok here and the discussion is going to make us all look bad.
Bob: But you said something false! Are you saying we have to lie for good PR? I don’t support that.
Alice: I’m saying I don’t want to be having this object-level conversation, can’t we just agree to condemn racist ideas?
[debate continues, neither side is happy about it.]
I don’t mean to imply by this framing that diversity and epistemics are inherently in opposition—this is just an observation that each side mentions one more than the other. I expect both A and B care about both values.
Remembering other forums that were practically split apart by discussions of group IQ differences was one big update for me toward “discussing this on the EA forum is really bad.” This makes me sympathize more with wishing the conversation could have been avoided at all costs, although I’m less sure what to do going forward.
I upvoted this post and think it’s a good contribution. The EA community as a whole has done damage to itself the past few days. But I’m worried about what it would mean to support having less epistemic integrity as a community.
This post says both:
“If you believe there are racial differences in intelligence, and your work forces you to work on the hard problems of resource allocation or longtermist societal evolution, nobody will trust you to do the right tradeoffs.”
and
“If he’d said, for instance, ‘hey I was an idiot for thinking and saying that. We still have IQ gaps between races, which doesn’t make sense. It’s closing, but not fast enough. We should work harder on fixing this.’ That would be more sensible. Same for the community itself disavowing the explicit racism.”
The first quote says believing X (that there exists a racial IQ gap) is harmful and will result in nobody trusting you. The second says X is, in fact, true.[1]
For my own part, I will trust someone less if they endorse statements they think are false. I would also trust someone less if they seemed weirdly keen on having discussions that kinda seem racist. Unfortunately, it seems we’re basically having to decide between these two options.
My preferred solution is to—while being as clear as possible about the context, and taking great care not to cause undue harm—maintain epistemic integrity. I think “compromising your ability to say true, relevant things in order to be trusted more” is the kind of galaxy-brain PR move that probably doesn’t work. You incur the cost of decreased epistemic integrity, and then don’t fool anyone else anyway. If I can lose someone’s trust by saying something true in a relevant context,[2] then keeping their trust was a fabricated option.
I’m left not knowing what this post wants me to do differently. When I’m in a relevant conversation, I’m not going to lie or dissemble about my beliefs, although I will do my best to present them empathetically and in a way that minimizes harm. But if the main thrust here is “focus somewhat less on epistemic integrity,” I’m not sure what a good version of that looks like in practice, and I’m quite worried about it being taken as an invitation to be less trustworthy in the interest of appearing more trustworthy.
I’ve seen other discussions where someone seems to both claim “the racial IQ gap is shrinking / has no genetic component / is environmentally caused” and “believing there is a racial IQ gap is, in itself, racist.”
I think another point of disagreement might be whether this has been a relevant context to discuss race and IQ. My position is that if you’re in a discussion about how to respond to a person saying X, you’re by necessity also in a discussion about whether X is true. You can’t have the first conversation and completely bracket the second, as the truth or falsity of X is relevant to whether believing X is worthy of criticism.