A: I didn’t say we should defer only to longtermist experts, and I don’t see how this could come from any good-faith interpretation of my comment. Singer and Gates should get some weight, to the extent that they think about cause prio and issues with short- and longtermism; I’d just want to see the literature.
You cited the views of the leaders forum as evidence that leaders are longtermist, and completely ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.” I also think it’s unreasonable to simply declare that “Christiano, Macaskill, Greaves, Shulman, Bostrom, etc” are “the most accomplished experts” but require “literature” to prove that Singer and Gates have thought a sufficient amount about cause prioritization.
I’m pretty sure in 20 years of running the Gates Foundation, Bill has thought a bit about cause prioritization, and talked to some pretty smart people about it. And he definitely cares about the long-term future; he just happens to prioritize climate change over AI. Personally, I trust his philanthropic and technical credentials enough to take notice when Gates says stuff like:
[EAs] like working on AI. Working on AI is fun. If they think what they’re doing is reducing the risk of AI, I haven’t seen that proof of that. They have a model. Some people want to go to Mars. Some people want to live forever. Philanthropy has got a lot of heterogeneity in it. If people bring their intelligence, some passion, overall, it tends to work out. There’s some dead ends, but every once in a while, we get the Green Revolution or new vaccines or models for how education can be done better. It’s not something where the philanthropists all homogenize what they’re doing.
Sounds to me like he’s thought about this stuff.
I agree that some longtermists would favour shorttermist or mixed content. If they have good arguments, or if they’re experts in content selection, then great! But I think authenticity is a strong default.
You’ve asked us to defer to a narrow set of experts, but (as I previously noted) you’ve provided no evidence that any of the experts you named would actually object to mixed content. You also haven’t acknowledged evidence that they’d prefer mixed content (e.g. Open Phil’s actual giving history or KHorton’s observation that “Will MacAskill [and] Ajeya Cotra [have] both spoken in favour of worldview diversification and moral uncertainty.”) I don’t see how that’s “authentic.”
In my ideal universe, the podcast would be called an “Introduction to prioritization”, but also, online conversation would happen on a “priorities forum”, and so on.
I agree that naming would be preferable. But you didn’t propose that name in this thread, you argued that an “Intro to EA playlist”, effectivealtruism.org, and the EA Handbook (i.e. 3 things with “EA” in the name) should have narrow longtermist focuses. If you want to create prioritization handbooks, forums, etc., why not just go create new things with the appropriate names instead of coopting and changing the existing EA brand?
OK, so essentially you don’t own up to strawmanning my views?
You… ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”
This could have been made clearer, but when I said that incentives come from incentive-setters thinking and being persuaded, the same applies to the choice of invitations to the EA leaders’ forum. And the leaders’ forum is quite representative of highly engaged EAs, who also favour AI & longtermist causes over global poverty work by a 4:1 ratio anyway.
I’m...stuff like
Yes, Gates has thought about cause prio some, but he’s less engaged with it, and especially with its cutting edge, than many others.
You’ve …”authentic”
You seem to have missed my point. My suggestion is to trust experts on identifying the top-priority cause areas, but not on what messaging to use, and instead to authentically present info on the top priorities.
I agree… EA brand?
You seem to have missed my point again. As I said, “It’s [tough] to ask people to switch unilaterally”. That is, when people are speaking to the EA community, and while the EA community is the one that exists, I think it’s tough to ask them not to use the EA name. But at some point, in a coordinated fashion, I think it would be good if multiple groups found a new name and started a new intellectual project around that new name.
Per my bolded text, I don’t get the sense that I’m being debated in good faith, so I’ll try to avoid making further comments in this subthread.
Just a quick point on this:
And the leaders’ forum is quite representative of highly engaged EAs, who also favour AI & longtermist causes over global poverty work by a 4:1 ratio anyway.
If it’s only a 4:1 ratio, I don’t think that should mean that longtermist content should be ~95% of the content on 80K’s Intro to EA collection. That survey shows that some leaders and some highly engaged EAs still think global poverty or animal welfare work are the most important to work on, so we should put some weight on that, especially given our uncertainty (moral and epistemic).
As such, it makes sense for 80K to have at least 1-2 out of 10 episodes focused on neartermist priorities like GH&D and animal welfare. This is similar to how I think it makes sense for some percentage of the EA (or “priorities”) community to work on neartermist work. It doesn’t have to be a winner-take-all approach to cause prioritization and what content we say is “EA”.
Based on their latest comment below, 80K seems to agree with putting in 1-2 episodes on neartermist work. Would you disagree with their decision?
It seems like 80K wants to feature some neartermist content in their next collection, but I didn’t object to the current collection for the same reason I don’t object to e.g. pages on Giving What We Can’s website that focus heavily on global development (example): it’s good for EA-branded content to be fairly balanced on the whole, but that doesn’t mean that every individual project has to be balanced.
Some examples of what I mean:
If EA Global had a theme like “economic growth” for one conference and 1⁄3 of the talks were about that, I think that could be pretty interesting, even if it wasn’t representative of community members’ priorities as a whole.
Sometimes, I send out an edition of the EA Newsletter that is mostly development-focused, or AI-focused, because there happened to be a lot of good links about that topic that month. I think the newsletter would be worse if I felt compelled to have at least one article about every major cause area every month.
It may have been better for 80K to refer to their collection as an “introduction to global priorities” or an “introduction to longtermism” or something like that, but I also think it’s perfectly legitimate to use the term “EA: An Introduction”. Giving What We Can talks about “EA” but mostly presents it through examples from global development. 80K does the same but mostly talks about longtermism. EA Global is more balanced than either of those. No single one of these projects is going to dominate the world’s perception of what “EA” is, and I think it’s fine for them to be a bit different.
(I’m more concerned about balance in cases where something could dominate the world’s perception of what EA is — I’d have been concerned if Doing Good Better had never mentioned animal welfare. I don’t think that a collection of old podcast episodes, even from a pretty popular podcast, has the same kind of clout.)
Thanks for sharing your thinking! I generally agree with the following statements you made:
“It’s good for EA-branded content to be fairly balanced on the whole, but that doesn’t mean that every individual project has to be balanced.”
“If EA Global had a theme like “economic growth” for one conference and 1⁄3 of the talks were about that, I think that could be pretty interesting, even if it wasn’t representative of community members’ priorities as a whole.”
“Sometimes, I send out an edition of the EA Newsletter that is mostly development-focused, or AI-focused, because there happened to be a lot of good links about that topic that month. I think the newsletter would be worse if I felt compelled to have at least one article about every major cause area every month.”
Moreover, I know that a collection of old podcast episodes from 80K isn’t likely to dominate the world’s perception of what EA is. But I think it would benefit 80K and EA more broadly if they just included 1 or 2 episodes about neartermism. I and others would be more willing to share their collection with people as an intro to EA if there were 1 or 2 episodes about GH&D and animal welfare. Since there aren’t any, I’ll only share it with people interested in longtermism.
Anyway, I guess 80K’s decision to add one episode on neartermism is evidence that they think neartermist interventions do merit some discussion within an “intro to EA” collection. And maybe they sense that more people would be more supportive of 80K and this collection if they did this.
If it’s any consolation, this was weird as hell for me too; we presumably both feel pretty gaslit at this point.
FWIW (and maybe it’s worth nothing), this was such a familiar and very specific type of weird experience for me that I googled “Ryan Carey + Leverage Research” to test a theory. Having learned that your daily routine used to involve talking and collaborating with Leverage researchers “regarding how we can be smarter, more emotionally stable, and less biased”, I now have a better understanding of why I found it hard to engage in a productive debate with you.
If you want to respond to the open questions Brian or KHorton have asked of you, I’ll stay out of those conversations to make it easier for you to speak your mind.
Thanks for everything you’ve done for the EA community! Good luck!
Oh come on, this is clearly unfair. I visited that group for a couple of months over seven years ago, because a trusted mentor recommended them. I didn’t find their approach useful, and quickly switched to working autonomously on starting the EA Forum and EA Handbook v1. For the last 6-7 years (as many can attest), I’ve discouraged people from working there! So what is the theory exactly?
I’m glad you’ve been discouraging people from working at Leverage, and haven’t been involved with them for a long time.
In our back and forth, I noticed a pattern of behavior that I so strongly associate with Leverage (acting as if one’s position is the only “rational” one, ignoring counterevidence that’s been provided and valid questions that have been asked, making strong claims with little evidence, accusing the other party of bad faith) that I googled your name plus Leverage out of curiosity. That’s not a theory, that’s a fact (and as I said originally, perhaps a meaningless one).
But you’re right: it was a mistake to mention that fact, and I’m sorry for doing so.