OK, so essentially you don’t own up to strawmanning my views?
You… ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”
This could have been made clearer, but when I said that incentives come from incentive-setters thinking and being persuaded, the same applies to the choice of invitations to the EA leaders’ forum. And the leaders’ forum is quite representative of highly engaged EAs, who also favour AI & longtermist causes over global poverty work by a 4:1 ratio anyway.
I’m...stuff like
Yes, Gates has thought about cause prio some, but he’s less engaged with it, and especially with the cutting edge of it, than many others.
You’ve …”authentic”
You seem to have missed my point. My suggestion is to trust experts on identifying the top priority cause areas, but not on what messaging to use, and instead to authentically present info on the top priorities.
I agree… EA brand?
You seem to have missed my point again. As I said, “It’s [tough] to ask people to switch unilaterally”. That is, when people are speaking to the EA community, and while the EA community is the one that exists, I think it’s tough to ask them not to use the EA name. But at some point, in a coordinated fashion, I think it would be good if multiple groups found a new name and started a new intellectual project around that new name.
Per my bolded text, I don’t get the sense that I’m being debated in good faith, so I’ll try to avoid making further comments in this subthread.
And the leaders’ forum is quite representative of highly engaged EAs, who also favour AI & longtermist causes over global poverty work by a 4:1 ratio anyway.
Just a quick point on this:

If it’s only a 4:1 ratio, I don’t think that should mean that longtermist content should be ~95% of the content in 80K’s Intro to EA collection. That survey shows that some leaders and some highly engaged EAs still think global poverty or animal welfare work are the most important areas to work on, so we should put some weight on that, especially given our uncertainty (moral and epistemic).
As such, it makes sense for 80K to have at least 1-2 out of 10 episodes focused on neartermist priorities like GH&D and animal welfare. This is similar to how I think it makes sense for some percentage of the EA (or “priorities”) community to work on neartermist causes. It doesn’t have to be a winner-take-all approach to cause prioritization and to what content we call “EA”.
Based on their latest comment below, 80K seems to agree with putting in 1-2 episodes on neartermist work. Would you disagree with their decision?
It seems like 80K wants to feature some neartermist content in their next collection, but I didn’t object to the current collection for the same reason I don’t object to e.g. pages on Giving What We Can’s website that focus heavily on global development (example): it’s good for EA-branded content to be fairly balanced on the whole, but that doesn’t mean that every individual project has to be balanced.
Some examples of what I mean:
If EA Global had a theme like “economic growth” for one conference and 1⁄3 of the talks were about that, I think that could be pretty interesting, even if it wasn’t representative of community members’ priorities as a whole.
Sometimes, I send out an edition of the EA Newsletter that is mostly development-focused, or AI-focused, because there happened to be a lot of good links about that topic that month. I think the newsletter would be worse if I felt compelled to have at least one article about every major cause area every month.
It may have been better for 80K to refer to their collection as an “introduction to global priorities” or an “introduction to longtermism” or something like that, but I also think it’s perfectly legitimate to use the term “EA: An Introduction”. Giving What We Can talks about “EA” but mostly presents it through examples from global development. 80K does the same but mostly talks about longtermism. EA Global is more balanced than either of those. No single one of these projects is going to dominate the world’s perception of what “EA” is, and I think it’s fine for them to be a bit different.
(I’m more concerned about balance in cases where something could dominate the world’s perception of what EA is — I’d have been concerned if Doing Good Better had never mentioned animal welfare. I don’t think that a collection of old podcast episodes, even from a pretty popular podcast, has the same kind of clout.)
Thanks for sharing your thinking!

I generally agree with the following statements you made:
“It’s good for EA-branded content to be fairly balanced on the whole, but that doesn’t mean that every individual project has to be balanced.”
“If EA Global had a theme like “economic growth” for one conference and 1⁄3 of the talks were about that, I think that could be pretty interesting, even if it wasn’t representative of community members’ priorities as a whole.”
“Sometimes, I send out an edition of the EA Newsletter that is mostly development-focused, or AI-focused, because there happened to be a lot of good links about that topic that month. I think the newsletter would be worse if I felt compelled to have at least one article about every major cause area every month.”
Moreover, I know that a collection of old podcast episodes from 80K isn’t likely to dominate the world’s perception of what EA is. But I think it would benefit 80K and EA more broadly if they included just 1 or 2 episodes about neartermism. I and others would be more willing to share their collection with people as an intro to EA if there were 1 or 2 episodes about GH&D and animal welfare. Since there aren’t any, I’ll only share it with people who are already interested in longtermism.
Anyway, I guess 80K’s decision to add one episode on neartermism is evidence that they think neartermist interventions do merit some discussion within an “intro to EA” collection. And maybe they sense that more people would be supportive of 80K and this collection if they did this.
If it’s any consolation, this was weird as hell for me too; we presumably both feel pretty gaslit at this point.
FWIW (and maybe it’s worth nothing), this was such a familiar and very specific type of weird experience for me that I googled “Ryan Carey + Leverage Research” to test a theory. Having learned that your daily routine used to involve talking and collaborating with Leverage researchers “regarding how we can be smarter, more emotionally stable, and less biased”, I now have a better understanding of why I found it hard to engage in a productive debate with you.
If you want to respond to the open questions Brian or KHorton have asked of you, I’ll stay out of those conversations to make it easier for you to speak your mind.
Thanks for everything you’ve done for the EA community! Good luck!
Oh come on, this is clearly unfair. I visited that group for a couple of months over seven years ago, because a trusted mentor recommended them. I didn’t find their approach useful, and quickly switched to working autonomously on starting the EA Forum and EA Handbook v1. For the last 6-7 years (as many can attest), I’ve discouraged people from working there! So what is the theory, exactly?
I’m glad you’ve been discouraging people from working at Leverage, and haven’t been involved with them for a long time.
In our back and forth, I noticed a pattern of behavior that I so strongly associate with Leverage (acting as if one’s position is the only “rational” one, ignoring counterevidence that’s been provided and valid questions that have been asked, making strong claims with little evidence, accusing the other party of bad faith) that I googled your name plus Leverage out of curiosity. That’s not a theory, that’s a fact (and, as I said originally, perhaps a meaningless one).
But you’re right: it was a mistake to mention that fact, and I’m sorry for doing so.