I think it’s absurd to say that it’s inappropriate for EAs to share their opinions on the relative altruistic impact of different actions one might take. Figuring out the relative altruistic impact of different actions is arguably the whole point of EA; it’s hard to think of something that’s more obviously on topic.
Obviously it would have been better if those organizers had planned better. It’s not clear to me that it would have been better for the event to just go down in flames; OP apparently agreed with me, which is why they stepped in with more funding.
I don’t think the Future Forum organizers have particularly strong relationships with OP.
The main bottleneck I’m thinking of is energetic people with good judgement to execute on and manage these projects.
I disagree, I think that making controversial posts under your real name can improve your reputation in the EA community in ways that help your ability to do good. For example, I think I’ve personally benefited a lot from saying things that were controversial under my real name over the years (including before I worked at EA orgs).
Stand up a meta organization for neartermism now, and start moving functions over as it is ready.
As I’ve said before, I agree with you that this looks like a pretty good idea from a neartermist perspective.
Neartermism has developed meta organizations from scratch before, of course.
[...]
which is quite a bit more than neartermism had when it created most of the current meta.
I don’t think it’s fair to describe the current meta orgs as being created by neartermists and therefore argue that new orgs could be created by neartermists. These were created by people who were compelled by the fundamental arguments for EA (e.g. the importance of cause prioritization, cosmopolitanism, etc). New meta orgs would have to be created by people who are compelled by these arguments but also not compelled by the current arguments for longtermism, which is empirically a small fraction of the most energetic/ambitious/competent people who are compelled by arguments for the other core EA ideas.
More importantly, meta orgs that were distanced from the longtermist branch would likely attract people interested in working in GHD, animal advocacy, etc. who wouldn’t currently be interested in affiliating with EA as a whole. So you’d get some experienced hands and a good number of new recruits.
I think this is the strongest argument for why neartermism wouldn’t be substantially weaker without longtermists subsidizing its infrastructure.
Two general points:
There are many neartermists who I deeply respect; for example, I feel deep gratitude to Lewis Bollard from the Open Phil farmed animal welfare team and many other farmed animal welfare people. Also, I think GiveWell seems like a competent org that I expect to keep running competently.
It makes me feel sad to imagine neartermists not wanting to associate with longtermists. I personally feel like I am fundamentally an EA, but I’m only contingently a longtermist. If I didn’t believe I could influence the long run future, I’d probably be working on animal welfare; if I didn’t believe that there were good opportunities there, I’d be working hard to improve the welfare of current humans. If I believed it was the best thing to do, I would totally be living frugally and working hard to EtG for global poverty charities. I think of neartermist EAs as being fellow travelers and kindred spirits, with much more in common with me than almost all other humans.
Fwiw my guess is that longtermism hasn’t had net negative impact by its own standards. I don’t think negative effects from speeding up AI outweigh various positive impacts (e.g. promotion of alignment concerns, setting up alignment research, and non-AI stuff).
and then explains why these longtermists will not be receptive to conventional EA arguments.
I don’t agree with this summary of my comment btw. I think the longtermists I’m talking about are receptive to arguments phrased in terms of the classic EA concepts (arguments in those terms are how most of us ended up working on the things we work on).
Holden Karnofsky on evaluating people based on public discourse:
I think it’s good and important to form views about people’s strengths, weaknesses, values and character. However, I am generally against forming negative views of people (on any of these dimensions) based on seemingly incorrect, poorly reasoned, or seemingly bad-values-driven public statements. When a public statement is not misleading or tangibly harmful, I generally am open to treating it as a positive update on the person making the statement, but not to treating it as worse news about them than if they had simply said nothing.
The basic reasons for this attitude are:
I think it is very easy to be wrong about the implications of someone’s public statement. It could be that their statement was poorly expressed, or aimed at another audience; that the reader is failing to understand subtleties of it; or that the statement is in fact wrong, but that it merely reflects that the person who made it hasn’t been sufficiently reflective or knowledgeable on the topic yet (and could become so later).
I think public discourse would be less costly and more productive for everyone if the attitude I take were more common. I think that one of the best ways to learn is to share one’s impressions, even (especially) when they might be badly wrong. I wish that public discourse could include more low-caution exploration, without the risks that currently come with such things.
I generally believe in evaluating people based on what they’ve accomplished and what they’ve had the opportunity to accomplish, plus any tangible harm (including misinformation) they’ve caused. I think this approach works well for identifying people who are promising and people whom I should steer clear of; I think other methods add little of value and mostly add noise.
I update negatively on people who mislead (including expressing great confidence while being wrong, and especially including avoidable mischaracterizations of others’ views); people who do tangible damage (usually by misleading); and people who create little of value despite large amounts of opportunity and time investment. But if someone is simply expressing a view and being open about their reasons for holding it, I try (largely successfully, I think) not to make any negative updates simply based on the substance.
FWIW I’m somewhat more judgemental than Holden, but I think the position Holden advocates is not that unusual for seniorish EAs.
I think you’re imagining that the longtermists split off and then EA is basically as it is now, but without longtermism. But I don’t think that’s what would happen. If longtermist EAs who currently work on EA-branded projects decided to instead work on projects with different branding (which will plausibly happen; I think longtermists have been increasingly experimenting with non-EA branding for new projects over the last year or two, and this will probably accelerate given the last few months), EA would lose most of the people who contribute to its infrastructure and movement building.
My guess is that this new neartermist-only EA would not have the resources to do a bunch of things which EA currently does—it’s not clear to me that it would have an actively maintained custom forum, or EAGs, or EA Funds. James Snowden at Open Phil recently started working on grantmaking for neartermist-focused EA community growth, and so there would be at least one dedicated grantmaker trying to make some of this stuff happen. But most of the infrastructure would be gone.
I agree that longtermism’s association with EA has some costs for neartermist goals, but it’s really not clear to me that the association is net negative for neartermism overall. Perhaps we’ll find out.
(I personally like the core EA ideas, and I have learned a lot from engaging with non-longtermist EA over the last decade, and I feel great fondness towards some neartermist work, and so from a personal perspective I like the way things felt a year ago better than a future where more of my peers are just motivated by “holy shit, x-risk” or similar. But obviously we should make these decisions to maximize impact rather than to maximize how much we enjoy our social scenes.)
Note that while Caleb is involved with grantmaking, I don’t think he has funded Atlas, so this post isn’t about a grantee of his.
Note that “lots of people believe that they need to hide their identities” isn’t itself very strong evidence for “people need to hide their identities”. I agree it’s a shame that people don’t have more faith in the discourse process.
Thanks for the specific proposals.
The reasonable person also knows that senior EAs have a lot of discretionary power, and thus there is a significant chance retaliatory action would not be detected absent special safeguards.
FWIW, I think you’re probably overstating the amount of discretionary power that senior EAs could use for retaliatory action.
IMO, if you proposition someone, you’re obligated to mention this to other involved parties in situations where you’re wielding discretionary power related to them. I would think it was wildly inappropriate for a grantmaker to evaluate a grant without disclosing this COI (and probably I’d think they shouldn’t evaluate the grant at all), or for someone to weigh in on a hiring decision without disclosing it. If I heard of someone not disclosing the COI in such a situation, I’d update strongly against them, and I’d move maybe halfway towards thinking that they should have their discretionary power removed.
If some senior person decided that they personally hated someone who had rejected them and wanted to wreck their career, I think they could maybe do it, but it would be hard for them to do it in a way that didn’t pose a big risk to their own career.
However, if the person’s power is “soft” and does not run through organizational lines (e.g., the person is a leading public intellectual), there is likely no practical way to hold that person accountable to a recusal commitment.
I think you’re overestimating the extent to which being a leading public intellectual makes it possible to engage in discretionary vengeance, because again, I’d think it was very inappropriate for such a person to comment substantially on someone without disclosing the COI.
--
On a totally different tack, I think it’s interesting that your suggestions are mostly about problems resulting from more senior EAs propositioning junior EAs. To what extent would you be okay with norms that it’s bad for more senior EAs to proposition junior EAs, but it’s okay for the senior EAs to date junior EAs if the junior EAs do the propositioning?
A lot of the motivation behind the dating website reciprocity.io (which I maintain) is that it’s good for EAs to avoid propositioning each other if their interest isn’t reciprocated.
Re 2: You named a bunch of cases where a professional relationship comes with restrictions on sex or romance. (An example you could have given, which I think is almost universally followed in EA, is “people shouldn’t date anyone in their chain of management”; IMO this is a good rule.) I think it makes sense to have those relationships be paired with those restrictions. But it’s not clear to me that the situation in EA is more like those situations than like these other situations:
Professors dating grad students who work at other universities
Well-respected artists dating people in their art communities
High-income people dating people they know from college who aren’t wealthy
I think it’s really not obvious that those relationships should be banned (though I don’t feel hugely confident, and I understand that some people think that they should be).
I’m interested in more specific proposals for what rules along these lines you might support.
Yeah, IMO medals definitely don’t suffice for me to think it’s extremely likely that someone will, AFAICT, be good at doing research.
I agree that it’s important to separate out all of these factors, but I think it’s totally reasonable for your assessment of some of these factors to update your assessment of others.
For example:
People who are “highly intelligent” are generally more suitable for projects/jobs/roles.
People who agree with the foundational claims underlying a theory of change are more suitable for projects/jobs/roles that are based on that theory of change.
For example, I recently was surprised to hear someone who controls relevant funding and community space access remark to a group, “I rank [Researcher X] as an A-Tier researcher. I don’t actually know what they work on, but they just seem really smart.” I found this very epistemically concerning, but other people didn’t seem to.
I agree that this feels somewhat concerning; I’m not sure it’s an example of people failing to consciously separate these things though. Here’s how I feel about this kind of thing:
It’s totally reasonable to be more optimistic about someone’s research because they seem smart (even if you don’t know anything about the research).
In my experience, smart people have a pretty high rate of failing to do useful research (by researching in an IMO useless direction, or being unproductive), so I’d never be that confident in someone’s research direction just based on them seeming really smart, even if they were famously smart. (E.g. Scott Aaronson is famously brilliant and when I talk to him it’s obvious to me that he knows way more theoretical computer science than I do, but I definitely wouldn’t feel optimistic about his alignment research directions without knowing more about the situation.)
I think there is some risk of falling into echo chambers where lots of people say really positive things about someone’s research without knowing anything about it. To prevent this, I think that when people are optimistic about someone’s research because the person seems smart rather than because they’ve specifically evaluated the research, they should clearly say “I’m provisionally optimistic here because the person seems smart, but fwiw I have not actually looked at the research”.
I do not actually endorse the comment above. It is used as an illustration of why a statement being true doesn’t by itself mean it is “fair game”, or a constructive way to approach what you want to say. Here is my real response:
As a random aside, I thought that your first paragraph was totally fair and reasonable and I had no problem with you saying it.
Thanks for your comment. I think your comment seems to me like it’s equivocating between two things: whether I negatively judge people for writing certain things, and whether I publicly say that I think certain content makes the EA Forum worse. In particular, I did the latter, but you’re worrying about the former.
I do update on people when they say things on this forum that I think indicate bad things about their judgment or integrity, as I think I should, but for what it’s worth I am very quick to forgive and don’t hold long grudges. Also, it’s quite rare for me to update against someone substantially from a single piece of writing of theirs that I disliked. In general, I think people in EA worry too much about being judged negatively for saying things and underestimate how forgiving people are (especially if a year passes or if you say particularly reasonable things in the meantime).
By “unpleasant” I don’t mean “the authors are behaving rudely”, I mean “the content/framing seems not very useful and I am sad about the effect it has on the discourse”.
I picked that post because it happened to have a good critical comment that I agreed with; I have analogous disagreements with some of your other posts (including the one you linked).
Thanks for your offer to receive critical feedback.
Thanks for your sincere reply (I’m not trying to say other people aren’t sincere, I just particularly felt like mentioning it here).
Here are my thoughts on the takeaways you thought people might have.
There is an EA leadership (you saying it, as a self-confessed EA leader, is likely more convincing in confirming something like this than some anonymous people saying it), which runs counter to a lot of the other messaging within EA. This sounds very in-groupy (particularly as you refer to it as a ‘social cluster’ rather than e.g. a professional cluster)
As I said in my comment, I think that it’s true that the actions of EA-branded orgs are largely influenced by a relatively small number of people who consider each other allies and (in many cases) friends. (Though these people don’t necessarily get along or agree on things—for example, I think William MacAskill is a well-intentioned guy but I disagree with him a bunch on important questions about the future and various short-term strategy things.)
If the authors of this post are asking for community opinion on which changes are good after giving concerns, the top comment (for a while at least) criticising this for a lack of theory of change suggests a low regard among the EA leadership for the opinions of the EA community overall (regardless of agreement with any specific element of the original post)
Not speaking for anyone else here, but it’s totally true that I have a pretty low regard for the quality of the average EA Forum comment/post, and don’t think of the EA Forum as a place where I go to hear good ideas about ways EA could be different (though occasionally people post good content here).
Unless I am very high up and in the core EA group, I am unlikely to be listened to
For whatever it’s worth, in my experience, people who show up in EA and start making high-quality contributions quickly get a reputation among people I know for having useful things to say, even if they don’t have any social connection.
I gave a talk yesterday where someone I don’t know made some objections to an argument I made, and I provisionally changed my mind about that argument based on their objections.
While EA is open to criticism in theory, it is not open to changing based on criticism as the leadership has already reasoned about this as much as they are going to
I think “criticism” is too broad a category here. I think it’s helpful to provide novel arguments or evidence. I also think it’s helpful to provide overall high-level arguments where no part of the argument is novel, but it’s convenient to have all the pieces in one place (e.g. Katja Grace on slowing down AI). I (perhaps foolishly) check the EA Forum and read/skim potentially relevant/interesting articles, so it’s pretty likely that I end up reading your stuff and thinking about it at least a little.
I also think some of the suggestions are likely more relevant and require more thought from people actively working in e.g. community building strategy than from someone who is CTO of an AI alignment research organisation (from your profile), or in a technical role more generally, at least in terms of the considerations required to have the greatest impact in their work.
You’re right that my actions are less influenced by my opinions on the topics raised in this post than community building people’s are (though questions about e.g. how much to value external experts are relevant to me). On the other hand, I am a stakeholder in EA culture, because capacity for object-level work is the motivation for community building.
I don’t think Holden agrees with this as much as you might think. For example, he spent a lot of his time in the last year or two writing a blog.