Here’s another issue: lack of morally urgent causes.

In the blog post On Caring, Nate Soares writes: “It’s not enough to think you should change the world — you also need the sort of desperation that comes from realizing that you would dedicate your entire life to solving the world’s 100th biggest problem if you could, but you can’t, because there are 99 bigger problems you have to address first.”

In the moral-reflection environment, the world is on pause. If you’ve suffered from poverty, illness, or abuse in your life, these things are no longer an issue. There are also no people to lift out of poverty and no factory farms to shut down. You’re no longer in a race against time to prevent bad things from happening, seeking friends and allies while trying to defend your cause against corrosion from influence-seeking people. Without morally urgent causes, it’s harder to form a strong identity around wanting to do good. What you decide still matters morally – after all, your deliberations in the reflection procedure determine how to allocate your caring capacity. Still, you’re deliberating about how to do that from a perspective where all is well. For better or worse, this perspective could change the nature of moral reflection compared to how people adopt moral convictions under real-life conditions.
I agree that the vibe you’re describing tends to be a bit cultish precisely because people take it too far. That said, I think it’s true that low-prestige jobs within highly impactful teams are sometimes a lot more impactful than high-prestige jobs somewhere further away from where things matter. (I’m making a general point; I’m not saying that MIRI is necessarily a great example of “where things matter,” nor am I saying the opposite.) In particular, being a personal assistant strikes me as an example of a highly impactful role (because it requires a hard-to-replace skillset). (Edit: I don’t expect you to necessarily disagree with any of that, since you were just giving a plausible explanation for why the comment above may have turned off some people.)
I think this post is brilliant! I plan to link to it heavily in an upcoming piece for my moral anti-realism sequence.

On X., Passive and active ethics:
Rather, what I’m trying to point at is a way that importing and taking for granted a certain kind of realist-flavored ethical psychology can result in an instructive sort of misfire. Something is missing, in these cases, that I expect the idealizing subjectivist needs. In particular: these agents, to the end, lack an affordance for a certain kind of direct, active agency — a certain kind of responsibility, and self-creation. They don’t know how to choose, fully, for themselves.
Yeah, I think there’s a danger for people who expect that “having more information,” or other features of some idealized reflection procedure, would change the phenomenology of moral reasoning, such that once they’re in the reflection procedure, certain answers will stick out to them. But, as you say, this point may never come! So instead, it could continue to feel like one has to make difficult judgment calls left and right, with no guarantee that one is doing moral reasoning “the right way.”
(In fact, I’m convinced such a phase change won’t come. I have a draft on this.)
In a sense, what I’m saying here is that idealizing subjectivism is, and needs to be, less like “realism-lite,” and more like existentialism, than is sometimes acknowledged.
I’ve also used the phrase “more like existentialism” in this context. :)

On IX., Hoping for convergence, tolerating indeterminacy:

This is an excellent strategy for people who find themselves without strong object-level intuitions about their goals/values. (Or people who only have strong object-level intuitions about some aspects of their goals/values, but not the details – e.g., being confident that one would want to be altruistic, but unsure about population ethics or different theories of well-being. In these cases, perhaps with a guarantee that the reflection procedure won’t change the overarching objective – being altruistic, finding a suitable theory of well-being, etc.)

Some people would probably argue that “hoping for convergence, tolerating indeterminacy” is the rational strategy in light of our metaethical uncertainty. (I know you’re not necessarily saying this in your post.) For example, they might argue as follows: “If there’s convergence among reflection procedures, I miss out if I place too much faith in my object-level intuitions and already-formed moral convictions. By contrast, if there’s no convergence, then it doesn’t matter – all outcomes would be on the same footing.”

I want to push back against this stance, “rationally mandated wagering on convergence.” I think it only makes sense for people whose object-level values are still under-defined. By contrast, if you find yourself with solid object-level convictions about your values, then you not only stand to gain something from wagering on convergence; you also stand to lose something. You might be giving up something you feel is worth fighting for in order to follow the somewhat arbitrary outcome of some reflection procedure.

My point is, the currencies are commensurable: what’s attractive about the possibility of many reflection procedures converging is the same thing that’s attractive to people who already have solid object-level convictions about their values (assuming they’re not making one of the easily identifiable mistakes, i.e., assuming that, for them, there’d be no convergence among reflection procedures that are open-ended enough to get them to adopt different values). Namely, when they reflect to the best of their abilities, they feel drawn to certain moral principles or goals or specific ways of living their lives.

In other words, the importance of moral reflection for someone is exactly proportional to their credence in it changing their thinking. If someone feels highly uncertain, they almost exclusively have things to gain. By contrast, the more certain you already are in your object-level convictions, the larger the risk that deferring to some poorly understood reflection procedure would lead to an outcome that constitutes a loss, in a sense relevant to your current self. Of course, one can always defer to conservative reflection procedures, i.e., procedures where one is fairly confident that they won’t lead to drastic changes in one’s thinking. Those could be used to flesh out one’s thinking in places where it’s still uncertain (and therefore, possibly, under-defined), while protecting convictions that one would rather not put at risk.
Is the map/territory distinction central to your point? I get the impression that you’re mostly expressing the opinion that the LTFF has too high a bar, or an idiosyncratic (or too narrow) research taste. (I’d imagine that grantmakers are trying to do what’s best on impact grounds.)
It sounds like we both agree that when it comes to reflecting about what’s important to us, there should maybe be a place for stuff like “(idiosyncratic) reactive attitudes,” “psychotherapy or raising a child or ‘things the humanities do,’” etc. Your view seems to be that there are two modes of moral reasoning: the impartial mode of analytic philosophy, and the other thing (subjectivist/particularist/existentialist). My point with my long comment earlier is basically the following: the separation between these two modes is not clear! I’d argue that what you think of as the “impartial mode” has some clear-cut applications, but it’s under-defined in some places, so different people will gravitate toward different ways of approaching the under-defined parts, based on appeals that you’d normally place in the subjectivist/particularist/existentialist mode.

Specifically, population ethics is under-defined. (It’s also under-defined how to extract “idealized human preferences” from people like my parents, who aren’t particularly interested in moral philosophy or rationality.) I’m trying to point out that if you fully internalized that population ethics is going to be under-defined no matter what, you’d have more than one option for how to think about it. You no longer have to treat impartiality criteria and “never violating any transitivity axioms” as the only option.

You can think of population ethics more like this: existing humans have a giant garden (the ‘cosmic commons’) that is at risk of being burnt. They can do stuff with it if they manage to preserve it, and people have different preferences about what definitely should or shouldn’t be done with that garden. You can look for the “impartially best way to make use of the garden” – or you could look at how other people want to use the garden and compromise with them, or look for “meta-principles” that guide who gets to use which parts of the garden (and what people definitely shouldn’t do – e.g., no one should shit in their part of the garden), without already having a fixed vision for what the garden has to look like at the end, once it’s all made use of.

Basically, I’m saying that knowing from the very beginning exactly what the “best garden” has to look like, regardless of the gardening-related preferences of other humans, is not a forced move (especially because there’s no universally correct solution anyway!). You’re very much allowed to think of gardening in a different, more procedural and “particularist” way.
Oh, you’re probably right then!
that seems to imply that developing countries had lower survival rates, despite their more favourable demographics, which would be sad.
This isn’t impossible, since there does seem to be a correlation where people with lower socioeconomic status have worse Covid outcomes, but I still doubt that the IFR was worse overall in developing countries. The demographics (especially the proportion of people aged 70–80 and older) make a huge difference.
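To illustrate why the age structure dominates here, a minimal back-of-the-envelope sketch (all age-specific IFRs and population shares below are made-up, illustrative numbers, not real estimates):

```python
# Aggregate IFR as a population-weighted average of age-specific IFRs,
# assuming (simplistically) equal attack rates across age groups.
# All numbers are illustrative assumptions, not real data.

ifr_by_age = {"0-39": 0.0005, "40-69": 0.005, "70+": 0.05}

# Hypothetical age structures for an "older" vs. a "younger" country.
older_country = {"0-39": 0.45, "40-69": 0.40, "70+": 0.15}
younger_country = {"0-39": 0.70, "40-69": 0.26, "70+": 0.04}

def aggregate_ifr(pop_shares):
    """Population-weighted IFR across age groups."""
    return sum(pop_shares[age] * ifr_by_age[age] for age in ifr_by_age)

print(f"older country:   {aggregate_ifr(older_country):.2%}")    # ~1.0%
print(f"younger country: {aggregate_ifr(younger_country):.2%}")  # ~0.4%
```

With the same age-specific risks, the younger age pyramid alone cuts the aggregate IFR by more than half.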
But I never looked into this in detail, and my impression was also that for a long time at least, there wasn’t any reliable data.
From excess deaths in some locations, such as Guayaquil (Ecuador), one could rule out the possibility that the IFR in developing countries was incredibly low (it would have been at least 0.3% given plausible assumptions about the outbreak there, and possibly a lot higher).
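For what it’s worth, the shape of that lower-bound argument is simple (the inputs below are hypothetical placeholders, not the actual Guayaquil figures):

```python
# IFR lower bound from excess deaths: IFR = deaths / infections, and
# infections can be at most the whole population, so
# IFR >= excess_deaths / population. Placeholder numbers, not real data.

population = 3_000_000   # hypothetical metro-area population
excess_deaths = 9_000    # hypothetical excess deaths during the outbreak

ifr_lower_bound = excess_deaths / population
print(f"IFR >= {ifr_lower_bound:.2%}")  # 0.30%
```

Since the true attack rate was surely below 100%, the actual IFR would have been correspondingly higher than this bound.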
IFR (but back in February/March 2020, a lot of people called everything “CFR”). I think he was talking about high-income countries (that’s what my 0.9% estimate for 2020 referred to – note that it’s lower for 2020+2021 combined because of better treatment and vaccines). I’d have to look it up again, but I doubt that Adalja was talking about a global IFR that includes countries with much younger demographics than the US. It could be that he left it ambiguous. Here’s the Sam Harris podcast in question; I haven’t re-listened to it yet.
To be fair, the Johns Hopkins Center isn’t just Adalja. I don’t know the full list of things they do, but, for instance, they kept an updated database in the early stages of the outbreak that was extremely helpful for forecasting!
He said he travelled internationally “yesterday” (which would have been February 9th if the video was uploaded the day of the lecture) and didn’t wear a mask.
This seems totally okay to me, FWIW. In most places (e.g., London or the US), it would have seemed a bit overly cautious to wear masks before the end of February, no?
I think his prediction and advice should probably be judged negatively and reflect poorly on him / the Center for Health Security, but I’m not sure how harshly he/CHS should be judged.
I generally agree with that, but it’s worth noting that it was extremely common for Western epidemiologists to repeat the mantra “you cannot do what Asian countries are doing; there’s no way to contain the virus.”
Adalja also confidently predicted the infection fatality rate for the rest of 2020 to be around 0.6% (on the Sam Harris podcast), despite thinking the virus couldn’t be contained (which, if true, would have led to ICU-bed and oxygen shortages in lots of places). In reality, the IFR was more like 0.9% or higher for countries like the US and UK. It was probably lower for countries with younger demographics, but I don’t think Adalja was even basing his estimate on that.
(TBC, this isn’t as big a mistake as other statements, or as Ioannidis completely disgracing himself throughout 2020 and beyond, but I find it worth pointing out because I distinctly remember that, at the time Adalja said this, there was a lot of fairly strong evidence for higher IFRs, including published estimates. I thought 0.6% seemed hard to defend, though I don’t remember how much he flagged that there was a substantial chance it would be significantly higher. Importantly, the IFR would have ended up higher than it actually turned out to be if Adalja had been right that “the virus can’t be contained.”)
E.g., suppose I’m uncertain between:
Worldview A, according to which I should prioritize based on time scales of trillions of years.
Worldview B, according to which I should prioritize based on time scales of hundreds of years.
Now, I do have views on this matter that don’t make me very sympathetic to allocating a significant chunk of my resources to, say, speeding up economic growth or other things someone concerned about next few decades might prioritize. (Both because of my views on normative uncertainty and because I’m not aware of anything sufficiently close to ‘worldview B’ that I find sufficiently plausible—these kind of worldviews from my perspective sit in too awkward a spot between impartial consequentialism and a much more ‘egoistic’, agent-relative, or otherwise nonconsequentialist perspective.)
I think I have a candidate for a “worldview B” that some EAs may find compelling. (Edit: Actually, the thing I’m proposing also allocates some weight to trillions of years, but it differs from your “worldview A” in that nearer-term considerations don’t get swamped!) It requires a fair bit of explaining, but IMO that’s because it’s generally hard to explain how one framework differs from another when people are used to thinking within a single framework. I strongly believe that if moral philosophy had always operated within my framework, the following points would be way easier to explain.
Anyway, I think standard moral-philosophical discourse is a bit dumb in that it includes categories without clear meaning. For instance, the standard discourse talks about notions like “what’s good from a universal point of view,” axiology/theory of value, irreducibly normative facts, etc.

The above notions fail at reference – they don’t pick out any unambiguously specified features of reality or unambiguously specified sets from the option space of norms for people/agents to adopt.
You seem to be unexcited about approaches to moral reasoning that are more “more ‘egoistic’, agent-relative, or otherwise nonconsequentialist” than the way you think moral reasoning should be done. Probably, “the way you think moral reasoning should be done” is dependent on some placeholder concepts like “axiology” or “what’s impartially good” that would have to be defined crisply if we wanted to completely solve morality according to your preferred evaluation criteria. Consider the possibility that, if we were to dig into things and formalize your desired criteria, you’d realize that there’s a sense in which any answer to population ethics has to be at least a little bit ‘egoistic’ or agent-relative. Would this weaken your intuitions that person-affecting views are unattractive?
I’ll try to elaborate now why I believe “There’s a sense in which any answer to population ethics has to be at least a little bit ‘egoistic’ or agent-relative.”
Basically, I see a tension between “there’s an objective axiology” and “people have the freedom to choose life goals that represent their idiosyncrasies and personal experiences.” If someone claims there’s an objective axiology, they’re implicitly saying that anyone who doesn’t adopt an optimizing mindset around successfully scoring “utility points” according to that axiology is making some kind of mistake / isn’t being optimally rational. They’re implicitly saying it wouldn’t make sense for people (at least for people who are competent/organized enough to reliably pursue long-term goals) to live their lives in pursuit of anything other than “pursuing points according to the one true axiology.”

Note that this is a strange position to adopt! Especially when we look at the diversity between people and what sorts of lives they find the most satisfying (e.g., differences between investment bankers, MMA fighters, novelists, people who open vegan bakeries, people for whom family and children mean everything, those EA weirdos, etc.), it seems strange to say that all these people should conclude that they ought to prioritize surviving until the Singularity so as to get the most utility points overall. To say that everything before that point doesn’t really matter by comparison. To say that any romantic relationships people enter are only placeholders until something better comes along with experience-machine technology.
Once you give up on the view that there’s an objectively correct axiology (as well as the view that you ought to follow a wager for the possibility of it), all of the above considerations (“people differ according to how they’d ideally want to score their own lives”) will jump out at you, no longer suppressed by this really narrow and fairly weird framework of “How can we subsume all of human existence into utility points and have debates on whether we should adopt ‘totalism’ toward the utility points, or come up with a way to justify taking a person-affecting stance.”
There’s a common tendency in EA to dismiss the strong initial appeal of person-affecting views because there’s no elegant way to incorporate them into the moral realist “utility points” framework. But one person’s modus ponens is another’s modus tollens: Maybe if your framework can’t incorporate person-affecting intuitions, that means there’s something wrong with the framework.
I suspect that what’s counterintuitive about totalism in population ethics is less about the “total”/“everything” part of it, and more related to what’s counterintuitive about “utility points” (i.e., the postulate that there’s an objective, all-encompassing axiology). I’m pretty convinced that something like person-affecting views, though obviously conceptualized somewhat differently (since we’d no longer be assuming moral realism), intuitively makes a lot of sense.
Here’s how that would work (now I’ll describe the new proposal for how to do ethical reasoning):
Utility is subjective. What’s good for someone is what they deem good for themselves by their own lights – the life goals for which they get up in the morning and try to do their best.
A beneficial outcome for all of humanity could be defined by giving individual humans the opportunity to reflect on their life goals under ideal conditions and then implementing some compromise (e.g., preference utilitarianism, or – probably better – a moral parliament framework) that makes everyone really happy with the outcome.
Preference utilitarianism or the moral parliament framework would concern people who already exist – these frameworks’ population-ethical implications are indirectly specified, in the sense that they depend on what the people on earth actually want. Still, people individually have views about how they want the future to go. Parents may care about having more children, many people may care about intelligent earth-originating life not going extinct, some people may care about creating as much hedonium as possible in the future, etc.
In my worldview, I conceptualize the role of ethics as two-fold:
(1) Inform people about the options for wisely chosen subjective life goals
--> This can include life goals inspired by a desire to do what’s “most moral” / “impartial” / “altruistic,” but it can also include more self-oriented life goals
(2) Provide guidance for how people should deal with the issue that not everyone shares the same life goals
Population ethics, then, is a subcategory of (1). Assuming you’re looking for an altruistic life goal rather than a self-oriented one, you’re faced with the question of whether your notion of “altruism” includes bringing happy people into existence. No matter what you say, your answer to population ethics will be, in a weak sense, ‘egoistic’ or agent-relative, simply because you’re not answering “What’s the right population ethics for everyone.” You’re just answering, “What’s my vote for how to allocate future resources.” (And you’d be trying to make your vote count in an altruistic/impartial way – but you don’t have full/single authority on that.)
If moral realism is false, notions like “optimal altruism” or “What’s impartially best” are under-defined. Note that under-definedness doesn’t mean “anything goes” – clearly, altruism has little to do with sorting pebbles or stacking cheese on the moon. “Altruism is under-defined” just means that there are multiple ‘good’ answers.
Finally, here’s the “worldview B” I promised to introduce:
Within the anti-realist framework I just outlined, altruistically motivated people have to think about their preferences for what to do with future resources. And they can – perfectly coherently – adopt the view: “Because I have person-affecting intuitions, I don’t care about creating new people; instead, I want to focus my ‘altruistic’ caring energy on helping people/beings that exist regardless of my choices. I want to help them by fulfilling their life goals, and by reducing the suffering of sentient beings that don’t form world-models sophisticated enough to qualify for ‘having life goals’.”
Note that a person who thinks this may end up caring a great deal about humans not going extinct. However, unlike in the standard framework for population ethics, she’d care about this not because she thinks it’s impartially good for the future to contain lots of happy people. Instead, she thinks it’s good from the perspective of the life goals of specific, existing others, for the future to go on and contain good things.
Is that really such a weird view? I really don’t think so, myself. Isn’t it rather standard population-ethical discourse that’s a bit weird?

Edit: (Perhaps somewhat related: my thoughts on the semantics of what it could mean that ‘pleasure is good.’ My impression is that some people think there’s an objectively correct axiology because they find experiential hedonism compelling in a sort of ‘conceptual’ way, which I find very dubious.)
the surveys were sampling from somewhat similar populations (most clearly for the FHI research scholar’s survey and this one, and less so for the 2008 one—due to a big time gap—and the Grace et al. one)
I mostly just consider the FHI research scholars survey to be relevant counterevidence here, because 2008 is indeed really far away and because I think EA researchers reason quite differently than the domain experts in the Grace et al. survey. When I posted my comment above, I realized that I hadn’t seen the results of the FHI survey! I’d have to look them up to say more, but one hypothesis I already have: the FHI research scholars survey was sent to a broader audience than Rob’s current one (e.g., it was sent to me and some of my former colleagues), and people with lower levels of expertise tend to defer more to what they consider the expert consensus, which might itself be affected by the possibility of public-facing biases.

Of course, I’m also just trying to defend my initial intuition here. :)

Edit: Actually, I can’t find the results of that FHI RS survey – I only find this announcement. I’d be curious if anyone knows more about the results; when I filled it out, I thought it was well designed, and I felt quite curious about people’s answers!
I find it plausible that there’s some perceived pressure to not give unreasonably-high-seeming probabilities in public, so as to not seem weird (as Rob hypothesized in the discussion here, which inspired this survey). This could manifest both as “unusually ‘optimistic’ people being unusually likely to give public, quantitative estimates” and “people being prone to downplay their estimates when they’re put into the spotlight.”
Personally, I’ve noticed the latter effect a couple of times when talking to people who I thought would be turned off by high probabilities for TAI. I didn’t do it on purpose, but after two such conversations I noticed that the probabilities I had given for TAI within 10 years, or things similar to that, seemed uncharacteristically low for me. (I think it’s natural for probability estimates to fluctuate between elicitation attempts, but if the trend is quite strong and systematically goes in one direction, that’s an indicator of some type of bias.)
I also remember that I felt a little uneasy about giving my genuine probabilities in a survey of alignment and longtermist-strategy researchers in August 2020 (by an FHI research scholar), out of concern about making myself or the community seem a bit weird. I gave my true probabilities anyway (I think it was anonymized), but I felt a bit odd for thinking that I was giving 65% to things that I expected a bunch of reputable EAs to only give 10% to. (IIRC, the survey questions were quite similar to the wording in this post.)

(By the way, I find that the “less than maximum potential” operationalizations call for especially high probability estimates, since it’s a priori unlikely that humans set things up in perfect ways, and I do think that small differences in the setup can have huge effects on the future. Maybe that’s an underappreciated crux between researchers – one that could also include some normative subcruxes.)
Answering “x or y?” questions with just “no” (or “yes”) seems to leave things ambiguous (does it mean “the latter,” “not the former,” or “neither of those”?). It seems impolite to me (not putting in the effort to write something slightly longer to make things easier for the reader).
I phrased my point poorly. I didn’t mean to put the emphasis on the 20% figure, but more on the notion that things will be transformative in a way that fits neatly in the economic growth framework. My concern is that any operationalization of TAI as “x% growth per year(s)” is quite narrow and doesn’t allow for scenarios where AI systems are deployed to secure influence and control over the future first. Maybe there’ll be a war and the “TAI” systems secure influence over the future by wiping out most of the economy except for a few heavily protected compute clusters and resource/production centers. Maybe AI systems are deployed as governance advisors primarily and stay out of the rest of the economy to help with beneficial regulation. And so on.
I think things will almost certainly be transformative one way or another, but if you therefore expect to always see stock-market increases of >20%, or increases in other economic growth metrics, then maybe that’s thinking too narrowly. The stock market (or standard indicators of economic growth) isn’t what ultimately matters. Power-seeking AI systems would prioritize “influence over the long-term future” over “short-term indicators of growth.” Therefore, I’m not sure we’d see economic growth right when “TAI” arrives. The way I conceptualize “TAI” (and maybe this differs from other operationalizations, though, going by memory, I think it’s compatible with the way Ajeya framed it in her report, since she framed it as “capable of executing a ‘transformative task’”) is that “TAI” is certainly capable of bringing about a radical change in growth mode eventually, but it may not necessarily be deployed to do that. I think “Where’s the point of no return?” is a more important question than “Will AGI systems already transform the economy 1, 2, or 4 years after their invention?”
That said, I don’t think the above differences in how I’d operationalize “TAI” are cruxes between us. From what you say in the writeup, it sounds like you’d be skeptical both that AGI systems could transform the economy (/world) directly and that they could transform it eventually via influence-securing detours.
Yes, that’s what I meant. And FWIW, I wasn’t sure whether Ben was using modest epistemology (in my terminology, outside-view reasoning isn’t necessarily modest epistemology), but there were some passages in the original post that suggested low discrimination in how the reference class was constructed. E.g., “10% on short timelines people” and “10% on long timelines people” suggests that one is simply including the sorts of timeline credences that happen to be around, without trying to evaluate people’s reasoning competence. For contrast, imagine wording things like this:
“10% credence each to persons A and B, who both appear to be well-informed on this topic and whose interestingly different reasoning styles both seem defensible to me, in the sense that I can’t confidently point out why one of them is better than the other.”
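To make the contrast concrete, here’s a toy sketch of the two aggregation approaches (all names, forecasts, and weights below are hypothetical):

```python
# Toy linear opinion pool over forecasters' P(short timelines).
# Names, probabilities, and weights are all hypothetical.
forecasts = {"A": 0.70, "B": 0.15, "C": 0.40}

# Low-discrimination reference class: equal weight to whoever is around.
equal_weights = {name: 1 / len(forecasts) for name in forecasts}

# Weights from an explicit assessment of each person's reasoning.
assessed_weights = {"A": 0.5, "B": 0.4, "C": 0.1}

def pooled(weights):
    """Weighted average of the individual forecasts."""
    return sum(weights[name] * forecasts[name] for name in forecasts)

print(f"equal-weight pool:    {pooled(equal_weights):.2f}")     # ~0.42
print(f"assessed-weight pool: {pooled(assessed_weights):.2f}")  # 0.45
```

The mechanics are trivial; the point is just that the second version forces you to justify the weights rather than inherit them from whoever happens to be in the conversation.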
On the object level (I made the other comment before reading on), you write:
My impression from talking to Phil Trammell at various times is that it’s just really hard to get such high growth rates from a new technology (and I think he thinks the chance that AGI leads to >20% per year growth rates is lower than I do).
Maybe this is just about definitions, but I’d say that “like the Industrial Revolution or bigger” doesn’t have to mean literally >20% growth per year. Things could be transformative in other ways, and eventually, at least, I feel like things would almost certainly accelerate in a future controlled with or by AGI.
Edit: And I see now that you’re addressing why you feel comfortable disagreeing:
I sort of feel like other people don’t really realise / believe the above so I feel comfortable deviating from them.
I’m not sure about that. :)
Gave a 57% probability that AGI (or similar) would not imply TAI, i.e. would not imply an effect on the world’s trajectory at least as large as the Industrial Revolution.
My impression (I could be wrong) is that this claim is interestingly contrarian among EA-minded AI researchers. I see a potential tension between how much weight you give this claim within your framework and how much you defer to outside views (and potentially even modest epistemology – gasp!) in the overall forecast.
Do you think that the moderators were too charitable toward Phil?
No, I didn’t mean to voice an opinion on that part. (And the moderation decision seemed reasonable to me.)

My comment was prompted by the concern that giving a warning to Halstead (for not providing more evidence) risks making it difficult for people to voice concerns in the future. My impression is that it’s already difficult enough to voice negative opinions on others’ character. Specifically, I think there’s an effect where, if you voice a negative opinion and aren’t extremely skilled at playing the game of being highly balanced, polite, and charitable (e.g., some other people’s comments in the discussion strike me as almost superhumanly balanced and considerate), you’ll offend the parts of the EA forum audience that implicitly consider being charitable to the accused a much more fundamental virtue than protecting other individuals (the potential victims of bad behavior) and the community at large. (Problematic individuals, in my view, tend to create a “distortion field” around them that can have negative, norm-eroding consequences in various ways – though that was probably much more the case with other community drama than here, given that Phil wrote articles mostly at the periphery of the community.)

Of course, these potential drawbacks only count in worlds where the concerns raised are in fact accurate. The only way to get to the bottom of things is indeed with truth-tracking norms, and being charitable (edit: and thorough) is important for that.
I just feel that the demands for evidence shouldn’t be too strong or absolute, partly also because there are instances where it’s difficult to verbalize why exactly someone’s behavior seems unacceptable (even though it may be really obvious to people who are closely familiar with the situation that it is).
Lastly, I think it’s particularly bad to disincentivize people for how they framed things in instances where they turned out to be right. (It’s different if there was a lot of uncertainty as to whether Halstead had valid concerns, or whether he was just pursuing a personal vendetta against someone.)

Of course, these situations are really, really tricky, and I don’t envy the forum moderators for having to navigate these waters.
If someone wants to warn the entire community that someone is behaving badly, the most effective warnings will include evidence.
True, but that also means the right incentives are already there. If someone doesn’t provide the evidence, it could be that they find it hard to articulate, that there are privacy concerns, or that they don’t have the mental energy at the time to polish their evidence and reasoning but feel strongly enough that they’d like to speak up with a shorter comment. Issuing a warning discourages all of those options. All else equal, providing clear evidence is certainly best. But I wouldn’t want to risk missing out on the relevant information that community veterans (whose reputation is automatically on the line when they voice a strong concern) hold a negative opinion for one reason or another.