Is this really important? A discrepancy of £700 relative to the £5000 projection seems acceptable to me.
You have been part of the effective altruism movement since its inception. What are some interesting or important ways in which you think EA has changed over the years?
What kind of evidence would cause you to abandon the view that people always act selfishly?
As a side note, Derek Parfit was an early advocate of what you call the ‘Hinge of History Hypothesis’. He even uses the expression ‘hinge of history’ in the following quote (perhaps that’s the inspiration for your label):
We live during the hinge of history. Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast. We shall soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period. Our descendants could, if necessary, go elsewhere, spreading through this galaxy. (On What Matters, vol. 2, Oxford, 2011, p. 616)
Interestingly, he had already expressed similar views in 1984, though at the time he didn’t articulate why he believed the present time to be uniquely important:
the part of our moral theory… that covers how we affect future generations… is the most important part of our moral theory, since the next few centuries will be the most important in human history. (Reasons and Persons, Oxford, 1984, p. 351)
Initially she didn’t want to donate the whole amount, but wanted to set aside half to buy more candy so she could do this again.
I think there are more than “one or two” interesting things there.
I agree that these are pretty valuable concepts to learn. At the same time, I also believe that these concepts can be learned easily by studying the corresponding written materials. At least, that’s how I learned them, and I don’t think I’m different from the average EA in this respect.
But I also think we shouldn’t be speculating about this issue, given its centrality to CFAR’s approach. Why not give CFAR a few tens of thousands of dollars to (1) create engaging online content that explains the concepts taught at their workshops and (2) run a subsequent RCT to test whether people learn these concepts better by attending a workshop than by exposing themselves to that content?
There are no Real Apologies; it is naive to think otherwise and toxic to demand otherwise. Of course he is acknowledging wrongdoing, and he is acknowledging wrongdoing because he is being pressured to acknowledge wrongdoing.
What are you talking about? There’s a clear difference between apologizing because one sincerely believes one acted wrongly, and apologizing only because one thinks the consequences will be graver if one fails to apologize. I am puzzled by your apparent failure to recognize this difference.
If the topics to avoid are irrelevant to EA, it seems preferable to argue that they shouldn’t be discussed because they are irrelevant, rather than because they are offensive. In general, justifications for limiting discourse that appeal to epistemic considerations (such as bans on off-topic discussion) appear to generate less division and polarization than justifications that appeal to moral considerations.
Thank you. Your comment has caused me to change my mind somewhat. In particular, I am now inclined to believe that getting people to actually read the material is, for a significant fraction of these people, a more serious challenge than I previously assumed. And if CFAR’s goal is to selectively target folks concerned with x-risk, the benefits of ensuring that this small, select group learns the material well may justify the workshop format, with its associated costs.
I would still like to see more empirical research conducted on this, so that decisions that involve the allocation of hundreds of thousands of EA dollars per year rest on firmer ground than speculative reasoning. At the current margin, I’d be surprised if a dollar given to CFAR to do object-level work achieves more than a dollar spent on uncovering “organizational crucial considerations”—that is, information with the potential to induce a major shift in the organization’s direction or priorities. (Note that I think this is true of some other EA orgs, too. For example, I believe that 80k should be using randomization to test the impact of their coaching sessions.)
The most obvious implication, however, concerns what proportion of resources longtermist EAs should spend on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building.
In his excellent Charity Cost Effectiveness in an Uncertain World, first published in 2013, Brian Tomasik calls this approach ‘Punting to the Future’. Unless there are strong reasons for introducing a new label, I suggest sticking to Brian’s original name, both to avoid unnecessary terminological profusion and to credit those who pioneered discussion of this idea.
It is not at all clear to me that the accusations being discussed here are separate from the accusations that appear to have caused his apology. I agree that if they came from separate, disconnected communities, that would be significant evidence.
In his apology, Jacy says that he “know[s] very little of the details of these allegations.” But he clearly knows the Brown allegations very well. So even ignoring the other evidence cited by Halstead, the allegations for which he is apologizing clearly can’t include the Brown allegations.
EDIT: I now see it’s also possible that Jacy was presented with so little information that he wouldn’t be able to determine whether the allegations CEA was concerned with included the Brown allegations, however well he knew the latter. My reasoning above ignores this possibility. Personally, I think the evidence Halstead offered is pretty conclusive, so I don’t think this makes a practical difference, but it still seemed worth mentioning.
Mogensen writes (p. 20):
We might be especially interested in assessing acts that are directly aimed at improving the long-run future of Earth-originating civilization... These might include efforts to reduce the risk of near-term extinction for our species: for example, by spreading awareness about dangers posed by synthetic biology or artificial intelligence.
The problem is that we do not have good evidence of the efficacy of such interventions in achieving their ultimate aims. Nor is such evidence in the offing. The idea that the future state of human civilization could be deliberately shaped for the better arguably did not take hold before the work of Enlightenment thinkers like Condorcet (1822) and Godwin (1793). Unfolding over timescales that defy our ability to make observations, efforts to alter the long-run trajectory of Earth-originating civilization therefore resist evidence-based assessment, forcing us to fall back on intuitive conjectures whose track record in domains that are amenable to evidence-based assessment is demonstrably poor (Hurford 2013). This is not a case where it can be reasonably claimed that there is good evidence, readily available, to constrain our decision making.
These concerns are forceful, but they don’t seem to generalize to all types of intervention aimed at improving the long-term future. If one believes that the readily available evidence is insufficient to constrain our decision making, one can still accumulate resources to be disbursed at a later time, when good enough evidence emerges. Although we may at present be radically uncertain about the sign and the magnitude of most far-future interventions, the intervention of accumulating resources for future disbursal does not itself appear to be subject to such radical uncertainty.
I am reminded of the story where Victor Hugo, who was away from Paris when Les Misérables was first published, wrote his editor a letter inquiring about the sales of his much-anticipated novel. The letter contained only one character: ?
A few days later, the reply arrived. It was equally brief: !
Les Misérables was an immediate best-seller.
(Unfortunately, the story is likely apocryphal.)