Related (and perhaps of interest to EAs looking for rhetorical hooks): there are a bunch of constitutions (though not the US’s) that recognize the rights of future generations. I believe they’re primarily modeled after South Africa’s constitution (see http://www.fdsd.org/ideas/the-south-african-constitution-gives-people-the-right-to-sustainable-development/ & https://en.wikipedia.org/wiki/Constitution_of_South_Africa).
I haven’t read about this case, but some context: This has been an issue in environmental cases for a while. It can manifest in different ways, including “standing,” i.e., who has the ability to bring lawsuits, and what types of injuries are actionable. If you google some combination of “environmental law,” “standing,” and “future generations,” you’ll find references to this literature, e.g.: https://scholarship.law.uc.edu/cgi/viewcontent.cgi?referer=https://www.google.com/&httpsredir=1&article=1272&context=fac_pubs
Last I checked, this was the key case in which a court (in the Philippines) actually recognized a right of future generations: http://heinonline.org/HOL/LandingPage?handle=hein.journals/gintenlr6&div=29&id=&page=
Also, people often list parties as plaintiffs for PR reasons, even though there’s basically no chance that a court would recognize that the named party has legal standing.
This comment is not directly related to your post: I don’t think the long-run future should be viewed as a cause area. It’s simply where most sentient beings live (or might live), and therefore it’s a potential treasure trove of cause areas (or problems) that should be mined. Misaligned AI leading to an existential catastrophe is an example of a problem that impacts the long-run future, but there are so, so many more. Pandemic risk is a distinct problem. Indeed, there are so many more problems even if you’re just thinking about the possible impacts of AI.
Variant on this idea: I’d encourage a high-status person and a low-status person, both of whom regularly post on the EA Forum, to trade accounts for a period of time and see how that impacts their likes/dislikes.
Variant on that idea: No one should actually do this, but several people should talk about it, thereby making everyone paranoid about whether they’re a part of a social experiment (and of course the response of the paranoid person would be to actually vote based on the content of the article).
I strongly agree. Put another way, I suspect we, as a community, are bad at assessing talent. If true, that manifests as both a diversity problem and a suboptimal distribution of talent, but the latter might not be as visible to us.
My guess re the mechanism: Because we don’t have formal credentials that reflect relevant ability, we rely heavily on reputation and intuition. Both sources of evidence allow lots of biases to creep in.
My advice would be:
1) When assessing someone’s talent, focus on the content of what they’re saying/writing, not the general feeling you get from them.
2) When discussing how talented someone is, always explain the basis of your view (e.g., I read a paper they wrote; or Bob told me).
Thanks for doing these analyses. I find them very interesting.
Two relatively minor points, which I’m making here only because they relate to something I’ve seen a number of times, and I worry it reflects a more fundamental misunderstanding within the EA community:
1) I don’t think AI is a “cause area.”
2) I don’t think there will be a non-AI far future.
Re the first point, people use “cause area” differently, but I don’t think AI—in its entirety—fits any of the usages. The alignment/control problem does: it’s a problem we can make progress on, like climate change or pandemic risk. But that’s not all of what EAs are doing (or should be doing) with respect to AI.
This relates to the second point: I think AI will impact nearly every aspect of the long-run future. Accordingly, anyone who cares about positively impacting the long-run future should, to some extent, care about AI.
So although there are one or two distinct global risks relating to AI, my preferred framing of AI generally is as an unusually powerful and tractable lever on the shape of the long-term future. I actually think there’s a LOT of low-hanging fruit (or near-surface root vegetables) involving AI and the long-term future, and I’d love to see more EAs foraging those carrots.
Max’s point can be generalized to mean that the “talent” vs. “funding” constraint framing misses the real bottleneck, which is institutions that can effectively put more money and talent to work. We of course need good people to run those institutions, but if you gave me a room full of good people, I couldn’t just put them to work.
“… and I wonder how the next generation of highly informed, engaged critics (alluded to above) is supposed to develop if all substantive conversations are happening offline.”
This is my concern (which is not to say it’s Open Phil’s responsibility to solve it).
Hey Josh,
As a preliminary matter, I assume you read the fundraising document linked in this post, but for those reading this comment who haven’t, I think it’s a good indication of the level of transparency and self-evaluation we intend to have going forward. I also think it addresses some of the concerns you raise.
I agree with much of what you say, but as you note, I think we’ve already taken steps toward correcting many of these problems. Regarding metrics on the effective altruism community, you are correct that we need to do more here, and we intend to. Before the reorganization, this responsibility didn’t fall squarely within any team’s jurisdiction, which was part of the problem. (For example, Giving What We Can collected a lot of this data for a subset of the effective altruism community.) This is a priority for us.
Regarding measuring CEA activities: internally, we test and measure everything (particularly with respect to community and outreach activities). We measure user engagement with our content (including the cause prioritization tool), the newsletter, Doing Good Better, Facebook marketing, etc., trying to identify where we can most cost-effectively get people deeply engaged. As we recently did with EAG and EAGx, we’ll periodically share our findings with the effective altruism community. We will soon share our review of the Pareto Fellowship, for example.
Regarding transparency, our monthly updates, project evaluations (e.g., for EAG and EAGx, and the forthcoming evaluation of the Pareto Fellowship), and the fundraising document linked in this post are indicative of the approach we intend to take going forward. Creating all of this content is costly, and so while I agree that transparency is important, it’s not trivially true that more is always better. We’re trying to strike the right balance and will be very interested in others’ views about whether we’ve succeeded.
Lastly, regarding centralized decision-making, that was the primary purpose of the reorganization. As we note in the fundraising document, we’re still in the process of evaluating current projects. I don’t think the EA Concepts project is to the contrary: that was simply an output of the research team, which it put together in a few weeks, rather than a new project like Giving What We Can or the Pareto Fellowship (the confusion might be the result of using “project” in different ways). Whether we invest much more in that project going forward will depend on the reception and use of this minimum version.
Regards, Michael
This document is effectively CEA’s year-end review and plans for next year (which I would expect to be relevant to people who visit this forum). We could literally delete a few sentences, and it would cease to be a fundraising document at all.
Fixed, at least with respect to adding and referencing the Hurford post (more might also be needed). Please keep suggestions like this coming.
This came out of my pleasure budget.
As you explain, the key tradeoff is organizational stability vs. donor flexibility to chase high-impact opportunities. There are a couple of different ways to strike the right balance. For example, organizations can try to secure long-term commitments sufficient to cover a set percentage of their projected budget but no more, e.g., 100% one year out, 50% two years out, 25% three years out [disclaimer: these numbers are not carefully considered].
Another possibility is for donors to commit to donating a certain amount in the future but not to where. For example, imagine EA organizations x, y, and z are funded in significant part by donors a, b, and c. The uncertainty for each organization comes from both (i) how much a, b, and c will donate in the future (e.g., for how long do they plan to earn to give?), and (ii) to which organization (x, y, or z) they will donate. The option value for the donors comes primarily* from (ii): the flexibility to donate more to x, y, or z depending on how good they look relative to the others. And I suspect much (if not most) of the uncertainty for x, y, and z comes from (i): not knowing how much “EA money” there will be in the future. If that’s the case, we can get most of the good with little of the bad via general commitments to donate, without naming the beneficiary. One way to accomplish this would be an EA fund.
*I say “primarily” because there is option value in being able to switch from earning to give to direct work, for example.
I’m looking into this on behalf of CEA/GWWC. Anyone else working on something similar should definitely message me (michael.page@centreforeffectivealtruism.org).
If the reason we want to track impact is to guide/assess behavior, then I think counting foreseeable/intended counterfactual impact is the right approach. I’m not bothered by the fact that we can’t add up everyone’s impact. Is there any reason that would be important to do?
On the off chance it’s helpful, here’s some legal jargon that deals with this issue: If a result would not have occurred without Person X’s action, then Person X is the “but for” cause of the result. That is so even if the result also would not have occurred without Person Y’s action. Under these circumstances, either Person X or Person Y can (usually) be sued and held liable for the full amount of damages (although that person might later be able to sue the other and force them to share in the costs).
Because “but for” causation chains can be traced out indefinitely, we generally hold a person accountable only for the reasonably foreseeable results of their actions. This limitation on liability is called “proximate cause.” So Person X proximately caused the result if their actions were the but-for cause of the result and the result was reasonably foreseeable.
I think the policy reasons underlying this approach (to guide behavior) probably apply here as well.
The downvoting throughout this thread looks funny. Absent comments, I’d view it as a weak signal.
Agreed. Someone earning to give doesn’t meet the literal characterization of “full-time” EA.
How about fully aligned and partially aligned (and any other modifier to “aligned” that might be descriptive)?
In thinking about terminology, it might be useful to distinguish (i) magnitude of impact and (ii) value alignment. There are a lot of wealthy individuals who’ve had an enormous impact (and should be applauded for it), but who correctly are not described as “EA.” And there are individuals who are extremely value aligned with the imaginary prototypical EA (or range of prototypical EAs) but whose impact might be quite small, through no fault of their own. Incidentally, I think those in the latter category are better community leaders than those in the former.
Edit: I’m not suggesting that either group should be termed anything; just that the current terminology seems to elide these groups.
I’ll embrace the awkwardness of doing this (and this is more than the past month):
1) I printed and distributed about 1050 EA Handbooks to about a dozen different countries.
2) I believe I am the but-for cause of about five new EAs, one of whom is a professional poker player with a significant social media following who has been donating a percentage of her major tournament wins.
3) I donated $195k this calendar year.
Tara left CEA to co-found Alameda with Sam. As is discussed elsewhere, she and many others parted ways with Sam in early 2018. I’ll leave it to them to share more if/when they want to, but I think it’s fair to say they left at least in part due to concerns about Sam’s business ethics. She’s had nothing to do with Sam since early 2018. It would be deeply ironic if, given what actually happened, Sam’s actions were used to tarnish Tara.
[Disclosure: Tara is my wife]