Reminds me of the House of Saud (although I’m not saying they have this goal, or any shared goal):
"The family in total is estimated to comprise some 15,000 members; however, the majority of power, influence and wealth is possessed by a group of about 2,000 of them. Some estimates of the royal family’s wealth measure their net worth at $1.4 trillion."
https://en.wikipedia.org/wiki/House_of_Saud
[Question] Is “founder effects” EA jargon?
[Question] Has anyone written something using moral cluelessness to “debunk” anti-consequentialist thought experiments?
IMO, the best argument against strong longtermism ATM is moral cluelessness.
IMO, the main things holding back scaling are EA’s (in)ability to identify good “shovel ready” ideas and talent within the community and allocate funds appropriately. I think this is a very general problem that we should be devoting more resources to. Related problems are training and credentialing, and solving common good problems within the EA community.
I’m probably not articulating all of this very well, but basically I think EA should focus a lot more on figuring out how to operate effectively, make collective decisions, and distribute resources internally.
These are very general problems that haven’t been solved very well outside of EA either. But the EA community still probably has a lot to learn from orgs/people outside EA about this. If we can make progress here, it can scale outside of the EA community as well.
I view economists as more like physicists working with spherical cows, and often happy to continue to do so. So that means we should expect lots of specific blind spots, and for them to be easy to identify, and for them to be readily acknowledged by many economists. Under this model, economists are also not particularly concerned with the practical implications of the simplifications they make. Hence they would readily acknowledge many specific limitations of their models. Another way of putting it: this is more of a blind spot for economics, not economists.
I’ll also get back to this point about measurement… there’s a huge space between “nature has intrinsic value” and “we can measure the extrinsic value of nature”. I think the most reasonable position is:
- Nature has some intrinsic value, because there are conscious beings in it (with a bonus because we don’t understand consciousness well enough to be confident that we aren’t under-counting).
- Nature has hard to quantify, long-term extrinsic value (in expectation), and we shouldn’t imagine that we’ll be able to quantify it appropriately any time soon.
- We should still try to quantify it sometimes, in order to use quantitative decision-making / decision-support tools. But we should maintain awareness of the limitations of these efforts.
[Question] When is cost a good proxy for environmental impact? How good and why?
It hardly seems “inexplicable”… this stuff is harder to quantify, especially in terms of long-term value. I think there’s an interesting contrast between your comment and jackmalde’s below: “It’s also hardly news that GDP isn’t a perfect measure.”
So I don’t really see why there should be a high level of skepticism of a claim that “economists haven’t done a good job of modelling X[=value of nature]”. I’d guess most economists would emphatically agree with this sort of critique.
Or perhaps there’s an underlying disagreement about what to do when we have a hard time modelling something: Do we mostly just ignore it? Or do we try to reason about it less formally? I think the latter is clearly correct, but I get the sense a lot of people in EA would disagree (e.g. the “evidence-based charity” perspective seems to go against this).
[Question] Any EAs familiar with Partha Dasgupta’s work?
I think this illustrates a harmful double standard. Let me substitute a different cause area in your statement:
"Sounds like any future project meant to reduce x-risk will have to deal with the measurement problem."
Online meetings could be an alternative/supplement, especially in the post-COVID world.
Reiterating my other comments: I don’t think it’s appropriate to say that the evidence showed it made sense to give up. As others have mentioned, there are measurement issues here. So this is a case where absence of evidence is not strong evidence of absence.
Just because they didn’t get the evidence of impact they were aiming for doesn’t mean it “didn’t work”.
I understand if EAs want to focus on interventions with strong evidence of impact, but I think it’s terrible comms (both for PR and for our own epistemics) to go around saying that interventions lacking such evidence don’t work.
It’s also pretty inconsistent; we don’t seem to have that attitude about spending $$ on speculative longtermist interventions! (Although I’m sure some EAs do; I’m pretty sure it’s a minority view.)
Thanks for this update, and for your valuable work.
I must admit I was frustrated by reading this post. I want this work to continue, and I don’t find the levels of engagement you report surprising or worth massively updating on (i.e. suspending outreach).
I’m also bothered by the top-level comments assuming that this didn’t work and should’ve been abandoned. What you’ve shown is that you could not provide the kind of strong evidence you hoped for of the program’s effectiveness, NOT that it didn’t work!
Basically, I think there should be a strong prior that this type of work is effective, and I think the question should be how to do a good job of it. So I want these results to be taken as a baseline, and for your org to continue iterating and trying to improve your outreach, rather than giving up on it. And I want funders to see your vision and stick with you as you iterate.
I’m frustrated by the focus on short-term, measurable results here. I don’t expect you to be able to measure the effects well.
Overall, I feel like the results you’ve presented here inspire a lot of ideas and questions, and I think continued work to build a better model of how outreach to high schoolers works seems very valuable. I think this should be approached with more of a scientific/tinkering/start-up mindset of “we have this idea that we believe in and we’re going to try our damnedest to make it work before giving up!” I think part of “making it work” here includes figuring out how to gauge the impact. How do teachers normally tell if they’re having an impact? Probably they mostly trust their gut. So is there a way to ask them? (The obvious risk is that they’ll tell you a white lie.) Maybe you think continuing this work is not your comparative advantage, or that you’re not the org to do it, which seems fine, but in that case I’d rather you try to hire a new “CEO”/team for SHIC (if possible) and keep the existing institutional knowledge, rather than suspend the outreach.
-------------------------
RE evaluating effectiveness:
I’d be very curious to know more about the few students who did engage outside of class. In my mind, the evidence for effectiveness hinges to a significant extent on the quality and motivation of the students who continue engaging.
I think there are other ways you could gauge effectiveness, mostly by recruiting teachers into this process. They were more eager for your material than you expected (well, I think it makes sense, since it’s less work for them!). So you can ask for things in return: follow-up surveys, assignments, quiz questions, or any form of evaluation from them in terms of how well the content stuck and whether they think it had any impact.
A few more specific questions:
- RE footnote 3: why not use “EA” in the program? This seems mildly dishonest and liable to reduce expected impact.
- RE footnote 7: why did they feel inappropriate?
I have a recommendation: try to get at least 3 people, so you aren’t managing your manager. I think accountability and social dynamics would be better that way, since:
- I suspect part of why line managers work for most people is that they have some position of authority that makes you feel obligated to satisfy them. If you are in equal positions, you’d mostly lose that effect.
- If there are only 2 of you, it’s easier to have a cycle of defection where accountability and standards slip. If you see the other person slacking, you feel more OK with slacking. Whereas if you don’t see the work of your manager, you can imagine that they are always on top of their shit.
(Sorry, this is a bit stream-of-consciousness):
I assume it’s because humans rely on natural ecosystems in a variety of ways in order to have the conditions necessary for agriculture, life, etc. So, like with climate change, the long-term cost of mitigation is simply massive… really these numbers should not be thought of as very meaningful, I think, since the kinds of disruption and destruction we are talking about are not easily measured in $s.
TBH, I find it not-at-all surprising that saving coral reefs would have a huge impact, since they are basically part of the backbone of the entire global ocean ecosystem, and this stuff is all connected, etc.
I think environmentalism is often portrayed as some sort of hippy-dippy sentimentalism and contrasted with humanist values and economic good sense, and I’ve been a bit surprised how prevalent that sort of attitude seems to be in EA. I’m not trying to say that either of you in the thread have this attitude; it’s more just that I was reminded of it by these comments… it seems like I have a much stronger prior that protecting the environment is good for people’s long-term future (e.g. most people here have probably heard the idea that all the biodiversity we’re destroying could have massive scientific implications, e.g. leading to the development of new materials and drugs).
I think the reality is that we’re completely squandering the natural resources of the earth, and all of this only looks good for people in the short term, or if we expect to achieve technological independence from nature. I think it’s very foolhardy to assume that we will achieve technological independence from nature, and doing so is a source of x-risk. (TBC, I’m not an expert on any of this; just sharing my perspective.)
To be clear, I also think that AI timelines are likely to be short, and AI x-risk mostly dominates my thinking about the future. If we can build aligned, transformative AI, there is a good chance that we will be able to leverage it to develop technological independence from nature. At the same time, I think our current irresponsible attitude towards managing natural resources doesn’t bode well, even if we grant ourselves huge technological advances (it seems to me that many problems facing humanity now require social, not technological, solutions; the technology is often already there...).
Yeah… it’s not at all my main focus, so I’m hoping to inspire someone else to do that! :)
I recommend changing the “climate change” header to something a bit broader (e.g. “environmentalism” or “protecting the natural environment”, etc.). It is a shame that (it seems) climate change has come to eclipse/subsume all other environmental concerns in the public imagination. While most environmental issues are exacerbated by climate change, solving climate change will not necessarily solve them.
A specific cause worth mentioning is preventing the collapse of key ecosystems, e.g. coral reefs: https://forum.effectivealtruism.org/posts/YEkyuTvachFyE2mqh/trying-to-help-coral-reefs-survive-climate-change-seems
Great post!
This framing doesn’t seem to capture the concern that even slight misspecification (e.g. a reward function that is a bit off) could lead to x-catastrophe.
I think this is a big part of many people’s concerns, including mine.
This seems somewhat orthogonal to the Saint/Sycophant/Schemer disjunction… or to put it another way, it seems like a Saint that is just not quite right about what your interests actually are (e.g. because they have alien biology and culture) could still be an x-risk.
Thoughts?