Senior Geothermal Associate at Clean Air Task Force, working on next-generation superhot rock energy. Interested in climate change and systems thinking.
Ann Garth
Thanks for catching that! I missed them when doing my initial scan of the nominees list.
High-leverage opportunity to fund an EA org: vote for CATF (takes 15 seconds)!
Hi Ben! I have been checking your podcast feed and haven't seen this episode come up. Did I miss something, or do you have a sense of when it will be posted?
I'm curious to know what open questions he has after all the research he's done. What research still needs to be done? What are the biggest areas of uncertainty that he sees in this space?
Do you worry at all about a bait-and-switch experience that new people might have?
I would hope that people wouldn't feel this way. I think neartermism is a great on-ramp to EA, but I don't think it has to be an on-ramp to longtermism. That is, if someone joins EA out of an interest in neartermism, learns about longtermism but isn't persuaded, and continues to work on EA-aligned neartermist stuff, I think that would be a great outcome.
And thank you for the fact-checking on the books!
I agree that it should be! Just not sure it is, at least not for everyone.
EA is becoming increasingly inaccessible, at the worst possible time
These are very reasonable concerns. To address them, I think it might make sense to limit submissions so that only people employed at EA orgs could submit, and only for bills related to their work at the org. Those people would presumably have the specialized knowledge needed to evaluate the legislation, and most EA orgs aren't advocating for legislation that is polarizing within the community.
Alternatively, submissions could stay open to everyone, but the person receiving/organizing the submissions could be empowered to ask for more info about a submission, ask for qualifications from the person proposing the idea, or even delete submissions that aren't aligned to current EA priorities (e.g. related to abortion). I'd like to believe that, if the submission form asked folks to only submit things they had a lot of knowledge about, they would self-monitor.
I think this could be a great approach, but my concern is that people might not check the forum often enough (or might not check the tag). My personal experience suggests that one email every few weeks with a list of bills to call about all in one place would be better. But of course that might not be true for others!
I've heard this from activists I trust, but can't cite a specific source. That said, this article (https://www.newyorker.com/magazine/2017/03/06/what-calling-congress-achieves) has a paragraph which discusses the impact of calling about small bills (control-F "mud-flap" to find the paragraph).
Idea: call-your-representative newsletter
I paid for a lifetime subscription to Freedom (freedom.to), an app that blocks certain websites from your phone and computer during pre-set windows. It cost like $60 (one-time cost) and has made an extraordinary difference in my productivity.
Things my office has bought me that are well worth the money/that I will buy for myself in the future if I need to: a mouse/mousepad, a second monitor, and a good (comfortable, height-adjustable) office chair.
I did competitive college debate for four years (American Parliamentary format, which is similar to the BP format used in the EA Debate Championship but not identical), and I think that the extent to which it does/doesn't encourage truth-seeking is less important than the way it pushes people to justify their values.
Oversimplifying broadly, debate has two layers: one is arguments about what the impacts of a certain idea/policy are likely to be, and one is arguments about which impacts are more important (known as "weighing"). In order to win rounds, you have to win arguments at both levels. This means that debate requires people to engage with one of the issues most central to EA: a relatively consequentialist understanding of which issues matter most. In regular life you can say, "I support government funding for the arts because art is good" and not think very hard about how that trades off with, say, funding for healthcare. But if you do that in a debate round, the other team will point out the tradeoff, estimate the number of people who will die as a result of there being less funding for healthcare, and you will lose the round.
I think this is the main benefit of debate from an EA perspective, and I suspect that it has meaningful impacts on people who are forced to confront, over and over again in countless debate rounds, the actual effects (in lives lost and other very serious harms) of different ways of weighing between issues. Anecdotally, a higher-than-average percentage of the debaters I know are EAers, or at least interested in EA. And even debaters who don't personally support EA very often use EA weighing arguments in rounds. As a result, for some people (I suspect many), debate is the first place they hear about EA. To me, this makes debate leagues a fertile recruiting ground for EA.
Hi Teo! I know your comment was from a few years ago, but I was so excited to see someone else in EA talk about self-compassion. Self-compassion is one of the main things that lets me be passionate about EA and have a maximalist moral mindset without spiraling into guilt, and I think it should be much better known in the community. I don't know if you ever ended up writing more about this, but if you did, I hope you'd consider publishing it; I think that could help a lot of people!
Hi Rocket, thanks for sharing these thoughts (and I'm sorry it's taken me so long to get back to you)!
To respond to your specific points:
1. Improving the magnitude of impact while holding tractability and neglectedness constant would increase impact on the margin, i.e., if we revise our impact estimates upwards at every possible level of funding, then climate change efforts become more cost-effective.
2. It seems like considering co-benefits does affect tractability, but the tractability of these co-benefit issue areas, rather than of climate change per se. E.g., addressing energy poverty becomes more tractable as we discover effective interventions to address it.
I certainly agree with this; I was only trying to communicate that increases in importance might not be enough to make climate change more cost-effective on the margin, especially if tractability and neglectedness are low. Certainly that should be evaluated on a case-by-case basis.
To be fair, other x-risks are also time-limited. E.g., if nuclear war is currently going to happen in t years, then by next year we will only have t-1 years left to solve it. The same holds for a catastrophic AI event. It seems like ~the nuance~ is that in the climate change case, tractability diminishes the longer we wait, as well as the timeframe.
This is true (and very well-phrased!). I think there's some additional ~ nuance ~ which is that the harms of climate change are scalar, whereas the risks of nuclear war or catastrophic AI seem to be more binary. I'll have to think more about how to talk about that distinction, but it was definitely part of what I was thinking about when I wrote this section of the post.
One data point: I recently got a job which, at the time I initially applied for it, I didn't really want (as I went through the interview process, and especially now that I've started, I like it more than I thought I would based on the job posting alone).
Thanks for the feedback! I updated the title to be a bit more descriptive