Hi, I am interested in this and have been thinking about similar stuff (measuring the impact of lobbying, etc.) from a UK policy perspective.
If helpful, happy to chat and share thoughts. Feel free to get in touch at:
sam [at] appgfuturegenerations.com
This is excellent. Very well done.
It crossed my mind to wonder whether much can be said about where different categories* of risk prevention are under-resourced. For example, it may be that the globe spends enough resources on preventing natural risks, as we have seen them in the past and so understand them. It may be that the militarisation of states means we are prepared for malicious risk. It may be that we under-prepare for large risks because they have fewer small-scale analogues.
Not sure how useful following that kind of thinking is, but it could potentially help with prioritisation. I would be interested to hear if the authors have thought this through.
*(The authors break down risks into different categories: Natural Risk / Accident Risk / Malicious Risk / Latent Risk / Commons Risk; Leverage Risk / Cascading Risk / Large Risk; and Capability Risk / Habitat Risk / Ubiquity Risk / Vector Risk / Agency Risk.)
Optimiser's curse / Regression to the mean
On how trying to optimise can lead you to make mistakes
Knightian uncertainty / deep uncertainty
a lack of any quantifiable knowledge about some possible occurrence
This means any situation where uncertainty is so high that it is very hard / impossible / foolish to quantify the outcomes.
To understand this it is useful to note the difference between uncertainty (e.g. the chance of a nuclear war this century) and risk (e.g. the chance of a coin coming up heads).
The process for making decisions that rely on uncertainty may be very different from the process for making decisions that rely on risk. The optimal tactic for making good decisions in situations of deep uncertainty may not be to just quantify the situation.
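For concreteness, one standard way to formalise this difference (my own illustrative sketch, not taken from the sources below) is to contrast expected value maximisation, which needs credible probabilities, with a robust rule such as Wald's maximin, which does not:

$$\text{Under risk:}\quad \max_{a}\ \sum_{s} p(s)\,U(a,s) \qquad\qquad \text{Under deep uncertainty:}\quad \max_{a}\ \min_{s} U(a,s)$$

Here $a$ ranges over actions, $s$ over possible states of the world, and $U(a,s)$ is the value of taking action $a$ in state $s$. The maximin rule picks the action whose worst case is least bad, so it never requires the probabilities $p(s)$ that Knightian uncertainty denies us. (Maximin is only one option; minimax regret is another.)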
Why this matters
This could drastically change the causes EAs care about and the approaches they take.
This could alter how we judge the value of taking action that affects the future.
This could mean that the “rationalist”/LessWrong approach of “shut up and multiply” for making decisions might not be correct.
For example, this could shift decisions away from naive expected value calculations based on outcomes and probabilities, and towards favouring courses of action that are robust to failure modes, have good feedback loops, have short chains of effects, etc.
(Or maybe not, I don’t know. I don’t know enough about how to make optimal decisions under deep uncertainty but I think it is a thing I would like to understand better.)
The difference between “risk” and “uncertainty”. “Black swan events”. Etc
Section 9.3 here: https://www.nickbostrom.com/existential/risks.html
(Disclaimer: Not my own views/criticism. I am just trying to steelman a Facebook post I read. I have not looked into the wider context of these views or people’s current positions on these views.)
I downvoted this but I wanted to explain why and hopefully provide constructive feedback. Having seen the original post this is referencing, I really do not think this post did a good/fair job of representing (or steelmanning) the original arguments raised.
To try and make this feedback more useful and help the debate here are some very quick attempts to steelman some of the original arguments:
Historically, arguments that justify horrendous activities have a high frequency of being utopia-based (appealing to possible but uncertain future utopias). The long-termist astronomical waste argument has this feature, so we should be wary of it.
If an argument leads to ridiculous / repugnant conclusions that most people would object to, then it is worth being wary of that argument. The philosophers who developed the long-termist astronomical waste argument openly use it to promote a range of abhorrent hawkish geopolitical responses (e.g. pre-emptive nuclear strikes). We should be wary of following and promoting such arguments and philosophers.
There are problems with taking a simple expected value approach to decision making under uncertainty, e.g. Pascal's mugging. [For more on this, look up robust decision making under deep uncertainty, or Knightian uncertainty.]
The astronomical waste type arguments are not robust to a range of different philosophical and non-utilitarian ethical frameworks, and (given ethical uncertainty) this makes them weak arguments.
The above are not arguments against working on x-risks etc (and the original poster does himself work on x-risk issues) but are against overly relying on, using and promoting the astronomical waste type arguments for long-termism.
Having looked at your sources I am not sure they justify the conclusions.
Your sources for point 1 seem to ignore the >10% case where the world warms significantly more than expected (they generally look at mortality in the business-as-usual case).
Your sources for point 2 focus on whether climate change is truly existential, but they do seem to point to the possibility of it being a global catastrophe. (Point 2 appears to be somewhat crucial; the other points, especially 1, 4, 5 and 7, depend on it.)

It seems plausible from looking at your sources that there are tail risks of extreme warming that could lead to a huge global catastrophe (maybe not quite at your cut-off of a 10% chance of 10% mortality, but huge). E.g. Halstead:
“On current pledges and promises, we’ll probably end up at around 700ppm by 2100 and increasing well beyond that.”
“at 700ppm, … there is an 11% chance of an eventual >6 degrees of warming”
“at 1120ppm, there is between a 10% and 34% chance of >9 degrees of warming”
“Heat stress … seems like it would be a serious problem for warming >6 degrees for large portions of the planet … With 11–12 °C warming, such regions would spread to encompass the majority of the human population as currently distributed”
“6 degrees would drastically change the face of the globe, with multi-metre sea level rises, massive coastal flooding, and the uninhabitability of the tropics.”
“10 degrees … would be extremely bad”
Overall I expect points 1 and 2 are quite possibly correct but, having looked through your sources and concluded that they do not justify the points very well, I would place low confidence in them.
Also, on points 4 and 7: I think they depend on what kind of skills and power you have and are using. E.g. if you are long-term focused and have political influence, climate issues might be a better thing to focus on than AI safety, which is not really on the political agenda much.
Hi Michael, That all sounds really sensible and well thought out. Good job :-)
First year donating is super exciting!!
Not an expert but some feedback that jumps to mind is:
Overall this looks like a great donation plan.
Giving to the Animal Welfare Fund or ACE’s top recommended charities seems like a pretty solid surefire bet / way to outsource donations.
I am slightly less certain about donating directly to RP or CE, unless you have a reason to think the Animal Welfare Fund is not funding them enough (which does happen). But either way you are following the donations of the Animal Welfare Fund, so there is really not much in it, and it is sometimes useful to donate and see how the orgs are using your money.
One extra thing to consider is donating to the charities being created by Charity Entrepreneurship (for example https://forum.effectivealtruism.org/posts/iMofrSc86iSR7EiAG/introducing-fish-welfare-initiative-1 ). I can't speak for CE, but I think CE believe donations to their new charities are a bit more urgent than donations directly to CE. Maybe one of the fish people can say if they are looking for funds.
I endorse solving collective action problems that benefit you and other donors. You are probably better placed to evaluate RC Forward than us non-Canadians, and if RC Forward is useful to help you donate more then supporting it with at least some of your donation makes sense.
Hope that helps,
Hi, ditto what Khorton said. I don’t have a background that has led me to be able to opine wisely on this.
My initial intuition is: I am unconvinced by this. From a policy perspective you make a reasonable case that more immigration to the US could be very good, but unless you had more certainty about this (more research, evidence, case studies, etc.), I would worry about the cost of actively pushing a US vs China message.
But I have no expertise in US politics so I would not put much faith in my judgment.
Giving What We Can’s impact reports (when I last read them) suggested they had raised £6 per £ spent for effective charities using pessimistic assumptions, or £60 per £ as a best guess.
The Life You Can Save raised $11 per $ spent for effective charities.
Raising for Effective Giving has raised $24 per $ spent for effective charities.
EA London (which does not do much fundraising) roughly raised £2.5 per £.
RC Forward moves £7 per £.
These are all post hoc analyses of money moved to date, not estimates of future impact. The quality of the evidence varies between the different programs and you can look into it. As well as moving money, I believe all of these also purport to have improved the effectiveness of the donations given.
If it is helpful to have a baseline / prior against which to judge these successes: the standard fundraising ratio in the charity sector is roughly £4 raised per £ spent on fundraising.
Cause: EA meta (+ global poverty)
Main donation: £3000 to Happier Lives Institute (HLI)
Other donations: £500 to each of the EA Meta Fund, Let’s Fund, Rethink Priorities, and the Against Malaria Foundation (AMF)
Why EA meta
Leverage: It seems empirically evident to me that meta EA activity is influencing both the amount and the direction of funding at a ratio of at least £10 influenced per £1 inputted.
Evidence: I was sceptical of what EA meta work could achieve, but over the last few years this kind of giving has gone from being an idea to having demonstrated impact.
Underfunded: The EA Meta Fund has received less than the other EA Funds, and on its last pay-outs it only filled about 15% of the funding gaps of the organisations it was looking to support.
Collective action: If everyone in EA funded meta work rather than pet causes, more money would go to good places (or we may learn we were wrong about our pet causes).
Why EA meta research, not outreach
I think we are still learning about how to do good, and getting that right is more important than getting more money moved. Research has had the most impact to date and I am very unconvinced that we are getting close to diminishing marginal returns on it.
Why £3000 to HLI
Happier Lives Institute are doing innovative and useful new research on subjective wellbeing data that I believe could significantly change how people in the EA community think about which causes are most important. I expect this donation, combined with a donation from a collaborator, can fill their funding gap at least until August 2020. I may donate more at a later date.
Why £500 each to the EA Meta Fund, Let’s Fund and Rethink Priorities
I am not giving everything to HLI partly because I think HLI’s immediate funding gap can be filled, partly because I want to influence and keep up with these other projects, and partly just poor heuristics on my part. Note that my view that HLI is better than any of these three other donation opportunities is very weakly held (although I expect HLI has a more pressing funding gap). These three projects are the other EA meta (research) projects that I think are worth supporting. I am splitting between them because I am not sure it is worth the time / energy to evaluate and compare them all given the amount of money I am giving. I have not included GPI because I have not been as impressed by the immediate usefulness of their research or research agenda. On Let’s Fund: they are not actually asking for money, but they are doing good work and always seem short of funds, so I will try to offer them funds. If they don’t take it I will split the money between the other projects.
Why £500 to AMF
I am not giving everything to meta, partly because I want to still force myself to think about what the most important non-meta cause is, and partly because I think that if I give the amount I would likely have given to non-meta causes had I not come across EA / GWWC, then I help avoid the meta trap. Against Malaria Foundation are an excellent charity, continuously top-rated by GiveWell. (I am giving to AMF rather than to GiveWell to distribute as I am not totally convinced that deworming or GiveDirectly are as good as AMF.) I might alternatively give to the EA Animal Fund – I need to think about this more.
Key uncertainties
Is it silly to split my donations this much? Have I done enough due diligence on HLI? AMF or the EA Animal Fund?
In one key way this post completely misses the point.

The post makes a number of very good points about systemic change, but bases all of them on financial cost-effectiveness estimates. This is embedded in the language throughout, discussing: options that “outperformed GiveWell style charities”, the “cost … per marginal vote”, lessons for “large-scale spending” or for a “small donor”, etc.

I think one way the EA community has neglected systemic change is exactly this. Money is not the only thing that can be leveraged in the world to make change (and in some cases money is not a thing people can give). I think this is some part of what people are pointing to when they criticise EA.

To be constructive, I think we should rethink cause prioritisation, but not from a financial point of view. E.g.:
- If you have political power, how best to spend it?
- If you have a public voice, how best to use it?
- If you can organise activism, what should it focus on?
(PS. Happy to support with money or time people doing this kind of research.)

I think we could get noticeably different results. I think things like financial stability (hard to donate to but very important) might show up as more of a priority in the EA space if we start looking at things this way.

I think the EA community currently has a limited amount to say to anyone with power. For example:
• I met the civil servant with oversight of the UK’s £8bn international development spending, who seemed interested in EA but did not feel it was relevant to them – I think they were correct; I had nothing to say they didn’t already know.
• Another case is an EA I know who does not have a huge amount to donate but has lots of experience in political organising and activism; I doubt the EA community provides them much useful direction.

It is not that the EA community does none of this, just that we are slow. It feels like it took 80,000 Hours a while to start recommending policy/politics as a career path, and it is still unclear what people should do once in positions of power. (HIPE.org.uk is doing some research on this for government careers.)

Overall a very interesting post. Thank you for posting.
I note you mention a “relative gap in long-termist and high-risk global poverty work”. I think this is interesting. I would love it if anyone has the time to do some back-of-the-envelope evaluations of international development governance reform organisations (like Transparency International).
Tl;dr: This assumes a pure rate of time discounting. I am curious how well your analysis works for anyone who does not think we should discount harms in the future simply by virtue of their being in the future.

1. THIS IS SO GOOD
This is super good research, super detailed, and I am hugely impressed. I hope many many people donate to Let’s Fund and support you with this kind of research!!!

2. LET’S BE MORE EXPLICIT ABOUT THE ETHICAL ASSUMPTIONS MADE
I enjoyed reading Appendix 3.
• I agree with Pindyck that models of the social cost of carbon (SCC) require a host of underlying ethical decisions and so can be highly misleading.
• I don’t, however, agree with Pindyck that there is no alternative so we might as well ignore this problem.
At least for the purposes of making decisions within the EA community, I think we can apply models but be explicit about what ethical assumptions have been made and how they affect the models’ conclusions. Many people on this forum have a decent understanding of their ethical views and how those affect decisions, so being more explicit would support good cause prioritisation decisions by donors and others. Of course this is holding people on this forum to a higher standard of rigour than professional academic economists reach, so it should be seen as a nice-to-have rather than a default, but let’s see what we can do...

3. DISCOUNTING THE FUTURE, AND OTHER ASSUMPTIONS
3.1 My (very rough) understanding of climate analysis is that the SCC is highly dependent on the discount rate.
(Appendix 3 makes this point. Also the paper you link to on SCC per country says “Discounting assumptions have consistently been one of the biggest determinants of differences between estimations of the social cost of carbon”).
The paper you draw your evidence from seems to use a pure rate of time discounting of 1–2%. This basically assumes that future people matter less. I think many readers of this forum do not believe that future people matter less than people today.
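To illustrate the scale this can matter at (my own rough numbers, not taken from the paper): with a pure time discount rate $\delta$, a harm occurring $t$ years from now is weighted by a factor of $1/(1+\delta)^t$. For example:

$$\frac{1}{(1.02)^{100}} \approx 0.14 \qquad\qquad \frac{1}{(1.02)^{200}} \approx 0.02$$

So at a 2% pure rate, harms 100 years out count for roughly a seventh of present harms, and harms 200 years out for roughly a fiftieth, whereas at $\delta = 0$ they count in full. Since much of the harm from a tonne of CO2 emitted today occurs decades to centuries out, the choice of $\delta$ plausibly drives a large share of the variation in SCC estimates, consistent with the quote above.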
I do not know how much this matters for the analysis. A high social cost of carbon seems, from the numbers in your article, to make climate interventions of the same order of magnitude as, but slightly less effective than, cash transfers.

3.2 I also understand that estimates of the SCC depend on the calculation of the worst-case tail-end effects, and there is some concern among people in the x-risk research space that small chances of very catastrophic effects are ignored in climate economics. I do not know how much this matters either.

3.3 I could also imagine that many people (especially negative-leaning utilitarians) are more concerned with stopping the damage caused by climate change than impressed by the benefits of cash transfers.

So: I do not have answers to what effects these things have on the analysis. I would love to get your views on this. Thank you for your work on this!!!
If I had to guess (and I feel uncomfortable doing so, as I am not really going on anything here but my gut) I would say that at an entry level it is all pretty similar, but that an entry-level job in the civil service is likely slightly higher impact than an entry-level job as an MP’s researcher; the variation between jobs and MPs is likely more important, though. I think your personal expected value is dominated by the jobs you get later in your career rather than at entry level, so this is small on the scale of your career.
The value of information to the broader EA community is good, as are any other low-hanging-fruit benefits gained by being an early EA mover into a space.
Hi, I think the 80K advice is still fairly applicable (also, I don’t think it would count as a second opinion, as my views were taken into account in that 80K article).
I would probably put the diplomatic Fast Stream on a par with the generalist one (although I am not very sure about this).
Also, do not forget that you can go in via direct entry into a job: if you have a bit of experience (even a year or two), getting an SEO job (or higher) may well be preferable to the Fast Stream.
This image displays for me. I am not sure what I need to do to make it display properly for you or what has gone wrong. Can someone admin-y investigate?
There are maybe 40 people from the EA community currently in the UK civil service and none currently in politics. I think most people I know would agree that it is comparatively more useful and more neglected for EAs to move towards politics.
I also think it is generally more impactful to do well in politics than to do well in the civil service, as ultimately politicians make the decisions. Although I know some EAs would disagree with this and point out that people do not hold positions of political power for very long.
I think politics is more challenging: it is more competitive to do very well in. Also, if you want to go into politics you need to really commit to that path and spend your time engaged in party politics, whereas it is easier to move in and out of the civil service.