weeatquince
I would be curious about any views you hold, or research you may have done, on geoengineering risk?
My understanding is that climate change is not itself an existential risk but that it may lead to other risks (such as war, which Peter Hurford mentions). One other risk is geoengineering: humanity starts thinking it can control planetary temperatures and makes a mistake (or the technology is used maliciously), and that presents a risk.
Just to flag that the case for this is much much weaker outside the USA.
The matching limits for donations outside the US are much lower and you may also lose the tax benefits of donating.
See: https://docs.google.com/document/d/1hCCfv-1DI4FD5I5Pw5E3Ov4O46uzpIdI0QqRrWzvRsI/edit
Hi Kerry, Thank you for the call. I wrote up a short summary of what we discussed. It has been a while since we talked, so this is not perfect. Please correct anything I have misremembered.
~
1.
~ ~ Setting the scene ~ ~
CEA should champion cause prioritisation. We want people who are willing to pick a new cause based on evidence and research, and a community that continues to work out how to do the most good. (We both agreed on this.)
There is a difference between “cause impartiality”, as defined above, and “actual impartiality”, not having a view on what causes are most important. (There was some confusion but we got through it)
There is a difference between long-termism as a methodology, where one considers long-run future impacts (which CEA should 100% promote), and long-termism as a conclusion, that the most important thing to focus on right now is shaping the long-term future of humanity. (I asserted this, not sure you expressed a view.)
A rational EA decision maker could go through a process of cause prioritisation and very legitimately reach different conclusions as to what causes are most important. They may have different skills to apply or different ethics (and we are far away from solving ethics if such a thing is possible). (I asserted this, not sure you expressed a view.)
~
2.
~ ~ Create space, build trust, express a view, do not be perfect ~ ~
The EA community needs to create the right kind of space so that people can reach their own decisions about what causes are most important. This can be a physical space (a local community) or an online space. People should feel empowered to make their own decisions about causes. This means that they will be more adept at cause prioritisation, more likely to believe the conclusions reached, more likely to come to the correct answer for themselves, and EA is more likely to come to correct answers overall. To do this they need good tools and resources and to feel that the space they are in is neutral. This needs trust...
Creating that space requires trust. People need to trust the tools that are guiding and advising them. If people feel they are being subtly pushed in a direction they will reject the resources and tools being offered. Any sign of a breakdown of trust between people reading CEA’s resources and CEA should be taken very seriously.
Creating that space does not mean you cannot also express a view. You just want to distinguish when you are doing this. You can create cause prioritisation resources and tools that are truly neutral but still have a separate section on what answers CEA staff reach, or what CEA’s answer is.
Perfection is not required as long as there is trust and the system is not breaking down.
For example, policy advice: I gave the example of a civil servant writing advice to a Government Minister on a controversial political issue. The first ~85% of this imaginary advice has an impartial summary of the background and the problem, and then a series of suggested actions with evaluations of their impact. The final ~15% has a recommended action based on the civil servant’s view of the matter. The important thing here is that there is generally trust between the Minister and the Department that advice will be neutral, and that in this case the Minister trusts that the section/space setting out the background and possible actions is neutral enough for them to make a good decision. It does not need to be perfect; in fact the Minister will be aware that there is likely some amount of bias, but as long as there is sufficient trust that does not matter. And there is a recommendation which the Minister can choose to follow or not. In many cases the Minister will follow the recommendation.
~
3.
~ ~ How this goes wrong ~ ~
Imagine someone who has identified cause X, which is super important, comes across the EA community. You do not want the community to be so focused on one cause that this person is either put off, or is persuaded that the current EA cause is more important and forgets about cause X.
I mentioned some of the things that damage trust (see the foot of my previous comment).
You mentioned you had seen signs of tribalism in the EA community.
~
4.
~ ~ Conclusion ~ ~
You said that you saw more value in CEA creating a space that was "actually impartial", as opposed to "cause impartial", than you had previously.
~
5.
~ ~ Addendum: Some thoughts on evidence ~ ~
Not discussed but I have some extra thoughts on evidence.
There are two areas of my life where much of what I have learned points towards the views above being true.
Coaching. In coaching you need to make sure the coachee feels you are there to help them, not in any way pursuing your own agenda (one that is different from theirs).
Policy. In policy making you need trust and neutrality between Minister and civil servant.
There is value in following perceived wisdom on a topic. That said, I have been looking out for any strong evidence that these things are true (e.g. that coaching goes badly if the coachee thinks you are subtly biased one way or another) and I have yet to find anything particularly persuasive. (Counterpoint: I know one friend who knows their therapist is overly biased towards pushing them to have additional sessions, but this does not put them off attending or mean they find it less useful.) Perhaps this deserves further study.
Also worth bearing in mind there may be dissimilarities between what CEA does and the fields of coaching and policy.
Also worth flagging that the example of policy advice given above is somewhat artificial; some policy advice (especially where controversial) is like that, but much of it is just: "please approve action x".
In conclusion my views on this are based on very little evidence and a lot of gut feeling. My intuitions on this are strongly guided by my time doing coaching and doing policy advice.
Feature idea: the ability, if you co-write an article with someone, to post it as co-authors.
Hi Kerry, Some more thoughts prior to having a chat.
-
Is longtermism a cause?
Yes and no. The term is used in multiple ways.
A: Consideration of the long-term future.
It is a core part of cause prioritisation to avoid availability biases: to consider the plights of those we cannot so easily be aware of, such as animals, people in other countries and people in the future. As such, in my view, it is imperative that CEA and EA community leaders promote this.
B: The long-term cause area.
Some people will conclude that the optimal use of their limited resources is to put them towards shaping the far future. But not everyone, even after full rational consideration, will reach this view. Nor should we expect such unanimity of conclusions. As such, in my view, CEA and EA community leaders can recommend that people consider this cause area, but should not tell people this is the answer.
-
Threading the needle
I agree with the 6 points you make here.
(Although interestingly I personally do not have evidence that “area allegiance is operating as a kind of tribal signal in the movement currently”)
-
CEA and cause-impartiality
I think CEA should be careful about how it expresses a view. Doing this in the wrong way could make it look like CEA is not cause impartial or not representative.
My view is to give recommendations and tools but not answers. This is similar to how we would not expect 80K to have a view on what the best job is (as it depends on an individual and their skills and needs) but we would expect 80K to have recommendations and to have advice on how to choose.
I think this approach is also useful because:
People are more likely to trust decisions they reach through their own thinking rather than conclusions they are pushed towards.
It handles the fact that everyone is different. The advice or reasoning that works for one person may well not make sense for someone else.
I think (as Khorton says) it is perfectly reasonable for an organisation to not have a conclusion.
-
(One other thought: examples of actions I would be concerned about CEA or another movement-building organisation taking would be expressing certainty about an area (in internal policy or externally), basing impact measurement solely on a single cause area, hiring staff for cause-general roles based on their views of which causes are most important, attempting to push as many people as possible towards a specific cause area, etc.)
Yes thanks. Edited.
We would like to hear suggestions from forum users about what else they might like to see from CEA in this area.
Here is my two cents. I hope it is constructive:
1.
The policy is excellent but the challenge lies in implementation.
Firstly I want to say that this post is fantastic. I think you have got the policy correct: that CEA should be cause-impartial, but not cause-agnostic and CEA’s work should be cause-general.
However I do not think it looks, from the outside, like CEA is following this policy. Some examples:
EA London staff had concerns that they would need to be more focused on the far future in order to receive funding from CEA.
You explicitly say on your website: “We put most of our credence in a worldview that says what happens in the long-term future is most of what matters. We are therefore more optimistic about others who roughly share this worldview.“[1]
The example you give of the new EA handbook
There is a close association with 80000 Hours who are explicitly focusing much of their effort on the far future.
These are all quite subtle things, but collectively they give an impression that CEA is not cause impartial (that it is x-risk focused). Of course this is a difficult thing to get correct. It is difficult to say ‘our staff members believe cause ___ is important’ (a useful fact that should definitely be communicated) whilst also putting across a strong front of cause impartiality.
2.
Suggestion: CEA should actively champion cause impartiality
If you genuinely want to be cause impartial I think most of the solutions to this are around being super vigilant about how CEA comes across. Eg:
Have a clear internal style guide that sets out to staff good and bad ways to talk about causes
Have ‘cause impartiality’ as a staff value
If you do an action that does not look cause impartial (say EA Grants mostly grants money to far future causes) then just acknowledge this and say that you have noted it and explain why it happened.
Public posts like this one setting out what CEA believes
If you want to do lots of “prescriptive” actions split them off into a sub project or a separate institution.
Apply the above retroactively (remove lines from your website that make it look like you are only future focused)
Beyond that, if you really want to champion cause impartiality you may also consider extra things like:
More focus on cause prioritisation research.
Hiring people who value cause impartiality / cause prioritisation research / community building, above people who have strong views on what causes are important.
3.
Being representative is about making people feel listened to.
Your section on representativeness feels like you are trying to pin down a way of finding an exact number, so you can say we have this many articles on topic x and this many on topic y and so on. I am not sure this is quite the correct framing.
Things like the EA handbook should (as a lower bound) have enough of a diversity of causes mentioned that the broader EA community does not feel misrepresented but (as an upper bound) not so much that CEA staff [2] feel like it is misrepresenting them. Anything within this range seems fine to me. (Eg. with the EA handbook both groups should feel comfortable handing this book to a friend.) Although I do feel a bit like I have just typed ‘just do the thing that makes everyone happy’ which is easier said than done.
I also think that “representativeness” is not quite the right issue anyway. The important thing is that people in the EA community feel listened to and feel like what CEA is doing represents them. The % of content on different topics is only part of that. The other parts of the solution are:
Coming across like you listen: see the aforementioned points on championing cause impartiality. Also expressing uncertainty, mentioning that there are opposing views, giving two sides to a debate, etc.
Listening: i.e. consulting publicly (or with trusted parties) wherever possible.
If anything getting these two things correct is more important than getting the exact percentage of your work to be representative.
Sam :-)
[1] https://www.centreforeffectivealtruism.org/a-three-factor-model-of-community-building
[2] Unless you have reason to think that there is a systematic bias in staff, eg if you actively hired people because of the cause they cared about.
YAY <3
Marek, well done on all of your hard work on this.
Separate from the managed funds, I really like the work that CEA is doing to help money be moved around the world to other EA charities. I would love to see more organisations on the list of places that donations can be made through the EA Funds platform, e.g. REG, Animal Charity Evaluators or Rethink Charity. Is this in the works?
https://app.effectivealtruism.org/donations/new/organizations
counting our research as 0 value, and using the movement building impact estimates from LEAN, we come out well on EV compared to an average charity … I will let readers make their own calculations
Hi Geoff. I gave this a little thought and I am not sure it works. In fact it looks quite plausible that someone’s EV (expected value) calculation on Leverage might actually come out as negative (ie. Leverage would be causing harm to the world).
This is because:
Most EA orgs calculate their counterfactual expected value by taking into account what the people in the organisation would be doing otherwise, if they were not in that organisation, and then deducting this from their impact. (I believe at least 80K, Charity Science and EA London do this.)
Given Leverage’s tendency to hire ambitious altruistic people and to look for people at EA events it is plausible that a significant proportion of Leverage staff might well have ended up at other EA organisations.
There is a talent gap at other EA organisations (see 80K on this)
Leverage does spend some time on movement building but I estimate that this is a tiny proportion of its time, under 5%, best guess 3% (based on having talked to people at Leverage and also on comparing your achievements to date to the apparent 100 person-years figure).
Therefore if the proportion of staff who could be expected to have found jobs at other EA organisations is thought to be above 3% (which seems reasonable), then Leverage is actually displacing EAs from productive action, so the total EV of Leverage is negative.
Of course this is all assuming the value of your research is 0. This is the assumption you set out in your post. Obviously in practice I don’t think the value of your research is 0 and as such I think it is possible that the total EV of Leverage is positive*. I think more transparency would help here. Given that almost no research is available I do think it would be reasonable for someone who is not at Leverage to give your research an EV of close to 0 and therefore conclude that Leverage is causing harm.
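To make the arithmetic concrete, here is a rough back-of-the-envelope sketch. The 100 person-years and ~3% movement-building share are the figures mentioned above; the displacement share is purely an illustrative assumption of mine, not a figure from Leverage.

```python
# Back-of-the-envelope counterfactual EV sketch. All figures are rough
# illustrative assumptions, not Leverage's actual numbers.

total_person_years = 100        # the ~100 person-years figure mentioned above
movement_building_share = 0.03  # best-guess share of time spent on movement building
research_value = 0.0            # per the post's own "count research as 0" assumption
displacement_share = 0.10       # ASSUMED share of staff time that would otherwise
                                # have gone to other EA organisations

# Value produced, in person-year equivalents of typical EA-org output.
value_produced = total_person_years * movement_building_share + research_value

# Counterfactual cost: the output those staff would have produced elsewhere.
counterfactual_cost = total_person_years * displacement_share

net_ev = value_produced - counterfactual_cost
print(f"Net EV: {net_ev:+.1f} person-year equivalents")
# Negative whenever displacement_share exceeds movement_building_share (~3%),
# as long as research is valued at 0.
```

On these illustrative numbers the result is -7 person-year equivalents; the sign only flips if the research term is given substantial positive value.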
I hope this helps and maybe explains why Leverage gets a bad rep. I am excited to see more transparency and a new approach to public engagement. Keep on fighting for a better world!
*sentence edited to better match views
Hi Joey, thank you for writing this.
I think calling this a problem of representation is actually understating the problem here.
EA has (at least to me) always been a community that inspires, encourages and supports people to use all the information and tools available to them (including their individual priors, intuitions and sense of morality) to reach a conclusion about which causes and actions are most important for them to take to make a better world (and of course to then take those actions).
Even if 90% of experienced EAs / EA community leaders currently converge on the same conclusion as to where value lies, I would worry that a strong focus on that issue would be detrimental. We’d be at risk of losing the emphasis on cause prioritisation, arguably the most useful insight that EA has provided to the world.
We’d risk losing the ability to support people through cause prioritisation (coaching, EA or otherwise, should not pre-empt the answers or have ulterior motives).
We’d risk creating a community that is less able to switch its focus to the most important thing.
We’d risk stifling useful debate.
We’d risk creating a community that does not benefit from collaboration between people working in different areas.
etc
(Note: probably worth adding that if 90% of experienced EAs / EA community leaders converged on the same conclusion on causes, my intuition would be that this is likely to be evidence of founder effects / group-think as much as evidence for that cause. I expect this is because I see a huge diversity in people’s values and thinking, and a difficulty in reaching strong conclusions in ethics and cause prioritisation.)
Hi, a little late, but did you get an answer to this? I am not an expert but can direct this to people in EA London who can maybe help.
My very initial (non-expert) thinking was:
this looks like a very useful list of how to mitigate climate consequences through further investment in existing technologies.
this looks like a list written by a scientist, not a policy maker. Where do diplomatic interventions, such as “subsidise China to encourage them not to mine as much coal”, fall on this list? I would expect subsidies to prevent coal mining to be effective.
“atmospheric carbon capture” is not on the list. My understanding is that atmospheric carbon capture may be a necessity for mitigating climate change in the long run (by controlling CO2 levels), whereas everything else on this list is useful in the short-to-medium run but not strictly necessary.
Greg this is awesome—go you!!! :-D :-D
To provide one extra relevant reference class: I have let EAs stay for free / donations at my place in London to work on EA projects and on the whole was very happy I did so. I think this is worthwhile and there is a need for it (with some caution as to both risky / harmful projects and well intentioned free-riders).
Good luck registering as a CIO (not easy). Get in touch with me if you are having trouble with the Charity Commission. Note: you might need Trustees who are not going to live for free at the hotel (there are lots of rules against Trustees receiving any direct benefit from their charity).
Also if you think it could be useful for there to be a single room in London for Hotel guests to use for say business or conference attendance then get in touch.
For information. EA London has neither been funded by the EA Community Fund nor diligently considered for funding by the EA Community Fund.
In December EA London was told that the EA Community Fund was not directly funding local groups, as CEA would be doing that. (This seems to be happening, see: http://effective-altruism.com/ea/1l3/announcing_effective_altruism_community_building/)
Concerns about model uncertainty cut in both directions and I think the preponderance of probabilities favours SAI (provided it can be governed safely)
Good point. Agreed. Had not considered this
I tend to deflate their significance because SAI has natural analogues… volcanoes … industrial emissions.
This seems like flawed thinking to me. Data from natural analogues should be built into predictive SAI models. Accepting that model uncertainty is a factor worth considering means questioning whether these analogues are actually good predictors of the full effects of SAI.
(Note: LHC also had natural analogues in atmospheric cosmic rays, I believe this was accounted for in FHI’s work on the matter)
-
I think the main thing that model uncertainty suggests is that mitigation or less extreme forms of geoengineering should be prioritised much more.
Hi, can you give an example or two of an “announcement of a personal nature”? I do not think I have seen any posts that would fall into that category at any point.
Cheers
My very limited understanding of this topic is that climate models, especially of unusual phenomena, are highly uncertain, and therefore there is some chance that our models are incorrect. This means that SAI could go horribly wrong, not have the intended effects, or make the climate spin out of control in some catastrophic way.
The chance of this might be small but if you are worried about existential risks it should definitely be considered. (In fact I thought this was the main x-risk associated with SAI and similar grand geo-engineering exercises).
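One rough way to formalise this (my own illustrative framing, not something from the article): if p is the probability that our models are badly wrong and D is the damage in that scenario, then the catastrophic branch alone contributes

$$ \mathbb{E}[\text{harm}] \;\gtrsim\; p_{\text{model failure}} \times D_{\text{catastrophe}} $$

to the expected harm. So even a small p matters when D is on the scale of a global or existential catastrophe, and the expected benefits of SAI have to be weighed against this term rather than just against the most likely outcome.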
I admit I have not read your article (only this post) but I was surprised this was not mentioned and I wanted to flag the matter.
For a similar case see the work of FHI researchers Toby Ord and Anders Sandberg on the risks of the Large Hadron Collider (LHC) here: https://arxiv.org/abs/0810.5515 and I am reasonably sure that SAI models are a lot more uncertain than the LHC physics.
In general I would be very wary of taking definitions written for an academic philosophical audience and relying on them in other situations. Often the use of technical language by philosophers does not carry over well to other contexts
The definitions and explanations used here: https://www.effectivealtruism.org and here: https://whatiseffectivealtruism.com/ are, to my mind, better and more useful than the quote above for almost any situation I have been in to date.
ADDITIONAL EVIDENCE FOR THE ABOVE
For example I have a very vague memory of talking to Will on this and concluding that he had a slightly odd and quite broad definition of “welfarist”, where “welfare” in this context just meant ‘good for others’ without any implications of fulfilling happiness / utility / preference / etc. This comes out in the linked paper, in the line “if we want to claim that one course of action is, as far as we know, the most effective way of increasing the welfare of all, we simply cannot avoid making philosophical assumptions. How should we value improving quality of life compared to saving lives? How should we value alleviating non-human animal suffering compared to alleviating human suffering? How should we value mitigating risks ….” etc
This sounds like a really good project. You clearly have a decent understanding of the local political issues, a clear idea of how this project could map to other countries and prove beneficial globally, and a good understanding of how it plays a role in the wider EA community (I think it is good that this project is not branded as ‘EA’).
Here are a number of hopefully constructive thoughts to help you fine-tune this work. These may be things you thought about that did not make it into the post. I hope they help.
-
1.
DIFFERENCES BETWEEN EA AND CCC VALUES
As far as I can tell the CCC seems not to care much about scenarios with a small chance of a very high impact. On the whole the EA community does care about these scenarios. My evidence for this comes from the EA community’s concern for the extreme risks of climate change (https://80000hours.org/problem-profiles/climate-change/) and x-risks, whereas the CCC work on climate change that I have seen seems to ignore these extreme risks. I am unsure why there is this discrepancy. (Many EA researchers do not use a future discount rate for utility; does CCC?)
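To illustrate why the discount-rate question matters (a standard textbook formulation, not a claim about CCC’s actual methodology): with a pure time discount rate ρ, a stream of welfare U_t is valued as

$$ V \;=\; \sum_{t=0}^{T} \frac{U_t}{(1+\rho)^{t}} $$

With ρ = 0, welfare two centuries from now counts exactly as much as welfare today, which is why small-probability long-run catastrophes loom large in many EA analyses; with even a modest ρ = 0.03 the same welfare is down-weighted by a factor of roughly 1.03^200 ≈ 370, and such risks largely drop out of the ranking.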
This could be problematic in terms of the cause prioritisation research being useful for EAs, for building a relationship with this project and EA advocacy work, EA funding, etc, etc.
-
2.
ADVOCACY CHALLENGES
Sometimes the most important priorities will not be the ones that public will latch onto. It is unclear from the post:
2.1 how you intend to find a balance between delivering the messages that are most likely to create change versus saying the things you most believe to be true. And
2.2 how the advocacy part of this work might differ from work that CCC has done in the past. My understanding is that to date the CCC has mostly tried to deliver true messages to an international policy maker audience. Your post however points to the public sentiment as a key driving factor for change. The advocacy methods and expertise used in CCC’s international work are not obviously the best methods for this work.
-
3
SCOPE / META? IMPROVING INSTITUTIONS?
For a prioritisation research piece like this I could imagine the researcher might dive straight into looking at the existing issues on the political agenda and prioritising between those based on some form of social rate of return. However I think there are a lot of very high level questions that could be asked first, like:
• Is it more important to prevent the government making really bad decisions in some areas or to improve the quality of the good decisions?
• Is it more important to improve policy or to prevent a shift to harmful authoritarianism?
• How important is it to set policy that future political trends will not undo?
• How important is the acceptability, among policy makers and the public, of the policy being suggested?
Are these covered in the research?
Also, to what extent will the research look at improving institutional decision making? To be honest I would genuinely be surprised if the conclusion of this project was that the most high-impact policies were those designed to improve the functioning / decision making / checks and balances of the government. If you can cut corruption and change how government works for the better then the government will get more policies correct across the board in future. Is this your intuition too?
-
Finally to say I would be interested to be kept up-to-date with this project as it progresses. Is there a good way to do this? Looking forward to hearing more.
I found this article unclear about what you were talking about when you say “improving institutional decision making” (in policy). I think we can break this down into two very different things.
A: Improving the decision-making processes and systems of accountability that policy institutions use to make decisions, so that these institutions will more generally be better decision makers. (This is what I have always meant by and understood by the term “improving institutional decision making”, and what Jess talks about in her post you link to.)
B: Having influence in a specific situation on the policy making process. (This is basically what people tend to call “lobbying” or sometimes “campaigning”.)
I felt that the DFID story and the three models were all focused on B: lobbying. The models were useful for thinking about how to do B well (assuming you know better than the policy makers what policy should be made). Theoretical advice on lobbying is a nice thing to have* if you are in the field (so thank you for writing them up, I may give them some thought in my upcoming work). And if you are trying to change A it would be useful to understand how to do B.
The models were not very useful for advising on how to do A: improving how institutions work generally. And A is where I would say the value lies.
I think the main point is just about how easy the article was to read. I found the article itself very confusing as to whether you were talking about A or B at many points.
*Also, in general I think the field of lobbying is, as one might say, “more of an art than a science”, and although a theoretical understanding of how it works is nice, it is not super useful compared to experience in the field in the specific country that you are in.