Similar to not costing others' work, you can end up in situations where the same impact is counted multiple times across all the charities involved, giving an inflated picture of the total impact.
E.g. if Effective Altruism (EA) London runs an event and this leads to an individual signing the Giving What We Can (GWWC) pledge and donating more to charity, then EA London, GWWC and the individual may each take 100% of the credit in their impact measurement.
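To make the double-counting concrete, here is a toy sketch with made-up numbers (the £1,000 figure and the equal-split rule are purely illustrative assumptions; nothing here reflects how these organisations actually measure impact):

```python
# Hypothetical illustration (made-up numbers) of double-counting impact
# across organisations, as in the EA London / GWWC example above.
donation = 1000  # pounds actually donated by the individual

# Naive accounting: each party counts 100% of the donation as its impact.
claims = {"EA London": donation, "GWWC": donation, "individual": donation}
naive_total = sum(claims.values())
print(naive_total)  # 3000 -- three times the real impact

# One simple fix: split the credit equally among the parties involved
# (other attribution schemes, e.g. Shapley values, are also possible).
split_claims = {name: donation / len(claims) for name in claims}
split_total = sum(split_claims.values())
print(round(split_total))  # 1000 -- matches the actual donation
```

Naive summing reports £3,000 of impact for £1,000 actually donated; any credit-splitting scheme whose shares sum to 100% avoids the inflation.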
Also, I do plan to write this up as a top-level post soon.
It is an interesting suggestion; I had not come across the idea before, and it is great to have people thinking of innovative new policy ideas. I agree that this idea is worth investigating.
I think my main point to add is just to set out the wider context. It is worth people who are interested in this being aware that there is already a vast array of tried and tested policy solutions that are known to encourage more long-term thinking in governments. I would lean towards the view that almost all of the ideas I list below: have very strong evidence of working well, would be much easier to push for than age-weighted voting, and would have a bigger effect size than age-weighted voting.
Here's the list (with an example of evidence it helps in brackets):
* Longer election cycles (UK compared to Australia)
* A non-democratic second house (UK House of Lords)
* A permanent neutral civil service (as in the UK)
* An explicit statement of policy intent setting out a consistent cross-government view that policy makers should think long term.
* A formal guide to best practice on discounting, or on how to make policy that balances the needs of present and future generations (UK Treasury Green Book, but more long-term focused)
* An independent Office for Future Generations, or similar, with a responsibility to ensure that Government is acting in a long-term manner (as in Wales)
* Independent government oversight bodies (UK's National Audit Office, but more long-term focused)
* Various other combinations of technocracy and democracy, where details are left to experts (UK's Bank of England, Infrastructure Commission, etc.)
* A duty on Ministers to consider the long term (as in Wales)
* Horizon scanning and foresight skills, support, tools and training brought into government (UK Gov Office for Science)
* Risk management skills, support, tools and training brought into government (this must happen somewhere, right?)
* Good connections between academia, science and government (UK Open Innovation Team)
* A government body that can support and facilitate others in government with long-term planning (UK Gov Office for Science, but ideally more long-term focused)
* Transparency of long-term thinking, through publication of statistics, impact assessments, etc. (e.g. UK Office for National Statistics)
* Additional democratic oversight of long-term issues (UK parliamentary committees)
* Legislatively binding long-term targets (UK's climate change laws)
* Rules forcing Ministers to stay in position longer (untested, to my knowledge)
* Being a dictatorship (China; it does work, although I don't recommend it)
I hope to find time to do more work to collate suggestions and the evidence for them and do a thorough literature review. (If anyone wants to volunteer to help then get in touch.) Some links here. My notes are at: https://docs.google.com/document/d/1KGLc_6bKhi5ClZPGBeEQIDF1cC4Dy8mo/edit#heading=h.mefn6dbmnz2
See also: http://researchbriefings.files.parliament.uk/documents/LLN-2019-0076/LLN-2019-0076.pdf
As an aside, I have a personal little bugbear with people focusing on the voting system when they try to think about how to make policy work. It is a tiny, tiny part of the system, and one where evidence of how to do it better is often minimal and tractability to change is low. I have written about this here: https://forum.effectivealtruism.org/posts/cpJRB7thJpESTquBK/introducing-gpi-s-new-research-agenda#Zy8kTJfGrY9z7HRYH
Also, my top tip for anyone thinking about tractable policy options is to start by asking: do we already know how to make significant steps to solve this problem from existing policy best practice? (I think in this case we do.)
Hi, I’m curious, what are the main aims, expectations and things you hope will come from this call out? Cheers
Hi Jade. I disagree with you. I think you are making a straw man of “regulation” and ignoring what modern best practice regulation actually looks like, whilst painting a rosy picture of industry led governance practice.
Regulation doesn’t need to be a whole bunch of strict rules that limit corporate actors. It can (in theory) be a set of high level ethical principles set by society and by government who then defer to experts with industry and policy backgrounds to set more granular rules.
These granular rules can be strict rules that limit certain actions, or can be 'outcome focused regulation' that allows industry to do what it wants as long as it is able to demonstrate that it has taken suitable safety precautions, or can involve assigning legal responsibility to key senior industry actors to help align the incentives of those actors. (Good UK examples include the HFEA and the ONR.)
Not to say that industry cannot or should not take a lead in governance issues, but that Governments can play a role of similar importance too.
David. This is great.
Your newsletters (as well as the updates) also have a short story on what one EA community person is doing to make the world better. Why not include those here too?
I very much like the idea of an independent impact auditor for EA orgs.
I would consider funding or otherwise supporting such a project. If anyone is working on this, get in touch...
One solution that happens already is radical transparency.
GiveWell and 80,000 Hours both publicly write about their mistakes. GiveWell have in the past posted vast amounts of their background working online. This level of transparency is laudable.
There is a very obvious upside to sleeping less: when you are not asleep you are awake and when you are awake you can do stuff.
On a very quick glance, the economic analysis referenced above (and the quotes from Why We Sleep) seems to ignore this. If, as Khorton says, a person is missing sleep to raise kids or work a second job, then this benefits society.
This omission makes me very sceptical of the analysis on this topic.
Just to note that there’s been some discussion on this on Facebook: https://m.facebook.com/groups/437177563005273?view=permalink&id=2251872561535755
This is amazing. Great work for everyone who inputted.
Was thinking that a possible future feature (although perhaps not a priority) would be integration with the EA Funds donation tracking and maybe LinkedIn profile data.
Your videos are great.
I am sure there is space for content creators to be having a powerful impact on the world. Not entirely sure how but I did want to flag that the Long Term Future EA Fund has just given a $39,000 grant to a video producer: https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions .
Maybe get in touch or have a look into what was successful there (I get the impression that they found an important area where there was otherwise a lack of good video content).
Suggestion: I have found in-person feedback to be useful alongside surveys. I suggest making a bit of effort to talk to people in person, especially if it is friends you see anyway, and including this data in a final impact estimate.
There are maybe 100+ other steps to policy that are just as important. In rough chronological order I started listing some of them below (I got bored part way through and stopped at what looks like 40 points).
I have aimed for all of these issues to be at a roughly similar order of magnitude of importance. The scale of these issues will vary from country to country, and the tractability of trying to change them will vary with time and from individual to individual.
Overall I would say that voting reform is not obviously more or less important than the other 100+ things that could be on this list (although I guess it is often likely to be somewhere in the top 50% of issues). There is a lot more uncertainty about what the best voting mechanisms look like than for many of the other issues on the list. It is also an issue that may be hard to change compared to some of the others.
Either way, voting reform is a tiny part of an incredibly long process, a process with some huge areas for improvement in other parts.
constitution and human rights and setting remits of political powers to change fundamental structures of country
devolution and setting remits of central political powers verses local political bodies
electoral commission body setting or adjusting borders of voting areas / constituencies
initial policy research by potential candidates (often with very limited resources)
manifesto writing (this is hugely important to set the agenda and hard to change)
public / parties choosing candidates (often a lot of internal party squabbling behind the scenes)
campaign fundraising (maybe undue influences)
campaigning and information spreading (maybe issues with false information)
tackling voter apathy / engagement
coalition forming (often very opaque)
government/leader assigns topic areas to ministers / seniors (very political, evidence that understanding a topic is inversely proportional to how long a minister will work on that topic)
CIVIL SERVICE STAFFING
hiring staff into government (hiring processes, lack of expertise, diversity issues)
how staff in government are managed (values, team building, rewards, progression, diversity)
how staff in government are trained (feedback mechanisms, training)
splitting out areas where political leadership is needed and areas where technocratic leadership is needed
designing clear mechanisms of accountability to topics so that politicians and civil servants are aware of what their responsibilities are and can be held to account for their actions (this is super important)
ensuring political representation so each individual has direct access to a politician who is accountable for their concerns
putting in place systems that allow changes to the system if an accountability mechanism is not working
ensuring accountability for unknown unknown issues that may arise
how poor performance of political and civil staff is addressed (poor performance procedures, whistleblowing)
how corruption is rooted out and addressed (yes there is corruption in developed countries)
mechanisms to allow parties / populations to kick out bad leaders if needed
ensuring mechanisms for cross-party dialogue and that the partisanship of politics does not lead to distortions of truth
AGENDA SETTING AND INITIAL RESEARCH
carrying out research to understand what the policy problems are (often unclear how to do this)
understanding what the population wants (public often ignored, need good procedures for information gathering, public consultation, etc)
Development of policy options to address problems
Mechanisms for Cost Benefit Analysis and Impact Assessments to decide best policy options
access to expert advice and best practice (lack of communication between academia and policy)
measuring impact of a policy proposal once in place (ensuring that mechanisms to measure impact are initiated at the very start of the policy implementation)
actually using that information on impact once it is collected
how politicians are allowed to change their mind given new evidence (updating is often seen as weakness)
mechanisms to ensure issues that are not politically immediately necessary are tackled (lack of long term thinking)
flexibility to deal with shocks of every step of the above process (often lacking)
transparency of every step of the above process (often lacking)
Another thing to consider is that, given climate modelling is so imprecise and regularly flawed, our models may be wrong and the risk may be significantly different from predicted.
(Similar to some of Toby’s stuff on the Large Hadron Collider risks: http://blog.practicalethics.ox.ac.uk/2008/04/these-are-not-the-probabilities-you-are-looking-for/)
This could go both ways.
This is really, really impressive. An amazing collection of really important questions.
POSITIVES. I like the fact that you intend to research:
* Institutional actors (2.8). Significant changes to the world are likely to come through institutional actors, and the EA community has largely ignored them to date. The existing research has focused so much on the benefits of marginal donations (or marginal research) that our views on cause prioritisation cannot be easily applied to states. As someone into EA in the business of influencing states, this is a really problematic oversight of the community to date, one that we should be looking to fix as soon as possible.
* Decision-theoretic issues (2.1).
* The use of discount rates. This is practically useful for decision makers.
OMISSIONS. I did however note a few things that I would have expected to be included that are not mentioned in this research agenda. In particular there was no discussion of:
* Useful models for thinking about and talking about cause prioritisation. In particular, the scale, neglectedness and tractability framework is often used and often criticised. What other models can or should be used by the EA community?
* Social change. Within section 1 there is some discussion of broad versus narrow future-focused interventions, so I would have expected a similar discussion in section 2 of social change interventions versus targeted interventions in general. This was not mentioned.
* Which risks to the future are most concerning. (Although I assume this is because those topics are being covered by others, such as FHI.)
CONCERN. Like I said above, I think the questions within 2.8 are really important for EA to focus on. I hope that the fact it is low on the list does not mean it is not prioritised. I also note that there is a sub-question in 2.8 on "what is the best feasible voting system". I think this issue comes up too much and is often a distraction.
It feels like a minor sub-part of the question on "what is the optimal institution design", which people gravitate to because it is the most visible part of many political systems, but it is really unlikely to be the thing on the margin that most needs improving.
I hope that helps, Sam
CEA run the EA Community Fund to provide financial support to EA community group leaders.
The key metric that CEA use for evaluating the success of the groups they fund is the number of people from each local group who reach the interview stage for high-impact jobs, which largely means jobs within EA organisations. Bonus points are available if they get the job.
This information feels like a relevant piece of the puzzle for anyone thinking through these issues. It could be that, in hindsight, CEA pushing chapter organisers to push people to focus on jobs in EA organisations might not be the best strategy.
I found this article unclear about what you were talking about when you say “improving institutional decision making” (in policy). I think we can break this down into two very different things.
A: Improving the decision-making processes and systems of accountability that policy institutions use to make decisions, so that these institutions will more generally be better decision makers. (This is what I have always meant and understood by the term "improving institutional decision making", and what Jess talks about in her post you link to.)
B: Having influence in a specific situation on the policy making process. (This is basically what people tend to call “lobbying” or sometimes “campaigning”.)
I felt that the DFID story and the three models were all focused on B: lobbying. The models were useful for thinking about how to do B well (assuming you know better than the policy makers what policy should be made). Theoretical advice on lobbying is a nice thing to have* if you are in the field (so thank you for writing them up, I may give them some thought in my upcoming work). And if you are trying to change A it would be useful to understand how to do B.
The models were not very useful for advising on how to do A: improving how institutions work generally. And A is where I would say the value lies.
I think the main point is just on how easy the article was to read. I found the article itself was very confusing as to if you were talking about A or B at many points.
*Also, in general I think the field of lobbying is, as one might say, "more of an art than a science", and although a theoretical understanding of how it works is nice, it is not super useful compared to experience in the field in the specific country that you are in.
I would be curious about any views or research you may have done into geoengineering risk?
My understanding is that climate change is not itself an existential risk but that it may lead to other risks (such as war, which Peter Hurford mentions). One other risk is geoengineering: humanity starts thinking it can control planetary temperatures and makes a mistake (or the technology is used maliciously), and that presents a risk.
Just to flag that the case for this is much much weaker outside the USA.
The matching limits for donations outside the US are much lower, and you may also lose the tax benefits of donating.
Hi Kerry, thank you for the call. I wrote up a short summary of what we discussed. It has been a while since we talked, so this is not perfect. Please correct anything I have misremembered.
~ ~ Setting the scene ~ ~
CEA should champion cause prioritisation. We want people who are willing to pick a new cause based on evidence and research and a community that continues to work out how to do the most good. (We both agreed this.)
There is a difference between “cause impartiality”, as defined above, and “actual impartiality”, not having a view on what causes are most important. (There was some confusion but we got through it)
There is a difference between long-termism as a methodology, where one considers the long-run future impacts of actions, which CEA should 100% promote, and long-termism as a conclusion, that the most important thing to focus on right now is shaping the long-term future of humanity. (I asserted this; not sure you expressed a view.)
A rational EA decision maker could go through a process of cause prioritisation and very legitimately reach different conclusions as to what causes are most important. They may have different skills to apply or different ethics (and we are far away from solving ethics if such a thing is possible). (I asserted this, not sure you expressed a view.)
~ ~ Create space, build trust, express a view, do not be perfect ~ ~
The EA community needs to create the right kind of space so that people can reach their own decision about what causes are most important. This can be a physical space (a local community) or an online space. People should feel empowered to make their own decisions about causes. This means that they will be more adept at cause prioritisation, more likely to believe the conclusions reached and more likely to come to the correct answer for themselves, and EA is more likely to come to a correct answers overall. To do this they need good tools and resources and to feel that the space they are in is neutral. This needs trust...
Creating that space requires trust. People need to trust the tools that are guiding and advising them. If people feel they are being subtly pushed in a direction they will reject the resources and tools being offered. Any sign of a breakdown of trust between people reading CEA's resources and CEA should be taken very seriously.
Creating that space does not mean you cannot also express a view. You just want to distinguish when you are doing this. You can create cause prioritisation resources and tools that are truly neutral but still have a separate section on what answers CEA staff reach, or what CEA's answer is.
Perfection is not required as long as there is trust and the system is not breaking down.
For example, providing policy advice: I gave the example of writing advice, as a civil servant, to a Gov Minister on a controversial political issue. The first ~85% of this imaginary advice has an impartial summary of the background and the problem, and then a series of suggested actions with evaluations of their impact. The final ~15% has a recommended action based on the civil servant's view of the matter. The important thing here is that there generally is trust between the Minister and the Department that advice will be neutral, and that in this case the Minister trusts that the section/space setting out the background and possible actions is neutral enough for them to make a good decision. It doesn't need to be perfect; in fact the Minister will be aware that there is likely some amount of bias, but as long as there is sufficient trust that does not matter. And there is a recommendation, which the Minister can choose to follow or not. In many cases the Minister will follow the recommendation.
~ ~ How this goes wrong ~ ~
Imagine someone who has identified cause X, which is super important, comes across the EA community. You do not want the community to be so focused on one cause that this person is either put off, or is persuaded that the current EA cause is more important and forgets about cause X.
I mentioned some of the things that damage trust (see the foot of my previous comment).
You mentioned you had seen signs of tribalism in the EA community.
~ ~ Conclusion ~ ~
You said that you saw more value in CEA creating a space that was “actual impartial” as opposed to “cause impartial” than you had done previously.
~ ~ Addendum: Some thoughts on evidence ~ ~
Not discussed but I have some extra thoughts on evidence.
There are two areas of my life where much of what I have learned points towards the views above being true.
Coaching. In coaching you need to make sure the coachee feels like you are there to help them not in any way with you own agenda (that is different from theirs).
Policy. In policy making you need trust and neutrality between Minister and civil servant.
There is value in following perceived wisdom on a topic. That said, I have been looking out for any strong evidence that these things are true (e.g. that coaching goes badly if the coachee thinks you are subtly biased one way or another) and I have yet to find anything particularly persuasive. (Counterpoint: I know one friend who knows their therapist is overly biased towards pushing them to have additional sessions, but this does not put them off attending or mean they find it less useful.) Perhaps this deserves further study.
Also worth bearing in mind that there may be dissimilarities between what CEA does and the fields of coaching and policy.
Also worth flagging that the example of policy advice given above is somewhat artificial; some policy advice (especially where controversial) is like that, but much of it is just: "please approve action x".
In conclusion my views on this are based on very little evidence and a lot of gut feeling. My intuitions on this are strongly guided by my time doing coaching and doing policy advice.