Nice videos. Well done.
I thought the first one was really good – very impressive!!
Going to type and think at the same time – let's see where this goes (sorry if this ends up being a long reply).
Well firstly, as long as you still have a non-zero chance of the universe not being infinite, then I think you will avoid most of the paradoxes mentioned above (zones of happiness and suffering, locating value and rankings of individuals, etc.). But it sounds like you are claiming you still get the “infinite fanatics” problems.
I am not sure how true this is. I find it hard to think through what you are saying without a concrete moral dilemma in my head. I don’t on a daily basis face situations where I get to create universes with different types of physics. Here are some (not very original) stories that might capture what you are suggesting could happen.
1. Let’s imagine a Pascal’s mugging situation.
A stranger stops you in the street and says: give me $5 or I will create a universe of infinite sadness.
2. A rats-on-heroin type situation. Imagine we are in a world where:
Scientists believe with very high certainty that the universe will eventually undergo heat death and utility will stop.
You have a device that will tile the entire universe with rats on heroin (or something else that maximises utility) until the heat death of the universe, and people agree that is a good thing. But using it would stop scientific research.
An infinite fanatic might say: don’t use the device. It sounds good, but if we keep doing science there is an extremely small chance we can prove our current scientific view of the universe wrong and find a way to create infinite joy, which is bigger than an entire universe of joy.
Feel free to suggest a better story if you have one.
These do look like problems for utilitarianism that involve infinities.
But I am not convinced that they are problems to do with infinite ethics. They both seem to still arise if you replace the “infinite” with “Graham’s number” or “10^100” etc.
But I already think that standard total utilitarianism breaks down quite often, especially in situations of uncertainty or hard-to-quantify credences. Utilitarian philosophers don’t even agree on whether preventing extinction risks should be a priority (for, against), even using finite numbers.
Now I might be wrong, I am not a professional philosopher with a degree in making interesting thought experiments, but I guess I would say that all of the problems in the post above EITHER make no more sense than saying “oh look, utilitarianism doesn’t work if you add in time-travel paradoxes”, or something like that, OR are reducible to problems with large finite numbers or high uncertainties. So considering “infinities” does not itself break utilitarianism (which is already broken).
I would disagree.
Let me try to explain why by turning your argument back on itself. Imagine with me for a minute that we live in a world where the vast majority of physicists believe in a big bounce and/or infinite time, etc.
OK, got that? Now consider:
The infinite ethics problems still do not arise as long as you have any non-trivial credence in time being finite. For more recent consequences always dominate later ones, as long as the latter have any probability above 0 of not happening.
Moreover, you should have such a non-trivial credence. For example, although we have pretty good evidence that the universe is not going to suddenly end in a false vacuum decay scenario, it’s certainly not totally ruled out (definitely not to the point where you should have credence of 1 that it doesn’t happen). Plenty of cosmologists are still kicking around that and other universe-ending cosmologies, which do theoretically allow for literally finite effects from individual actions, even if they’re in the minority.
Basically, even if time went on forever, as long as we have a >0 credence that it will stop at some point, then we would prefer w1 to w2, where:
Time  t1  t2  t3  t4  t5  t6  t7  ...
w1    +1  +1  +1  +1  +1  +1  +1  ...
w2    +1  +1  +1  +1   0   0   0  ...
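(A rough way to make that dominance claim explicit – my own notation, not anything from the original post – is to let T be the time at which utility stops, allowing T to be infinite:

$$U(w_1 \mid T) = T, \qquad U(w_2 \mid T) = \min(T, 4)$$

So w1 is at least as good as w2 for every possible value of T, and strictly better whenever T > 4. Any credence above 0 in T > 4 is therefore enough to prefer w1, without ever having to compare two infinite totals.)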
So no infinite ethics paradoxes!!
YAY, we can stop worrying. [I should add this is not the most thorough explanation – it mostly amused me to reverse your argument. See also djbinder’s comment and my reply to that comment for a slightly better explanation of why (in my view) physics does not allow infinite paradoxes (any more than it allows time-travel paradoxes).]
Hi. I have a bunch of notes on how to start a UK registered charity that I can share on request if useful to people. Message me if needed.
One worry I have is the possibility that the longtermist community (especially the funders) is actively repelling and pushing away the driver types – people who want to dive in and start doing (Phase 2 type) things.
This is my experience. I have been pushing forward Phase 2 type work (here) but have been told various things like: not to scale up, that Phase 1.5 work is not helpful, that we need more research first to know what we are doing, that any interaction with the real world is too risky. Such responses have helped push me away. And I know I am not the only one (e.g. the Longtermist Entrepreneurship Project team seemed to worry about this feature of longtermist culture too).
Not quite sure how to fix this. Maybe FTX will help. Maybe we should tell entrepreneurial/policy folk not to apply to the LTFF or other Phase 2 sceptical funders. Maybe just more discussion of the topic.
PS. Owen, I am glad you are part of this community and thinking about these things. I thought this post was amazing. So thank you for it. And great reply, John.
I came to say the same thing. I was (not that long ago) working on longtermist stuff and donating to neartermist stuff (animal welfare). I think this is not uncommon among people I know.
Sorry to hear you are struggling. It is difficult, so do look after yourself. I am sure those around you in EA really appreciate and value what you are doing and that you are not at all being net-negative – do talk to the people you know if you feel like that.
Some extra links that might be of use to you:
Article on the desire for impact: It’s supposed to feel like this: 8 emotional challenges of altruism
EA Peer Support Facebook group – a place to go and talk to others if you are struggling: https://www.facebook.com/groups/ea.peer.support
I thought this post was wonderful – very interestingly written, thoughtful, and insightful. Thank you for writing it. And good luck with your next steps of figuring out this problem. It makes me want to write something similar: I have been in EA circles for a long time now and to some degree have also failed to form strong views on AI safety. Also, I thought your next steps were fantastic and very sensible – I would love to hear your future thoughts on all of those topics.
On your next steps, picking up on:
To evaluate the importance of AI risk against other x-risk I should know more about where the likelihood estimates come from.
I was thinking of something similar to compare bio risk, AI risk, and unknown unknown risks. However, I was thinking that if I was putting time into this I would not focus solely on understanding the likelihood estimates but would look for a broad range of evidence. E.g. on AI and bio you could compare the risks by looking at: what the limitations are on what AI/bio systems are able to do, what experts in each field think of the risks, whether there are good historical analogues for each risk type, how convincing the case studies are of the best things people are doing to prevent risk from AI/bio, how the topic looks on a scale / neglectedness / tractability comparison, etc.
Anyway just my thoughts on this research topic. Do reach out if you dive into that direction and want to discuss more.
my default hypothesis is that you’re unconvinced by the arguments about AI risk in significant part because you are applying an unusually high level of epistemic rigour
This seems plausible to me, based on:
The people I know who have thought deeply about AI risk and come away unconvinced often seem to match this pattern.
I think some of the people who care most about AI risk apply a lower level of epistemic rigour than I would, e.g. some seem to have much stronger beliefs about how the future will go than I think can be reasonably justified.
One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism
Agree.
Fully agree we need new hard-to-fake signals. Ben’s list of suggested signals is good. Other things I would add are being vegan and cooperating with other orgs / other worldviews. But I think we can do more as well as increasing the signals. Other suggestions of things to do are:
Testing for altruism in hiring (and promotion) processes. EA orgs could put greater weight on various ways to test or look for evidence of altruism and kindness in their hiring processes. There could also be more advice and guidance for newer orgs on the best ways to look for and judge this when hiring. Decisions to promote staff should seek feedback from peers and direct reports.
Zero tolerance for funding bad people. Sometimes an org might be tempted to fund or hire someone they know / have reason to expect is a bad person, or is primarily seeking power or prestige rather than impact. Maybe this person has relevant skills and can do a lot of good. Maybe on a naïve utilitarian calculus it looks good to hire them, as we can pay them for impact. I think there is a case to be heavily risk-averse here and avoid hiring or funding such people.
Accountability mechanisms. Top example: external impact reviews of organisations. This could provide a way to check for and discourage any corruption / excess / un-cooperativeness. Maybe an EA whistleblowing system (but maybe not needed). Maybe more accountability checking and feedback for individuals in senior roles in EA orgs (not so sure about this, as it can backfire).
So far the community seems to be doing well. Yet EA is gaining resources and power, and power has been known to corrupt. So let’s make sure we build in mechanisms so that doesn’t happen to our wonderful community.
(Thanks to others in discussion for these ideas)[edited]
I think for difficult questions it is helpful to form both an inside view (what do I think) and an outside view (what does everyone else think). Pay is an indicator of the outside view. In an altruistic market how good an indicator it is depends on how much you trust a few big grantmakers to be making good decisions.
Hi Lauren, This post was fantastic!!! An incredibly well researched and well written look at a really important topic. I think it is amazing to see things like this on the EA Forum and I am sure it will be useful to people. (For example, talking for myself, reading this and getting a better understanding of the scale of this issue makes it more likely that I will nudge Charity Entrepreneurship (where I work) to look into this area in future.)
In the spirit of trying to provide useful feedback, a suggestion and a question:
A suggested intervention
Police reform in LMICs, including better policing and more trust in police and in law courts.
The idea here would be to reduce the number of incidents of violence and to improve the trusted mechanisms for dealing with those incidents, to prevent violence escalating. I think there might be some work by the Copenhagen Consensus on what this might look like and the evidence base, and maybe some OpenPhil internal stuff on corruption prevention.

Question / research suggestion
It would be interesting to know which, if any, of the interventions you list you think would be most useful for preventing conflicts beyond civil wars. For example, it would be interesting to get a sense of whether there are interventions that might be good for both (neartermist) global development and (longtermist) preventing global catastrophic risks, or whether the two topics are best treated wholly separately.
Extra ideas for the idea list:
Altruistic perks, rather than personal perks. E.g. 1: turn up at this student event and get $10 donated to a charity of your choice. E.g. 2: donation matching schemes mentioned in job adverts, perhaps funded by offering slightly lower salaries. Anecdotally, the first EA-ish event I went to offered both money to charity for each attendee and free wine; it was the money to charity that attracted me to go, and the free wine that attracted my friend – and I am still here and they are not involved.
Frugality options, like an optional version of the above idea. E.g. 1: when signing up to an EA event the food options could be: “[ ] vegan, [ ] nut free, [ ] gluten free, [ ] frugal – will bring my own lunch, please donate the money saved to charity x”. E.g. 2: jobs could advertise that the organisation offers salary sacrifice schemes that some employees take. I don’t know how well this would work but would be interested to see a group try. Anecdotally, I know some EAs in well-paid jobs take lower salaries than they are offered, but I don’t think this is well known.
Also, for what it is worth, I was really impressed by the post. It was a very well-written, clear, and transparent discussion of this topic, with clear actions to take.
One challenge you might find with examining the literature in my space is a lack of prioritisation – in particular I think this leads to an overly strong focus on voting mechanisms above other issues.
To me it feels like how animal charities focus mostly on pets*. Sure, pets are the most obvious animals that we engage with in our daily lives, but the vast majority of animal suffering happens on farms. Sure, voting is the most obvious part of the system that we engage with in our daily lives, but the vast majority of system improvements are more behind the scenes.
I don’t think voting mechanisms are more important than other governance issues such as: constitutions, the ability to kick out corrupt leaders, judiciary independence, the way political leaders pick ministers, or about 100 other aspects of governance. I would be interested in someone making the case for voting mechanisms being more important than other aspects of governance but I have honestly never seen anyone even trying to prioritise along these lines.
* and donkeys. So much money donated to donkeys. I have never really understood why.
For “empirical research”
The thing I have found most useful is the work of the UK’s Institute for Government – both their reports and their podcasts. I often find I pick up useful things on ideal system design, e.g. it may well be that a mix of private and public services is better than 100% one or the other, as you can compare the two, see which is working better, and take best practice from both (this was from their empirical work on prisons). The caveat is that if you are not into UK policy there may be too much context to wade through to reach the interesting conclusions. But worth a look.

Also, when looking into the ideal governance structures for AI companies I think I found it very useful to look at the nuclear system. Civil nuclear risk is surprisingly (compared to other areas of policy I have experience of) well managed at the international level, the regulatory level (in the UK), and the company level. And it is a hard topic, because the aim is to stop the one very bad and very unlikely scenario of a major meltdown. Nuclear is obviously better understood than AI alignment, but interesting nevertheless. Not sure of the best reading on this, but perhaps guidance notes from the IAEA or the ONR.
[I have thoughts to add on brainstorming but might have to add that at another time]
Hello. Ex-community builder here sharing my two cents. Some ideas you might want to consider are:
Supporting people leaving the field to stay on as mentors/advisers/trustees. I stopped full-time community building in London in 2017 but have stayed on in an advisory/trustee capacity for EA London ever since. Boosting the status of this, and making it easy and fun for people to do (or expecting this of people), would help future community builders have someone to talk to on a regular basis who knows their region/community and can offer support.
Try hiring people mid-career. I have noticed a trend of mid-career-ites who have got bored of their jobs or made their money/prestige points and want to move on to something else. They are often keen for very interesting work and/or more impact, and I think might be willing to stick around for longer – there is less pressure to try out other things at that stage; you’ve already done it. The one mid-career community builder I know (David at EA London) has stayed in the role for about 5 years now and is still going strong (♥).
Support with the boring tasks. Each person will have different boring tasks, but whether it is fixing a website or doing taxes, support would be nice.
Good luck with this. Looks like you have many things to try!!
Potentially relevant post here: https://forum.effectivealtruism.org/posts/GFkzLx7uKSK8zaBE3/we-need-more-nuance-regarding-funding-gaps.
The post author makes the claim that there is lots of funding for big global poverty orgs but less for smaller, newer, innovative orgs, whereas farmed animal welfare and AI have more funding available for small new projects and individuals.
This could mean that just looking at the total amount of funding available is not a complete measure of how prioritised an area is.
I might be in the minority here, but I liked the style this post was written in, emotive language and all. It was flowery language, but that made it fun to read, and I did not find it alarmist (e.g. it clearly says “this problem has yet to become an actual problem”).
And more importantly, I think the EA Forum is already a daunting place, and it is hard enough for newcomers to post here without having to face everyone upvoting criticisms of their tone / writing style / post title. It is not a perfect post (I think there is a very valid critique in what Stefan says, that the post could have benefited from linking to some examples / evidence), but not everything here needs to be in perfect EA-speak. Especially stuff from newcomers.
So welcome CitizenTen. Nice to have you here and to hear your views. I want to say I enjoyed reading the post (don’t fully agree tho) and thank you for it. :-)
https://www.cser.ac.uk/media/uploads/files/Risk_Management_in_the_UK_Final1.pdf
See also other related work:
https://forum.effectivealtruism.org/posts/wyHjpcCxuqFzzRgtX/a-practical-guide-to-long-term-planning-and-suggestions-for
https://www.longtermresilience.org/futureproof (P31-42)
https://forum.effectivealtruism.org/posts/znaZXBY59Ln9SLrne/how-to-think-about-an-uncertain-future-lessons-from-other (old)