Astrobiologist @ Imperial College London, Mars2020
Partnerships Coordinator @ SGAC Space Safety and Sustainability Project Group
Interested in: Space Governance, Great Power Conflict, Existential Risk, Cosmic threats, Academia, International policy
Hello! I’m just here to introduce myself, as I think I’m a bit of an unusual effective altruist. I am an astrobiologist, and my research focuses on the search for life on Mars. Before discovering effective altruism I was always very interested in the long-term future of life in the context of looming existential risks. I thought the best thing to do was to send life to other planets so that it would survive in a worst-case scenario. But a master’s degree later, I got into effective altruism and decided that this cause was a 10/10 on importance, 10/10 on neglectedness, and 1/10 on tractability.
So my focus has changed more recently as I progress through my PhD. I’m interested in the moral implications of astrobiology, as it plays a really important role at the core of longtermism. There are a few moral implications that depend on the outcomes of astrobiology research:
Research conclusion: The universe and our Solar System are full of habitable celestial bodies. Moral implication: The number of potential future humans is huge in the long term future, so we ought to protect these people through research into existential risks.
Research conclusion: The universe seems to be empty of life. Moral implication: Life on Earth is extremely valuable, so ensuring its survival should be the highest moral priority.
Research conclusion: Planets like Earth are extremely rare and far away. Moral implication: “there is no planet B”, so we ought to protect our Earth for the next ~1,000 years as there is no backup plan.
I plan to investigate these ideas further and see where they lead me. I think that as I progress in my career, I can draw on philosophy and outreach to help people understand the long-term perspective and inspire action towards tackling existential risks. Great to meet you all!
Very interesting post! I hadn’t thought about animal welfare in Africa before. I guess the first thing I (and most others) think of when imagining philanthropy in Africa is human poverty and disease. With this perspective in mind, I wonder how it affects funding sources from the Western world… Getting money out of the “animal welfare budget” for this will probably work! But getting money out of the “philanthropy in Africa” budget will, I imagine, be very challenging.
Yeah, though I only became aware of it after getting involved with EA. It seems to fit in the group of very interesting philosophical space things that are scary and that I have no idea what to do about! Joining the club with the Great Filter, the Doomsday Argument, and the Fermi Paradox.
Hi Felix. Thanks for the welcome and the introduction to @Ekaterina_Ilin :)
I am a researcher in the space community and I recently wrote a post introducing the links between outer space and existential risk. I’m thinking about developing this into a sequence of posts on the topic. I plan to cover:
Cosmic threats—what are they, how are they currently managed, and what work is needed in this area. Cosmic threats include asteroid impacts, solar flares, supernovae, gamma-ray bursts, aliens, rogue planets, pulsar beams, and the Kessler Syndrome. I think it would be useful to provide a summary of how cosmic threats are handled, and determine their importance relative to other existential threats.
Lessons learned from the space community. The space community has been very open with data sharing, and the utility of this for tackling climate change, nuclear threats, ecological collapse, animal welfare, and global health and development cannot be overstated. I may also include the perspective shifts offered by views of Earth from above and the limitless potential that space shows us.
How to access the space community’s expertise, technology, and resources to tackle existential threats.
The role of the space community in global politics. Space plays a big role in preventing great power conflicts and in building international institutions and connections. With the space community growing rapidly in recent years, I’d like to provide a briefing on the role of space internationally to help people working on policy and conflict.
Would a sequence of posts on space and existential risk be something that people would be interested in? (Please agree- or disagree-vote on this post.) I haven’t seen much on space on the forum (apart from space governance), so it would be something new.
Hi Matt. Sorry I missed your post and thanks for getting in touch! Your research sounds very interesting, I’ve messaged you directly :)
Just curious, why did you decide not to tackle AI risks? That seems like it would be a more natural progression given your interest in existential risk and experience with programming.
I searched Google for “gain of function UK”, and the first hit was a petition to ban gain-of-function research in the UK that got only 106 signatures out of the 10,000 required.
How did this happen? Should we try again?
Awesome, thanks for sharing!
Greetings! I’m a doctoral candidate and I have spent three years working as a freelance creator, specializing in crafting visual aids, particularly of a scientific nature. However, I’m enthusiastic about contributing my time to generate visuals that effectively support EA causes.
Typically, my work involves producing diagrams for academic grant applications, academic publications, and presentations. Nevertheless, I’m open to assisting with outreach illustrations or social media visuals as well. If you find yourself in need of such assistance, please don’t hesitate to get in touch! I’m happy to hop on a Zoom chat.
Thank you for these updates! They are super useful for me as someone who is just starting to get more involved with EA. The updates are really helping me get a good overview of what EA’s priorities are and what measurable differences the movement is making. I come out of the post with a list of things to look further into :D
I’m thinking about organising a seminar series on space and existential risk, mostly because it’s something I would really like to see. The series would cover a wide range of topics:
Asteroid Impacts
Building International Collaborations
Monitoring Nuclear Weapons Testing
Monitoring Climate Change Impacts
Planetary Protection from Mars Sample Return
Space Colonisation
Cosmic Threats (supernovae, gamma-ray bursts, solar flares)
The Overview Effect
Astrobiology and Longtermism
I think this would be an online webinar series. Would this be something people would be interested in?
Woah, a really nice article that identified the most common criticisms of EA that I’ve come across, namely, cause prioritization, earning to give, billionaire philanthropy, and longtermism. Funnily enough, I’ve come across these criticisms on the EA forum more than anywhere else!
But it’s nice to see a well-researched, external, and in-depth review of EA’s philosophy, and as a non-philosopher, I found it really accessible too. I would like to see an article of a similar style arguing against EA principles though. Does anyone know where I can find something like that? A search for EA criticism on the web brings up angry journalists and media articles that often miss the point.
Plugging this into the EAometer…
We can propose a project to “direct charitable donations to popular but low-impact causes to the charities with the highest impact within each low-impact cause”
We can score this project on importance, tractability, and neglectedness to help decide if it’s worth working on.
Importance: Probably a 3/10, as this project is directed at low-impact causes. But the causes may be fairly important, as lots of people care about them/are affected by them enough to donate.
Tractability: I think 5/10. Charities like Cancer Research and WWF have monopolies over giving to these causes and dominate advertising, so I’m not sure how we could peel people away from them. But the fact that lots of people donate to these causes would probably make it easier to attract donations to grant funds in these cause areas, though maybe they won’t attract the type of people who give through GWWC/EA.
Neglectedness: Not sure; I’d have to do some research. But I would guess it’s low, because these are popular causes, so they are probably already busy with researchers trying to increase impact.
So to conclude, I would say it would be hard to implement this project and compete in such busy, giant cause areas that invest a lot of money in advertising. The change in impact is most likely not as great as simply directing people to more effective cause areas. Popular cause areas are so overcrowded that probably everything gets funded anyway.
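For concreteness, the back-of-envelope scoring above can be sketched as a tiny calculation, assuming the common multiplicative ITN model (impact is roughly proportional to importance × tractability × neglectedness). The 2/10 neglectedness score and the comparison cause below are hypothetical placeholders, not real estimates:

```python
def itn_score(importance: int, tractability: int, neglectedness: int) -> int:
    """Combine the three 0-10 factor scores into a rough 0-1000 composite,
    assuming the simple multiplicative ITN model."""
    return importance * tractability * neglectedness

# Scores from the comment above; neglectedness guessed at 2/10 pending research.
project = itn_score(importance=3, tractability=5, neglectedness=2)

# Hypothetical higher-priority comparison cause, for scale only.
baseline = itn_score(importance=8, tractability=6, neglectedness=5)

print(project, baseline)  # 30 vs 240: the project scores far lower
```

On this model a weak score on any one factor drags the whole product down, which matches the conclusion that redirecting people to more effective cause areas likely beats optimising within low-impact ones.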
This book is desperately needed. The scale and neglect of animal welfare put this cause area right up there for me.
I will be attending the talk in London! (and bringing along as many people as I can)