(See here for a draft I whipped up for this, and feel free to comment!) An Earth-originating artificial superintelligence (ASI) may reason that the galaxy is busy in expectation, and that it could therefore eventually encounter an alien-originating ASI. ASIs from different homeworlds may find it valuable on first contact to verify whether they can each reliably enter into and uphold agreements, by presenting credible evidence of their own pro-social behaviour with other intelligences. If at least one of these ASIs has never met another, the only such agreement it could plausibly have entered into is with its progenitor species – maybe that’s us.
I’ll post my ideas as replies to this, so they can be voted on separately.
This would be great to read; I walked away from at least one application process because I couldn’t produce a decent WFM. I hope you write it!
Future Impact Group’s Fellowship is now accepting applications!
Thanks for correcting me! I’ve reviewed my notes and added a few points to make sure I don’t make the mistake again.
Update: I just finished this book. It was as advertised: a concise, technical and sometimes challenging read in moral philosophy, at the edge of my non-specialist understanding, but I really appreciated it. A few important takeaways for me:
The robustness of minimalist axiologies to various instantiations of the Repugnant Conclusion, especially under (non-sharp) lexicality.
A willingness to “bite the bullet” in certain cases, in particular the Archimedean minimalist ‘Reverse Repugnant Conclusion’ (i.e. it’s better to add lots of bad lives to slightly reduce the unbearable suffering of enough other bad lives) and the axiological ‘perfection’ of an empty world (matched only by one in which all lives are completely untroubled).
Relatedly, a willingness to “spit the bullet back out” where negative utilitarianism/minimalist views have been maligned, misrepresented or generally underdone, including by high-profile folks within EA who I don’t think have publicly changed their positions.
Thank you for writing this, Teo, and well done again! I hope to write a longer-form summary of the ideas, both for myself and others, as I think there’s a great deal of value here.
I’m really excited to read this, Teo. Congratulations on publishing it.
Some thoughts on Leopold Aschenbrenner’s Situational Awareness paper
Luke Dawes’s Quick takes
Have just signed up and am looking forward to it! Thanks for organising. I hadn’t come across the Foresight Institute before, even though I’d heard of the concept of existential hope, so I’ll take a look at some of those resources, too.
You’re welcome, thanks for taking the time to read it!
Factory farming in a cosmic future
Hi, MvK, good choice. I’m already preparing an application! Thanks.
TLDR: Former diplomat, keen on policy, operational and executive support roles for impactful people/orgs. Especially interested in AI governance. I love to write, too.
Skills & background: I spent four years as an Australian diplomat, and served in the Middle East. I also worked on economic issues in North Asia, and political issues in the Pacific. Prior to that, I was a senior analyst for a major bank, where I gained decent SQL, Tableau, Excel and Power BI skills. I have a Bachelor of Languages (I speak Indonesian and Persian fluently). I recently completed BlueDot Impact’s AI Safety (Governance) course, and I just started volunteering with the Shrimp Welfare Project as a researcher.
Location/remote: I live in the East Midlands in the UK. I’d prefer to work remotely, but I’m willing to relocate (within the UK) for the right role and salary.
Availability & type of work: I’m looking for full-time paid roles, but would also consider part-time. I can start immediately.
Resume/CV/LinkedIn: [Luke Dawes] EA CV
Email/contact: DM me on the Forum, or find my email on my CV.
Other notes: Cause preference is AI governance, but also very interested (in no particular order) in suffering risk, the rights of digital minds and animal welfare.
Questions: If anyone has suggestions for opportunities I should pursue for career growth (rather than direct impact), I’d love to hear them!
(See here for a draft I whipped up for this, and feel free to comment!) Hayden Wilkinson’s “In defence of fanaticism” argues that, in decision theory, you should prefer a tiny probability of a sufficiently enormous reward over a high probability of a modest one, or face serious problems. I think accepting his argument introduces new problems that aren’t described in the paper:
The paper implies that each round of Dyson’s Wager (i.e. each time someone in the population is presented with the wager) has no effect on the probability distribution for future rounds, which is unrealistic. I illustrate this with a “small worlds” example (see the sketch after this list).
Fanaticism is only considered under positive theories of value and therefore ignores the offsetting principle, which assumes both the existence of, and an exchange rate (or commensurability) between, independent goods and bads. I’d like to address this in a future draft with multiple reframings of Dyson’s Wager under minimalist theories of value.
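To make the first worry a bit more concrete, here is a minimal sketch. It is my own toy framing rather than anything from the linked draft or Wilkinson’s paper, and the numbers p, V, v and N are entirely hypothetical. It contrasts a population facing Dyson’s Wager where every round is resolved independently with a “small world” where one draw settles every round at once: the naive expected values coincide, but the probability of the fanatical policy delivering anything at all does not.

```python
# Toy sketch (hypothetical numbers) of repeating Dyson's Wager across a population.
# In each round the fanatical option offers a tiny probability p of a huge payoff V;
# the modest option pays v for certain. The question is whether rounds are
# probabilistically independent or settled by one shared state of the world.

p = 1e-6        # hypothetical per-round probability of the huge payoff
V = 1e10        # hypothetical huge payoff
v = 1.0         # hypothetical certain modest payoff
N = 1_000_000   # number of rounds (people offered the wager)

# Naive expected values are the same whether or not rounds are independent:
ev_fanatical = N * p * V
ev_modest = N * v
print(f"EV of always taking the fanatical bet: {ev_fanatical:.3e}")
print(f"EV of always taking the modest payoff: {ev_modest:.3e}")

# But the probability that the fanatical policy delivers anything at all differs:
p_any_if_independent = 1 - (1 - p) ** N   # at least one of N independent rounds pays off
p_any_if_correlated = p                   # one shared draw settles every round
print(f"P(any payoff), independent rounds: {p_any_if_independent:.3f}")
print(f"P(any payoff), one shared draw:    {p_any_if_correlated:.2e}")
```

With these placeholder numbers the independent-rounds version pays off in roughly 63% of histories, while the shared-draw version almost never does, which is the kind of dependence between rounds I think the paper’s setup glosses over.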