Happily, they are already available:
Caro
Publication of Stuart Russell’s new book on AI safety—reviews needed
[Link] Stuart Russell will have an AMA on Reddit on 12/16
Hi Toby! Thanks for being such a great source of inspiration for philosophy and EA. You're a great role model for me!
Some questions, feel free to pick:
1) What philosophers are your sources of inspiration and why?
(I'll put my other questions in separate comments. Also, now writing "Toby"!)
What are you looking for in a research / operations colleague?
If you’ve read the book ‘So good they can’t ignore you’, what do you think are the most important skills to master to be a writer/philosopher like yourself?
What are some of your current challenges? (maybe someone in the audience can help!)
What do you like to do during your free time?
What are some directions you’d like the EA movement or some parts of the EA movement to take?
Do you think that climate change has been neglected in the EA movement? What options seem most promising to you at the moment for having a very large impact and steering us in a better direction on climate change?
Thanks Ben! I’ve edited the message to have only one question per post. :-)
Hi Larks!
Thanks for asking!
We have been very careful since the beginning of the epidemic and were effectively in quarantine before the Bay Area shelter-in-place order.
Currently, everyone stays and works from home. We maintain the food stockpile via grocery deliveries or by having one person go to Trader Joe's every two weeks (with a mask and gloves). We take occasional walks/runs (while keeping a safe distance from other people).
If the Bay Area gets 50,000 reported cases or 500 such cases in Berkeley, we will stop walking outside.
We have copper-taped commonly used surfaces in the house and have enough supplies to live comfortably in total isolation for over a month.
When people move in, we will probably have them quarantine for a few days/weeks.
Do you think that the RSP is interesting for people working on policy engagement—e.g., writing "grey literature" reports, policy proposals, and feedback on legislation—or do you think it's a better fit for people doing work in the "peer-reviewed/academic" category?
Thanks, I find this very useful!
I guess I would refine the "weird cause area" reason by adding that some EAs may leave because of strong disagreement with some mainstream EA views or with public figures' views. For example, a few years ago climate change was not treated as an x-risk and was somewhat regularly dismissed, which would have put off a few longtermists. I know someone who left EA because of strong disagreement with how AI safety is handled (e.g., encouraging people to work for an organization developing AGI). Basically, I think there is sometimes a "tipping point" of strong disagreement at which some people leave. Ideally, EA would strongly emphasize that "EA is a question, not an ideology," so that people with informed differing opinions still stay in.
I suspect that burnout may also be another reason why people in EA orgs leave.
It would be super interesting to work on improving retention through social integration. I was thinking that a regular gather.town "mega meeting" of EAs could be pretty nice in times of confinement to promote social interactions, project collaborations, etc.
Thank you, this list is a useful complement to this post.
Hi Danica! Thanks for putting this together. What is the best way to recommend a therapist for this list?
Bravo! This is fantastic and it’s also great that you used the opportunity to talk about EA! The future of REG!
I agree. I have only read a few, but I am crying because they are so moving and inspiring. It's the combined effect of their beauty and strength and my attachment to this community that shares my values. I will keep reading them over the next few days…
Hi guys, I wanted to make you aware of a global online debate on the governance of AI run by a Harvard-incubated think tank.
For background, I’m a French EA, and I recently decided to work on AI policy as it is a pressing and neglected issue. I’ve been working for The Future Society for a few weeks already and would like to share with you this opportunity to impact policy-making. The Future Society at Harvard Kennedy School is a think tank dedicated to the governance of emerging advanced technologies. It has partnerships with the Future of Life Institute and the Centre for the Study of Existential Risk.
The think tank provides a participatory debate platform to people all around the world. The objective is to craft actionable and ethical policies that will be delivered in a white paper to the White House, the OECD, the European Union, and other policymaking institutions that the think tank works with.
Because we know AI policy is hard, the idea is to use collective intelligence to produce innovative and reasonable policies. The debate is hosted on open-source collective-intelligence software that came out of a research project funded by the European Commission and technologically supported by MIT. It's grounded in research on collective intelligence, moving from open, exploratory questions to more in-depth discussions. Right now we are in the "Ideation" phase, which is very open. You can post constructive answers and debate with other people interested in crafting AI policy, with instant translation.
The platform is like an online forum organized around several issues, both short-term and long-term. There are six themes, including "AI Safety and Security", "Reinvent Man & Machine Relationship", and "Governance Framework".
So far, most of the answers have been very constructive. But with you guys… it can be even better.
Because you are EAs, I really wanted to pick your brains!
It would be great if you could participate on the topic you're most interested in, knowing that a) it will be impactful and b) you will be able to test your thinking against other people passionate about AI's social impacts. Of course, you don't have to talk about AI safety if you'd rather focus on other topics.
Also, the more EAs, the merrier. Or rather, the more impactful!
So please join the debate and participate!
Debate is here