An aspiration in my life is to make the biggest positive impact in the world that I can. In 2018 I started working toward this goal as a junior paramedic, and in 2019 I began training as a physiotherapist. My perspective shifted significantly after reading Factfulness by Hans Rosling, which inspired me to explore larger-scale global issues. This led me to pursue an interdisciplinary degree in Global Studies and to discover the research field and social community of Effective Altruism.
Since 2022, I’ve been actively involved in projects ranging from founding a local EA university group to launching an AI safety field-building organization. Through these experiences and the completion of my bachelor’s programme, I discovered that my strengths seem to align best with AI governance research, a field I believe is fundamental for ensuring the responsible development of artificial intelligence.
Moving forward, my goal is to deepen my expertise in AI governance as a researcher and contribute to projects that advance this critical area. I am excited to connect with like-minded professionals and explore opportunities that allow me to make a meaningful impact.
Johan de Kock
Thank you for the post! How much weight do you think one should allocate to the inside and outside view, respectively, in order to develop a comprehensive estimate of the potential future unemployment rate?
Your calculations look fancy and all of that, but it seems inappropriate to me to put as much weight on historical data as you do, especially because this ignores the apparent fact that the development of intelligent systems more capable than humans has never occurred in history. That fundamentally changes the game.
The more the world changes, the less weight I think one should put on the outside view (this needs more nuance). People are scared, they don’t update in the face of new evidence, and they dislike change.
I know you are not saying that the inside view doesn’t matter, but I am concerned that a post like this anchors people toward a base rate that is a lot lower than what things will actually be like. It reinforces status quo bias. And this is frustrating to me because so many people don’t seem to understand the seriousness of our situation.
I think it makes a lot of sense to reason bottom-up when thinking about topics like these, and I actually disagree with you a lot. It seems to me that there is a deeply correlated failure happening in the AI safety community: in my view, people are putting way too much weight on the outside view. I am happy to elaborate.
Thank you for sparking this discussion.
Beware of the new scaling paradigm
Thank you for writing this! I just took the time to write a letter.
Would you consider adding your ideas for 2 minutes? - Creating a comprehensive overview of AI x-risk reduction strategies
------
Motivation: To identify the highest-impact strategies for reducing the existential risk from AI, it’s important to know what options are available in the first place. I’ve just started creating an overview and would love for you to take a moment to contribute and build on it with the rest of us!
Here is the work page: https://workflowy.com/s/making-sense-of-ai-x/NR0a6o7H79CQpLYw
Some thoughts on how we collaborate: Please don’t delete others’ bullet points; instead, use the comment feature to suggest changes or improvements.
If you’re interested in discussing this further, feel free to add your name and contact details here. I may organize a follow-up discussion.
My (current) model of what an AI governance researcher does
Thank you for sharing, Zach! I think it is valuable to highlight the key parts of the podcast episode and share them here. With so many podcast episodes to choose from, this helps people selectively engage with the parts that are most relevant to them.
Thank you for writing this up, Akash! I am currently exploring my aptitude as an AI governance researcher and consider the advice provided here to be valuable. I have especially come to appreciate the point about bouncing ideas off people early on, and indeed throughout the research process.
For anyone who is in a similar position, I can also highly recommend checking out this and this post. For any other (junior or senior) researchers interested in expanding their pool of people to reach out to for feedback on their research projects, or simply to connect, feel free to reach out on LinkedIn or schedule a call via Calendly! I look forward to chatting.
I think this is an interesting post. I don’t agree with the conclusion, but I think it’s a discussion worth having. In fact, I suspect that this might be a crux for quite a few people in the AI safety community. To contribute to the discussion, here are two other perspectives. These are rough thoughts, and I could have added a lot more nuance.
Edit: I just noticed that your title includes the word “sentient”. Hence, my second perspective is not as applicable anymore. My own take that I offer at the end seems to hold up nonetheless.
If we develop an ASI that exterminates humans, it will likely also exterminate all other species that might exist in the universe.
Even if one subscribes to utilitarianism, it does not seem clear at all that an ASI would be able to experience any joy or happiness, or that it would be able to create it. Sure, it can accomplish objectives, but one can argue from a strong position that these won’t accomplish any utilitarian goals. Where is the positive utility here? And even more importantly, how should we frame positive utility in this context?
I think a big reason not to buy your argument is the apparent fact that humans are a lot more predictable than an ASI. We know how to work together (at least a bit), and we know that we have managed to improve the world pretty well over the last centuries. Many people dedicate their lives to helping others (such as this lovely community), especially the higher they sit on Maslow’s hierarchy of needs. Sure, we humans have many flaws, but it seems a lot more plausible to me that we will be able to accomplish full-scale cosmic colonisation that actually maximises positive utility if we don’t go extinct in the process. On the other hand, we don’t even know whether an ASI could create positive utility, or experience it.
I hope you are okay with the storm! Good luck there. And indeed, figuring out how to work with one’s evolutionary tendencies is not always straightforward. For many personal decisions this is easier, such as recognising that sitting 10 hours a day at the desk is not what our bodies have evolved for. “So let’s go for a run!” When it comes to large-scale coordination, however, things get trickier...
“I think what has changed since 2010 has been general awareness of transcending human limits as a realistic possibility.” → I agree with this and your following points.
[Question] What is the impact of chip production on pausing AI development?
Thank you for writing this up, Hayven! I think there are multiple reasons why it will be very difficult for humans to settle for less. Primarily, I suspect this is because a large part of our human nature is to strive to maximize resources and to consistently improve our conditions of life. There are clear evolutionary advantages to having this ingrained in a species. This tendency to want more took us from picking berries and hunting mammoths to living in houses with heating, connecting with our loved ones via video calls, and benefiting from better healthcare. In other words, I don’t think the human condition was different in 2010; it was pretty much exactly the same as it is now, just as it was 20,000 years ago. “Bigger, better, faster.”
This human tendency, combined with our short-sightedness, is a perfect recipe for human extinction. If we want to overcome the Great Filter, I think the only realistic way we will accomplish this is by figuring out how to combine this desire for more with more wisdom and better coordination. It seems that we are far from that point, unfortunately.
A key takeaway for me is the increased likelihood of success with interventions that guide, rather than restrict, human consumption and development. These strategies seem more feasible as they align with, rather than oppose, human tendencies towards growth and improvement. That does not mean that they should be favoured though, only that they will be more likely to succeed. I would be glad to get pushback here.
I can highly recommend the book The Molecule of More to read more about this perspective (especially Chapter 6).
Ryan, thank you for your thoughts! The distinctions you brought up are something I had not thought about yet, so I am going to take a look at the articles you linked in your reply. If I have more to add on this point, I will. Lots of work ahead to figure out these important things. I hope we have enough time.
AI safety is largely about ensuring that humanity can reap the benefits of AI in the long term. To effectively address the risks of AI, it’s useful to keep in mind what we haven’t yet figured out.
I am currently exploring the implications of our current situation and the best ways to contribute to the positive development of AI. I am eager to hear your perspective on the gaps we have not yet addressed. Here is my quick take on things we seem to not have figured out yet:
We have not figured out how to solve the alignment problem, and we don’t know whether alignment is solvable in the first place, even though we hope it is.
We don’t know the exact timelines (I define ‘timelines’ here as the moments when an AI system becomes capable of recursively self-improving). It might range from already having happened to 100 years or more.
We don’t know what takeoff will look like once we develop AGI.
We don’t know how likely it is that AI will become uncontrollable, and if it does become uncontrollable, how likely it is to cause human extinction.
We haven’t figured out the most effective ways to govern and regulate AI development and deployment, especially at an international level.
We don’t know how likely it is that rogue actors will use sophisticated open-source AI to cause large-scale harm to the world.
I think it is useful to say “we have not figured X out” when there is no consensus on it: people in the community hold very different probability estimates for each of these points, all across the range.
Do you disagree with any of these points? And what are other points we might want to add to the list?
I hope to read your take!
Johan de Kock’s Quick takes
TL;DR: In this comment I share my experience being coached by Kat.
I care about the world and about making sure that we develop and implement effective solutions to the many global challenges we face. To accomplish this, we need more people actively working on these issues. I think that Kat plays an important role in facilitating this.
Since I have not followed or analyzed all the recent developments surrounding Nonlinear in detail, I cannot and will not provide my opinion on these developments.
However, I think it’s still useful to share my experience with Kat, because I believe it would be highly valuable if more people had the opportunity to speak with her about their projects and challenges, provided those conversations go as mine did. I had three calls with Kat, two of which occurred in July and August 2023.
So, what was my experience being coached by Kat? It was very positive. During our conversations, I felt listened to, and she directly addressed the challenges I communicated. What particularly stood out was Kat’s energy and enthusiasm, which are infectious. Starting a new organization is challenging, and I remember a call where I felt somewhat discouraged about a development at my project. After the call, I felt re-energized and gained new perspectives on tackling the issues we discussed. She encouraged me to reach out again if I needed further discussion, which made me feel supported.
Having someone to bounce ideas off, especially someone who has co-founded multiple organizations, is incredibly helpful. Kat’s directness was both amusing and beneficial in ensuring clear communication. This frank approach is refreshing compared to the often indirect and confusing hints others may give.
A significant aspect of coaching is understanding the coachee’s needs in depth to provide tailored solutions. Different coaching styles work for different people. In my case, while I felt listened to, the coaching could have been even more effective if Kat had spent more time initially asking questions. This would have allowed for a more nuanced understanding before she passionately began offering resources and solutions to my problems. However, this point didn’t detract from the overall value of the calls. I always felt that I made significant progress and found the calls highly beneficial.
Another aspect of my interaction with Kat that I greatly appreciated was her warm and bubbly nature. This demeanor added a sense of comfort and positivity to our discussions. Working on reducing existential risks can often be a daunting and emotionally taxing endeavor. It’s rare to find someone who can blend professional insight with a genuinely uplifting attitude, and Kat does this exceptionally well. Her ability to lighten the mood without undermining the seriousness of the topics we discussed was a skill that significantly enhanced the coaching experience.
Overall, I would rate her 9 out of 10, considering these points. I am grateful for having had the opportunity to receive guidance and coaching from Kat and hope that she can assist many more individuals in their efforts to do good better.
Hi Dvir, thank you for sharing your thoughts and raising some interesting points. I appreciate the insights and would like to address each of them in the context of my original post and previous responses.
Your first point about scope insensitivity and the difficulty for people to “think big” is well-taken. This ties in nicely with your third point that many people do not believe they are capable of “doing something big.” I completely agree that these challenges exist, which is why I believe it is important to help people gain this confidence in themselves. As expressed previously, I am quite skeptical about the extent to which the existing introduction track actually enables people to build it. Surely, people can learn that we live in a very important time and that each of us can make a big impact, but I think that real self-belief and ambition stem from seeing evidence of the things you have already accomplished. It also comes from a deep understanding of who you are, where you come from, and what you are about. This is partly what I try to address with the PLP Track. The point you bring up is very important in my eyes, and I think it is one of the most influential factors in whether people consider high-impact opportunities.
I appreciate your point about the importance of financial security and stability in people’s lives. As you rightly pointed out (I think), many people need to have their basic needs met before they can focus on higher-level goals, such as making a positive impact in the world. This highlights the importance of presenting EA as not only a path to do good but also as a means to achieve personal fulfillment and security. Emphasizing the variety of careers and opportunities within the EA community that can provide both financial stability and the chance to make a difference could be a powerful motivator for many individuals.
This leads me to my last point. The perception of doing good as a sacrifice is indeed a challenge that needs to be addressed. I think that reframing EA as a fulfilling and purpose-driven pursuit that can be integrated into one’s life without requiring a sacrifice of everything can make the ideas of Effective Altruism more appealing and accessible to a wider audience. I am not entirely sure, though, to what extent we want this, as I do think that the majority of impact stems from a very small fraction of people. On the other hand, you could flip the argument again and argue that due to the young age of students at university, there is not an insignificant chance that people could become highly engaged if approached from a different angle.
In light of your points, I wonder if you have any suggestions on how EA university groups could better communicate the potential personal benefits and opportunities for personal growth that come with engaging in Effective Altruism? Do you have any ideas on how we can better address the concerns and challenges you’ve raised to create a more inclusive and empowering community for individuals at different stages of their lives?
Thank you for sharing your insights and prompting further discussion on this topic!
Thanks for writing up your thoughts Isaac! You present some thought-provoking perspectives that I have not yet considered.
I particularly resonate with your first point of disagreement that individuals can derive personal benefits from being altruistic simply by choosing some cause. Your argument that striving for cause-neutrality and maximizing positive impact may be less fulfilling is a valid one. However, I am unsure why working on a less neglected cause would necessarily be less emotionally fulfilling. In fact, pursuing something “unique” may be quite exciting. Nonetheless, I agree that cause-neutrality may be less fulfilling, as we all have unconscious biases that may favor certain causes due to personal experiences or connections. This may make steering against these inclinations more difficult, perhaps even unpleasant.
I also agree that targeting “already-altruistic people” who care about the magnitude of their impact is probably very promising. Social impact is heavy-tailed, so it is likely that these individuals could contribute most of the net impact generated. I just think that EA university groups should not be the stakeholder group that makes this trade-off.
In my view, it is important to carefully consider how to differentiate and vary the strategies of EA university, city, and national groups.
With the target audience of university groups being very young adults, I believe it is detrimental to exclude those who may not be “there” yet. As I have previously argued, there are many young and ambitious individuals who have not yet determined their life’s direction, and they could easily be nudged towards becoming “already-altruistic”. The loss of counterfactual impact would be huge.
I would agree, however, that for city or national groups, a narrower focus might be a better strategy.
What are your thoughts on having a broader focus for EA university groups, but a narrower one for city groups?
Yes indeed. It is not only about providing guidance for those who already prioritize making a positive impact, but also about inspiring and fostering that desire in individuals who may not have fully considered that option. By providing people with the opportunity to think about the relevance of altruism in their own lives we might not only be able to elevate its importance within individuals’ values but also create a more motivated and well-informed group of individuals who are eager to learn about the most effective ways to make a difference.
Yes, I will give it my best! Thanks for asking, Jakub. Using your list, it would be points 4 and 5.
To provide further nuance, I would like to emphasize two points.
First, regarding point 4, I believe that many individuals possess a great deal of talent and ambition; however, these may be directed towards different pursuits. When I speak of “creating,” I am primarily referring to redirecting ambition towards tackling the world’s most pressing problems.
Second, regarding point 5, I believe that encouraging individuals to address the most pressing issues is not always best accomplished simply by educating them about these issues. People have different priorities and ambitions at various stages in their lives, which is natural. I would argue that the challenge is that many individuals have not thoroughly reflected on their most important values and goals. As a result, it becomes hard for them to tell whether what they are doing aligns with the goals they might ultimately find most important. In fact, determining this can be challenging. In other words, when I speak of “creating” individuals who want to tackle the world’s most pressing problems, I mean that we should empower people to re-evaluate their existing priorities and to let them learn about the “fact” that making a positive impact may be one of many meaningful ways to lead a fulfilling life. Through this process, individuals may come to realize that this path will bring them (and others) more happiness.
This ties into the second point. Before individuals can care about addressing the most pressing issues, they need to understand why it matters to them personally. Unless an individual has established that making a positive impact is a core part of their life, why should they be motivated to tackle the world’s most pressing problems? The underlying reason for wanting to do so often stems from a desire to prevent large-scale suffering or improve the wellbeing of many.
I agree with you that community building is already working towards point 5, however, the current approach is only effective for a relatively small portion of people—namely, those who have already determined that making a positive impact is important to them. For those who have not yet reached this realization, learning about how to make a bigger impact will not be particularly effective unless they are first motivated to do so. Improving their productivity or time management skills will not be particularly helpful unless they have a desire to use their time to make a positive impact. I believe that we need a more diverse range of strategies to inspire this motivation in different types of individuals. For some, learning about how to make a big impact may be sufficient, while for others, learning about why making an impact matters may be necessary as a foundation for intrinsic motivation.
I hope this helps!
Thank you for your reply!
Summary: My main intention in my previous comment was to share my perspective on why relying too much on the outside view is problematic (and, to be fair, that wasn’t clear because I addressed multiple points). While I think your calculations and explanation are solid, the general intuition I want to share is that people should place less weight on the outside view than this article seems to suggest.
I wrote this fairly quickly, so I apologize if my response is not entirely coherent.
Emphasizing the definition of unemployment you use is helpful, and I mostly agree with your model of total AI automation, where no one is necessarily looking for a job.
Regarding your question about my estimate of the median annual unemployment rate: I haven’t thought deeply enough about unemployment to place a bet or form a strong opinion on the exact percentage points. Thanks for the offer, though.
To illustrate the main point in my summary, I want to share a basic reasoning process I’m using.
Assumptions:
Most people are underestimating the speed of AI development.
The new paradigm of scaling inference-time compute (instead of training compute) will lead to rapid increases in AI capabilities.
We have not solved the alignment problem and don’t seem to be making progress quickly enough (among other unsolved issues).
An intelligence explosion is possible.
Worldview implications of my assumptions:
People should take this development much more seriously.
We need more effective regulations to govern AI.
Humanity needs to act now and ambitiously.
To articulate my intuition as clearly as possible: the lack of action we’re currently seeing from various stakeholders in addressing the advancement of frontier AI systems seems to be, in part, because they rely too heavily on the outside view for decision-making. While this doesn’t address the crux of your post (though it is what prompted me to write my comment initially), I believe it’s dangerous to place significant weight on an approach that attempts to make sense of developments we have no clear reference classes for. AGI hasn’t happened yet, so I don’t understand why we should lean heavily on historical data to assess such a novel development.
What’s currently happening is that people are essentially throwing their hands up and saying, “Uh, the probabilities are so low for X or Y impact of AGI, so let’s just trust the process.” If people placed more weight on assumptions like those above, or reasoned more from first principles, the situation might look very different. Do you see? My issue is with putting too much weight on the outside view, not with your object-level claims.
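To make the weighting question concrete, here is a minimal sketch of what “placing weight” on each view could mean. The numbers are purely hypothetical illustrations, not estimates from your post or from me: treat the final estimate as a convex combination of an outside-view base rate and an inside-view estimate,

$$\hat{p} = w \cdot p_{\text{outside}} + (1 - w) \cdot p_{\text{inside}}.$$

With hypothetical values of $p_{\text{outside}} = 5\%$ and $p_{\text{inside}} = 40\%$, a weight of $w = 0.9$ on the outside view gives $\hat{p} = 8.5\%$, while $w = 0.3$ gives $\hat{p} = 29.5\%$. The only point of the sketch is that the choice of $w$ dominates the final answer, which is why I think how much weight the outside view deserves is the real crux here.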
I am open to changing my mind on this.