Was going to say that! It's incredible how little attention we give to cash flows, and how every proposal to change them gets shut down.
We will organize a viewing of Oppenheimer in our office for our community and have a debate afterward: what would the movie have looked like if it had been about AI?
Have you ever considered interacting with policy institutes and political commissions (the European Commission, national parliaments, etc.) to spread the word about effective allocation of resources and similar approaches that some governmental departments could follow?
The second one is more daring, but I'm curious. How much do Open Philanthropy and its council of advisors rely on and apply your advice? For example, you wrote a very interesting sequence on value maximisation, and one insight was that animal welfare was a winner in both the short and the long term. Yet that does not translate at all into OP's current funding allocation, given the recent reductions in the animal welfare budget and the tightening of grant criteria for animal projects.
This is true, and in our EA group, we are establishing an outreach model to attract them. So far here’s why they don’t get involved:
The mood at EA events is very young and excited, and connecting is harder for older people since they have different interests and lifestyles (not everyone can 'optimize' every step of their life when they have kids and such). Communication norms are also different.
Career opportunities are much harder to find and seize for experienced people: it's not as easy to go do a three-month fellowship somewhere and leave your family behind, or to change countries to find the perfect job when there are few effective opportunities in your own. We're improving that by working on a mapping of EA-aligned institutions, but it's nascent work and not well supported by 80k.
Many feel that their experience isn't valued or appreciated by EA members; conversations often turn into a contest of who has read what, rather than an opportunity to learn from experienced members.
So yeah, as long as EA lacks a clear strategy for cooperating with other institutions (for example, the UN is often dismissed as inefficient, but no good research proves this!), and as long as behavioural norms don't change, it's going to be hard. We're trying to reach a tipping point of 25% experienced people for the mood to change, but it's hard.
[Question] EA folks in Prague, 2nd–7th of July?
You are not alone, definitely not alone.
As a community builder, I hear this from people frequently. It's nice to be able to follow good charities on Twitter, but that does not make up for the direction of the funding, and therefore for the opportunities and projects that actually get selected and funded, or for the fact that most posts on the forum are now about AI, given the sharp increase in AI-interested people (who do not necessarily have a history with EA, or with altruism in the sense of giving, etc.). It does not make up for the fact that most people enter EA through 80k and get the feeling that they have to go into AI to be impactful, given the priorities. Or for the fact that your chance of being coached by 80k is much greater if you want to work on longtermist issues.
The movement is really at a turning point: few actors are reacting against it, there is no real counter-movement, and most people in power do not speak up, even though they might have a more nuanced view on funding distribution than what is actually happening.
Maybe it will be one of those cases where the audience of a community changes completely, and it thus becomes a different organization. It makes me very sad; there is no replacement for EA. No, global aid economics is not 'GH' in EA. No, animal-welfare parties cannot replace the work done by some EA orgs. It's a question for all of us: will we silently abide and passively go along with the movement, whatever it becomes, or will we just have to exit EA? The latter is already happening a lot.
Well, you said it: STEM is what makes the very big difference here. A 'left-wing' STEM student will not have the same priorities at all as a social science student, so this left-wing label is very misleading, no matter how much people here like to use it to claim that EA is leftist.
A STEM student will have much more contempt for protests, and what you conveniently forget to say is that STEM students generally earn much more and come from much more privileged backgrounds. It's all about resources and how they are distributed, and these students have much less need to take to the streets. So it's easier to look down on protests and think they are just noisy and useless.
So my answer still stands and explains why EA is not protest-friendly.
Protests are usually held by those in dire need of change: minorities, poor people, people whose identity is under attack, etc. AI risks are overwhelmingly highlighted by rich white male engineers: not those who usually have a reason to take to the streets, and, as Geoffrey says, people who often despise those who do. It's easier to mock those who struggle when you don't, and to assume they are making unnecessary noise, because you don't feel at all part of their fight.
And now EAs realize that profit is winning out over safety concerns; it took a lot of time! It was painful to read the praise of Altman until the board shake-up at OpenAI. People have been protesting for years because greed and the unequal distribution of money make their lives poorer and harder; but now greed creates survival risks that extend to rich engineers too, so they have to do something.
Of course. It is much easier for privileged individuals to relate to the suffering of minds that do not yet exist than to the very real suffering of people and animals today, which forces you to confront your emotions and your uneasiness towards those who have so little when you have so much.
The gender divide across cause areas is obvious (not just from this study but also from my own EA group!). Women in general care much more about GHD and animal welfare, and dislike fixing technological issues with yet another technology; they want more systemic change. It is hard to deny that some privileged men who benefit from the current status quo do not want to change the existing power dynamics, and prefer to think about future beings, who do not yet have a voice, in order to feel useful.
Sadly, I have not seen any research combining gender dynamics and longtermist urgency.
I agree. We have to take into account that 80k has strongly pushed for careers in AI safety and encouraged field building specifically for AI safety, and that its job board has become increasingly dominated by AI safety job offers. The trend is not likely to be reversed soon.
However, that does not keep people outside of EA from obtaining jobs in the GHD field (which is not just development economics, as someone once wrote); they are just not counted. And if the movement keeps directing opportunities and funding specifically towards AI safety, sure, we'll get fewer and fewer GHD people. So it's still impressive, given all this funding concentration, that so many EAs still consider GHD the most pressing cause area.
Thanks! Will look.
That’s a clever one, thanks!
[Question] Impactful career as a lawyer specializing in Green Building Standards?
It is always appalling to see tech lobbying power shut down all the careful work done by safety people.
Yet the article highlights a very fair point: safety people have not succeeded in being clear and convincing enough about the existential risks posed by AI. Yes, it's hard; yes, it involves a lot of speculation. But that's exactly where the impact lies: building a consistent and pragmatic discourse about AI risks that is neither uselessly alarmist nor needlessly vague.
The state of the EA community is a good example of this. I often hear that yes, risks are high, but which risks exactly, and how can they be quantified? Impact measurement is awfully vague when it comes to AI safety (and, to a lesser extent, AI governance).
I wish this were better known and more widely read in the EA community. So far I have not seen any credible objections to these three compelling arguments. Perils or no perils, the arguments stand on their own.
Hey Joseph,
I am in exactly the same boat: a very specialized path and a lack of financial visibility. I also work for an EA org, which means I accepted a pay cut (and the role's funding is time-limited) compared to safer jobs (consulting, etc.).
But recently, I've been thinking about how donating is a bit like starting a new sports class or any new habit: if you don't start, you'll never start (except under ideal conditions, but those rarely happen!). Accepting a bit of risk to accomplish something you care a lot about makes sense to me, which is why I will start giving soon. There will never be a threshold of financial safety where I'll feel completely secure, so waiting will do me no good.
Also, inflation means that all my careful savings are losing value right now, so I'm realizing that I would be better off spending part of them now rather than waiting and watching their value slowly disappear.
This is only my choice; I just wanted to comment since I am in a somewhat similar situation but recently came to think about it differently. I also just want to empathize with your situation. Sometimes I feel bad when I see that some of my colleagues have been giving for ten years, but again, we clearly were not given the same set of circumstances at birth.
[Question] What are good literature references on the international governance of AI?
[Question] What happened to the 'only 400 people work in AI safety/governance' figure from 2020?
Thanks for saying it, though! It feels validating to hear it, instead of having this internal voice hammering that time is being wasted and that I'm letting everyone and everything down. I might do just that!
No, it is more confusing than anything. What matters is having an impact. Impactful orgs have spent days and years of research on what is most cost-effective and impactful. With a basic knowledge of EA principles you can identify which organizations meet your criteria for impact and which do not, and then apply. If you get a job there, you will learn for yourself how they think about impact and refine your own view. If you prefer to go to a non-EA org to make it more EA-like (such as the WHO or the UN), then I would certainly dive deeper into the principles and metrics of impact.
But in general I do not overwhelm people with philosophical conundrums. Humility, a scout mindset, and solid skill-building are what matter to me.