Since sociology is probably an underrepresented degree in effective altruism, maybe you can consider it a comparative advantage rather than “the wrong degree”. The way I see it, EA could use a lot more sociological inquiry.
Just a list of projects and organisations FTX has funded would be beneficial and probably much less time-consuming to produce. Some of the things you mention could be deduced from that, and it would also help in evaluating current project ideas and how likely they are to get funding from FTX at some point.
How do you plan to encourage participants outside the EA community?
It seems very important to involve the community at all levels, including the main arena of discussions.
Additionally, delegating important community-affecting processes (and eventually, decisions) to small “expert groups” might actually be one of the mechanisms the EA community could be criticised for over-relying on, and one that causes some of the problems in the first place.
I also wanted to point out that what a lot of people have in mind might not be just norm changes, but rule changes too. An important distinction.
Imagine thinking this is a good outcome of the “keep your mouth shut” strategy CEA recommends regarding media:
Effective altruism is not a cult. As one EA advised his peers in a forum post about how to talk to journalists: “Don’t ever say ‘People sometimes think EA is a cult, but it’s not.’ If you say something like that, the journalist will likely think this is a catchy line and print it in the article. This will give readers the impression that EA is not quite a cult, but perhaps almost.”
…
Effective altruism treats public engagement as yet another dire risk. Bostrom has written about “information hazards” when talking about instructions for assembling lethal weaponry, but some effective altruists now use such parlance to connote bad press. EAs speak of avoiding “reputational risks” to their movement and of making sure their “optics” are good. In its annual report in 2020, the Centre for Effective Altruism logged all 137 “PR cases” it handled that year: “We learned about 78% of interviews before they took place. The earlier we learn of an interview, the more proactive help we can give on mitigating risks.” It also noted the PR team’s progress in monitoring “risky actors”: not people whose activities might increase the existential risks to humanity, but those who might harm the movement’s standing.
Terrible look, to be honest.
Just noting for posterity that the OPs’ organisation Pronatalist.org got a $482,000 grant from the Survival & Flourishing Fund in the second 2022 round: https://survivalandflourishing.fund/sff-2022-h2-recommendations
It seems to me some criticisms, including this one, paint a picture that does not accurately describe what most effective altruists are up to in practice. You could get the idea that EA is 10,000 people waking up every day thinking about esoteric aspects of AI safety, actively avoiding any other current issues regardless of scale.
In reality, a fair chunk (probably a vast majority?) do what most would perceive as “traditional” charity work, e.g. working at an organisation that tries to alleviate poverty or promote animal welfare, organising their community (university etc.) to promote doing good, doing research on effective methods for solving large problems in society today, or getting more people and organisations to donate money to charitable causes.
I have a hard time believing the general public actually thinks existential risk research on things like pandemic preparation/prevention is a bad idea or not money well spent. But if you equate existential risk with AI threat, it’s a whole other framing.
Every movement will have far-out elements that might be hard to make sense of without a lot of context, but that are also just one facet of the movement as a whole. A lot of the recent criticisms of EA I’ve seen target longtermism in its most “extreme” form, and drag all of effective altruism with it. The criticism of longtermism is very healthy and useful, in my opinion, but this conflation is concerning.
Having seen overworked operations staff in several organisations throughout my career, I think reducing stress and building a healthy culture are key improvement factors regardless of organisation size. (This goes for many roles.) If you consistently can’t accomplish everything you need to in 8 hrs/day – given a full-time position – you are clearly understaffed, and this should be resolved ASAP. There are many other stress reducers, such as many weeks of paid vacation per year, great salaries, clear areas of responsibility, structured interviews on the work environment (not the same as performance reviews!), etc.
It seems like many “normal” organisations are under the delusion that they need to operate under the (literally) military conditions you describe. This “get it done yesterday” mentality can kill morale in any organisation, and operations people especially will take the hit, because they are expected to tie all the bits together. What you describe as “never really quite off-the-clock” is super-dangerous and leads to burnout.
Having worked in organisations with vastly different cultures on these issues, I find it wild that any organisation wouldn’t prioritise the well-being of its employees, when it so obviously also improves the quality of the work.
A common excuse is that some roles or types of work “are just like that”, but when people doing that work start talking to others doing it elsewhere, it often turns out not to be the case. It’s a matter of culture. I know this from experience in software engineering – one company I worked at had a “no death marches” rule to explicitly counteract a common unhealthy bit of culture at many software companies.
Wonderful news. Do you have an idea of when the next open funding round will be? Or how often you will be open for applications, in general? I’m trying to determine how the upcoming March 21 deadline fits into my current plans for 2022.
I plan to post my reports on LessWrong and the Effective Altruism forum
Why would posting mainly in these tiny communities be the best approach? First, I think these communities are already far more familiar with the topics you plan to publish on than the average reader. Second, they are – as I said – tiny. If you want to be a public intellectual, I think you should publish where public intellectuals generally publish. This is usually a combination of books, magazines, journals, and your own platforms (e.g. personal website/blog, social media etc.)
You could probably improve on your plan by making a much more in-depth analysis of what your exact goals are and what your exact audiences are. It seems to me a few steps are missing in this statement:
I believe such people provide considerable value to the world (and specifically to the project of improving the world).
What would probably be useful is, in a sense, a theory of change for how doing the things you want to do leads to the outcomes you want.
If you do decide to go ahead with this plan, I would also focus a lot on this part:
In contrast, I am quite below average on conscientiousness and related traits like diligence, perseverance, willpower, “work ethic”, etc.
You are going to need those in the massively competitive landscape you aim for.
This is very exciting and has huge potential. Please get in touch with the Altruistic Agency for tech needs (e.g. website) when you are at that point, I’d love to help.
More views from two days ago: https://forum.effectivealtruism.org/posts/9rvpLqt6MyKCffdNE/jobs-at-ea-organizations-are-overpaid-here-is-why
I especially recommend this comment: https://forum.effectivealtruism.org/posts/9rvpLqt6MyKCffdNE/jobs-at-ea-organizations-are-overpaid-here-is-why?commentId=zbPE2ZiLGMgC7hkMf
What would be truly useful is annual (anonymous) salary statistics among EA organisations, to be able to actually observe the numbers and reason about them.
I’ve spent a lot of time this year looking into this exact scenario and discussing various models with many people with different views. Most other EA agencies are trying to figure it out as well.
What is most likely is that I’ll move to a hybrid model where the first X hours are free, and after that, most would pay some (below market rate) fee that is offset by larger clients that can afford market rate. The main reason for this is that my data suggests around 70% of clients would have tried to solve the issues themselves otherwise, which is a huge time waste. Another reason is that there is a significant transaction cost, especially given that the funding for services like these often comes from the same sources (in the EA funding landscape) in the end.
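To make the arithmetic concrete, here is a minimal sketch of how such a hybrid fee could be computed. All numbers and names are hypothetical placeholders (the actual free-hours threshold X, rates, and discount are undecided), not actual Altruistic Agency figures:

```python
# Minimal sketch of the hybrid pricing model described above.
# All numbers are hypothetical placeholders, not actual rates.

FREE_HOURS = 10          # the "first X hours are free" threshold (hypothetical)
MARKET_RATE = 100.0      # hourly market rate (hypothetical)
DISCOUNT = 0.5           # below-market fraction paid by smaller clients (hypothetical)

def invoice(hours_worked: float, can_afford_market_rate: bool) -> float:
    """Return the fee for a client under the hybrid model."""
    billable = max(0.0, hours_worked - FREE_HOURS)
    rate = MARKET_RATE if can_afford_market_rate else MARKET_RATE * DISCOUNT
    return billable * rate

# Example: 25 hours of work means 15 billable hours.
print(invoice(25, can_afford_market_rate=False))  # 750.0  (discounted rate)
print(invoice(25, can_afford_market_rate=True))   # 1500.0 (full market rate)
```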
In any case, I expect this part of the agency’s activities to be relatively small in the future, as creating public goods and services is immensely more valuable.
I’m curious about this as well. Does leaving immediately not impede the chances of getting a better (I’d never dare say “full”) picture of what went down? Additionally, in terms of accountability, I guess now we’ll never know, or have records (from emails etc.) of, who knew what and when.
Great designs! I just have to ask… the “typo” is intentional, right?
How come you do not mention open source projects? I don’t know how valuable it is nowadays, but working on e.g. Firefox early in my career definitely helped me learn fast from very good programmers in a real project used by millions. It has been a good CV item as well.
I plan to start offering this – among other things – for free through the Altruistic Agency later this year.
I wrote this in 2013, might be of interest to those concerned:
A concrete follow-up question (anyone, feel free to answer it):
What do you think is the correct salary for some common roles, and why that number?
The FTX Future Fund recently finished a large round of regrants, meaning a lot of people are approved for grants that have not yet been paid out. At least one person has gotten word from them that these payouts are on hold for now. This seems very worrisome and suggests the legal structure of the fund is not as robust or isolated as you might have thought. I think a great community support intervention would be to get clarity on this situation and communicate it clearly. This would be helpful not only to grantees but to the EA community as a whole, since what is on many people’s minds is not so much what will happen to FTX as what will happen to the Future Fund. (Of the few people I have talked to, many were under the impression that funds committed to the Future Fund were actually committed in a strict sense, e.g. transferred to a separate entity. If that turns out not to be the case, it’s really bad.)