I think sharing with caveats can make sense. But I don’t think it’s a good idea for a teacher to recommend this book without clarifying that they do not endorse the author’s views.
My vague memory of reading it at 16 is that I found a lot of the stories interesting, but was also put off by his attitude.
I wouldn’t recommend this book, especially not to gifted women. Feynman is very sexist.
>> I’ve switched off the karma display on all comments and my experience improved. The karma system tends to mess up with my S1 processing.
Fully understand if you don’t want to, but I’m curious if you could elaborate on this. I’m not entirely sure what you mean.
Thank you for posting this and especially organizing the EAGs in general! They are a valuable community contribution.
One thing I’m curious about is why the price of a ticket has increased.
I find some of the statements in your post a bit jarring, and this is not the first time I have felt this way when reading your writing. The founders of Good Ventures are multi-billionaires who have been influenced by ideas stemming to some extent from the EA community. This is excellent. But the EA community does not have ownership over this money. Your writing makes it sound like it does, which I find presumptuous and off-putting.
For the future, I would recommend that you try to better understand the relationships between the different individuals and institutions associated with the Effective Altruism community before asking questions like this one.
(writing in personal capacity here, not as a mod)
I think it’s worth noting what the report says about family structures that fall into neither category: both married and unmarried parents where one parent isn’t biologically related to the child but still takes on a parental role, as well as single parents without a partner, fall somewhere in between married biological parents and single parents with a partner in terms of child abuse rates.
(I was a bit confused and thought ‘single parents with a partner’ included cases in which the partner takes on parental responsibility, so the high rate seemed off to me.)
But asking privately only gives one person the answer, instead of many. I’m a bit surprised by your response—I had expected that the group who knows the answer usually has better things to do than answer random emails, while there are a lot of individuals who probably have knowledge like this whose time isn’t as valuable.
This seems like a reasonable piece to me, laying the basic groundwork for further scrutiny of philanthropy’s impact on the biosecurity field, but not more than that. (‘Establishing common knowledge’ seems like a good summary to me.)
A large influx of money can significantly change a field, and generally speaking, it is much harder for sudden big changes to improve the state of affairs than to make it worse. Even changes that are net positive overall will often have some negative side effects. That said, I would expect more money for a ‘do-gooding’ field to lead to more good overall.
Something that might be interesting to see would be a survey of top people in the biosecurity field on how this has changed their field and whether they view the change as positive. Generally speaking, I would expect them to have a much better grasp of empirical prioritisation questions in biosecurity than a few people at a large foundation, no matter how careful those people are and how much work they put in. The more work large foundations put into being in touch with people in the field, the less concerned one needs to be, I think.
Similar criticisms also exist in other fields, e.g. that the Gates Foundation has drowned out primary health care work by focusing on vaccinations and specific diseases, inadvertently causing some harm this way. I have not investigated the merits of this criticism, but it seems like a worthwhile thing to do.
80,000 Hours thinks earning to give is the best option for a substantial number of people—those for whom it’s their comparative advantage. They are keen, however, to make sure that people fully consider direct work options, instead of defaulting to earning to give because they’ve heard it is the best way to do good with one’s career.
If I remember correctly, 80,000 Hours has stated that they think 15% of people in the EA Community should be pursuing earning to give. Have they revised this opinion or am I remembering it incorrectly?
If not, your description seems a bit misleading to me. ‘Substantial number’ sounds like a significantly higher fraction of people to me, perhaps something like 40% rather than 15%.
As one of the people who voted, I was also surprised and disappointed by this. But different voters applied different standards on what kind of content they wish to support.
(I still feel like I don’t really understand where you’re coming from.)
I am concerned that your model of how idea proposals get evaluated (and then plausibly funded) is a bit off. From the original post:
>> hard to evaluate which project ideas are excellent, which are probably good, and which are too risky for their estimated return.
You are missing one major category here: projects which are simply bad because they have approximately zero impact, but aren’t particularly risky. I think this category is the largest of the four.
People with experience evaluating projects can often tell quite quickly which projects have a chance of working and which don’t (which is why Oli suggested 15 minutes for the initial investigation above). It sounds to me a bit like your model is that most of the ideas which get proposed are pretty valuable. I don’t think this is the case.
>> When funders give general opinions on what should or should not get started or how you value or not value things, again, I think you are at greater risk of having too much of an influence on the community. I do not believe the knowledge of the funders is strictly better than the knowledge of grant applicants.
I am confused by this. Knowledge of what?
The role of funders/evaluators is to evaluate projects (and maybe propose some for others to do). To do this well they need to have a good mental map of what kind of projects have worked or not worked in the past, what good and bad signs are, ideally from an explicit feedback loop from funding projects and then seeing how the projects turn out. The role of grant applicants is to come up with some ideas they could execute. Do you disagree with this?
>> I think it much harder to give open feedback if it is closely tied with funding. Feedback from funders can easily have too much influence on people, and should be very careful and nuanced, as it comes from some position of power. I would expect adding financial incentives can easily be detrimental for the process. (For self-referential example, just look on this discussion: do you think the fact that Oli dislikes my proposal and suggest LTF can back something different with $20k will not create at least some unconscious incentives?)
I’m a bit confused here. I think I disagree with you, but maybe I am not understanding you correctly.
I consider it important for the accuracy of feedback that the people giving it have ‘skin in the game’. Most people don’t enjoy discouraging others they have social ties with, so reviewers without sufficient skin in the game might be tempted not to be as openly negative about proposals as they should be.
Funders, by contrast, can give you a strong signal, though one that is unfortunately somewhat binary and lacks nuance. But someone being willing to fund something or not is a much stronger signal of the value of a proposal than comments from friends on a Google Doc. This is especially true if people proposing ideas don’t take into account how hard it is to discourage people and don’t interpret feedback in that light.
>> EA jobs, unlike many other jobs, do not compare very well to other kinds of work experience,
I’m pretty sceptical of this claim (not just made here, but also made in many other posts). I think this might be true for some roles, like the Research Analyst positions at the Open Philanthropy Project, which combine academic research with grantmaking in a way that is unusual in the wider job market.
But I don’t see why e.g. operations at an average EA organisation would not compare well to other kinds of work experience in operations. I’m happy to hear counterarguments to this.
The underlying crux here might be that I’m generally wary of any claims of ‘EA exceptionalism’.
This list seems roughly reasonable. What most stands out to me is that your suggestions are extremely time-consuming, especially in aggregate. The hours applicants to jobs at EA organisations spend on timed work tests and honing their CVs pale in comparison.
I also think your suggestions are applicable to some other fields which might be of interest to people who are trying to have a high impact. It is not unusual for desirable roles in e.g. international development to require hundreds to thousands of hours of investment.
However, if people are investing those thousands of hours into learning about EA, they will not spend them on international development or nuclear security.
While people following your suggestions might benefit individually, the movement as a whole, and the world, might be worse off.
(Fund manager of the EA Meta Fund here)
For our last distribution, we ran an application round for the first time. I conducted the initial investigations, which I communicated to the committee. Previous grantees had all come through our personal networks.
Things we learnt during our application round:
i) We got significantly fewer applications than we expected and could have spent more time vetting projects; vetting was not a bottleneck. After some investigation through personal outreach, I have the impression that not many projects are being started in the Meta space (this is different for other funding spaces).
ii) We were able to fund a decent fraction of the applications we received (25%?). For about half of the applications I was reasonably confident that they did not meet the bar, so I did not investigate further. The remaining quarter felt borderline to me; I often still investigated, but the results confirmed my initial impression.
My current impression of the Meta space is that we are not vetting-constrained, but rather constrained on mentoring and proactive outreach. One thing we want to do in the future is run a request-for-proposals process.
This isn’t really comparing like with like, however: in one case you’re doing cold outreach and in the others there are established application processes. It might make more sense to compare the demand for researcher positions with e.g. Toby Ord’s Research Assistant position.
But if your point is that people should be more willing to do cold outreach for research assistant positions like you did, that seems fair.
>> many candidates treated the process like a 2-way application the whole way through the process. This threw off my intuitions and normally I would have dropped all candidates who weren’t signalling they were specifically very excited about my role. First call excluded.
I wonder whether this is just a result of people on both sides of the application process knowing each other in a social context.
If the candidate knows they will interact with people making the hiring decision in the future, they might not want them to feel bad about rejecting them. The people making the hiring decision might arguably feel less bad about not hiring someone if the candidate wasn’t that excited. Lack of excitement also allows the candidate to save face if they get rejected, which also only matters because the candidate and the person making the hiring decision might interact socially in the future.
I don’t really agree with your second and third points. Seeing this problem and responding by trying to create more ‘capital letter EA jobs’ strikes me as continuing to pursue a failing strategy.
What (in my opinion) the EA Community needs is to get away from this idea of channelling all committed people to a few organisations—the community is growing faster* than the organisations, and those numbers are unlikely to add up in the medium term.
Committing all our people to a few organisations seriously limits our impact in the long run. There are plenty of opportunities to have a large impact out there—we just need to appreciate them and pursue them. One thing I would like to see is stronger profession-specific networks in EA.
It’s catastrophic that new and long-term EAs now consider their main EA activity to be applying for the same few jobs instead of trying to increase their donations or investing in promising careers outside ‘capital letter EA’.
But this is hardly surprising given past messaging. The only reason EA organisations can get away with hiring rounds that are very costly for applicants is that there are a lot of strongly committed people out there willing to take on that cost. Organisations cannot get away with this in most of the for-profit sector.
*Though this might be slowing down somewhat, perhaps because of this ‘being an EA means applying unsuccessfully for the same few jobs’ phenomenon.
I really appreciate you writing this. You are not the first person to consider doing so and I applaud you for actually doing it.
I’m a fund manager for the EA Meta Fund. Your assessment in your post is incorrect—we are also open to individual grant applications, though applications for the February distribution have now closed. I’d expect them to open again in a couple of months.
I’m curious how you got the impression that we aren’t open to applications. It’s important to us that we are able to reach all interested individuals so any insight into where we may have failed to communicate that is useful to us.