I feel that saying X is overrated/underrated is often a lazy way for people (including me sometimes) to raise or lower X's status without making the effort to state their position on X concretely (which would open them up to more criticism and might require introspection and more careful reasoning rather than purely evaluating vibes).
Do you think it has ended up having a net positive impact so far?
I’m not sure what would be the best thing to point to since I don’t remember there being a particular post about this. However, he talks about it in his book review of Going Infinite, and I also like his post “Altruism is Incomplete”. Lots of people I know find his writing confusing, though, and it’s not like he’s rigorously arguing for something. When I agree with Zvi, it’s usually because I have had that belief in the back of my mind for a while and his pointing it out makes it more salient, rather than because I was convinced by a particular argument he was making.
This post got some flak, and I am not sure whether it actually led to more EAs seriously considering engaging with the Sequences. However, I stand by the recommendation even more strongly now. If I were in a position to give reading recommendations to smart young people who wanted to do big, impactful things, I would recommend the Sequences (or HPMOR) over any of the EA writing.
Strong upvoted and couldn’t decide whether to disagree-vote or not. I agree with the points you list under meta-uncertainty, and with your point on naively using calibration as a proxy for forecasting ability and on thinking you can bet on the end of the world by borrowing money. I disagree with your thoughts on ethics (I’m sympathetic to Zvi’s writing on EAs confusing the map for the territory).
I’m also skeptical of the intellectual benefits coming directly from gender diversity.
However, I think one pretty plausible way it could happen is that women tend to specialise in different fields from men (more women in life sciences, biology, and psychology as opposed to computer science), and maybe the benefits result from the diversity of expertise across fields. E.g., PIBBS seems to have greater diversity in its fellows than other programmes, and it seems like a good idea for it to exist.
Some folks explicitly prefer a world in which a lower proportion of the money spent on EA-ish projects came from Open Philanthropy, even if overall donations were the same. That seems like a sensible preference.
Have your thoughts on earning to give vs direct work changed (including the specific numbers) since the linked post from summer 2022?
I think another example of the jerking-people-around thing could be the vibes from summer 2021 to summer 2022 that if you weren’t exceptionally technically competent and didn’t have the skills to work on object-level stuff, you should do full-time community building, like helping run university EA groups. And then that idea lost steam this year.
It would be cool to have someone with experience in startups who also knows a decent amount about EA because many insights from running a successful startup might apply to people working to ambitiously solve neglected and important problems. Maybe Patrick Collison?
Links to the application form don’t seem to work?
Another newsletter(?) that I quite like is Zvi’s.
What disagreements do the LTFF fund managers tend to have with each other about what’s worth funding?
What projects to reduce existential risk that don’t already exist would you be excited to see someone work on (provided they were capable enough)?
I’ve also heard people doing SERI MATS, for example, explicitly talk/joke about this: how they’d have to work in AI capabilities now if they don’t get AI safety jobs.
When people do this, do you think they mostly want someone with more skills or knowledge, or someone with better, more prestigious credentials?
Yeah, same. I know of recent university graduates interested in AI safety who are applying for jobs in AI capabilities alongside AI safety jobs.
It makes me think that what matters more is changing the broader environment to care more about AI existential risk (via better arguments, more safety orgs focused on useful research/policy directions, better resources for existing ML engineers who want to learn about it, etc.) rather than specifically convincing individual students to shift to caring about it.
I would be surprised if the true number were as low as 1:20 or even 1:10. I wish there were more data on this, though it seems a bit difficult to collect since, at least for university groups, most of the impact (on both capabilities and safety) will occur a few or more years after students start engaging with the group.
I also think it depends a lot on what the best opportunities available to them are: what opportunities to work on AI safety, versus AI capabilities, will exist in the near future for people with their aptitudes.
I would love to see people debate the question of how difficult AI alignment really is. This has been argued before, for example in the MIRI conversations and other places, but more content would still be helpful for people like me who are uncertain about the question. Also, at the EAG events I went to, it felt like there was more content from people with more optimistic views on alignment, so it would be cool to see the other side.
It’s not necessarily a loss of a million pounds if the organisers of many of the events held there would otherwise have spent money to run them elsewhere (renting event spaces and accommodation for attendees can get quite pricey) and would have spent additional time organising the events, finding venues, setting them up, etc. (compared to having them at Wytham).
For comparison, EA Global events cost in the ballpark of a million pounds per event.