A bunch of disorganized thoughts related to this post:
Fast growth still does lots of good, especially if you have short AI timelines. But if the current growth policy brings lots of adverse selection, the optimal policy might shift to doubling the number of top AI safety researchers every 18 months rather than doubling the number of HEAs every 12 months.
I think more potential top people are put off by EA groups having little overlap with their other interests than are suspicious of EA being manipulative. This can be mitigated by focusing more on the object level, e.g. discussing concrete problems in alignment, altpro, policy, or whatever.
People are commonly made uncomfortable by community-builders visibly optimizing against them. But we have to optimize. I think the solution here is to create boundaries so you’re not optimizing against people. When talking about career changes, it’s good to help the person preserve optionality so they’re not stuck in an EA career path with little career capital elsewhere. I’ve also found it helpful to approach 1-1s with the frame “I’ll help you optimize for your values”.
The “Scaling makes them worse” section implies a tension between two causes of epistemic harm: less variation in culture makes EA more insular, but more variation produces a selection effect where the faster-growing groups might have worse epistemics.
I think pushing people into donations / GWWC pledges by default is a pretty obvious mistake: pledges can be harmful and have limited impact anyway.
| I think the solution here is to create boundaries so you’re not optimizing against people.
I prefer 80,000 Hours’ ‘plan changes’ metric to the ‘HEA’ one for this reason (if I’ve understood you correctly).