Here’s a relevant set of estimates from a couple of years ago, which includes a Guesstimate model you might enjoy. Your numbers seem roughly consistent with theirs. They were making a broader argument:

“1. EA safety is small, even relative to a single academic subfield.
2. There is overlap between capabilities and short-term safety work.
3. There is overlap between short-term safety work and long-term safety work.
4. So AI safety is less neglected than the opening quotes imply.
5. Also, on present trends, there’s a good chance that academia will do more safety over time, eventually dwarfing the contribution of EA.”