I’ll take a crack at some of these.
On 3, I basically don’t think this matters. I hadn’t considered it, largely because it seems super irrelevant. What matters far more is whether any individuals shouldn’t be there, or whether some individuals who should be there aren’t. AFAICT without much digging, they all seem to be doing a fine job, and I don’t see the need for a man or POC specifically, though feel free to point out a reason. I think nearly nobody has a problem to report and then, upon finding out they would be reporting to a white woman, feels they can no longer do so.
On 4, this is a risk with basically all nonprofit organizations. Do we feel AI safety organizations are exaggerating the problem? How about SWP? Do you think they exaggerate the number of shrimp, or how likely shrimp are to be sentient? How about GiveWell? Should we be concerned about their cost-effectiveness analyses? It’s always a question worth asking, but usually a concern would come with something more concrete, like a statistic. For example, the charity Will MacAskill talks about in the UK that helps a certain kind of Englishperson who is statistically ahead (though I can’t remember if this is Scots or Irishmen or another group).
On 7, university groups are limited in resources. Very limited. Organizing is almost always done part-time while managing a full-time courseload and working on one’s own development, among other things. So groups focus on their one comparative advantage, recruitment (since it would be difficult for others to do that), and outsource the training to other places (80k, MATS, etc.).
On 10, good point; I’d like to see some movement within EA to increase the intensity.
On 11, another good point. I’d love to read more about this.
On 12, another good point, but this is somewhat how networks work, unfortunately. There are just so many incentives for hubs to emerge and then accumulate a lot of gravity. It kinda started in the Bay Area, and for individual actors it nearly always makes sense to move there, which creates a feedback loop.