I’ve written a bunch of stuff on this recently, so in that sense I’m biased. But my suggestions have generally been:
More transparency from core EA orgs (GiveWell seems to set a good benchmark here, and the charities it recommends even more so)
More division of responsibility among EA orgs (i.e. more orgs each doing fewer things) - especially, but not exclusively, having separate orgs for EA PR and special ops
More carrots and/or more sticks to incentivise staff at EA orgs to perform their jobs adequately
Less long-term reliance on heuristic reasoning (e.g. ITN, the orthogonality thesis, existential risk, single-author assessments of particular events)