While we’re taking a short break from writing criticisms, I (the non-technical author) was wondering whether people would find it valuable for us to share some (brief) thoughts on what we’ve learnt so far from writing these first two critiques, such as how to get feedback, how to balance considerations, anonymity concerns, and things we wish were different in the ecosystem to make it easier for people to provide criticisms.
Especially keen to write for the audience of those who want to write critiques themselves.
Keen to hear what specific things (if any) people would be curious to hear about.
We’re always open to providing thoughts / feedback / input if you are trying to write a critique. I’d like to try to encourage more good-faith critiques that enable productive discourse.
Hi Omega, I’d be especially interested to hear your thoughts on Apollo Research, as we (Manifund) are currently deciding how to move forward with a funding request from them. Unlike the other orgs you’ve critiqued, Apollo is very new and hasn’t received the requisite >$10m, but it’s easy to imagine them becoming a major TAIS lab over the next years!
I’d be interested to read about what you’ve learnt so far from writing these critiques.
I love this series and I’m sorry to see that you haven’t continued it. The rapid growth of AI Safety organizations and the amount of insider information and conflicts of interest are kind of mind-boggling. There should be more of this type of informed reporting, not less.