I did a sort of version of this for many years. Eventually it became a huge amount of somewhat painful work, and it was never exactly clear to me how many people it was helping; it got a lot of karma, but so did a lot of much lower-effort posts, and I didn’t have many other feedback mechanisms.
Wow, Larks, you really did a thorough and impressive roundup there. If anyone is interested, you can check out his 2021 review here:
https://forum.effectivealtruism.org/posts/BNQMyWGCNWDdP2WyG/2021-ai-alignment-literature-review-and-charity-comparison
Thanks!
I really appreciated your assessments of the alignment space, and would be open to paying out a retroactive bounty and/or commissioning reports for 2022 and 2023! Happy to chat via DM or email (austin@manifund.org).
Were there donors who said that they benefitted from your work and/or made specific decisions based on it?
Some did, yes, sometimes years after the event, but generally without quantification of the impact. The highlight was probably Mustafa Suleyman mentioning it, though I only learned of this a long time later, and I’m not aware of any specific actions he took as a result.
Congrats!