I am naturally an angsty person, and I don’t carry much reputational risk
Relatable! Although you’re anonymous, whereas I just have ADD.
Point 1 is interesting to me:
longtermist/AI safety orgs may require a diverse ecosystem of groups pursuing different approaches. This would mean the “current state of under-funded-ness” is in flux and uncertain, leaning towards “some lesser-known group(s) need money”.
lots of smaller donations could signal interest from many people, which might help inform evaluators or larger donors.
Another point: since I think funding won’t be the bottleneck in the near future, I’ve rebalanced my career somewhat towards direct research.
(Also, partly inspired by your “Irony of Longtermism” post, I’m interested in intelligence enhancement for existing human adults, since the shorter timelines don’t leave room for embryo whatevers, and intelligence would help in any timeline.)