Hey Jordan! Great to see another USC person here. The best writing advice I’ve gotten (that I have yet to implement) is to identify a theory of change for each potential piece—something to keep in mind!
6 sounds interesting, if you can make a strong case for it. Aligning humans isn’t an easy task (as most parents, employers, governments, and activists know very well), so I’m curious to hear if you have tractable proposals.
7 sounds important given that a decent number of EAs are vegan, and I’m quite surprised I haven’t heard of this before. 15 IQ points is a whole standard deviation, so I’d love to see the evidence for that.
8 might be interesting. I suspect most people are already aware of groupthink, but it could be good to be aware of other relevant phenomena that might not be as widely-known (if there are any).
From what I can tell, 11 proposes a somewhat major reconsideration of how we should approach improving the long-term future. If you have a good argument, I’m always in favor of more people challenging the EA community’s current approach. I’m interested in 21 for the same reason.
(In my experience, the answer to 19 is no, probably because there isn’t a clear, easy-to-calculate metric to use for longtermist projects in the way that GiveWell uses cost-effectiveness estimates.)
Out of all of these, I think you could whip up a draft post for 7 pretty quickly, and I’d be interested to read it!
Dang, yeah, I did a quick search on creatine and the IQ number right before writing this post, but now it’s looking like that source was not credible. I’d have to research more to see if I can find a reliable measure of creatine’s cognitive benefits; it seems it at least has a significant impact on memory. Anecdotally, I noticed quite a difference when I took a number of supplements while vegan, and I know there’s some research on the cognitive effects of various nutrients that vegans tend to lack. Will do a short post on it sometime!
I think human alignment is incredibly difficult, but too important to ignore. I’ve thought about it for a very long time, so I do have some very ambitious ideas that could feasibly start small and scale up.
Yes! I have been very surprised since joining by how narrowly focused longtermism is. If the community is right about AGI arriving within a few decades with a fast takeoff, then broad longtermism may be less appealing, but if there is any doubt about this, then we are massively underinvested in broad longtermism and putting all our eggs in one basket, so to speak. Will definitely write more about this!
Right, it definitely wouldn’t be exactly analogous to GiveWell, but I nonetheless think it’s important to have SOME way of comparing longtermist projects, so we know what a good investment looks like.
Thanks again for all the feedback Aman! Really appreciate it (and everything else you do for the USC group!!) and really excited to write more on some of these topics :)