I’m glad you found my post insightful! Regarding time, I would recommend working through the process with iterative depth. First, outline the points that seem most valuable to investigate based on your goals, your uncertainty, and any upcoming moral decisions. Then, work through the project repeatedly, starting at a very shallow level of depth and repeating at greater depth as needed. Between rounds, you could also re-prioritize based on shifting uncertainties and decision relevance.
I don’t actually think there is much object-level knowledge required to engage with this project. If anything, I imagine that developing object-level knowledge of EA topics would be more fulfilling after developing a more refined moral framework.