This is super insightful and sounds highly valuable to do in order to make decisions with greater confidence. Thank you! I am wondering what people’s thoughts on the following are:
Is there a rough estimate of how much time should be spent on such an evaluation, and how many topics of exploration to choose (without being overly ambitious or losing track of the ultimate goal)? I assume that, given the ‘Lessons learned’, there is a chance that the ~40 hours @Evan LaForge spent on 12 individual points/questions might not be what could be recommended…
Note: I appreciate that this is highly dependent on the individual, and I understand if it is hard to give a specific answer, or one at all :)
Could it be the case that such a moral evaluation process is only useful once one has sufficient object-level knowledge of a wider range of topics relevant to EA and high-impact choices? Maybe there is even a minimum set of things/courses/readings one should have completed before such a project can be done successfully and effectively.
I’m glad you found my post insightful! Regarding time, I would probably recommend going through the process with iterative depth. First, outline the points that seem most valuable to investigate based on your goals, your uncertainty, and any upcoming moral decisions. Then, work through the project repeatedly, starting at a very low level of depth and returning at greater depth as needed. Between each round, you could also re-prioritize based on changing uncertainties and decision relevance.
I don’t actually think there is much object-level knowledge required to engage with this project. If anything, I imagine that developing object-level knowledge of EA topics would be more fulfilling after developing a more refined moral framework.