I think a common pitfall of being part of groups which appear to have better epistemics than most others (e.g. EA, LW) is that membership implicitly gives you a feeling that you can let your [epistemic] guard down (e.g. to defer).
I've noticed this in myself recently; identifying (whether consciously or not) as more intelligent/rational than the average Joe is actually a surefire way for me to end up not thinking as clearly as I otherwise would. (This is obvious in retrospect, but I think it's pretty important to keep in mind.)
I agree with a lot of what you said, and have had similar concerns. I appreciate you writing this and making it public!
I was also wondering this.
Huge! I am so excited to see this announcement, and wish AoI the best of luck.
Not sure what the motivation behind this post is; it would be good for you to clarify.
I think the question isn't framed very well, since a love letter doesn't make a person happy for their entire life. Clearly more QALYs or WELLBYs or whatever are preserved by running over the letters.
I like the general idea of this post, and I think this idea/program is something worth experimenting with. Would love to see how it goes, if you decide to run it. That being said, I have a couple of thoughts.
Weak criticism (I expect there is probably a good rebuttal to this): We might want some selection for students who have already determined that they want to make "doing good" a large part of their life. Maybe these students are more conscientious than the average individual, and this is an early signal that they are people who self-reflect more on their values/thoughts/beliefs. This could mean that they will perform better in careers that take a lot of critical thinking and/or careful moral reasoning. That's not to say these kinds of thinking skills cannot be learned, but they may be picked up faster and performed better by students who already exhibit some level of personal reflection at a younger age.
Stronger criticism: If students have not internalized that doing good matters to them, and would therefore not want to join the Intro EA Program, I strongly suspect they will also not be interested in a 5-week program about their purpose and life planning. My main concern here is that outreach will be difficult (but it's easy to prove me wrong empirically, so feel free to go out and do it!).
A final thought on framing/overstepping: If I were a student first hearing about this program, I think I would be a little bit suspicious of the underlying motives. From a surface-level impression, I would think that the goal of the program was for me to "find my purpose"… then I would look more into who is running the program and ask myself "who are these EA people, and why do they care about my purpose?", after which I would quickly find out that they want me to join their organization.
The main concern I want to bring up is that this program could easily turn into a sort of bait-and-switch. Finding one's purpose and life goals is a very individual process, and I wouldn't want this process to be "hijacked" by an EA program that directs people in a very specific direction. I.e. my concern is that the program will be presented as if it is encouraging people to find their values and purpose, when in reality it's just trying to incept them into following an EA career path.
Not sure if/how this can be avoided, except by being really up-front with applicants about the motivation of the program. I also might be misunderstanding something, so feel free to correct me.
You mention a few times that EV calculations are susceptible to motivated reasoning. But this conflicts with my understanding, which is that EV calculations are useful partly (largely) because they help prevent motivated reasoning from guiding our decisions too heavily.
(E.g. you can imagine a situation where charity Y performs an intervention that is more cost-effective than charity X. By following an EV calculation, one might switch their donation from charity X to charity Y, even though charity X sounds intuitively better; a rough sketch of this kind of comparison is below.)
Maybe you could include some examples/citations of where you think this "EV motivated reasoning" has occurred. Otherwise I find it hard to believe that EV calculations are worse than the alternative, from a "susceptible-to-motivated-reasoning" perspective (here, the alternative is not using EV calculations).
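To make the comparison concrete, here is a minimal sketch of the kind of explicit EV/cost-effectiveness calculation I have in mind. The charities, QALY figures, and costs are all made-up placeholders, not real estimates:

```python
# Toy expected-value comparison between two hypothetical charities.
# Every number below is invented purely for illustration.

def cost_effectiveness(qalys_per_intervention: float, cost_per_intervention: float) -> float:
    """QALYs gained per dollar donated."""
    return qalys_per_intervention / cost_per_intervention

# Charity X: intuitively appealing, but (by assumption) less cost-effective.
charity_x = cost_effectiveness(qalys_per_intervention=2.0, cost_per_intervention=100.0)
# Charity Y: less emotionally salient, but (by assumption) more cost-effective.
charity_y = cost_effectiveness(qalys_per_intervention=5.0, cost_per_intervention=150.0)

donation = 1_000  # dollars
print(f"Charity X: {charity_x * donation:.1f} QALYs per ${donation}")  # 20.0
print(f"Charity Y: {charity_y * donation:.1f} QALYs per ${donation}")  # 33.3
```

Once the numbers are written down, the conclusion (donate to Y) follows mechanically, which is exactly what makes it harder for motivated reasoning to quietly steer the decision.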
+1 here; it looks like this is a vestige from the previous version and should probably be corrected.
This post makes the case that warning shots won't change the policy picture much, but I could imagine a world where some warning shot makes the leading AI labs decide to focus more on safety, or agree to slow down their deployment, without any policy change occurring. Maybe this could buy a couple of years' time for safety researchers?
This isn't a well-developed thought, just something that came to mind while reading.
Thanks for writing this! I think a lot of this is great to keep in mind for university groups.
I especially liked the "free stuff is not actually free" framing. Putting a counterfactual on conference costs can be humbling, and really makes one think carefully about attending… if ~$5,000 could save a life elsewhere (say, generate 80 QALYs), then a $500 reimbursement for a trip to a conference is sacrificing roughly 8 years of healthy life. Not a decision to take lightly!
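For what it's worth, here is that back-of-the-envelope arithmetic spelled out, using the same assumed $5,000-per-life and 80-QALY figures (illustrative numbers, not real cost-effectiveness estimates):

```python
# Back-of-the-envelope counterfactual for conference spending.
# The per-life cost and QALY figures are assumptions for illustration only.
cost_to_save_a_life = 5_000       # dollars (assumed)
qalys_per_life_saved = 80         # healthy life-years (assumed)
conference_reimbursement = 500    # dollars for one trip (assumed)

qalys_forgone = conference_reimbursement / cost_to_save_a_life * qalys_per_life_saved
print(qalys_forgone)  # 8.0 -> roughly 8 healthy life-years forgone
```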
Do you have any more info on these "epistemic infrastructure" projects or the people working on them? I would be super curious to look into this more.