Strong upvote for publishing this summary. Reading it, I feel like I have a good sense of the program's timeline, logistics, and results. I also really appreciated the splitting up of metrics by "success level" and "importance"; a lot of progress updates don't include the second of those, making them a lot less useful.
Sounds like any future project meant to teach EA values to high-school students will have to deal with the measurement problem (e.g. "high school students are busy and will often flake on non-high-school things"). Maybe some kind of small reward attached to surveys? At $10/person, that seems affordable for 380 students given the scale of the program, though it might make social desirability bias even stronger.
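For scale, a quick back-of-the-envelope check of that cost, assuming one $10 reward per student across all 380 students mentioned above:

\[ 380 \text{ students} \times \$10/\text{student} = \$3{,}800 \text{ total} \]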
Thanks Aaron. Measurement problems were a big issue. We experimented with incentives a bit, in particular randomly selecting students who completed the post-program survey to receive money to give to a charity of their choice, but that didn't seem to make a difference; or at least we weren't in a position to offer a large enough incentive to make a noticeable difference.
The other measurement problem we ran into was that, given the age of workshop participants, in most cases we wouldn't be able to measure actionable impact for another ~5 years.
I think this illustrates a harmful double standard. Let me substitute a different cause area in your statement: "Sounds like any future project meant to reduce x-risk will have to deal with the measurement problem".
I think that X-risk reduction projects also have a problem with measurement!
However, measuring the extent to which you've reduced X-risk is a lot harder than measuring whether students have taken some kind of altruistic action: in the latter case, you can just ask the students (and maybe give them an incentive to reply).
Thus, if someone wants me to donate to their "EA education project", I'm probably going to care more about direct outcome measurement than I would if I were asked to support an X-risk project, because I think good measurement is more achievable. (I'd hold the X-risk project to other standards, some of which wouldn't apply to an education project.)