Strong upvote for publishing this summary. Reading it, I feel like I have a good sense of the program’s timeline, logistics, and results. I also really appreciated the splitting up of metrics by “success level” and “importance”—a lot of progress updates don’t include the second of those, making them a lot less useful.
Sounds like any future project meant to teach EA values to high-school students will have to deal with the measurement problem (e.g. “high school students are busy and will often flake on non-high-school things”). Maybe some kind of small reward attached to surveys? At $10/person, that would be $3,800 for 380 students, which seems affordable given the scale of the program, though it might make social desirability bias even stronger.
Thanks Aaron. Measurement problems were a big issue. We experimented with incentives a bit, in particular offering to randomly select students who completed the post-program survey, with those selected receiving money to give to a charity of their choice. That didn’t seem to make a difference, or at least we weren’t in a position to offer a large enough incentive to make a noticeable difference.
The other measurement problem that we ran into was knowing that, given the age of workshop participants, in most cases we wouldn’t be able to measure actionable impact for another ~5 years.
I think this illustrates a harmful double standard. Let me substitute a different cause area in your statement: “Sounds like any future project meant to reduce x-risk will have to deal with the measurement problem.”
I think that X-risk reduction projects also have a problem with measurement!
However, measuring the extent to which you’ve reduced X-risk is a lot harder than measuring whether students have taken some kind of altruistic action: in the latter case, you can just ask the students (and maybe give them an incentive to reply).
Thus, if someone wants me to donate to their “EA education project”, I’m probably going to care more about direct outcome measurement than I would if I were asked to support an X-risk project, because I think good measurement is more achievable. (I’d hold the X-risk project to other standards, some of which wouldn’t apply to an education project.)