Strong upvote for publishing this summary. Reading it, I feel like I have a good sense of the program’s timeline, logistics, and results. I also really appreciated the metrics being split up by “success level” and “importance”—a lot of progress updates leave out the second of those, which makes them much less useful.
Sounds like any future project meant to teach EA values to high-school students will have to deal with the measurement problem (e.g. “high school students are busy and will often flake on non-high-school things”). Maybe some kind of small reward attached to surveys? At $10/person, that seems affordable for 380 students given the scale of the program, though it might make social desirability bias even stronger.
Thanks Aaron. Measurement problems were a big issue. We experimented with incentives a bit: students who completed the post-program survey were entered into a random drawing, and those selected would receive money to give to a charity of their choice. That didn’t seem to make a difference, though, or at least we weren’t in a position to offer a large enough incentive to make a noticeable one.
The other measurement problem we ran into was that, given the age of workshop participants, in most cases we won’t be able to measure actionable impact for another ~5 years.