I think the problem here is that novel approaches are substantially more likely to be failures, simply because they are untested and unproven. This isn’t a big deal in areas where you can try lots of things and sift through the results, but in something like an election you only get feedback about once a year or so. Worse, the feedback is extremely murky, so you don’t know whether it was your intervention or something else that produced the outcome you care about.
Also, failures from trying really outlandish things, like bribing Congresspeople to endorse Jim Mattis as a centrist candidate in the 2024 US Presidential Election, are likely to backfire far more spectacularly than failures in (say) providing malaria nets to a region where malaria is already falling, or losing a court case against a factory-farming conglomerate. That said, this criticism does apply to some other things EAs are interested in, particularly actions purportedly addressing x-risks.
If each election is a rare and special opportunity to collect a bit of data, that makes it even more important to use that data-collection opportunity effectively.
Since we are looking for approaches that are unusually tractable, an approach whose effectiveness looks extremely murky is probably not what we wanted.