Thanks for this lovely post. I have SO many reasons to love effective altruism, here’s one (maybe I’ll write more later):
Truthseekingness
I’ve been seeking out truthseeking communities all my life, but they all fell short of my goals until I found EA. Some examples:
I studied particle physics—what could be more truthseeky than trying to find the fundamental nature of the universe? Back then, a bunch of particle physicists claimed to believe in a class of theories called “supersymmetry,” and I never understood why: there was no evidence for it, and I never really grokked why people found the theoretical arguments so compelling. At the time I just thought I wasn’t intelligent or knowledgeable enough to get it, but I might have undersold myself. The Large Hadron Collider has since ruled out all of the then most popular versions of supersymmetry, and it isn’t cool anymore. I think there might have been some sort of shared delusion because people wanted it to be true, partly because it was testable by the particle accelerator that was then under construction.
When I was a science teacher it used to drive me BONKERS that students were taught (and required to regurgitate in national exams) incorrect force diagrams. (E.g. for a car accelerating on a flat, level road, students were required to draw the “friction” arrow pointing backwards—whereas in reality, friction on the driving wheels points forward, and without it the wheels would just spin and the car wouldn’t go anywhere.) I get the need for simplification as much as the next guy, but you can’t tell me that “point the arrow in the exact opposite direction” counts as simplification. I talked to a bunch of teachers, the national qualifications authority, and the ministry of education, and I created draft alternative (and equally simple) resources for them to review. But no one else seemed to care at all about whether what we were teaching was accurate—or at least didn’t care enough to do anything about it.
But EA seems to be very different—this community seems to be unusually good at seeking the truth, even (or especially?) when it is inconvenient, scary, or even shameful. One of the first EA talks I went to blew my mind by questioning whether we are currently wasting our donations by doing and then undoing good. Then I read GiveWell’s noodling on whether or not some of their (then) top charities are likely to have no impact, and discussions on whether becoming vegetarian increases animal suffering. More recently I’ve seen commentary about whether our community has accelerated dangerous AI capabilities, or whether it contributed to an environment that led to multi-billion dollar fraud. We should take these possible negatives very seriously. The fact that we do take them seriously, and that we continue to try to get better at truthseeking, gives me a whole bunch of hope.