Thanks for writing this up. I’ve often thought about EA in terms of waves (borrowing the idea from feminist theory) but never put fingers to keyboard. It’s hard to do, because there is so much vagueness and so many currents and undercurrents happening. Some bits that seem missing:
You can identify waves within cause areas as well as between cause areas. Within ‘future people’, it seemed to go from x-risks to ‘broad longtermism’ (and I guess it’s now going back to a focus on AI). Within animals, it started with factory-farmed land animals, and now seems to include invertebrates and wild animals. Within ‘present people’, it was objective wellbeing—poverty and physical health—and now is (I think and hope) shifting to subjective wellbeing. (I certainly see HLI’s work as being part of 2nd or 3rd wave EA.)
Another trend is that EA initially seemed to be more pluralistic about what the top cause was (“EA as a question”), and then became more monistic with a push towards longtermism (“EA as an answer”). I’m not sure what the next stage is.
I think surely EA is still pluralistic (“a question”), and I wouldn’t be at all surprised if longtermism gets de-emphasized or modified. (I am uncertain, as I don’t live in a hub city and can’t attend EAG, but as EA expands, new people could have new influence even if EAs in today’s hub cities are getting a little rigid.)
In my fantasy, EAs realize that they missed 50% of all longtermism by focusing entirely on catastrophic risk while ignoring the universe of path dependencies. Consider the humble Qwerty keyboard―impossible to change, right? Well, I’m not on a Qwerty keyboard, but I digress. What if you had the chance to sell keyboards in 1910? There would still have been time to change which keyboard layout became dominant. Or what if you had the chance to prop up the Esperanto movement in its heyday around that time? This is the universe of interventions EAs didn’t notice. The world isn’t calcified in every way yet―if we’re quick, we can still make a difference in some areas. (Btw, before I discovered EA, that was my angle on the software industry, and I still think it’s important and vastly underfunded, as capitalism is misaligned with longtermism.)
In my second fantasy, EAs realize that many of the evils in the world are a byproduct of poor epistemics, so they work on things that either improve society’s epistemics or (more simply) work around the problem.
I like the point of waves within cause areas! Though I suspect there would be a lot of disagreement—e.g. people who kept up with the x-risk approach even as WWOTF was getting a lot of attention.