I think that EA, to the extent it’s a coherent entity, has a pretty good mental map of who its potential collaborators are. I expect that mental map is more detailed than one I could knock up off the top of my head, so it probably won’t be very useful for me to speculate on EA’s behalf. I’ll try anyway. A large part of the task, for anyone trying to do anything in this space, is convincing governments to take their thing seriously, e.g. by setting up wastewater monitoring. Particularly when it comes to pandemics, there’s probably lots of useful stuff that just hasn’t quite been invented yet. Far-UVC, for instance, is a relatively recent innovation. Perhaps there are people in university science departments sitting on similarly good ideas.
As for EA x Progress, there’s certainly overlap. Of course there are lots of different kinds of EA and lots of different kinds of progress-head. Both groups think of problems in fairly numerical terms. Both are animated by big ideas. I think there’s probably some EA-flavoured work to be done on how good it’d be for the world if the West got its act together in economic terms. I also think there’s a lot of overlap in terms of interest in AI, and that, between them, there’s a plausible EA-Progress double act that tries to make AI go well without going badly, so to speak.