Minor, personal (non-CEA) take that isn’t really core to your post — I’m actually somewhat critical of the microCOVID project and don’t see it as a great example of EA in action. As I understand it, this involved people managing their personal risk exposure to a virus that at the time was considered not to be that dangerous to young and healthy people, rather than working on x-risk or another EA priority. While cool and interesting, it seems not that different from building an app for managing one’s exercise routine, for example.
(1) I do want to note that I don’t think one should evaluate microCOVID as though it were an altruistic action selected by a prioritization process! It was more like a selfish and friend-oriented project, which we made scalable enough to unlock big positive externalities in our broader community and beyond. The first version of the system was purely to save my own group house! (That said, I do think it’s well-described as a project in the spirit of rationality, and a good example of rationality in action.)
I think that microCOVID probably looks pretty good on EA grounds just via saving a bunch of EAs a bunch of time worrying about what their COVID policies should be. But I like your point.
a virus that at the time was considered to not be that dangerous to young and healthy people
I don’t think this was true at the time. Not even for a young, healthy person only caring about themselves.
But I think the following categories probably cover 30-90% of participants in EA events anyway:
People who have a health issue that makes them more susceptible
People who aren’t that young
People who are close to someone in one of the above categories
People who don’t want to catch the disease because they know there’s a high chance they’d infect other people
I also don’t think an efficient tool to help EAs with personal exercise would be a bad thing.
In the end, you have a community of EAs, and you want people who’ll do impactful work to come from that community. If the community is unhealthy, fewer people would join, and of those who do join, fewer would be able to make an impactful contribution. Community health is a meta cause, but that doesn’t make it inferior to “direct work” cause areas.
Edit: with all of this said, I don’t actually mean this as an endorsement of microCOVID. I didn’t find it very useful, both because I wasn’t sure what meaning I should really assign to the numbers, and because I didn’t have much choice over my environment at EA Global.
(2) Much more cantankerously: “If your dear friends are suffering deeply, and you have created a system that can help, do not spend any spare time (outside your EA day job) on teaching them the system. You need to spend that spare time on altruistic things, not your friends’ suffering. Certainly do not allow any motivated non-EA friends to spend THEIR time helping you build a scalable version for your entire community” is not a moral system or set of community norms I can get behind.
It helped me know how much I should change my behaviour in a global pandemic, which I think was pretty beneficial to me and many other EAs. I can believe it saved a lot of EA time.
I like this take. It seems to fit into a broader discussion of how rigorously we should try to line up actions with principles, e.g. going vegan, not flying for climate reasons, or more extreme things like going zero waste.