[Update from Pablo & Matthew]
As we reached the one-year mark of Future Matters, we thought it a good moment to pause and reflect on the project. While the newsletter has been a rewarding undertaking, we’ve decided to stop publication in order to dedicate our time to new projects. Overall, we feel that launching Future Matters was a worthwhile experiment, which met (but did not surpass) our expectations. Below we provide some statistics and reflections.
Statistics
Aggregated across platforms, each issue received between 1,000 and 1,800 impressions. Over time, a growing share came from Substack, reflecting our subscriber growth on that platform and the absence of an equivalent subscription service on the EA Forum.
A substantial fraction of our subscriptions came via other EA newsletters.
Reflections
Time investment. Writing the newsletter took considerably more time than we had anticipated. Much of that time went into two activities: (1) actively scanning Twitter lists, EA News, email alerts, and other sources for suitable content, and (2) reading and summarizing this material. The publication process itself was also fairly time-consuming, but we were able to delegate it fully to a very efficient and reliable assistant. Overall, we each spent at least two to three days working on each issue.
AI stuff. Over the course of the year, AI-related content began to dwarf other topics, to the point where Future Matters became mostly AI-focused.
We feel that this shift in priorities was warranted: the recent pace of AI progress has been staggering, as has the recklessness of certain AI labs. All the more surprising has been the receptiveness of the public and the media to taking AI risk concerns seriously (e.g. the momentum behind measures to slow down AI progress).
In this context, it appears to us that a newsletter covering longtermism and existential risk in general is less valuable than it was when we started, relative to a newsletter focused solely on AI risk. But we don't think we're the best people to run such a newsletter. There are already a number of good, active AI newsletters, each with its own focus:
Import AI (Jack Clark)
AI Safety Newsletter (CAIS/Dan Hendrycks)
ChinAI (Jeffrey Ding)
Navigating AI Risks (Campos et al.)
Recent progress in AI has made us more reluctant to continue investing time in this project for a separate reason. Much of the work Future Matters demands, as noted earlier, consists of collecting and summarizing content. These are tasks that GPT-4 can already do tolerably well, and which we expect could be mostly delegated within the next few months.