This is an incredibly beautiful piece, and I think the EA Forum would benefit from many more pieces in this style.
I live in a place and spend time with people who are very aligned with you (close to the birthplace of the modern environmental movement, direct action, etc.).
Here’s a picture from a retreat where we got up to meditate in the cold morning and did other activities like replanting and moving mycelium.
However, I am concerned about some of the ideas and content in this piece:
Such case studies might lead us to re-examine the origins of anthropogenic risk to our long-term flourishing. Longtermists must think more thoroughly long-term. We must consider whether what looks like an arc of progress over decades could turn out to have been a mere depletion of resilience, and thus a shift of the risk distribution. Improvements in quantity of anything must come from somewhere, and we had better make sure they don’t derive from the buffers we need in tougher times.
and
Indeed, fungal applications will in most cases likely integrate more safely with our civilization than artificial machinery we build from scratch.
Imagine if little pieces of artificial intelligence were lying around on the ground. A little concept formation here, a little continual learning and causal cognition there…why would we not pick it up? We may not need AI as urgently as we think. Fungal metabolic functions can already solve a number of our problems. Our conception of technology is inconsistent. We describe the human brain as a complex computer and our bodies as sophisticated machines. The logical conclusion is that non-human life forms, too, are little machines. We should protect their functionality with the same eagerness with which we aggressively pursue artificial intelligence to be built and to solve all our problems.
I think the thinking behind these passages touches on issues I am worried about:
AI and Longtermism (in the specific sense of accounting for future populations with low or zero discounting) are precious cause areas/worldviews.
However, the current, apparently intense interest in them seems to involve messaging and rhetoric that can create structural problems.
(To be more specific, communication and cultural issues create noise that disrupts activity and involvement even among informed and involved people; for example, everyone, including informed EAs, jockeys to tag their cause with “Longtermism”, which has knock-on effects on discussion and Truth.)
Maybe communicating various topics better would help. Here are some candidates:
A description of prosaic AI that emphasizes it is essentially scaled-up pattern matching (see the GPT series), along with information on what non-prosaic AI is and is not.
Presentations of Longtermism that avoid rhetoric or results that lend themselves to overly broad interpretations, especially since the concept is neither that hard to understand nor that new.
More communication about the gritty operational and implementation details of projects related to Longtermism.
The obvious benefit is making the cause more accessible, but there are other benefits:
“Customer service” is a critical but underappreciated form of leadership in many communities, and I think the above would make this activity more feasible and productive. It would also reduce the “distance” between public discussions and actual gears-level work (e.g., MIRI papers), and help people “taste the soup” of the actual operational and implementation details of Longtermist projects.
Hi Charles, I’m afraid I didn’t really understand your comment. These are pretty pictures though!