Okay, I think I understand what you mean. What I meant by “X is value neutral” is something like “The platform FTX is value neutral even if the company FTX is not.” That’s probably not 100% true, but it’s a pretty good example, especially since I’m quite enamoured of FTX at the moment. OpenAI is all murky and fuzzy and opaque to me, so I don’t know what to think about that.
I think your suggestions go in similar directions as some of mine in various answers, e.g., marketing the product mostly to altruistic actors.
Intentional use of jargon is also something I’ve considered, but it comes at heavy costs, so it’s not my first choice.
References to previous EA materials can work, but I find it hard to think of ways to apply that to Squiggle. But certainly some demo models can be EA-related to make it differentially easier and more exciting for EA-like people to learn how to use it.
Lineage, implicit knowledge, and privacy: High costs again. Making a collaborative system secret would have it miss out on many of the benefits. And enforced openness may also help against bad stuff. But the lineage one is a fun idea I hadn’t thought of! :-D
My conclusion mostly hinges on whether runaway growth is unlikely or extremely unlikely. I’m assuming that it is extremely unlikely, so that we’ll always have time to react when things happen that we don’t want.
So the first thing I’m thinking about now is how to notice when things happen that we don’t want – say, through monitoring the referrers of website views, Google alerts, bounties, or somehow creating value in the form of a community so that everyone who uses the software has a strong incentive to engage with that community.
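To make the referrer-monitoring idea a bit more concrete, here’s a minimal sketch of what an automated check could look like. The log format (tab-separated timestamp and referrer URL) and the allowlist of known domains are made up for illustration; a real setup would read whatever the web server actually emits.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of referrer domains we already know and expect.
KNOWN_REFERRERS = {"forum.effectivealtruism.org", "lesswrong.com", "google.com"}

def unexpected_referrers(log_lines):
    """Return referrer domains in the log that are not on the allowlist.

    Assumes each log line looks like 'timestamp<TAB>referrer_url' --
    an invented format standing in for a real access log.
    """
    unknown = set()
    for line in log_lines:
        parts = line.rstrip("\n").split("\t")
        if len(parts) < 2 or not parts[1]:
            continue  # skip lines with no referrer
        domain = urlparse(parts[1]).netloc.lower().removeprefix("www.")
        if domain and domain not in KNOWN_REFERRERS:
            unknown.add(domain)
    return unknown
```

Anything this flags could then feed into a weekly review, an alert, or a bounty process, depending on how much effort the (assumed-unlikely) threat warrants.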
All in all, the measures I can think of are weak, but if the threat is also fairly unlikely, maybe those weak measures are proportional.