Thanks for sharing your insights, Mako! After reading your response and the IEEE Spectrum article you mentioned, I am much more optimistic that the metaverse can/will move in the right direction. Is there anything that could be done (by governments, companies, NGOs, the general public, or whatever player) to make this even more likely?
I also liked your example of Twitter, where addictiveness was not designed into the system but happened accidentally. Accidents usually prompt investigations to improve regulations, for instance in the aircraft industry. Do you think there are any concrete key learnings from the Twitter case about how to prevent similar accidents in the future of the internet or the metaverse? If so, could or should some of these be baked into better designs, and are current incentives aligned with this, or would it require some governmental regulations (since you are worried about liberalisation)?
I still believe that Meta is a major player in the market. And while I do agree that they have no direct interest in destroying democracy or creating an unliveable world, I think they act in line with Milton Friedman’s view and would just try to maximise their profits. I am not sure if there is anything wrong with that in principle, as long as the rules of the game ensure that maximising profits aligns well with overall utility. In the past, I don’t think the rules of the social media game aligned well with overall utility. And I am not sure that the need for and support of open standards by players like Meta alone is sufficient to align profit maximisation with overall utility in the metaverse. If this assessment is correct, it would make sense to brainstorm ideas for such an alignment as the metaverse develops.
Btw, thanks also for sharing your LW article on Webs of Trust (on my reading list) and your thoughts on RoamResearch (pm’d you with a question on Roam vs. Obsidian).
Is there anything that could be done (by governments, companies, NGOs, the general public, or whatever player) to make this even more likely?
Fair prompt. I get the impression that the most impactful thing you can do is to make sure that the people leading the standards dialog have strong technical vision and good taste. That’ll also make it more likely to even succeed at establishing a standard. I guess that’s something that EA (with so much software engineering acumen) could probably do better than most NGOs! But yeah, it looks like that might already be the case, I’m not sure.
Do you think there are any concrete key learnings from the Twitter case about how to prevent similar accidents in the future of the internet or the metaverse?
I don’t know what the addictive social media systems of VR will look like. It might just be Twitter again, but with bigger text.
Hmm… I guess VR social systems might orient around what VR adds: voice chat, the ubiquity of mics, support for body language (filtered through an avatar, which will often make people more comfortable), and a more natural sense of presence.
I find it difficult to imagine many novel systems built around that, because it seems like it’s constrained to the sorts of arrangements that’re already pretty natural for humans: people walking around in a room and making sounds at each other. If you’re rude, people remember, and you don’t get invited next time. It doesn’t seem obvious that the information or the social bonds can be structured in any alarmingly novel ways. Well, I guess one big difference is that the social cliques can end up a lot more globe-sprawling and specific and extreme. But I’m not sure. There will still be lots of cross-linking. You’ll tend to meet your friends’ friends.
Maybe systems will end up being… less about structuring information, and more about structuring relationships: controlling group matchmaking or timetabling.
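To make that “friends’ friends” point a bit more concrete, here’s a toy sketch (purely illustrative — the population size, number of introductions, and the matchmaking rule are all made up, not a model of any real platform): if a social system mostly introduces you to friends of your friends, the graph stays heavily cross-linked (lots of closed triangles) even as it grows, rather than splintering into isolated cliques.

```python
import random
from collections import defaultdict

random.seed(0)

N_PEOPLE = 200          # hypothetical population size
N_INTRODUCTIONS = 2000  # hypothetical number of friend-of-friend meetings

friends = defaultdict(set)

# Seed with a few random acquaintances so friend-of-friend chains can start.
for _ in range(N_PEOPLE):
    a, b = random.sample(range(N_PEOPLE), 2)
    friends[a].add(b)
    friends[b].add(a)

# Matchmaking rule: you mostly meet your friends' friends.
for _ in range(N_INTRODUCTIONS):
    a = random.randrange(N_PEOPLE)
    if not friends[a]:
        continue
    mutual = random.choice(sorted(friends[a]))
    candidates = friends[mutual] - friends[a] - {a}
    if candidates:
        b = random.choice(sorted(candidates))
        friends[a].add(b)
        friends[b].add(a)

# Clustering: how often two of your friends also know each other.
def clustering(person):
    fs = sorted(friends[person])
    if len(fs) < 2:
        return 0.0
    pairs = len(fs) * (len(fs) - 1) / 2
    closed = sum(1 for i, x in enumerate(fs) for y in fs[i + 1:] if y in friends[x])
    return closed / pairs

avg = sum(clustering(p) for p in range(N_PEOPLE)) / N_PEOPLE
print(f"average clustering: {avg:.2f}")
# A random graph of the same density would sit around (average degree) / N;
# friend-of-friend matchmaking pushes clustering well above that.
```

It’s just a sketch, but it gestures at why I expect globe-sprawling cliques to stay cross-linked rather than fully sealed off.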