Hi everyone 👋
I’m G, currently living in Guatemala. I became interested in effective altruism through long-standing questions about how societies stay resilient as technology—especially AI—scales faster than our institutions, incentives, and cultures.
I’m here to explore and stress-test an idea I’ve been working on around intent-first networking.
The core premise is simple: instead of networks forming around status, feeds, or algorithms optimized for engagement, they form around explicit human intent—what someone is trying to do, solve, learn, or contribute right now. Coordination happens by matching intent to people, resources, and context, rather than attention or influence.
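To make the matching idea concrete, here's a minimal toy sketch (my own illustration, not a real implementation) where an "intent" is just a goal plus a set of topic tags, and matching ranks people purely by tag overlap rather than by any status or engagement signal:

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """What someone is trying to do, solve, learn, or contribute right now."""
    person: str
    goal: str
    tags: set = field(default_factory=set)  # topics/skills involved

def match(seeker: Intent, others: list) -> list:
    """Rank other intents by tag overlap with the seeker's intent.
    Note what's absent: no follower counts, no engagement metrics."""
    scored = [(len(seeker.tags & o.tags), o) for o in others]
    return [o for score, o in sorted(scored, key=lambda s: -s[0]) if score > 0]

a = Intent("A", "learn about AI safety", {"ai", "safety"})
b = Intent("B", "mentor newcomers in safety", {"safety", "mentoring"})
c = Intent("C", "grow an audience", {"marketing"})

matches = match(a, [b, c])  # B overlaps on "safety"; C doesn't overlap at all
```

A real system would obviously need richer intent representations and matching than shared tags, but even this toy version shows the structural difference: the ranking signal comes from what people are trying to do, not who they are.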
The motivation comes from concerns around AI and societal scaling—not so much alignment in a technical sense, but human agency and coordination: how people continue to make meaningful, values-driven decisions as systems become faster, more automated, and more opaque.
I’m especially interested in how intent-first systems might support:
- Better coordination without central control
- Healthier community formation and governance
- Human-in-the-loop design as AI agents become more capable
I’m here mainly to learn and share ideas, and I welcome questions or critiques from an EA perspective.
Feel free to ask me any questions you may have.
Thanks!