I'm not sure how to upload a doc on my phone, so I'm just copy-pasting the content.
If this isn't valuable for the forum I'm also happy to take it down (either the comment or the post).
This is mostly a failed attempt at doing anything useful, which I nevertheless wish to record.
See also: https://forum.effectivealtruism.org/posts/AiH7oJh9qMBNmfsGG/institution-design-for-exponential-technology
There is a lot of complexity to understanding how tech changes the landscape of power and how bureaucracies can or should function. Much of this complexity is specific to the tech itself.
For instance, nuances of how nukes must be distributed can affect what their command structure can look like, or what the uranium supply chain must look like. Nuances of what near-term AI can or cannot do, for surveillance and enforcement, can affect the structure of bureaucracies that wish to wield this power. And nuances of how compute governance can work affect what AGI governance structures are possible.
I wondered if it is possible to abstract away all of this complexity by simply assuming that any tech allows for a relation of power between two people, in which one exerts control over the other, and then building theory that is invariant to all possible relations of power, and therefore to all possible technologies we may invent in the future.
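As a rough sketch of what I mean, a power relation can be modelled as nothing more than a directed pair of agents, and a technology as the set of such pairs it makes possible. Everything in this snippet is my own illustrative naming, nothing standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Agent:
    name: str

@dataclass(frozen=True)
class PowerRelation:
    # One agent exerts control over the other; *how* is abstracted away.
    controller: Agent
    controlled: Agent

# A "technology" is then just the set of power relations it makes possible.
# Theory built only on such sets is invariant to whichever tech realises them.
guard = Agent("guard")
wearer = Agent("wearer")
gun_tech = frozenset({PowerRelation(controller=guard, controlled=wearer)})
```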
Toy example
I wished to build theory for how institutions can hold very powerful tech. Here’s a toy example I thought of.
A magical orb with the following properties:

- The wearer can wear it for 10 seconds and instantaneously make a wish that destroys all other orbs still being formed in the universe.
- The wearer can wear it for 60 seconds and become an all-knowing, all-intelligent, all-capable god.

Also assume:

- Anybody anywhere in the universe can chant (publicly known) prayers for 60 seconds to create a new orb.
- All chanting is audible to everyone in the universe.
Challenge

Design a stable social technology that uses an orb to ensure no other orb exists in the universe, while also ensuring this orb doesn't fall to someone who becomes a god.
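To keep the rules straight while reasoning about solutions, here is how I would encode the toy example as a tiny state machine. This is only a sketch of the assumptions above; the 10- and 60-second thresholds come from the example, and every name in the code is my own illustrative invention.

```python
from dataclasses import dataclass, field

@dataclass
class Orb:
    forming: bool  # True while its 60-second creation chant is in progress

@dataclass
class Universe:
    orbs: list = field(default_factory=list)

    def start_chant(self) -> Orb:
        # Anybody, anywhere, can start the publicly known prayer;
        # all chanting is audible to everyone, so this is common knowledge.
        orb = Orb(forming=True)
        self.orbs.append(orb)
        return orb

    def finish_chant(self, orb: Orb) -> None:
        # 60 uninterrupted seconds of chanting completes the orb.
        orb.forming = False

    def wear(self, orb: Orb, seconds: int) -> str:
        if orb.forming:
            raise ValueError("a forming orb cannot be worn")
        if seconds >= 60:
            return "wearer becomes an all-knowing, all-capable god"
        if seconds >= 10:
            # The wish destroys every *other* orb still being formed.
            self.orbs = [o for o in self.orbs if o is orb or not o.forming]
            return "all forming orbs destroyed"
        return "nothing happens"
```

In these terms, the challenge is to design a social arrangement that reliably calls `wear(orb, 10)` whenever a chant is heard, while guaranteeing that nobody ever gets to `wear(orb, 60)`.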
Why define the problem this way?
Even when defining the problem, I realised I had to make more assumptions than I initially thought I did.
I had to assume all chanting is audible to everyone, to sidestep the problem of who controls surveillance tech and how powerful it is.
I had to assume anybody can chant, to sidestep the problem of who can chant, and also to rule out pre-emptive solutions such as keeping the prayers a secret. Note also that I made prayers the way to create orbs, rather than some existing physical resource. This is because physical resources allow for solutions that control their supply chain, which introduces more complexity (such as enriched uranium for nukes or compute governance for AGI). I did not want those kinds of solutions; I only wanted solutions actually using the orb.
How to solve this problem?
Clearly the value of keeping the orb safe exceeds the value of the lives of those in the bureaucracy built around it (assuming a bureaucracy is built). Hence I will casually refer to death threats and deaths as a form of control. In an ideal world I would prefer less coercive or destructive ways to design the bureaucracy; assuming lethal threats is again just a way to remove complexity when attempting a first solution.
For starters, you want one person using the orb. You also want people who can threaten to kill that person if they wear the orb for more than 10 seconds at a time. You want the people doing the threatening to themselves be threatened if they fail to carry out this threat, either by each other or by other people. And you want a predictable second person to gain access to the orb if the first person attempts to misuse it and is killed, so that the same game theory repeats on the second person.
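As a sanity check that this chain of threats has no dangling end, here is a toy sketch of one round of the game. All names, types, and the shape of the function are my own illustrative stand-ins, not a real model:

```python
from collections import deque

def misuse_round(guards_fire: list[bool], successors: deque) -> tuple[str, bool]:
    """One round of the game after the orb-holder wears it past 10 seconds.

    guards_fire[i] says whether guard i actually fires when required;
    any guard who doesn't is in turn threatened by the others, which is
    what makes firing the equilibrium. Returns (outcome, deterrence_held).
    """
    if not any(guards_fire):
        return "holder becomes a god", False
    # The misusing holder is killed; a predictable successor takes the orb,
    # and the identical game now applies to them.
    new_holder = successors.popleft()
    return f"{new_holder} now holds the orb", True

# Illustrative run: three guards, one asleep, a known line of succession.
print(misuse_round([True, False, True], deque(["B", "C"])))
# -> ('B now holds the orb', True)
```

The point of the successor queue is that the deterrence argument never has to be re-derived: the same round applies unchanged to whoever holds the orb next.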
This attempt at a solution already contains a huge amount of complexity.
One possible way to implement these threats is to give everyone guns. You now need to look at errors in firing, and at errors in human behaviour (such as people who panic, or who are asleep and don't fire). You need to look at the physical positions of the people: they are no longer abstract agents, but agents with three-dimensional locations. You need line of sight for people to be able to surveil each other's actions. You need ways to safely rotate shifts, since people can't be awake 24x7.
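To see how much these error modes matter, here is a toy calculation with made-up numbers, assuming (very generously) that guards fail independently:

```python
def p_nobody_fires(p_fail: float, n_guards: int) -> float:
    # Probability that every guard independently fails to fire
    # (misfire, panic, asleep, blocked line of sight, ...).
    return p_fail ** n_guards

# Illustrative numbers only: 10 guards, each failing 10% of the time,
# leave roughly a 1-in-10-billion chance that misuse goes unpunished.
print(p_nobody_fires(0.10, 10))  # ~1e-10

# But independence is doing all the work here: one shared cause, such as
# a whole shift asleep or a single blocked sightline, collapses the product.
```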
You also need mechanisms for people outside this room, and society at large, to trust that what happens inside is legitimate and safe, and to not be willing or able to barge in or blow up the room.
Better technology can maybe yield better solutions here. A computer could fire shots more reliably than a human. A camera could remove the requirement for line of sight. But that just introduces more complexity: how you will replace the cameras, who controls the bureaucracy that manufactures them, how you'll ensure they never run out, what happens if the cameras have errors, and so on.
And these problems might be solvable, but they're high-complexity, which is what I was trying to avoid. Even though the orb has its complexity abstracted away, I need to use technologies besides the orb to secure it, and this reintroduces all the complexity of very specific technologies.