I agree; compute governance in particular seems to need interdisciplinary knowledge across a spectrum of fields. One area of improvement might be co-design labs, designing software and hardware alongside policy. I am wondering about the high-level aptitudes of someone who works on compute governance.
monadica
Would like to connect with the cohort members or the team organising this community.
X-Risk, Governance, Epistemic Risks
Carla Cremer and Igor Krawczuk argue that AI risk should be understood as an old problem of politics, power, and control with known solutions, and that threat models should be driven by empirical work. Carla's Vox article highlights the need for an institutional turn when taking on a responsibility like risk management for humanity. Her "Democratizing Risk" paper found that certain types of risks fall through the cracks when they are simply categorized as climate change or biological risks. Deliberative democracy has been found to be a better way to make decisions; AI tools can scale this type of democracy and be used for good, but the transparency of these algorithms to the citizens using the platform must be taken into account.
Conformism in EA
I think the natural move is to create a chapter within CEA that actively supports Black people. Honestly, I have been to EA conferences, and I can tell there is still work to be done on diversity, including women's representation. Overall I love CEA and want to see how it can become more diverse. One place to start might be supporting emerging markets like Africa, not only through donations but through programs. For example, 80,000 Hours is tailored for someone in the Global North; we need to rethink what 80K looks like if we want to address unemployment rates in Southern Africa.
Elon’s Indefinite vs Definite Optimism
After listening to Elon’s recent interview, there are two interesting ways he comments on philanthropy. He asks what the most effective causes to give to are, but also argues that Tesla, SpaceX, and the Boring Company are themselves forms of philanthropy (love for humanity).
This question is about understanding whether building a company is more philanthropic than donating to causes. This tweet expresses frustration with the EA philosophy of giving to causes. How can you defuse this take?
monadica’s Quick takes
Hi Peter, awesome idea. I am working on a project of this kind, and it would be great to talk with you.
Is there an empirical method of measuring progress? How can we account for piecewise progress? For example, VR attracted massive interest in the 1980s, went into a winter in the 1990s, and was revived in 2012 by Palmer Luckey; similarly, AI went into a ten-year winter after Minsky's critique of Rosenblatt's perceptron. It seems that progress is not linear but stochastic, and perhaps a complex thing to model: it is not a monolith we eventually arrive at but something constantly happening in complex ways.
The perceptron was intended to be a hardware machine but was first implemented in software. This is similar to the "Hardware Lottery"[1] thesis published by Sara Hooker, which implies that ideas in software research succeed not because they are correct but because the available hardware happens to suit them.
Secondly, what would be necessary for a hypothetical golden age to emerge? Is it building a new city, restructuring organisations (universities, governments), a rebirth (renaissance), a cataclysm (COVID, climate change), or simply moving slowly towards reform?
This would be a good way to fund alignment research. You could make it more practical by adding liquid democracy and quadratic funding; I think there is a way to tweak it to work well for EA cause areas.
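For readers unfamiliar with the mechanism, here is a minimal sketch of quadratic funding, the second idea mentioned above. The function name and the choice to normalize raw matches to a fixed matching pool are my own assumptions for illustration; the core rule (a project's match grows with the square of the sum of the square roots of its contributions) follows the standard quadratic funding formula.

```python
import math

def quadratic_funding(projects, match_pool):
    """Allocate a matching pool across projects via quadratic funding.

    projects: dict mapping project name -> list of individual contributions.
    Each project's raw match is (sum of sqrt(contribution))^2 minus the
    total contributed; the pool is then split in proportion to raw matches.
    """
    raw = {}
    for name, contributions in projects.items():
        sum_sqrt = sum(math.sqrt(c) for c in contributions)
        raw[name] = sum_sqrt ** 2 - sum(contributions)
    total_raw = sum(raw.values())
    if total_raw == 0:
        return {name: 0.0 for name in projects}
    return {name: match_pool * r / total_raw for name, r in raw.items()}

# Broad support beats a single large donor of the same total amount:
grants = quadratic_funding(
    {"broad_support": [1.0] * 100,   # 100 donors giving $1 each
     "single_whale": [100.0]},       # 1 donor giving $100
    match_pool=1000.0,
)
```

The example shows why the mechanism suits democratic cause prioritization: the project with 100 small donors captures the entire match, while the single-donor project gets none, because a lone contribution earns no quadratic bonus.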