— I engineer ambitious ideas until they survive the battlefield of reality —
I have received funding from the LTFF and the SFF and am also doing work for an EA-adjacent organization.
My EA journey started in 2007, when I considered switching from a Wall Street career to helping tackle climate change by making wind energy cheaper – unfortunately, the University of Pennsylvania did not have an EA chapter back then! A few years later, I started having doubts about whether helping to build one wind farm at a time was the best use of my time. After reading a few books on philosophy and psychology, I decided that moral circle expansion was neglected but important and donated a few thousand pounds sterling of my modest income to a somewhat evidence-based organisation.

Serendipitously, my boss stumbled upon EA in a thread on Stack Exchange around 2014 and sent me a link. After reading up on EA, I pursued earning to give (E2G) with my modest income, donating ~USD 35k to AMF. I have done some limited volunteering to help build the EA community here in Stockholm, Sweden. Additionally, I set up and was an admin of the ~1k-member EA system change Facebook group (apologies for not having had time to make more of it!). Lastly (and I am leaving out a lot of smaller stuff, like giving career guidance), I have coordinated with other people interested in doing EA community building in UWC high schools and have even run a couple of EA events at these schools.
The electric grid is a powerful, currently available AI safety policy opportunity. Roughly 25% of the cost of running data centres is electricity, and the electric grid is heavily regulated across government agencies. In fact, the very reason many people claim nuclear energy is over-regulated is exactly why we might be able to regulate AI strongly via electric grid regulation. This makes “the grid” a strong contender for a space in which to make rapid and robust progress on AI safety policy. As an expert in the electric grid with a strong interest in AI safety and resilience, I see an electricity sector full of opportunities for quick wins and actually moving the needle. While “high-level” policy interventions such as SB 53 are important, these types of interventions have two drawbacks:
1 - They attract enormous public scrutiny. Grid regulation, on the other hand, hardly makes it into local newspapers, or even the industry press.
2 - They don’t really move the needle. They are more aspirational and depend on outcomes in court cases, enforcement, etc. The electric grid, by contrast, already has “kill switches” installed and is integrated with national security.
In the electric grid, on the other hand, one could plausibly pass very strong regulation with physical, concrete AI safety levers. Some examples:
A—Large consumers of electricity are critical to the grid. It is entirely foreseeable that gov’t bodies could require a “kill switch” for data centres, not for AI safety reasons but for grid health.
B—The military is extremely focused on electrical grids. It is also a gov’t body likely to act quickly and decisively on perceived threats to the grid, and it likely influences cybersecurity requirements for the electrical grid. These could include monitoring of critical load (yes, data centres) and access to the “kill switch” described above.
These are just two of many ideas I have for making rapid, robust progress on AI safety under much less public scrutiny, using existing pathways and gov’t focus areas around electric grid management and national defense. I have many more ideas, many years of working in and following the industry, and a large professional network. If anyone wants to talk, please DM me!