Brain Farming is No Longer Hypothetical
Background
On March 10th, 2026, DayOne announced that it will build biological data centers in Singapore and Melbourne. “Biological” here means that the computational infrastructure will involve live human neurons grown on silicon chips, maintained by life support systems that keep the cells alive through nutrient delivery and waste removal. Human brain cells are efficient information processors, and it is not surprising that commercial interests have begun to take notice. Cortical Labs, the “Wetware-as-a-Service” company discussed in prior posts (here and here), is the company supplying the CL1 biocomputers. DayOne’s Singapore facility has the potential to scale up over time to house 1,000 units of CL1, representing a deal that could grow to be worth tens of millions of dollars.[1]
DayOne is one of the fastest-growing data center operators in the world, with plans to raise $5 billion in a United States IPO later this year at a targeted $20 billion valuation. By shifting workloads from pure silicon to increasingly sophisticated configurations of human brain matter, companies like DayOne may be able to offer cloud computing services in a more energy-efficient manner. They can frame these operations as sustainability initiatives, given that, as they state in their official announcement, a “bio data center harnesses the natural efficiency of brain-like organoids, which can function on a fraction of the wattage required by digital computers.”[2] This framing means that those who oppose placing human neurons in server racks to power the next generation of AI, on account of the current regulatory vacuum and the worrisome incentives it creates, can be cast as opponents of ESG and sustainable computing.
It is entirely uncertain how quickly biological computing will advance, or how commercially viable DayOne’s bio data centers will ultimately prove. The current generation of the CL1 recently went viral for playing the video game Doom[3] and for being connected to an LLM.[4] These demonstrations may sound impressive or underwhelming depending on your familiarity with the field, but it is worth noting that deep learning systems in the 2010s scaled from a niche research approach to the dominant ML paradigm. What is certain is that biological computing is not being evaluated only in carefully regulated research contexts. Instead, it is being pushed at commercial scale into mainstream data center operations, in a regulatory vacuum, by well-capitalized companies with strong incentives to downplay risks.
The Problem
A revenue model is established, institutional capital is committed, and one of the fastest-growing data center operations in the world has publicly committed to commercialized biocomputing. It is certainly the case that current biocomputers are rudimentary, and my belief is that current systems are not sentience candidates. But every dollar invested in biological data centers is a dollar that depends on the continued assumption that these systems do not matter morally. The commercial pressure to build increasingly complex and capable systems to handle inference workloads means that this assumption will be stress-tested at exactly the moment it is most expensive to abandon. The acute risks of an entrenched “brain farming” industry, in which novel forms of consciousness are exploited at scale for computational purposes, are no longer hypothetical.
Proposal
The request is simple: an immediate moratorium on the use of biological neural tissue as commercial computing infrastructure.
Next Steps
Companies should be restricted from building data centers that leverage the computational power of biological systems, and further development of these systems should occur in regulated research environments without ties to widespread commercialization.
Anyone reading this should first and foremost understand that the focus here is not on research, but rather on the widespread commercialization of such systems before adequate safeguards are in place. Below are ways in which I believe we can effectively work to prevent the formation of a commercial ecosystem that will be extremely difficult to challenge once entrenched:
Journalists covering this space should convey not just the technical novelty of biocomputing, but also the governance gap in which the technology is being commercialized and the incentive structure that early commercialization creates.
Regulators should be asked, formally and on the record, whether their mandates cover the commercial deployment of human neural tissue as computing infrastructure.
Donors who provided biological samples from which these neurons are derived should be informed that their cells may be used as commercial computing infrastructure in DayOne’s data centers.
Institutional investors of DayOne, and the banks underwriting the IPO (JP Morgan and Morgan Stanley), should be briefed on the ethical and regulatory risks of their proposed biological data center buildout.
It is highly unlikely, in my opinion, that the investors backing the company have been accurately informed of the situation. Religious and political interest groups are likely to rally against DayOne and the commodification of human neural tissue in the near future, and this decision by DayOne was probably made after investor capital was already secured and without institutional endorsement.
The scientific community should establish a coordinated ethical response that emphasizes that research and widespread commercialization are distinct concepts and that the latter should not proceed in the absence of regulatory frameworks.
The DayOne announcement has generated widespread media coverage, but there has been no serious pushback to the worrying incentives that such early commercialization creates. There has been no strategic action taken to address the risks involved. This needs to change.
Please feel free to reach out if you’re interested in getting involved, though I think anyone who finds the above arguments compelling should feel empowered to act on them individually (with the aforementioned framing). I’m happy to discuss alternative solutions or opposing viewpoints.
Datacenters built out of human neurons does sound pretty dystopian. Your headline proposal here, of trying to prevent the immediate and unregulated commercialisation of this tech, sounds very reasonable, and potentially achievable. May well be worth working on!
By the way, I am a PhD student at the Centre for Biomedical Ethics under the Yong Loo Lin School of Medicine, which is commissioned to be in charge of the validation of the idea in Singapore. I am not sure what I, or even our centre can do (personally, I didn’t know this was happening until I saw this post). But if anyone can think of anything I should do, let me know. (if you think there might be infohazard, feel free to PM or email me)
Glad you think so! I find this issue fairly tractable given how visceral it is, and definitely neglected (single digit people in the world). Let me know if you have any ideas on sharpening the message.
Great write-up, thanks for bringing this to my attention! I will look into this more soon, but had a quick question that you might be able to help with: what can (Australian and Singaporean) members of the public do right now about this? Are there any particular regulatory or protest actions that you would recommend to a concerned Australian, i.e. me? It’s a very new issue, so I get there may not be any yet.
Clarification: They commissioned the YLL School of Medicine, particularly the Life Science Institute, to validate the idea of using CL1 to build datacentres. Of course they won’t commission a centre for bioethics to do that.
Do you know if the Centre for Biomedical Ethics was consulted? It would also be very interesting to know how the university and IRB approval worked here. Not just the initial validation, but whether these approvals were granted with the knowledge that this would transition from a research prototype to commercial deployment at scale. Any of these answers would be very good to know (feel free to DM me if you want). In general you seemed uniquely positioned here, really glad you read the post!
I am trying to ask. I will PM you when I get an answer.
I will try to investigate too.
Please feel free to email me to keep in touch. Or add me on linkedin.
Thank you for engaging! I think it is worth sending inquiries to relevant regulators, particularly the NHMRC, which is currently reviewing its regulatory framework (with its 2016 regulations sunsetting in October 2026). It feels very important for them to understand that widespread commercialization should not proceed, and for this to be written into actual policy. Also, contacting your federal MP and other politicians to get this on their radar seems very important, since to my knowledge no Australian citizens are working on this. You definitely have an edge there. Also, any local Australian journalists or science communicators would be worth reaching out to.
What’s the reason this would be worse than current data centers? Does this require one to believe that sentience is limited to/substantially more likely in biological neurons than silicon? I would like to understand what the net effect of this is likely to be and why the same arguments wouldn’t apply to other data centers currently being built.
“Does this require one to believe that sentience is limited to/substantially more likely in biological neurons than silicon?”
“More likely” yes, “limited to” no.
I’d also add that this is a reasonable stance even for people who put a lot of credence in physicalist/functionalist theories. Whatever your theoretical commitments, we know with more certainty than almost anything else that human brains can support consciousness, so it’d make sense to be particularly worried here.
One other point that I’d add, is that these concerns can be complementary. I mention this in a previous post, but building the institutional capacity and legal frameworks to protect potentially novel forms of consciousness from commercial exploitation (even if in this case partly biological) could set important precedents for other forms later. Digital minds should also not be used as mere computational resources, and if framed correctly regulation here could lay groundwork that assists in that effort as well.
Executive summary: The commercialization of biological neural tissue as computing infrastructure by well-capitalized companies in an unregulated environment poses acute moral and governance risks that warrant an immediate moratorium, as financial incentives will pressure developers to create increasingly complex systems precisely when abandoning them becomes prohibitively expensive.
Key points:
DayOne announced plans to build biological data centers in Singapore and Melbourne using live human neurons on silicon chips supplied by Cortical Labs, with the Singapore facility potentially scaling to 1,000 units worth tens of millions of dollars.
Companies are framing biological data centers as sustainability initiatives because brain-like organoids function on a fraction of the wattage required by digital computers, which creates rhetorical protection against critics.
The author believes current biocomputers are not sentience candidates, but notes that deep learning systems in the 2010s scaled from niche research to dominance, suggesting similar trajectories are possible for biological computing.
Every dollar invested in biological data centers depends on the assumption that these systems do not matter morally, and commercial pressure to build increasingly complex systems will stress-test this assumption when it is most expensive to abandon.
The author proposes an immediate moratorium on commercial use of biological neural tissue as computing infrastructure, with further development restricted to regulated research environments without ties to commercialization.
Recommended actions include journalists conveying the governance gap and incentive structure, regulators being asked formally whether their mandates cover commercial deployment of human neural tissue, donors being informed their cells may be used in data centers, institutional investors and IPO underwriters being briefed on ethical and regulatory risks, and the scientific community establishing a coordinated ethical response.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Not really a useful comment, just thought of qntm’s short horror story Lena when I saw your post title. Hopefully we avoid this, thanks for working on this problem :)
I was quite worried when I saw the tweets going around, as well. I think the implications of the possibility of LLM sentience are already ethically quite large, and biological computing intuitively has a much larger chance of inducing sentience in the “computers”.
But I’m broadly not sold on the moratorium concept.
I mean, a global moratorium would definitely be the ethically careful choice here. But I think if even a couple of countries allow the building of these data centers, and other countries allow the purchase of biological computing, then it would be important to act on this in ways other than moratoriums as well.
Something like mandating x amount of research into the ethical implications of this per y amount of spending on biological computing (e.g. a Pigouvian tax earmarked for solving the ethical problems involved) would be what I would primarily advocate, at least from my European point of view.
In the regions where the data centers are being built, campaigning for a moratorium or slowdown and generally raising public awareness sounds like something that should be attempted, and it should be possible to sell people on the ethical implications of brain-based computing...
I agree that a moratorium alone may not be sufficient long term, but the broader issue is that there’s no regulatory infrastructure at all to enforce other alternatives. In the near term the goal would be to halt widespread commercialization so that such policies can be thoroughly discussed and implemented. Agreed that the public awareness piece seems broadly useful regardless of outcome.