SFF’s HSEE grant round; human intelligence amplification projects I’d like to see
Summary
If you are interested in doing ambitious scientific research in areas listed below, and have a relevant project that needs funding, consider reaching out. My interest is in human intelligence amplification, though I believe that a variety of scientific projects are relevant, many of which are not specifically related to that goal.
Some areas, discussed below:
Soft questions: Strategy, financing, ethics, policy, governance, society, advocacy
Approach-general technical work on understanding and engineering brains and human intelligence
Reprogenetics-related projects, including reproductive epigenetics, chromosome engineering, microfluidics, cell engineering, and statistical genetics; see the full list here: “Projects that might help accelerate strong reprogenetics”
SFF’s themed grant round on Human Self-Enhancement and Empowerment
This year, the Survival and Flourishing Fund has a themed grant round, called Human Self-Enhancement and Empowerment (HSEE), which is “focused on organizations working to advance human self-enhancement and empowerment” and slated to make $2–4MM in grants. See their announcement here, with more information on what they are looking for: https://survivalandflourishing.fund/2026/application#hsee
The deadline for the HSEE Theme Round is July 8, 2026 at 11:59:59 PM PT.
I’ll be a Recommender for the HSEE Theme Round. That means that I and several other Recommenders, together with the Funder, will review and evaluate applications, and then through a collective process called the S-Process, make final recommendations of projects and amounts to fund. I do not have final granting power. Learn about how the S-Process works here: https://survivalandflourishing.fund/s-process
See information here https://survivalandflourishing.fund/faq and here https://survivalandflourishing.fund/2026/application#faq-applicants. Be sure to read the instructions carefully to check for eligibility and to submit properly.
In the rest of this post, I will express my personal opinions on what sort of projects might help accelerate human intelligence amplification. These opinions do not represent SFF. The reason I’m writing this now is so that people can have some sense about what projects I might be likely to recommend for funding. I don’t commit to recommending funding any specific project, and I categorically cannot commit to any specific project receiving funding from SFF, as all such final decisions are not up to me.
If you have a project that might be a good fit, feel free to reach out here or at my gmail address: tsvibtcontact (with the understanding that I may not have time to respond). Please also note that I will likely have a pretty high bar for what I’ll recommend funding for. I will also likely be pretty opinionated, but my aim is to use my opinionation in order to evoke the most important information quickly, rather than being stubborn or slow to change my mind. Therefore, I’m especially open to quick interactions about potential projects, e.g. “hey does [topic] seem like a thing you might possibly be interested in?” (with the understanding that the answer may often be “no”), or a phone call. My guess is that I personally am fairly unlikely to recommend funding for projects where I haven’t spoken to the applicant. I’m interested in projects that could produce information or ideas that would change my opinions, and projects that are predicated on a different opinion and where the difference can be argued for when applying for funding.
That said, I also hope that these descriptions of projects would be useful for people in general as some ideas for helping accelerate human intelligence amplification.
Generalities about my funding interests
Human intelligence amplification
I’m interested in accelerating HIA (human intelligence amplification). By that I broadly mean: greatly empowering humans with increased cognitive capacities, including philosophical problem solving, cleverness, coming up with great new ideas, wisdom, strategic ability, deep empathy, and so on.
Specifically, I’m interested in accelerating strong HIA (SHIA). SHIA is ill-defined, but means something like, “enabling people to be geniuses or world-class geniuses if they so choose”. Thus, I’m not very interested in projects where the best plausible outcome provides a small increase in brainpower, even though such projects could be great. (I’m not completely uninterested. A medium probability of a small but real increase in intelligence could definitely be worth it.)
Since SHIA is a difficult technology to develop, accelerating it could involve doing various indirect things, such as blue-sky research, public discussion, advocacy, media production, crafting regulation, improving the health of an industry, building out related businesses, low-probability research, etc. I’m quite open to such projects. However, I’m specifically interested in ones that make a significant contribution, in my estimation of probabilistic expectation, to the development of SHIA technology.
Intelligence in the operational sense of measured IQ is important and useful. In particular, it’s much easier to measure than other qualities, and therefore much easier to increase. I think increasing IQ is a good and worthy goal that would have good consequences if achieved. For most practical purposes, it’s my main goal, due to feasibility. But IQ is far from all we care about, even narrowly in the realm of SHIA (where “intelligence” is interpreted more broadly to include wisdom, strategy, empathy, sanity, reflectivity, etc.). So, while most of the following is focused on amplifying IQ, roughly speaking, I’m also interested in compelling projects related to understanding, measuring, and amplifying those other dimensions of cognitive capacity.
Approaches to SHIA
This is, a priori, a very difficult task because these aspects of humans are high-level properties of the most complex thing in the known universe: the human brain (and mind). There are broadly two ways to approach the technical problem of SHIA:
reverse engineering some key aspects that control / enable cognitive capacities, and then somehow engineering added support for those functions;
or else, copying nature’s work, i.e. looking at what genetic variants correlate with empirically measured cognitive capacity.
The latter approach is mainly about reprogenetics—enabling parents to make genomic choices on behalf of their future children. I favor this approach because I think it’s quite likely to work, technically speaking; also, the technology has a strong justification besides (S)HIA, namely disease risk reduction. It has the disadvantage that it can only benefit future children, not people currently alive; and it is harder to iterate on, compared to engineering methods.
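As a concrete anchor for the “copying nature’s work” framing: at its technical core, selecting on a trait via observed genetic correlates means computing a polygenic score, a weighted sum over trait-associated variants. A minimal sketch, with all variants, genotypes, and effect sizes invented purely for illustration:

```python
# Minimal illustration of a polygenic score: a weighted sum of
# per-variant allele counts (0, 1, or 2 copies of the effect allele)
# times effect-size estimates (as would come from a GWAS).
# All numbers below are made up for illustration.

def polygenic_score(genotypes, effect_sizes):
    """Sum of allele count * estimated effect size across variants."""
    return sum(g * beta for g, beta in zip(genotypes, effect_sizes))

# Hypothetical person: allele counts at five trait-associated variants.
genotypes = [0, 1, 2, 1, 0]
# Hypothetical effect-size estimates for those same variants.
effect_sizes = [0.03, -0.01, 0.02, 0.05, 0.01]

print(round(polygenic_score(genotypes, effect_sizes), 4))
```

Real predictors use hundreds of thousands of variants, with effect sizes estimated from large cohorts and corrected for linkage disequilibrium and population structure; the point here is only the shape of the computation, which is why it can work without a mechanistic understanding of the brain.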
For this reason, the following list of projects devotes a full section to reprogenetics, and only devotes subsections to other approaches. (There’s also a short section for ethics, strategy, policy, and other considerations.)
To avoid too much duplication, the following will somewhat presume my previous post “Overview of strong human intelligence amplification methods” as background. It’s not strictly necessary to read that post, but it contains many important elements of context.
What makes a good SHIA project
In short, I’d like to assist projects that actually address key bottlenecks to developing SHIA technology that benefits humanity.
An imaginary ideal application would do something like:
explain the central principles of a plausible specific technology that would constitute SHIA,
present a detailed technical analysis of a roadmap to a full implementation of that technology,
analyze the major obstacles to following that roadmap,
and then propose a project that concretely addresses those obstacles.
This is an unrealistic bar; a good realistic project might address one or more of the elements in that list. But hopefully this illustrates the sort of ambitious project I’d like to assist with.
Some more considerations:
SHIA technologies are probably difficult and complex technologies. That is, they integrate several scientific and technological areas, each of which has major challenges.
That complicates the strategic picture around accelerating such a technology. The efficient way to work is to look for fields of science and technology that already have a lot of activity and are making good progress, and then see how to accelerate the neglected work which, when combined with the powerful mainstream tech, could produce a successful last-mile SHIA project. In general I think there’s probably a lot of neglected work (though it may be unpromising because it’s difficult), just because people don’t normally try to work on something as ambitious / difficult as SHIA.
Further, I will have to rely on expertise, so a good applicant would have to have a good way to credibly signal the relevant expertise. This can include credentials, past success, very clear and relevant analysis, social endorsement, and conversation.
In part because of the previous point, ethics, policy, and social issues are central aspects of a SHIA roadmap. It could rightly be considered a defection against society / humanity for a SHIA project to neglect the work of processing the nature, implications, and good use of SHIA technology along with other proper stakeholders (society at large, scientists in relevant fields, interest groups, regulators, bioethicists, etc.). Because it would be a defection to neglect that work, a SHIA project that neglects that work would naturally get far less talent, funding, regulatory leeway, and societal support, and would probably fail.
Roadmaps are good. They are speculative and ambitious, which makes them likely to contain major inaccuracies, but it’s still helpful to try to see the way through to a successful SHIA technology, in order to orient to what matters in the area. I’d like to assist projects aimed at developing good roadmaps towards plausible SHIA technologies. Same goes for evaluations of central principles of SHIA technologies (e.g. “what sort of BCI could possibly enable SHIA, and how would we know”).
All that said, I’m definitely interested in assisting in-the-weeds / mundane / obvious / legworky science, if there’s a good case for its importance.
Strategy, financing, ethics, policy, governance, society, advocacy
For the most part, I suspect that it’s hard to productively investigate social issues around SHIA technologies until they are much closer to being achieved, and therefore more clearly resolved in terms of their safety profiles, misuse potential, social implications, etc. That said, social issues are also a key element of accelerating SHIA technology: if society is motivated to develop (S)HIA technology, the development will go faster, and if society is against it, it will go slower (and plausibly should go slower). So, I’m open to hearing about ways of addressing these issues about (S)HIA in general, or about specific technologies. For example:
What governance structures should be used for research groups and vendors developing (S)HIA technologies?
How should society develop and deploy (S)HIA technologies? E.g. what regulations and social attitudes should be used? What process should society use to decide this?
How to ensure that (S)HIA…
...doesn’t get misused?
...doesn’t harm recipients?
...isn’t marketed via fraud?
...is widely accessible?
...is broadly beneficial for society?
Doing fundraising, whether from VC, large private capital, philanthropy, or government sources. Also, mapping the views of these funders—what preconditions would have to be met to get much more funding for key science and technology research.
Doing advocacy for (S)HIA, broadly construed, e.g. producing media that explains and discusses (S)HIA to help society process these possibilities.
Gathering together people who have expertise and legitimacy to map out plausible futures of a given technology, discuss safety and ethics, write open letters, hold roundtables, produce reports, etc.
Talking to stakeholders (the public, advocacy groups, etc.) to understand opinions around (S)HIA, e.g. fears, hopes, questions, confusions, requests, etc.
How does SHIA affect existential risk from AGI? (See my essays here; they leave a lot of work to be done: https://tsvibt.blogspot.com/2025/11/hia-and-x-risk-part-1-why-it-helps.html https://www.lesswrong.com/posts/K4K6ikQtHxcG49Tcn/hia-and-x-risk-part-2-why-it-hurts )
There are many potential perils of reprogenetics; many of these apply to other SHIA methods. These perils could be analyzed (for severity, likelihood, and prevention). See “Potential perils of germline genomic engineering”.
Doing social organizing and momentum-building. E.g. conferences, forums, etc.
Approach-general technical work
There are likely many approach-agnostic investigations that would be quite helpful for eventual SHIA development. In particular, for non-reprogenetics SHIA approaches, we’d be doing some significant degree of “intelligence engineering”—coming up with specific ideas about how the brain works and how to augment its functioning, as opposed to copying nature’s work by just genomically vectoring for genes that are empirically correlated with target traits. (Even adult brain editing, which would leverage our partial understanding of the genetics of intelligence, would likely also require specific understanding of brain functioning, e.g. to decide which variants to ignore (e.g. if they are only active in childhood), to decide which tissues are most crucial to edit, and to foresee and prevent deleterious mental effects.)
Because we’d have to understand aspects of brain function that relate to intervening to increase cognitive capacities, we would have questions that might not usually be asked. I’d be interested in assisting scholarly reviews and analyses of key aspects of functional neuroanatomy, neurophysiology, cellular and molecular neuroscience, etc. E.g.:
Collating facts about brain development, macro and micro brain structures, and brain function; connections between brain elements and performance; connections between brain elements and genetic variants known to correlate with intelligence; etc.
Theorizing about the engines of human intelligence in terms of brain elements.
E.g., what are the bottlenecks in brain function (energy, dendro/synaptogenesis, myelination, vascularization, etc.)?
E.g., what are the unique features of child / young adult brains that make them so effective at learning? What are the neurobiological bases of these features?
(These are heavily researched areas; but the SHIA lens would be novel, leading to a novel pattern of emphasis on different questions, facts, and deductions.)
Theorizing about how brain function might be significantly improved. E.g. adding something, tweaking something, amplifying something, etc.
Theorizing about what “disagreements with evolution” we might have, which could be exploited to increase brainpower.
For example, human evolution might have built into our brains mechanisms for keeping the brain’s energy expenditure in check. But we might be confident that we can keep supplying plenty of calories steadily, and we might prefer for our brains to expend more energy for higher performance. (Cf. https://en.wikipedia.org/wiki/Cognitive_miser )
For example, evolution might have designed us to broadly follow an arc of exploration (of cognitive algorithms) during childhood followed by exploitation in adulthood (i.e. just executing the cognitive algorithms we’ve settled upon). That would be in order to set us up for practical success, rather than staying in the realm of theory, ideation, science, abstract problem solving, and so on. But we may prefer to have our brains tuned for abstract thinking in adulthood.
Developing tests that can notice genuine increases in intelligence.
Reprogenetics-related projects
Because I think that reprogenetics is the technical pathway that’s most likely to succeed in enabling strong human intelligence amplification, this is the area I’ve thought most about, am most interested in assisting with, and have the most detailed projects.
Please see this document for a full list of reprogenetics-related projects I’m interested in assisting: “Projects that might help accelerate strong reprogenetics” (discuss on LessWrong)
Copying a short overview of relevant scientific areas from that document:
Reproductive epigenetics. (Epigenetic sequencing and editing, epigenetics of reproduction and the germline, stem cell culturing, gonadal culturing, stem cell reprogramming, gametogenesis, creating gametogonia.)
Chromosome engineering. (Targeted crossover, targeted elimination, targeted missegregation, chromosome transfer, microinjection, nuclear transfer, manipulating and sorting chromosomes, physics of individual chromosomes.)
Microfluidics. (Cell lysis, microwells, droplet creation / transportation / sorting / merging, PDMS design and manufacturing.)
Cell engineering. (Stem cell culturing, DNA damage and repair, CRISPR-Cas9 and transposases and other gene editing systems.)
Statistical genetics.
Non-reprogenetics approaches
Brain emulation
I’m pretty skeptical of these approaches, partly due to difficulty but mainly due to danger. To personally want to assist such a project, I’d have to first have my opinion changed on the safety question.
See my analysis here: https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods#Brain_emulation .
Brain interfaces
See my analysis here: https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods#Brain_brain_electrical_interface_approaches
I think these methods are interesting, but:
I don’t know how to accelerate them.
Neuralink seems like it’s following a good commercial ramp, but I don’t know about the scientific and technical path to much larger-scale connectivity. This makes it seem like both the research and the eventual uptake for users (also suppressed by the surgery involved) might be very slow.
Analyses of where there are gaps in research, and funding for that research, would be interesting.
For example, if there are really promising approaches to greatly decreasing the effective butcher number and increasing the number of connections, e.g. through transplanted interface neurons, I’d be interested to hear about these.
It’s not clear how to greatly amplify human intelligence using these methods.
I’m quite interested in assisting with projects that analyze this question.
For example, I’ve suggested that large white matter investments by evolution imply that prosthetic connectivity might amplify intelligence (https://tsvibt.blogspot.com/2022/11/prosthetic-connectivity.html). Are there other such legible engineering bottlenecks, which could be supplemented technologically?
See for example “Principles of Neural Design” by Peter Sterling and Simon Laughlin. Do any of those principles, or other similar principles, suggest ways to substantially increase peak brain function?
What about human-human networking, or networking with externally supported brain tissue? What about machine learning acceleration for some brain operations?
I’m quite interested in projects that could either potentially have results that change these opinions of mine, or are predicated on a different opinion and could explain the difference when applying for funding.
Brain cellular/molecular reprogramming
All the existing work along these lines that I’m aware of is not ambitious enough—it doesn’t aim for SHIA. But I’m interested in ambitious ideas, such as somehow safely loosening perineuronal networks, or proliferating longer-range connections, or increasing genesis of dendrites / synapses / axons / neurons, or coaxing neurons themselves to regress to a more childlike or young-adult regulatory state, or specifically upregulating some child-brain-like GRNs or other characters, or things like that.
I’m skeptical because lifetime development seems to involve many major irreversible changes, and making large-scale changes via cellular reprogramming would carry huge risks of damaging existing mental structures and causing mental illness.
Brain gene editing
If anyone has a good way to accelerate research to find reasonable solutions to the delivery problem, let me know.
I’m also interested in research that could get a better picture of how large the gains would be, given that developmental windows would have passed for adult intervention. I’m skeptical that it could have large effects, but interested in more evidence.
Other
See also “Massive neural transplantation” here: https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods#Massive_neural_transplantation. And see e.g. Revah et al. (2022)[1]. If this worked, one could imagine transplanting gene-edited neurons.
I’m interested in any approaches that could lead to strong human intelligence amplification. I don’t know of plausible such approaches other than the ones I listed, but I’m open to being convinced otherwise.
I’m also interested in approaches that have a pretty high chance of leading to more modest but genuine HIA. (By “genuine” I mean “actually increases ability to solve hard thinky problems”, rather than for example keeping you awake longer, having a higher mood, removing brain fog to bring you back to baseline, increasing performance on some tests that do not generalize to solving hard thinky problems, making you feel less inhibited, and so on.) I’m open to being convinced about some method, such as transcranial electromagnetic stimulation or selective HDACis; but it would take some convincing. Likewise, it’s conceivable that software tools could reach my bar, but they would have a large burden of proof to overcome. For example, powerful ways of communicating and coordinating, such as content filtering that allows group epistemics to be much more efficient and sane, could be interesting.
[1] Revah, Omer, Felicity Gore, Kevin W. Kelley, et al. “Maturation and Circuit Integration of Transplanted Human Cortical Organoids.” Nature 610, no. 7931 (2022): 319–26. https://doi.org/10.1038/s41586-022-05277-w.
One thing that’s unclear to me is whether attempts to use AI systems to augment human capabilities in these domains is in-scope or whether the round is focused on direct enhancement of these capabilities.
I’m also curious about your opinion on whether biological-enhancement-based approaches are likely to bear fruit in time to matter. Do you think it’s plausible that timelines might be long on our current path, or are you more hoping that there’s a pause that provides humanity with more time?
(Alternatively, is it more that you think we need enhanced capabilities to succeed at alignment, even if current timeline projections make this appear challenging?)
The round is SFF’s, so I can’t speak to the round in general.
Personally, I’m open in principle to this, but it would have a high burden of proof.
Both. Pause is important. With or without a pause, I don’t think that confident short timelines make sense. See https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce and https://www.lesswrong.com/posts/5tqFT3bcTekvico4d/do-confident-short-timelines-make-sense
Something faster than reprogenetics would be nice, I just don’t see a way that seems likely to work.
I think alignment is probably extremely difficult, and we would have a relatively better chance with more brainpower, though maybe not a high chance. For why I think it helps X-risk, see https://tsvibt.blogspot.com/2025/11/hia-and-x-risk-part-1-why-it-helps.html (though see also https://www.lesswrong.com/posts/K4K6ikQtHxcG49Tcn/hia-and-x-risk-part-2-why-it-hurts).
I am interested in collaborating on proposals for human intelligence enhancement projects. I have several unusual projects and ideas documented here: https://diyhpl.us/wiki/nootropics including brain surgery, directed evolution of brain microbes, directed evolution and selective breeding of smarter animal populations, cell therapy (neurons from a smarter animal, inserted during fetal development, maaaybe post-birth), open-source software for brain implants (firmware etc), open-source software for brain surgery robotics, and, of course, human embryo genetic engineering.
Possibly. I’m not so clear what’s going on with the experiments. The one you cite that has a healthy animal getting smarter also states “Mice allografted with murine GPCs showed no enhancement of either LTP or learning,” which suggests the same wouldn’t work with humans. Possibly you could do it with gene-edited human neurons / stem cells. But it feels super speculative whether that would improve much. But maybe.
It’s not clear to me, and possibly to other readers, what level of research or speculative research you personally find worthwhile. (I don’t necessarily mean within the context of the grant program your post was discussing; just with regard to the goal of biology winning or intelligence enhancement.)
As a first draft, roughly, I think the speculation is worthwhile IF AND ONLY IF it’s in a context where it will then be followed up by maker/breaker investigation, on the question of “whether / how this can actually lead to SHIA in the real world”. This includes going back and forth between skeptically searching for flaws, and optimistically searching for workarounds/alternatives/reasons for hope. It also includes thinking about the whole process of getting to the working tech, including
would this even increase intelligence meaningfully, and how would we know
getting researchers and funding for the research at various stages
having intermediate feedback on success
questions about how society will receive it—researchers, regulation, funding, and deployment are all related to this, so if you’re so dismissive of these questions that you don’t consider them at all, there’s a significant chance you’re just barking up the wrong tree in terms of actually getting this done
I think that some ideas are born fragile and they need to be incubated and insulated before they are exposed to the horrors of politics.
I’m talking both about politics, but also and mainly about the technical plan.
Like what? Which ones plausibly significantly increase intelligence?
Note that this has significant risks and huge resulting difficulties. People generally don’t want to offer crazy elective risky surgeries.
(In general I don’t much buy “try it on animals and see what works”, because of the issue where human brains are exceptional, and where we wouldn’t have a great way of testing intelligence in animals IIUC.)
Animal brain architecture is very similar to human brain architecture. There have been other surgeries for humans that have improved IQ in cases of severe debilitating disease. Naturally, nobody thought to try this on normal humans. … at least to my knowledge.
You’re talking about revascularization? It’s interesting, but would need a lot of fleshing out.
To step back a bit, I appreciate you thinking about these things and proposing ideas, but in order to make something actually work, I think there has to be a lot more in depth exploration. In particular, there’d have to be iterative maker/breaker investigation of the idea. In other words, I think you should argue against your own idea, then improve the idea and counterargue in favor, then critique the new version again, and repeat. Then for some ideas, you might actually convince yourself that the idea isn’t that workable or promising; for other ideas, you might be able to make a more convincing case and/or put together a promising version of the project.
This is possibly interesting, but it would need more argument / detail. At the moment, mainly I’d view it as an interesting vector for gene editing a bunch of neurons. I’m skeptical of things like “add a bunch of BDNF” increasing intelligence much, but I could maybe be convinced otherwise.
Probably for me to want to suggest that someone fund a project on this, you’d need an expert on board, who can explain well what’s been tried, what the bottlenecks are, what you’re going to try that’s different, why it would plausibly work, why it would be able to support the cargo, what the cargo is supposed to do, etc.
Well, I have some details regarding microbial nootropic evolution on my page. I’ll leave that for now but happy to discuss if prodded.
With regards to your comment about what has been tried before, you have to keep in mind that people generally have not tried to improve IQ so directly. There has been a lack of projects and resources in these areas.
For example many of the transgenic mouse experiments that have improved intelligence have been small projects that were one-off, focused on a single gene or mutation. To my knowledge there has not been a large-scale or directed project to seriously pursue the prospect of developing intelligence enhancement.
Even domestication projects have been rather limited, IIRC often resulting in lower intelligence(?).
What I mean is, what’s been tried regarding using bacteria as persistent delivery mechanisms in the brain.
I’m not sure I get the point of this. If you succeed, then now you’re doing horribly unethical+immoral experiments on intelligent conscious beings at a significant scale with little benefit. In terms of genetics, we already know enough about the polygenic architecture of intelligence to probably get to world-class-genius levels. On my view, making that feasible for many people / whoever wants, is more important (and easier and safer and more likely to work) than pushing much past that, if that’s relevant.
(I also do not in fact believe you can evolve an animal population to human-level intelligence within a couple decades. If it’s, say, chimps, then even leaving aside ethics, you have only a few generations. If it’s, say, mice, then you’re probably really far from having genius mice.)
Further, I don’t see much intermediate benefits, whether financial or scientific.
To be clear, it’s worthwhile to test out strong reprogenetics on animals; but that’s in part because you’re skipping the intermediate generations, and instead just seeing if you can directly vector a trait by vectoring the genome based off polygenic scores from the current population.
Developing higher intelligence is not unethical or immoral. I am very surprised to hear you say otherwise. I think that in a lot of these discussions people seem to go in with some sort of base assumption that everything is going to be abhorrent and awful and terrible. I have given you no indication of that. I think it’s uncharitable to assume that developing higher intelligence through these methods is inherently unethical or immoral. Intelligence is extremely beneficial and extremely moral to develop. Also, on the detail level, I don’t actually believe that you would need to breed an animal population to human-level intelligence to benefit from this sort of project. I think you would be able to learn many things that could be applied to humans even if the animal population is developed to a level below human intelligence.
It’s not the higher intelligence that’s bad, it’s the forced breeding or other dangerous experiments on much smarter animals.
Like what?