I’m a former data scientist with five years of industry experience, now working in Washington, DC to bridge the gap between policy and emerging technology. AI is moving very quickly and we need to help the government keep up!
I work at IAPS, a think tank of aspiring wonks working to understand and navigate the transformative potential of advanced AI. Our mission is to identify and promote strategies that maximize the benefits of AI for society and develop thoughtful solutions to minimize its risks.
I’m also a professional forecaster with specializations in geopolitics and electoral forecasting.
Peter Wildeford
If you’re considering a career in AI policy, now is an especially good time to start applying widely, as there’s a lot of hiring going on right now. In my Substack I documented over a dozen different opportunities that I think are very promising.
Thank you for sharing your perspective and I’m sorry this has been frustrating for you and people you know. I deeply appreciate your commitment and perseverance.
I hope to share with you a bit of perspective from my side of things as a hiring manager:
Why aren’t orgs leaning harder on shared talent pools (e.g. HIP’s database) to bypass public rounds? HIP is currently running an open search.
It’s very difficult to run a single open search that covers all conceivable jobs and identifies the best fit for each of them. And even if you do have a list of the top candidates for everything, it’s still hard to sort and filter through that list without more screening. This makes HIP a valuable supplement but not a replacement.
~
I also think it would be worth considering how to provide some sort of job security/benefit for proven commitment within the movement
‘The movement’ is just the mix of all the people and orgs doing their own thing. Individual orgs themselves should be responsible for job security and rewarding commitment—the movement itself unfortunately isn’t an entity that is capable of doing that.
~
I know one lady who worked at a top EA org for eight years; she’s now struggling to find her next position within the movement, competing with new applicants! That seems like a waste of career capital.
Hopefully her eight years give her an advantage over other applicants! That is, the career capital hasn’t been ‘wasted’ at all. But it still makes sense to weigh her against other applicants who may have other skills needed for the role; being good at one role doesn’t automatically make you a perfect fit for another.
~
Moreover, I would avoid the expensive undertaking of a full hiring round until my professional networks had been exhausted. After all, if you’re in my network to begin with, you probably did something meritorious to get there.
While personal networks are a great place to source talent, they’re far from perfect: in particular, personal networks are built partly on merit, but they’re also shaped by bias and a preference for ‘people like us’. A ‘full hiring round’ is thus more meritocratic: anyone can apply, and you don’t need to figure out how to get into the right person’s network first.
~
You might like this article: Don’t be bycatch.
I think you should get the LLM to give you the citation and then cite that (ideally after checking it yourself).
At least in my own normative thought, I don’t just wonder about what meets my standards. [...] I think the most important disagreement of all is over which standards are really warranted.
Really warranted by what? I think I’m an illusionist about this in particular, as I don’t even know what we could reasonably be disagreeing over.
For a disagreement about facts (is this blue?), we can argue about actual blueness (measurable) or we can argue about epistemics (which strategies most reliably predict the world?) and meta-epistemics (which strategies most reliably figure out strategies that reliably predict the world?), etc.
For disagreements about morals (is this good?), we can argue about goodness but what is goodness? Is it platonic? Is it grounded in God? I’m not even sure what there is to dispute. I’d argue the best we can do is argue to our shared values (perhaps even universal human values, perhaps idealized by arguing about consistency etc.) and then see what best satisfies those.
~
On your view, there may not be any normative disagreement, once we all agree about the logical and empirical facts.
Right—and this matches our experience! When moral disagreements persist after full empirical and logical agreement, we’re left with clashing bedrock intuitions. You want to insist there’s still a fact about who’s ultimately correct, but can’t explain what would make it true.
~
It’s interesting to consider the meta question of whether one of us is really right about our present metaethical dispute, or whether all you can say is that your position follows from your epistemic standards and mine follows from mine, and there is no further objective question about which we even disagree.
I think we’re successfully engaging in a dispute here and that does kind of prove my position. I’m trying to argue that you’re appealing to something that just doesn’t exist and that this is inconsistent with your epistemic values. Whether one can ground a judgement about what is “really warranted” is a factual question.
~
I want to add that your recent post on meta-metaethical realism also reinforces my point here. You worry that anti-realism about morality commits us to anti-realism about philosophy generally. But there’s a crucial disanalogy: philosophical discourse (including this debate) works precisely because we share epistemic standards—logical consistency, explanatory power, and various other virtues. When we debate meta-ethics or meta-epistemology, we’re not searching for stance-independent truths but rather working out what follows from our shared epistemic commitments.
The “companions in guilt” argument fails because epistemic norms are self-vindicating in a way moral norms aren’t. To even engage in rational discourse about what’s true (including about anti-realism), we must employ epistemic standards. But we can coherently describe worlds with radically different moral standards. There’s no pragmatic incoherence in moral anti-realism the way there would be in global philosophical anti-realism.
You’re right that I need to bite the bullet on epistemic norms too and I do think that’s a highly effective reply. But at the end of the day, yes, I think “reasonable” in epistemology is also implicitly goal-relative in a meta-ethical sense: it means “in order to have beliefs that accurately track reality.” The difference is that this goal is so universally shared across so many different value systems, and so deeply embedded in the concept of belief itself, that it feels categorical.
You say I’ve “replaced all the important moral questions with trivial logical ones,” but that’s unfair. The questions remain very substantive—they just need proper framing:
Instead of “Which view is better justified?” we ask “Which view better satisfies [specific criteria like internal consistency, explanatory power, alignment with considered judgments, etc.]?”
Instead of “Would the experience machine be good for me?” we ask “Would it satisfy my actual values / promote my flourishing / give me what I reflectively endorse / give me what an idealized version of myself might want?”
These aren’t trivial questions! They’re complex empirical and philosophical questions. What I’m denying is that there’s some further question—“But which view is really justified?”—floating free of any standard of justification.
Your challenge about moral uncertainty is interesting, but I’d say: yes, you can hedge across different moral theories if you have a higher-order standard for managing that uncertainty (like maximizing expected moral value across theories you find plausible). That’s still goal-relative, just at a meta-level.
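To make that concrete, here’s a minimal sketch of the kind of expected-choiceworthiness calculation I have in mind; the theories, credences, actions, and scores below are invented purely for illustration:

```python
# Hypothetical illustration of hedging across moral theories by maximizing
# expected moral value ("expected choiceworthiness"). All numbers are made up.
credences = {"utilitarianism": 0.6, "deontology": 0.4}  # credence in each theory

# How choiceworthy each action is under each theory, on a common scale.
# (Intertheoretic comparability is itself a contested assumption.)
choiceworthiness = {
    "utilitarianism": {"lie": 5, "tell_truth": 3},
    "deontology": {"lie": -10, "tell_truth": 4},
}

def expected_choiceworthiness(action: str) -> float:
    return sum(credences[t] * choiceworthiness[t][action] for t in credences)

actions = ["lie", "tell_truth"]
print({a: expected_choiceworthiness(a) for a in actions})  # expected score per action
print(max(actions, key=expected_choiceworthiness))         # the hedged recommendation
```

Whether this particular aggregation rule (or the common scale it assumes) is the right one is, again, a question that only has an answer relative to some higher-order standard, which is exactly the point.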
The key insight remains: every “should” or “justified” implicitly references some standard. Making those standards explicit clarifies rather than trivializes our discussions. We’re not eliminating important questions—we’re revealing what we’re actually asking.
You raise a fair challenge about epistemic norms! Yes, I do think there are facts about which beliefs are most reasonable given evidence. But I’d argue this actually supports my view rather than undermining it.
The key difference: epistemic norms have a built-in goal—accurate representation of reality. When we ask “should I expect emeralds to be green or grue?” we’re implicitly asking “in order to have beliefs that accurately track reality, what should I expect?” The standard is baked into the enterprise of belief formation itself.
But moral norms lack this inherent goal. When you say some goals are “intrinsically more rationally warranted,” I’d ask: warranted for what purpose? The hypothetical imperative lurks even in your formulation. Yes, promoting happiness over misery feels obviously correct to us—but that’s because we’re humans with particular values, not because we’ve discovered some goal-independent truth.
I’m not embracing radical skepticism or saying moral questions are nonsense. I’m making a more modest claim: moral questions make perfect sense once we specify the evaluative standard. “Is X wrong according to utilitarianism?” has a determinate, objective, mind-independent answer. “Is X wrong simpliciter?” does not.
The fact that we share deep moral intuitions (like preferring happiness to misery) is explained by our shared humanity, not by those intuitions tracking mind-independent moral facts. After all, we could imagine beings with very different value systems who would find our intuitions as arbitrary as we might find theirs.
So yes, I think we can know things about the future and have justified beliefs. But that’s because “justified” in epistemology means “likely to be true”—there’s an implicit standard. In ethics, we need to make our standards explicit.
Thanks!
I think all reasons are hypothetical, but some hypotheticals (like “if you want to avoid unnecessary suffering...”) are so deeply embedded in human psychology that they feel categorical. This explains our moral intuitions without mysterious metaphysical facts.
The concentration camp guard example actually supports my view—we think the guard shouldn’t follow professional norms precisely because we’re applying a different value system (human welfare over rule-following). There’s no view from nowhere; there’s just the fact that (luckily) most of us share similar core values.
You were negative toward the idea of hypothetical imperatives elsewhere but I don’t see how you get around the need for them.
You say epistemic and moral obligations work “in the same way,” but they don’t. Yes, we have epistemic obligations to believe true things… in order to have accurate beliefs about reality. That’s a specific goal. But you can’t just assert “some things are good and worth desiring” without specifying… good according to what standard? The existence of epistemic standards doesn’t prove there’s One True Moral Standard any more than the existence of chess rules proves there’s One True Game.
For morality, there are facts about which actions would best satisfy different value systems. I consider those to be a form of objective moral facts. And if you have those value systems, I think it is thus rationally warranted to desire those outcomes and pursue those actions. But I don’t know how you would get facts about which value system to have without appealing to a higher-order value system.
Far from undermining inquiry, this view improves it by forcing explicitness about our goals. When you feel “promoting happiness is obviously better than promoting misery,” that strikes me not as a metaphysical truth but as expressive assertivism. Real inquiry means examining why we value what we value and how to get it.
I’m far from a professional philosopher and I know you have deeply studied this much more than I have, so I don’t mean to accuse you of being naive. Looking forward to learning more.
“Nihilism” sounds bad but I think it’s smuggling in connotations I don’t endorse.
I’m far from a professional philosopher but I don’t see how you could possibly make substantive claims about desirability from a pure meta-ethical perspective. But you definitely can make substantive claims about desirability from a social perspective and a personal perspective. The reason we don’t debate racist normative advice is that we’re not racists. I don’t see any other way to determine this.
Morality is Objective
People keep forgetting that meta-ethics was solved back in 2013.
I recently made a forecast based on the METR paper with median 2030 timelines and much less probability on 2027 (<10%). I think this forecast of mine is vulnerable to far fewer of titotal’s critiques, but still vulnerable to some (especially not having sufficient uncertainty around the type of curve to fit).
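To illustrate the curve-type issue, here’s a minimal sketch with made-up numbers (not the actual METR data or my actual model) showing how fitting an exponential versus a superexponential to the same task-horizon points can shift the extrapolated date by years:

```python
# Minimal sketch with hypothetical numbers (NOT the actual METR data): how much
# the assumed curve family changes when extrapolated task horizons hit a target.
import numpy as np

years = np.array([2019, 2020, 2021, 2022, 2023, 2024])  # hypothetical dates
horizon = np.array([0.1, 0.3, 1.0, 4.0, 15.0, 60.0])    # hypothetical task horizons (minutes)
target = 10_000                                          # e.g. roughly a week- to month-long task

# Exponential growth: log(horizon) is linear in time.
s1, i1 = np.polyfit(years, np.log(horizon), 1)
year_exp = (np.log(target) - i1) / s1

# Superexponential growth: log(log(horizon)) is linear in time (the doubling
# time itself shrinks). Fit only on points where horizon > 1 minute.
mask = horizon > 1
s2, i2 = np.polyfit(years[mask], np.log(np.log(horizon[mask])), 1)
year_sup = (np.log(np.log(target)) - i2) / s2

print(f"Exponential fit reaches the target around {year_exp:.1f}")
print(f"Superexponential fit reaches it around {year_sup:.1f}")
```

A forecast that fits only one curve family is implicitly putting nearly all of its weight on that structural assumption; spreading probability across curve types is the extra uncertainty I mean.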
p(doom) is about doom. For AI, I think this can mean a few things:
- Literal human extinction
- Humans lose power over their future but are still alive (and potentially even have nice lives), either via stable totalitarianism or gradual disempowerment or other means

The second bucket is pretty big.
~
What do the superforecasters say? Well, the most comprehensive effort to ascertain and influence superforecaster opinions on AI risk was the Forecasting Research Institute’s Roots of Disagreement Study.[2] In this study, they found that nearly all of the superforecasters fell into the “AI skeptic” category, with an average P(doom) of just 0.12%. If you’re tempted to say that their number is only so low because they’re ignorant or haven’t taken the time to fully understand the arguments for AI risk, then you’d be wrong; the 0.12% figure was obtained after having months of discussions with AI safety advocates, who presented their best arguments for believing in AI x-risks.
I see this a bunch but I think this study is routinely misinterpreted. I have some knowledge from having participated in it.
The question being posed to forecasters was about literal human extinction, which is pretty different from how I see p(doom) typically being interpreted. A lot of the “AI skeptics” were very sympathetic to AI being the biggest deal, but just didn’t see literal extinction as that likely. I also have a moderate p(doom) (20%-30%) while thinking literal extinction is much lower than that (<5%).

Also, the study ran 2023 April 1 to May 31, which was right after the release of GPT-4. Since then there’s been so much more development. My guess is that if you polled the “AI skeptics” now, their p(doom) would have gone up.
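For what it’s worth, here’s roughly how my own numbers above decompose (rounded purely for illustration):

$$P(\text{doom}) \approx P(\text{extinction}) + P(\text{disempowered but alive}) \approx 0.05 + 0.20 \approx 0.25$$

so most of my p(doom) sits in the second bucket rather than in literal extinction.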
I just saw that Season 3, Episode 9 of Leverage: Redemption (“The Poltergeist Job”), which came out on 2025 May 29, has an unfortunately very unflattering portrayal of “effective altruism”.
Matt claims he’s all about effective altruism. That it’s actually helpful for Futurilogic to rake in billions so that there’s more money to give back to the world. They’re about to launch Galactica. That’s free global Internet.
[...] But about 50% of the investments in Galactica are from anonymous crypto, so we all know what that means.
The main antagonist and CEO of Futurilogic, Matt, uses EA to justify horrific actions, including allowing firefighters to be injured when his company’s algorithm throttles cell service during emergencies. He also literally murders people while claiming it’s for the greater good. And if that’s not enough, he’s also laundering money for North Korea through crypto investments!
Why would he do this? He explicitly invokes utilitarian reasoning (“Trolley Theory 101”) to dismiss harm caused:
When I started this company, I started it on the idea that if we could make enough money, we could make the entire world a better place, guys. All of it. Sometimes, in order to make something like that happen, huge sacrifices are required. Sacrifices like Josh. Or sacrifices like the firefighters. But that’s Trolley Theory 101, guys. Yeah. I don’t have any regrets. Not one.
And when wielding an axe to kill someone, Matt says: “This is altruism, Skylar! Whatever I need to do to save the world.”
But what’s his cause area? Something about ending “global hunger and homelessness” through free internet access. Matt never articulates any real theory of change beyond “make money (and do crimes) → launch free internet → somehow save world.”
And of course the show depicts the EA tech executives at Futurilogic as being in a “polycule” with a “hive mind” mentality.
Bummer.
AGI by 2028 is more likely than not
I don’t think this is as clear a dichotomy as people think it is. A lot of global catastrophic risk doesn’t come from literal extinction because human extinction is very hard. A lot of mundane work on GCR policy involves a wide variety of threat models that are not just extinction.
Here’s my summary of the recommendations:
National Security Testing
Develop robust government capabilities to evaluate AI models (foreign and domestic) for security risks
Once ASL-3 is reached, the government should mandate pre-deployment testing
Preserve the AI Safety Institute in the Department of Commerce to advance third-party testing
Direct NIST to develop comprehensive national security evaluations in partnership with frontier AI developers
Build classified and unclassified computing infrastructure for testing powerful AI systems
Assemble interdisciplinary teams with both technical AI and national security expertise
Export Control Enhancement
Tighten semiconductor export restrictions to prevent adversaries from accessing critical AI infrastructure
Control H20 chips
Require government-to-government agreements for countries hosting large chip deployments
As a prerequisite for hosting data centers with more than 50,000 chips from U.S. companies, the U.S. should mandate that countries at high-risk for chip smuggling comply with a government-to-government agreement that 1) requires them to align their export control systems with the U.S., 2) takes security measures to address chip smuggling to China, and 3) stops their companies from working with the Chinese military. The “Diffusion Rule” already contains the possibility for such agreements, laying a foundation for further policy development.
Review and reduce the 1,700-H100 no-license-required threshold for Tier 2 countries
Currently, the Diffusion Rule allows advanced chip orders from Tier 2 countries for less than 1,700 H100s (an approximately $40 million order) to proceed without review. These orders do not count against the Rule’s caps, regardless of the purchaser. While these thresholds address legitimate commercial purposes, we believe that they also pose smuggling risks. We recommend that the Administration consider reducing the number of H100s that Tier 2 countries can purchase without review to further mitigate smuggling risks.
Increase funding for Bureau of Industry and Security (BIS) for export enforcement
Lab Security Improvements
Establish classified and unclassified communication channels between AI labs and intelligence agencies for threat intelligence sharing, similar to Information Sharing and Analysis Centers used in critical infrastructure sectors
Create systematic collaboration between frontier AI companies and intelligence agencies, including Five Eyes partners
Elevate collection and analysis of adversarial AI development to a top intelligence priority, so as to provide strategic warning and support export controls
Expedite security clearances for AI industry professionals
Direct NIST to develop next-generation security standards for AI training/inference clusters
Develop confidential computing technologies that protect model weights even during processing
Develop meaningful incentives for implementing enhanced security measures via procurement requirements for systems supporting federal government deployments.
Direct DOE/DNI to conduct a study on advanced security requirements that may become appropriate to ensure sufficient control over and security of highly agentic models
Energy Infrastructure Scaling
Set an ambitious national target: build 50 additional gigawatts of power dedicated to AI by 2027
Streamline permitting processes for energy projects by accelerating reviews and enforcing timelines
Expedite transmission line approvals to connect new energy sources to data centers
Work with state/local governments to reduce permitting burdens
Leverage federal real estate for co-locating power generation and next-gen data centers
Government AI Adoption
across the whole of government, the Administration should systematically identify every instance where federal employees process text, images, audio, or video data, and augment these workflows with appropriate AI systems.
Task OMB to address resource constraints and procurement limitations for AI adoption
Eliminate regulatory and procedural barriers to rapid AI deployment across agencies
Direct DoD and Intelligence Community to accelerate AI research, development and procurement
Target largest civilian programs for AI implementation (IRS tax processing, VA healthcare delivery, etc.)
Economic Impact Monitoring
Enhance data collection mechanisms to track AI adoption patterns and economic implications
The Census Bureau’s American Time Use Survey should incorporate specific questions about AI usage, distinguishing between personal and professional applications while gathering detailed information about task types and systems employed.
Update Census Bureau surveys to gather detailed information on AI usage and impacts
Collect more granular data on tasks performed by workers to create a baseline for monitoring changes
Track the relationship between AI computation investments and economic performance
Examine how AI adoption might reshape the tax base and cause structural economic shifts
If you’ve liked my writing in the past, I wanted to share that I’ve started a Substack: https://peterwildeford.substack.com/
Ever wanted a top forecaster to help you navigate the news? Want to know the latest in AI? I’m doing all that in my Substack—forecast-driven analysis about AI, national security, innovation, and emerging technology!
Yeah I think so, though there still is a lot of disagreement about crucial considerations. I think the OP advice list is about as close as it’s going to get.
I still stand by the book and I attribute a lot of my historical failures in management to not implementing this book well enough (especially the part about creating clarity around goals).