I’m a former data scientist with five years of industry experience, now working in Washington, DC to bridge the gap between policy and emerging technology. AI is moving very quickly and we need to help the government keep up!
I work at IAPS, a think tank of aspiring wonks working to understand and navigate the transformative potential of advanced AI. Our mission is to identify and promote strategies that maximize the benefits of AI for society and develop thoughtful solutions to minimize its risks.
I’m also a professional forecaster with specializations in geopolitics and electoral forecasting.
I agree. I think we have to understand that “balancing out” Hanania plays into his game. He’s an intentional provocateur—he says edgy things for attention.
And then he uses that attention to build a platform.
And his explicit intention with that platform is to overturn the US Civil Rights Act.
I don’t want to play any part in enabling that and that’s what “balancing out” does.
I’m a pro forecaster. I build forecasting tools. I use forecasting in a very relevant day job running an AI think tank. I would normally be very enthusiastic about Manifest. And I think Manifest would really want me there.
But I don’t attend because of people there who have “edgy” opinions that might be “fun” for others but aren’t fun for me. I don’t want to come and help “balance out” someone who thinks that ~using they/them pronouns is worse than committing genocide~ (sorry, this was a bad example as discussed in the comments, so I’ll stick with the pretty clear “has stated that black people are animals who need to be surveilled en masse to reduce crime”). I want to talk about forecasting.
It’s your right to have your conference your way, and it’s others’ right to attend and have fun. But I think Manifest seriously underrates how much they are losing out on here by being “edgy” and “fun”, and I really don’t want to be associated with it.
Yes, I was thinking all of those:

- Career capital generally seems good for a variety of jobs in think tanks. You could also take a high-paying job as a lobbyist and earn to give. (Obviously you still want to be choosy about what you lobby for, so as not to do actual harm with your job.)
- I think the direct impact is underrated: especially if you can get to the Legislative Director level or something similarly senior, some staff seem to get a surprising amount of autonomy to pursue the policies they care most about, and a lot of good policy is bottlenecked on having someone to champion it and aggressively push for it.
I think nearly every person could engage productively on AI issues by (in no particular order):

- Donating to organizations you think do good work on these issues.
- Contacting your representatives in government and letting them know how you feel about these issues and that it affects how you vote.
- Commenting publicly (e.g., on Twitter) on how you feel about these issues.
- Participating in demonstrations (e.g., PauseAI) as you feel they align with your interests and values.
Right. I think being ambiguity-averse and/or risk-averse is a good reason to potentially prefer near-term cause areas, though those have their own issues with robustness as well.
I think this conclusion does logically flow from those premises, but I would question the premises themselves: I think the first and second premises are pretty uncertain, and the third premise is likely false for most people.
Thanks for your thoughts, Dustin. I think it was a mistake at the time—and I said as much—to think that FTX and OpenPhil represented sufficient plurality. But I definitely didn’t think FTX would blow up as it did and given that people can only do so many things, it’s understandable that people didn’t focus enough on donor diversification.
I agree completely.
I’d guess most people report funding, talent, or management constraints. Personally, I think I’ve found myself constrained by all of these except the first (funding) at one point or another.
The problem with the strategy constraint is that you often don’t know whether you’re facing it, because you may not know your strategy is bad. As you say, and I agree: empirically, a lot of people pursue bad strategies. Maybe I’m one of them? It would be hard to tell.
I also think organizations frequently underrate the extent to which they might be constrained on ops.
I think many more junior people should consider careers in government.
I’m thinking not just of discouraging attempts to diversify the funding of one’s own org, but also of discouraging earning to give, discouraging projects to bring in more donors, etc.
I agree but I want to be clear that I don’t think senior EAs are innocent here. I agree with Habryka that this is a situation that was made by a lot of the senior EAs themselves who actively went all in on only two funders (now down to one) and discouraged a lot of attempts to diversify philanthropy.
Is it possible to elaborate on how certain grants but not others would unusually draw on GV’s bandwidth? For example, what is it about digital minds work that draws so much more bandwidth than technical AI safety grants? Personally, I find that this explanation, as offered, doesn’t actually make sense without more detail.
I think the key quote from the original article is “In the near term, we want to concentrate our giving on a more manageable number of strategies on which our leaders feel more bought in and with which they have sufficient time and energy to engage.” Why doesn’t Good Ventures just own the fact that they’re not bought in on some of these grant areas? “Using up limited capacity” feels like a euphemism.
Eight Fundamental Constraints for Growing a Research Organization
I think TechCongress, despite the name, does do executive branch placements, and I think this role in particular is targeted at the executive branch.
This is my biggest confusion as well.
If you take:

“Biden won the last election by 42,918 combined votes in three swing states. Trump won the election before that by 77,744 votes. 537 votes in Florida decided the 2000 election.”

along with:

“…generate net swing-state votes this election range from a few hundred to several thousand dollars per vote”
You get that the entire election outcome can be purchased for ~$13M–$600M.
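A minimal sketch of that arithmetic, assuming illustrative per-vote costs of $300 (“a few hundred”) and $7,700 (“several thousand”) applied to the quoted margins; these specific prices are my assumptions, not figures from the original source:

```python
# Minimal sketch of the back-of-the-envelope above. The vote margins come
# from the quoted passage; the per-vote costs are illustrative assumptions
# standing in for "a few hundred to several thousand dollars per vote".

margins = {
    "2020 (Biden, three swing states combined)": 42_918,
    "2016 (Trump)": 77_744,
}

cost_per_vote_low = 300     # assumed: "a few hundred dollars per vote"
cost_per_vote_high = 7_700  # assumed: "several thousand dollars per vote"

low = margins["2020 (Biden, three swing states combined)"] * cost_per_vote_low
high = margins["2016 (Trump)"] * cost_per_vote_high

print(f"Low end:  ~${low / 1e6:.0f}M")   # ~$13M
print(f"High end: ~${high / 1e6:.0f}M")  # ~$599M
```

(The low end pairs the smaller 2020 margin with the cheap per-vote cost; the high end pairs the larger 2016 margin with the expensive one.)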
But I’m assuming it isn’t actually that cheap to swing the election?
Deeply saddened to hear this. We worked together on Rethink Charity. This loss is incredibly painful.
This could be a long slog, but I think it could be valuable to identify the top ~100 open-source libraries and assess their level of resourcing, to avoid future attacks like the XZ attack. In general, I think work on hardening systems is an underrated aspect of defending against future highly capable autonomous AI agents.
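As a very rough illustration of what a first pass could look like, here is a minimal sketch that ranks a hand-picked candidate list by monthly PyPI downloads via the pypistats.org API. The candidate list is hypothetical, downloads are only a crude usage proxy, and the actual resourcing assessment (maintainer count, funding, review practices) would still be manual:

```python
# Minimal sketch: rank candidate open-source libraries by a crude usage
# proxy (monthly PyPI downloads from pypistats.org). The candidate list is
# hypothetical; a real pass would span ecosystems beyond Python.
import requests

candidates = ["requests", "cryptography", "urllib3", "setuptools"]

def monthly_downloads(package: str) -> int:
    """Fetch the last-month download count from the pypistats.org API."""
    resp = requests.get(
        f"https://pypistats.org/api/packages/{package}/recent",
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]["last_month"]

counts = {pkg: monthly_downloads(pkg) for pkg in candidates}
for pkg, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{pkg}: ~{n:,} downloads/month")
```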
Are the things in the bullets the things you believe or the things you disagree with?
Thanks for your comment and questions!
RP is still involved in work on AI and existential risk. This work now takes place internally at RP on our Worldview Investigations Team and externally via our special projects program.
Across the special projects program in particular we are supporting over 50 total staff working on various AI-related projects! RP is still very involved with these groups, from fundraising to comms to strategic support, and I personally dedicate almost all of my time to AI-related initiatives.
As part of this strategy, our team members who were formerly working in our “Existential Security Team” and our “AI Governance and Strategy” department are doing their work under a new banner that is better positioned to have the impact that RP wants to support.
We don’t direct RP’s unrestricted funds to special projects, so if you want to donate to one of them you would have to restrict your donation to it. RP’s unrestricted funds could be used to support our Worldview Investigations Team. Feel free to reach out to me or to Henri Thunberg (henri@rethinkpriorities.org) if you want to learn more.
I think we’re missing the dynamic, though, where there’s a very clear theory for why Manifest would be dramatically less appealing to black people or to women, when you have platformed and promoted speakers(!!) who think we need mass surveillance of black people to reduce crime, or that intellectual debate is an inherently male activity that women are less well suited to. It’s not a mystery here. Why would it be fun to “balance out” that?