Out of the four major AI companies, three seem to be actively trying to build God-level AGI as fast as possible. And none of them are Meta. To paraphrase Connor Leahy, watch the hands, not the mouth. Three of them talk about safety concerns, but actively pursue a reckless agenda. One of them dismisses safety concerns, but seems to lag behind the others and is not currently moving at breakneck speed. I think the general anti-Meta narrative in EA exists because the three other AI companies have used EAs for their own benefit (poaching talent, resources, etc.). I do not think Meta has yet warranted being a target.
Most of my experience is in the AI Safety sphere, and for that, I think perks and high salaries are critical. I’d love to see Alignment orgs with more of these things. The issue is we need high talent, and that high talent knows its worth, especially right now. If they can get Business Class working at Meta AI, I’d want to offer them First Class. If you have the money to make it happen, outbidding competitors for talent and being a talent attractor is important. Perks signal a job is high status. Retreats in luxurious locations signal high status. High status attracts high talent. I can’t ask everyone with great talent to work on safety just out of the goodness of their hearts.
Why Is No One Trying To Align Profit Incentives With Alignment Research?
Should the US start mass-producing hazmat suits, so that, in the event of an engineered pandemic, the spread of the disease could be prevented while still maintaining critical infrastructure and the delivery of basic necessities?
Using Consensus Mechanisms as an approach to Alignment
Widening Overton Window—Open Thread
Humans are not prepared to operate outside their moral training distribution
“The strategy of “get a lot of press about our cause area, to get a lot of awareness, even if they get the details wrong” seems to be the opposite of what EA is all about” Yes, and I think this is a huge vulnerability. Winning the narrative actually matters in the real world.
Are there plans from any organizations to support former FTX grantees? I’m not one of them, but I know of many who received funding from the FTX regranting program and are now suddenly without funding and might face clawbacks.
Five Areas I Wish EAs Gave More Focus
Aligned Objectives Prize Competition
AI Safety Strategy—A new organization for better timelines
My quick rebuttal points to the flaw you seem to also acknowledge: the different factors you calculate are not independent variables. They all likely influence each other’s probabilities (greater capabilities can give rise to greater scaling of manufacturing, since people will want more of it; greater intelligence can find better forms of efficiency, which makes systems cheaper to run; etc.). Treating correlated factors as independent is how you can use probabilities to estimate that almost anything is extremely improbable, as you noted.
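To make the point concrete, here is a minimal simulation sketch (my own illustration with made-up numbers, not anything from the original post): three forecast "stages" share a hypothetical latent "progress" factor, so multiplying their marginal probabilities understates the true joint probability.

```python
import random

# A minimal sketch (illustrative numbers only): three "stages" of a
# compound forecast share a hypothetical latent driver, so treating
# them as independent and multiplying their marginal probabilities
# understates the true joint probability.

random.seed(0)
N = 1_000_000

stage_hits = [0, 0, 0]
joint_hits = 0

for _ in range(N):
    progress = random.random()  # shared latent factor in [0, 1)
    # Each stage succeeds with probability 0.2 + 0.6 * progress,
    # so all three stages rise and fall together.
    stages = [random.random() < 0.2 + 0.6 * progress for _ in range(3)]
    for i, hit in enumerate(stages):
        stage_hits[i] += hit
    joint_hits += all(stages)

marginals = [h / N for h in stage_hits]
product_of_marginals = marginals[0] * marginals[1] * marginals[2]
actual_joint = joint_hits / N

print(f"product of marginals: {product_of_marginals:.3f}")  # ~0.125
print(f"actual joint frequency: {actual_joint:.3f}")        # ~0.17, noticeably higher
```

The same mechanism runs in reverse: chain enough "independent" factors together and any compound estimate is driven toward zero, regardless of how correlated the factors really are.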
Is there anything that can be done to get New START fully reinstated?
I would have said no a year ago, but a lot of people are now much more interested in AIS. I think there’s a lot of potential for much more funding coming in. The binary of greedy vs. non-greedy humans sounds strange to me. What I can say is that many EA types have a mentality built around neglectedness, how they can individually have the most impact, etc. Many EAs would probably say they wouldn’t be working on the things they’re working on if enough other people were. This is great in isolation, and a mentality I usually hold, but it does have problems. The “greedy” humans have the mentality of “someone else is going to do this, I want to get there first.” Individually, this doesn’t change much. But if you have multiple people doing this, you get people competing with each other, and usually they push each other to reach the outcome faster.
Yes. But everyone’s pushing hard on capabilities right now anyway. This has always been a problem in AIS, and we can’t really do anything without running into this risk. But I think there’s a big difference between employees at an org and people starting orgs. I’d be fine with existing orgs attracting talent the way I mentioned, but I wouldn’t want to throw money at someone (who’s only interested in status) to start their own org. It’s certainly tricky. Like, I can imagine how the leaders of an org could slowly get usurped. Holding current leaders in AIS in prestige could possibly mitigate the risk, with people of senior status in the field functioning as “gatekeepers”. A young physicist who wants to gain clout, only for the sake of their own status, still has to deal with senior members in the field who might call bs. If enough senior members call bs, that person loses status.
Thank you for voicing your concerns and frustrations. I think many in the EA Community need to hear this sort of thing, even if they don’t like it. To give an idea of why I think EAs have responded this way: many are stuck in defense mechanisms that make them think they need a certain PR face at all times. We’re used to being criticized, slammed by the media, and having our views either not taken seriously or depicted as dangerous or cultish. In a sense, I think the branding of ‘EA’ inhibits many from being willing to act quickly. They like the long-term strategy: calculated, carefully built plans, where every piece is added on top of the last. But I agree with you that the situation has changed. We may not have much time to act, and most of the progress toward getting the public to take AI Risk seriously has happened outside of EA. I think EA has an ivory tower problem, and needs to provide channels for people outside it to take action.
(crossposted from LessWrong)
I created a simple Google Doc for anyone interested in joining/creating a new org to put down their name, contact info, what research they’re interested in pursuing, and what skills they currently have. Over time, I think a network can be fostered, where relevant people start forming their own research agendas, and then begin building their own orgs/getting funding. https://docs.google.com/document/d/1MdECuhLLq5_lffC45uO17bhI3gqe3OzCqO_59BMMbKE/edit?usp=sharing
I’m curious what you think of this, and whether it impedes the effectiveness of what you’re describing: https://arxiv.org/abs/2309.05463
I’m starting to think the EA community will scare itself into never taking any action at all. I don’t really feel like going over this point-for-point, because I think this post demonstrates a much deeper failure of rationality. The short answer is: you’re doing cost/benefit analysis wrong. You’re zooming in on every possible critique and concluding that this is an organization that shouldn’t have talent directed toward it. Every alignment organization right now has relatively poor results. But the right response to that isn’t funneling talent into the ones with slightly better results; it’s encouraging the spread of talent among many different approaches.