Interested in AI safety talent search and development.
Peter
Really appreciate this post. Recently I’ve felt less certain about whether slowing down AI is feasible or helpful in the near future.
A key crux for me is how productive current alignment and related research actually is. If it's quite valuable at the moment, then having more time would seem better.
It does seem easier to centralize now, while there are fewer labs and fewer entrenched ways of doing things, though it's possible that exponentially rising costs could lead to centralization through market dynamics anyway. Then again, maybe that would be short-lived and some later breakthrough would change the cost of training dramatically.
Yes, it seems difficult to pin those down. Looking forward to the deeper report!
I really want to see more discussion about this; there's clearly serious effort put into it. I've often felt that nuclear is perhaps overlooked/underemphasized even within EA.
This one actually made me laugh out loud.
Nice, yeah they did mention these. Good to have the links.
There’s a sister org working on it: About Depression—StrongMinds America
Actually, they are the same type of error. EA prides itself on using evidence and reason rather than taking the assessments of others at face value. So the fact that others also failed to rely on experts who could have obtained better evidence and reasoning to vet FTX is less compelling to me as an after-the-fact justification for EA as a whole not doing so. I think probably no one really thought much about the possibility, and looking for this kind of social proof just helps us feel less bad.
Yeah, I do sometimes wonder if perhaps there’s a reason we find it difficult to resolve this kind of inquiry.
Yes, I think they’re generally pretty wary of saying much exactly since it’s sort of beyond conceptual comprehension. Something probably beyond our ideas of existence and nonexistence.
Glad to hear that! You’re welcome :)
On the Flynn campaign: I don’t know if it’s “a catastrophe,” but I think it is maybe an example of overconfidence and naivete. As someone who has worked on campaigns and follows politics, I thought the campaign had a pretty low chance of success because of the fundamentals (and asked about it at the time), and that other races would have been better to donate to: either state house races to build the bench, or congressional candidates with better odds like Maxwell Frost, a local activist who ran for the open seat previously held by Val Demings, listed pandemic prevention as a priority, and won. (Then again, Maxwell raised a ton of money, more than all the other candidates combined, so maybe he didn’t need those funds as much as other candidates did.) Salinas was a popular, progressive woman of color with local party support who already represented much of the district at the state level and helped draw the new one. So it seemed pretty unlikely to me that she would lose to someone who had not lived in the state for years, did not have strong local connections, and had never run a campaign before, even with a massive money advantage. And from what I understand, people in the district were oversaturated with ads to the point that many were annoyed. So I think of this as probably an example where EAs would have benefited from relying on more outside experts, both on which races to pick and on how to run a campaign. There were a lot of congressional retirements this year, and there were probably better seats to try to win. Of course, nothing is going to guarantee a win.
On FTX: It seems like if anyone had thought to ask to look at FTX’s balance sheets, things might have been different, at least considering what a mess those balance sheets turned out to be (or whatever records make sense; I’m not a financial expert). If FTX had refused, or had shared something that didn’t make sense, maybe that would have been a warning sign. So that seems like another example where more outside expertise might have been beneficial and saved a lot of headaches. Individually, maybe no one has an incentive to vet FTX even if they get a grant from them. But if we care about the EA ecosystem as a whole, and hundreds of millions suddenly start pouring in from a new source, maybe someone with the relevant financial and accounting expertise should at least request to look at the new megafunder’s balance sheets, especially when the money comes from an industry full of crashes and scams. I’m not sure this would have changed things, but the fact that it doesn’t seem to have happened means there are probably many other things we are missing, things that people with relevant expertise are more likely to see. I know people have said “well, all these other VCs missed it; they never looked into it either,” but EA prides itself on NOT just doing what everyone else does (not donating to the same charities, picking the same career paths, or volunteering for the same organizations just because other people do) but on using reason and evidence to be more effective. So why would we think this is a good reason for failing to attempt better due diligence with respect to movement finances? We could have had a process for investigating any new megafunder a bit more thoroughly, perhaps with the help of outside experts. We can’t change the past, but surely we can change some things going forward.
I think the main obstacle is tractability: there doesn’t seem to be any known methodology that could be applied to resolve this question in a definitive way, and it’s not clear how we could even attempt to find such a method. Whereas projects related to areas such as preventing pandemics and making sure AI isn’t misused or poorly designed seem 1) incredibly important, 2) tractable (it looks like we’re making some progress and can find directions for further progress: better PPE, pathogen screening, new vaccines, interpretability, agent foundations, chip regulation), and 3) neglected right now, and they will matter for the next few decades at least unless the world changes dramatically.
Also, it could be possible that there are “heaven” worlds and “hell” worlds that last an extremely long time, but not forever. Buddhist traditions are one group that tend to emphasize that all worldly places and experiences are impermanent, even extremely pleasant and unpleasant ones.
“The kingdom of heaven is within you” comes to mind. I’ve always thought that was a very important verse. I imagine it may be talking about some kind of distinct and significant transformation that other religions might refer to by other names, such as awakening or enlightenment, that makes us durably and noticeably more peaceful and loving/kind toward others.
These experiences are often described in a way that indicates the subjective experience of having a distinct, separate self diminishes or even disappears. It may not even make sense to think of heaven using our concepts of a ‘place,’ let alone one in which what we perceive as a separate self would exist.
Thank you—I had forgotten about that post and it was really helpful.
I’ve definitely seen well-meaning people mess up interactions without realizing it in my area (non-EA related). This seems like a really important point and your experience seems very relevant given all the recent talk about boards and governance. Would love to hear more of your thoughts either here or privately.
Seems interesting, I’ll def check it out sometime
Jokes aside, this is a cool idea. I wonder if reading it yourself and varying the footage, or even adapting the concepts into something else, would help make it more attractive to watch. Though of course these would all increase the time investment. I can’t say it’s my jam, but I’d be curious to see how these do on TikTok, since they seem to be a prevalent genre/content style there.
Yeah I think college students will often think “Fellowship” is religious because that’s likely the only context they have seen the word used in, even though it’s often used for all kinds of non-religious opportunities.
I’m not sure how important this is—I soon realized lots of fellowships at my school were not religious and that it had a broader meaning.
I guess people could try different things out and see how they work. Maybe something simple like “EA reading group.” Or focus on a topic in the name: people would probably be less likely to mistake something like a “public health/pandemic prevention/AI ethics fellowship” for something religious.
I have thought a few times that maybe a safer route to AGI would be to learn as much as we can about the most moral and trustworthy humans we can find and try to build on that foundation/architecture. I’m not sure how that would work with existing convenient methods of machine learning.
Yeah there are a lot of “fairweather friends” in politics who won’t feel inclined to return any favors when it matters most. The opposite of that is having a committed constituency that votes enough in elections to not be worth upsetting—aka a base of people power. These take serious effort to create and not all groups are distributed geographically the same way so some have more/easier influence than others. One reason the NRA is so powerful and not abandoned despite negative media coverage is that they have tight relationships with Republican politicians and they turn out big time in any primary where someone opposes them or something they want. It’s not so much about the campaign contributions as far as I can tell (other groups spend far more and are much less influential) though campaign contributions are certainly a part of their system of carrots and sticks.
The lack of broader participation in primaries is a problem for representation and responsive, good government. It’s an opportunity for groups that aren’t all that representative to magnify their influence. Alaska’s top-four primary election seems like a step in the right direction, since it opens up primaries to more voters and then lets voters rank the top four candidates in November. It increases the chances that someone can run and win as a more representative candidate instead of being filtered out by small, highly partisan groups.
It’s often easier to stick to established narratives, group identifiers, and allies, or even make up new conspiracies, than to be measured and nuanced. Something inflammatory and/or conspiratorial is more likely to hook into human brains, be amplified by engagement-seeking algorithms, and, if it’s obscure but rapidly repeated, not have any better sources of information competing with it when people look up its keywords.
I think politics can seem very opaque, incomprehensible, and lacking in clear positive payoffs, but after volunteering, studying, and working on campaigns for a few years, I think it’s simpler than it looks, just difficult.
I think politics is an area where there are a lot of entrenched ways of doing things as well as a lot of pitfalls that often require experience to navigate well. And even then, the chance of failure is still high. A moment’s slip-up, a bad assumption, or a random event can undermine months or years of work. This doesn’t happen as often in other areas.
For animal welfare, I think the outcomes show that it’s something people are more willing to vote for than pay for, so I think ballot initiatives are generally a good route to try out. I think the pork industry challenge to the MA law is pretty weak, but even if the initiative is struck down, it was probably good to try and see if it worked, and that may still open up some new opportunity. Winning by a large margin is good in that it may discourage special interests from trying to run a counter ballot initiative next time to repeal it.
I think it’s important not to become naive about anyone elected to office. Just because they have a similar background to you, say things you agree with, or belong to a group you like doesn’t mean they’re going to actually do good things or that the things they do are good. Just because they seem right about one or even many topics doesn’t mean they know what they are doing on other topics.
Politics is about coalition building and that often means various kinds of deal making. This is not for everyone, and not every deal is good or even necessarily clearly good or bad. It also involves constant tradeoffs and high uncertainty that will often make a lot of people unhappy.
Politicians spend most of their careers fundraising—even when in office—and not nearly enough time talking to groups of their constituents that represent the diversity of experiences in their districts. This means a lot of popular ideas get ignored, some of which are good and others of which maybe are not. Being a good representative means knowing when, how, and how much to defer to people.
If you click on your name in the top right corner, then click edit profile, you can scroll down and delete tags under “my activity” by clicking the x on the right side of each block.
What things would make people less worried about AI safety if they happened? What developments in the next 0-5 years should make people more worried if they happen?
I’ve thought about this before and talked to a couple of people in labs about it. I’m pretty uncertain whether it would actually be positive. It seems possible that most ML researchers and engineers might want AI development to go as quickly as, or more quickly than, leadership does, whether because they’re excited about working on cutting-edge technologies, want to change the world, or for equity reasons. I remember some articles about how people left Google for companies like OpenAI because they thought Google was too slow and cautious and had lost its “move fast and break things” ethos.