This sounds like a job for someone in mechanism design.
In my experience, they’re mostly just impulsive and have already written their bottom line (“I want to work on AI projects that look cool to me”), and after that they come up with excuses to justify this to themselves.
Anyone around here happen to know any investigative reporters or journalists? I’ve happened to hit on a case of an influential nonprofit CEO engaging in unethical behavior, but I don’t have the kind of time or background I’d need to investigate this thoroughly.
I had substantial discussions with people on this, even prior to Sam Altman’s firing; every time I mentioned concerns about Sam Altman’s personal integrity, people dismissed it as paranoia.
In OpenAI’s earliest days, the EA community provided critical funds and support that allowed it to be established, despite several warning signs having already appeared regarding Sam Altman’s previous behavior at Y Combinator and Loopt.
I think this is unlike the SBF situation in that there is a need for some soul-searching of the form “how did the EA community let this happen”. By contrast, there was very little need for it in the case of SBF.
Like I said, you investigate someone before giving them money, not before accepting money from them. The answer to SBF is just “we never investigated him because we never needed to; the people who should have investigated him were his investors”.
With Sam Altman, there’s a serious question we need to answer here. Why did EAs choose to sink a substantial amount of capital and talent into a company run by a person with such low integrity?
The usefulness of the “bad people” label is exactly my point here. The fact of the matter is some people are bad, no matter what excuses they come up with. For example, Adolf Hitler was clearly a bad person, regardless of his belief that he was the single greatest and most ethical human being who had ever lived. The argument that all people have an equally strong moral compass is not tenable.
More than that, when I say “Sam Altman is a bad person”, I don’t mean “Sam Altman’s internal monologue is just him thinking over and over again ‘I want to destroy the world’”. I mean “Sam Altman’s internal monologue is really good at coming up with excuses for unethical behavior”.
Like:
I’m not concerned that Dario Amodei will consciously think to himself: “I’ll go ahead and press this astronomically net-negative button over here because it will make me more powerful”. But he can easily end up pressing such a button anyway.
I would like to state, for the record, that if Sam Altman pushes a “50% chance of making humans extinct” button, this makes him a bad person, no matter what he’s thinking to himself. Personally I would just not press that button.
If I had to guess, the EA community is probably a bit worse at this than most communities because A) bad social skills and B) high trust.
This seems like a good tradeoff in general. I don’t think we should be putting more emphasis on smooth-talking CEOs—which is what got us into the OpenAI mess in the first place.
But at some point, defending Sam Altman is just charlie_brown_football.jpg
In the conversations I had with them, they very clearly understood the charges against him and what he’d done. The issue was they were completely unable to pass judgment on him as a person.
This is a good trait 95% of the time. Most people are too quick to pass judgment. This is especially true because 95% of people pass judgment based on vibes like “Bob seems weird and creepy” instead of concrete actions like “Bob has been fired from 3 of his last 4 jobs for theft”.
However, the fact of the matter is some people are bad. For example, Adolf Hitler was clearly a bad person. Bob probably isn’t very honest. Sam Altman’s behavior is mostly motivated by a desire for money and power. This is true even if Sam Altman has somehow tricked himself into thinking his actions are good. Regardless of his internal monologue he’s still acting to maximize his money and power.
EAs often have trouble going “Yup, that’s a bad person” when they see someone who’s very blatantly a bad person.
“Trust but verify” is Reagan’s famous line on this.
Most EAs would agree with “90% of people are basically trying to do the right thing”. But most of them have a very difficult time acting as though there’s a 10% chance anyone they’re talking to is an asshole. You shouldn’t be expecting people to be assholes, but you should be considering the 10% chance they are and updating that probability based on evidence. Maya Angelou wrote “If someone shows you who they are, believe them the first time”.
As a Bayesian who recognizes the importance of not updating too quickly away from your prior, I’d like to amend this to “If someone shows you who they are, believe them the 2nd or 3rd time they release a model that substantially increases the probability we’re all going to die”.
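To make that updating concrete, with numbers I’m making up purely for illustration: say your prior that a given person is an asshole is 10%, and you learn they were fired for theft, something you’d expect from maybe 50% of assholes but only 2% of everyone else. Bayes’ rule then gives

$$P(\text{asshole} \mid \text{fired for theft}) = \frac{0.5 \times 0.1}{0.5 \times 0.1 + 0.02 \times 0.9} \approx 0.74$$

One piece of concrete evidence like that should move you a lot further than vibes do.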
Don’t panic: 90% of EAs are good people
The empirical track record is that the top 3 AI research labs (Anthropic, DeepMind, and OpenAI) were all started by people worried that AI would be unsafe, who then went on to design and implement a bunch of unsafe AIs.
“AI is powerful and uncontrollable and could kill all of humanity, like seriously” is not a complicated message.
Anything longer than 8 morphemes is probably not going to survive Twitter or CNN getting their hands on it. I like the original version (“Literally everyone will die”) better.
In what sense? The problem of potential helium scarcity has been (effectively) solved in the last few years by just looking for more helium deposits.
In a sense, we are “running out” (because there’s only a finite supply of helium); we’re just running out very, very slowly.
Actually, maybe I should clarify this. This is standard practice when you hire a decent statistician. We’ve known this since like… the 1940s, maybe?
But a lot of organizations and clinical trials don’t do this because they don’t consult with a statistician early enough. I’ve had people come to me and say “hey, here’s a pile of data, can you calculate a p-value?” too many times to count. Yes, I calculated a p-value; it’s like 0.06, and if you’d come to me at the start of the experiment, we could’ve avoided the million-dollar boondoggle that you just created.
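For a sense of what coming to a statistician at the start actually buys you, here’s a minimal power-analysis sketch in Python. The effect size, alpha, and power below are placeholder numbers I picked for illustration, not values from any real trial:

```python
# Run this *before* collecting data: it tells you how many subjects
# you need to reliably detect the effect you care about, instead of
# discovering p = 0.06 after the money is already spent.
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(
    effect_size=0.3,  # assumed standardized effect size (Cohen's d)
    alpha=0.05,       # significance threshold
    power=0.8,        # probability of detecting the effect if it's real
)
print(f"You need about {n_per_arm:.0f} subjects per arm")  # ~176
```

If the answer comes back as “more subjects than you can afford”, you find that out before the million dollars is spent, not after.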
You’re completely correct! However, it’s worth noting this is standard practice (when the treatment makes up most of the cost, which it usually doesn’t). Most statisticians will be able to tell you about this.
So I think I have two comments:
1. It’s actually pretty neat you figured this out by yourself, and it shows you have a decent intuition for the subject.
2. However, if you’re a researcher at any kind of research institution, and you run or design RCTs, this suggests an organizational problem. You’re reinventing the wheel and need to consult with a statistician. It’s very, very difficult to do good research without a statistician, no matter how clever you are. (If you’d like, I’m happy to help if you send me a DM.)
Could be marketed as a medication for hypersomnia, narcolepsy, chronic fatigue, and ADHD.
There’s an EA cause area in this, but it’s not tooth-related. Our jaws are perfectly fine. The problem is bad incentives in the healthcare system; dentists get paid to take wisdom teeth out, not to leave them in, so about 90% of wisdom tooth extractions are unnecessary.
The mortality rate of anesthesia is not quite negligible; a couple hundred people have died because of anesthesia during wisdom tooth removal, and there’s also the risk of infection.
With regard to the differing forms of creatine, the article you linked notes:
Despite these minor differences in processing, each of these forms is probably equally effective when equal doses are given.
Which is understating things a bit: All the minor variants will be identical after dissolving in the body—anhydrous creatine will become hydrated, and particle size is 0 after the creatine has dissolved. The only (minor) difference is that micronized creatine will dissolve slightly faster if you mix it with a drink, because of the smaller particle size. This can be mildly convenient, but I’ve never been bothered by the monohydrate being too slow to dissolve (it’s almost instant for both forms).
With respect to the impacts on cognitive performance, Gwern did an in-depth review here. Most important is the section on publication bias: the authors of these papers failed to replicate their results several times, but those failures were never published. Gwern concludes (and I agree) that creatine almost certainly doesn’t affect intelligence in nonvegetarians, and has at best a small effect in vegetarians.
The study you quote at the end about vegetarianism has a sample size in the low dozens, and I wouldn’t put much stock in it. The large meta-analyses of this all find better life expectancy and health markers in vegetarians/vegans. The one important exception is B12 (for which the study you cite finds a deficiency in vegans). Luckily, this can easily be fixed by supplements.
I’d rephrase that, maybe. Whether we’re asking people to make a change doesn’t matter—that’s assigning an unearned privilege to old ideas. The reason we have to justify this claim is because it’s a claim. If you’re saying “I know vegetarianism is healthier,” you should be able to explain how you know that.
Hi, thanks for this! As someone who’s very interested in social choice and mechanism design, I’ll make more suggestions on the submissions form later. Social choice and mechanism design are the branches of economics that ask “How can we extend decision theory to society as a whole, to make rational social decisions?” and “How do we do that if people can lie?”, respectively.
Here’s one very important recommendation I will make explicitly here, though: TALK TO PEOPLE IN MECHANISM DESIGN AND SOCIAL CHOICE, OR EVERYTHING WILL EXPLODE. YOU CAN MAKE EVERYTHING WAY WORSE IF YOU MESS UP EVEN MINOR DETAILS.
If you don’t believe me, here’s an example: how you handle unranked candidates in the Borda count can take it from “top-tier voting rule” to “complete disaster”. With Borda’s original truncation rule (candidates not listed on a ballot get 0 points), the Borda count is pretty good! But if you require a complete ranking, i.e. every voter has to list all the candidates from best to worst, the rule falls apart. That’s because the optimal strategy is to put your favorite at the top, the strongest rivals at the bottom, and the worst candidates you can find in the middle ranks. If everyone realizes this, the winner is effectively chosen at random, and can even end up being a candidate who everyone agrees is the absolute worst option.
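If you want to see the difference for yourself, here’s a toy sketch (my own, not from any reference; it assumes one common point convention, where the i-th listed candidate out of n gets n−1−i points and unlisted candidates get 0):

```python
def borda_scores(ballots, candidates):
    """Borda count. Each ballot lists candidates best-first; with n
    candidates, the i-th listed candidate gets n-1-i points, and
    anyone left off the ballot gets 0 (the truncation rule)."""
    n = len(candidates)
    scores = {c: 0 for c in candidates}
    for ballot in ballots:
        for i, c in enumerate(ballot):
            scores[c] += n - 1 - i
    return scores

candidates = ["A", "B", "C", "D"]

# Under the truncated rule, a voter who likes A and B can just stop:
print(borda_scores([["A", "B"]], candidates))
# {'A': 3, 'B': 2, 'C': 0, 'D': 0}

# Forced to rank everyone, the strategic move is to bury the main
# rival (B) at the bottom and pad the middle with no-hopers:
print(borda_scores([["A", "B", "C", "D"]], candidates))  # sincere
# {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(borda_scores([["A", "C", "D", "B"]], candidates))  # burying B
# {'A': 3, 'B': 0, 'C': 2, 'D': 1}
```

One strategic ballot knocks the strongest rival from 2 points to 0 while handing those points to candidates nobody wants, which is exactly how the “complete ranking” version degenerates.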