Nitpick: It’s fairly unlikely that GPT-4 is 1tn params; this size doesn’t seem compute-optimal. I grant you the Semafor assertion is some evidence, but I’m putting more weight on compute arithmetic.
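For what it's worth, here is a rough sketch of the kind of compute arithmetic I mean, assuming the Chinchilla rule of thumb (~20 training tokens per parameter) and training compute C ≈ 6ND. The figures are illustrative assumptions, not a claim about GPT-4's actual run:

```python
# Rough Chinchilla-style arithmetic (assumed figures, for illustration only).
# Rule of thumb from Hoffmann et al. (2022): ~20 training tokens per parameter,
# and training compute C ~= 6 * N * D FLOPs.

params = 1e12            # hypothetical 1tn-parameter model
tokens = 20 * params     # ~20 tokens per parameter to be compute-optimal
flops = 6 * params * tokens

print(f"Tokens needed:  {tokens:.1e}")  # ~2e13 tokens
print(f"Training FLOPs: {flops:.1e}")   # ~1.2e26 FLOPs
```

Whether that token and FLOP budget is plausible for the actual training run is the crux; the point is just that "1tn params" implies numbers you can sanity-check.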
Vouching for this: it's a wonderful place to work and also to hang out.
A successor project is live here; it takes all comers.
Scottish degrees let you pick 3 very different subjects in first year and drop 1 or 2 in second year. This seems better to me than American forced generalism and English narrowness.
Thanks: you can apply here.
I’ve edited the post to link to the successor project.
I dream of getting a couple questions added onto a big conference’s attendee application form. But probably not possible unless you’re incredibly well-connected.
Oh that is annoying, thanks for pointing it out. I’ve just tried to use the new column width feature to fix it, but no luck.
it is good to omit doing what might perhaps bring some profit to the living, when we have in view the accomplishment of other ends that will be of much greater advantage to posterity.
- Descartes (1637)
Yes, if I were using the same implicature each time I should have said "MacAskill" for Guzey. Being associated with Thiel in any way is a scandal to some people, even though his far-right turn came after this talk.
It’s not normative, it’s descriptive—“shameable”, not “ought to be ashamed”.
I really think egoism strains to fit the data. From a comment on a deleted post:
[in response to someone saying that self-sacrifice is necessarily about showing off and is thus selfish]:
How does this reduction [to selfishness] account for the many historical examples of people who defied local social incentives, with little hope of gain and sometimes even destruction?
(Off the top of my head: Ignaz Semmelweis, Irena Sendler, Sophie Scholl.)
We can always invent sufficiently strange post hoc preferences to "explain" any behaviour: but what do you gain in exchange for denying the seemingly simpler hypothesis "they had terminal values independent of their wellbeing"?
(Limiting this to atheists, since religious martyrs are explained well by incentives.)
The best you can do is “egoism, plus virtue signalling, plus plain insanity in the hard cases”.
This is a great question and I’m sorry I don’t have anything really probative for you. Puzzle pieces:
“If hell then good intentions” isn’t what you mean. You also don’t mean “if good intentions then hell”. So you presumably mean some surprisingly strong correlation. But still weaker than that of bad intentions. We’d have to haggle over what number counts as surprising. r = 0.1?
Nearly everyone has something they would call good intentions. But most people don’t exploit others on any scale worth mentioning. So the correlation can’t be too high.
Good things happen, sometimes, despite the odds. We have a good theory of how this can happen in a world without good intentions, so I don’t want to use this as strong evidence for good intentions. But good things still happen without competition and counter to incentive gradients.
In general I have a pretty high bar for illusionism, eliminativism, accusations of false consciousness, etc (something something phenomenal conservatism).
In this case: we clearly have more information than others about our own intentions. (This might not be a lot on an absolute scale though.)
I buy the 'moral licensing' idea, where people's sense of moral duty is (very) finite but their cupidity is far less bounded. So you can think that good intentions are real but just run out faster. Shard theory seems like a baroque but empirically adequate version of this.
I think I buy the PR spokesperson account of our internal narrative / phenomenal consciousness. But the spokesperson isn’t limited to retconning naive egoism, since we know that other solutions are evolutionarily stable in the presence of precommitment and all the other dongles, and so it could be hiding those too.
I could look up the psychology literature, but I'm not sure it would update either you or me.
I'm mostly not talking about infighting; it's self-flagellation—but glad you haven't seen the suffering I have, and I envy your chill.
You’re missing a key fact about SBF, which is that he didn’t “show up” from crypto. He started in EA and went into crypto. This dynamic raises other questions, even as it makes the EA leadership failure less simple / silly.
Agree that we will be fine, which is another point of the list above.
got karma to burn baby
Just shameable.
Thanks to Nina and Noah, there's now a 2x2 of compromises, which I've numbered:
The above post is a blend of all four.
Maybe people just aren’t expecting emotional concerns to be the point of a Forum article? In which case I broke kayfabe, pardon.
Yeah it’s not fully analysed. See these comments for the point.
The first list of examples is to show that universal shame is a common feature of ideologies (descriptive).
The second list of examples is to show that most very well-regarded things are nonetheless extremely compromised, in a bid to shift your reference class, in a bid to get you to not attack yourself excessively, in a bid to prevent unhelpful pain and overreaction.
Good analysis. This post is mostly about the reaction of others to your actions (or rather, the pain and demotivation you feel in response) rather than your action’s impact. I add a limp note that the two are correlated.
The point is to reset people’s reference class and so salve their excess pain. People start out assuming that innocence (not-being-compromised) is the average state, but this isn’t true, and if you assume this, you suffer excessively when you eventually get shamed / cause harm, and you might even pack it in.
“Bite it” = “everyone eventually does something that attracts criticism, rightly or wrongly”
You’ve persuaded me that I should have used two words:
benign compromise: “Part of this normality comes from shame usually being a common sense matter—and common sense morals correlate with actual harm, but are often wrong in the precise ways this movement is devoted to countering!”
deserved compromise: “all action incurs risk, including moral risk. We do our best to avoid them (and in my experience grantmakers are vigilant about negative EV things), but you can’t avoid it entirely. (Again: total inaction also does not avoid it.)”
There’s some therapeutic intent. I’m walking the line, saying people should attack themselves only a proportionate amount, against this better reference class: “everyone screws up”. I’ve seen a lot of over the top stuff lately from people (mostly young) who are used to feeling innocent and aren’t handling their first shaming well.
Yes, that would make a good followup post.
See also Anthropic's view on this.
The implicit strategy (which Olah may not endorse) is to solve the easy bits first, then move on to the harder bits, then note the rate you're progressing at and so get a sense of how hard things are.
This would be fine if we could be sure we actually were solving the problems, weren't fooling ourselves about the current difficulty level, and if the relevant research landscape were smooth and not blockable by a single missing piece.