Sumner thinks they’re worryingly confused.
A common misconception is that if something is being talked about publicly, there is probably funding available for it somewhere. But the number of weirdness dollars actually available in the wild for anything not passing muster with Ra can still be safely rounded to zero for most purposes. Even people who have had past success in more conventional areas often have trouble getting funding for weirder ideas, and those who do wind up spending a lot of time fundraising.
The $8.3 billion should have grown since 2011. Open Phil’s grants have not even totalled $800 million yet, and that is the amount the fund should have grown *per year* in the interim.
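For concreteness, a minimal sketch of the arithmetic in Python; the return rate is an assumption back-solved from the ~$800 million/year figure, not something stated above:

```python
# Back-of-the-envelope check of the claim above. The return rate is
# an assumption chosen to match the quoted ~$800M/year figure.
endowment_2011 = 8.3e9   # USD
assumed_return = 0.096   # ~9.6%/year, an assumed nominal return

growth_per_year = endowment_2011 * assumed_return
print(f"Implied growth: ${growth_per_year / 1e6:.0f}M per year")
# -> Implied growth: $797M per year
```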
I’m skeptical. The trajectory you describe is common among a broad class of people as they age, grow in optimization power, and consider sharp course corrections less. They report a variety of stories about why this is so, which makes me skeptical of any particular story being causal.
To be clear, I also recognize the high cost of public discourse. But some of those costs are unnecessary, borne only because EAs are pathologically scrupulous. As a result, letting people shit-talk various things without response causes more worry than is warranted. Naysayers are an unavoidable part of becoming a large optimization process.
There was a thread on Marginal Revolution many years ago about why more economists don’t do the blogging thing, given that it seems to have resulted in outsize influence for GMU. Cowen said his impression was that many economists tried, quickly ‘made fools of themselves’ in some minor way, and stopped. Being wrong publicly is very, very difficult. And increasingly difficult the more Ra energy one has acquired.
So, three claims.
1. Outside view says we should be skeptical of our stories about why we do things, even after we try to correct for this.
2. Inability to engage with criticism only selectively will lead to other problems and coping strategies that might be harmful.
3. Carefully shepherding the optimization power one has already acquired is a recipe for slow calcification along hard-to-detect dimensions. The principles section is an outline of a potential future straitjacket.
It sounds like one crux might be what counts as rigorous. I find the ‘be specific’ feedback to be a dodge. What is the counterparty expected to do in a case like this? Point out people they think are either low status or not rigorous enough?
The damage, IMO, comes from EA sucking up a bunch of intelligent contrarian people and then having them put their effort behind status quo projects. I guess I have more sympathy for the systemic change criticisms than I used to.
I would guess that many feel small not because of abstract philosophy but because they are in the same room as elephants whose behavior they cannot plausibly influence. Their own efforts feel small by comparison. Note, though, that this reasoning would have cut against originally starting GiveWell. If EA was worth doing once (splitting away from existing efforts to figure out what is neglected in light of those existing efforts), it’s worth doing again. The advice I give to aspiring do-gooders these days is to ignore EA as mostly a distraction. Getting caught up in established EA philosophy makes your decisions overly correlated with existing efforts, including the motivation effects discussed here.
There seems to be strong status quo bias and typical mind fallacy with regard to hedonic set point. This would seem to be a basically rational response, since most people show little change over their lifetime in personality factors (emotional stability, or 1/neuroticism, the Big Five factor most highly correlated with well-being reports, though I haven’t investigated this as deeply as I would like for making any strong claims). In particular, environmental effects have very transient impact, colloquially referred to as the lottery effect, though this instantiation of the effect is likely false.
After doing personal research in this area for several years, one of the conclusions that helped me make sense of some of the seeming contradictions in the space was the realization that humans are more like speedrunners than maximizers of the video game character’s well-being. In particular, the proxy measure being maximized is generally the probability of successful grandchildren rather than anything like happiness. In the same way that a speedrunner trades health points for speed, seeing the health points less as an abstraction of how safe the protagonist is and more as just another resource to manage, humans treat their own well-being as just another resource to manage.
Concretely, the experience is that only people *currently* in the tails of happiness seem to be able to care about it. People in the left tail obviously want out, and people in the right tail seem able to hold onto an emotionally salient stance that *this might be important* (they are currently directly experiencing the fact that life can be much, much better than they normally suppose). It is like how, once people exit school, their motivation for school reform drops off a cliff. It has been noted that humans seem to have selective memory about past experiences of intense suffering or happiness, such as sickness or peak experiences, as some sort of adaptation, possibly to prevent overfitting errors.
More nearby, my guess is that caring about this will be anti-selected for in EA, since it currently selects for people with above average neuroticism who use the resultant motivation structure to work on future threats and try to convince others they should worry more about future threats. Positive motivational schemas are less common. Thus I predict lots of burnout in EA over time.
Meta: this seems like it was a really valuable exercise based on the quality of the feedback. Thank you for conceiving it, running it, and giving thought to the potential side effects and systematic biases that could affect such a thing. It updates me in the direction that the right queries can produce a significant amount of valuable material if we can reduce the friction to answering such queries (esp. perfectionism) and thus get dialogs going.
You may be interested in this convo I had about research on pedagogical models. The tl;dw, if you just want the interventions that have replicated with large effect sizes:
- Lots of low-stakes quizzing
- Elaboration of context (deliberately structuring things to give students the chance to connect knowledge areas themselves)
- Teaching the material to others (forcing organization of the material in a way helpful to the one doing the teaching, and helping them identify holes in their own understanding)
This portion of the PBS documentary A Century of Revolution covers the Cultural Revolution:
https://www.youtube.com/watch?v=PJyoX_vrlns (around the 1-hour mark)
Recommended. One interesting bit for me: I think foreign dictators often appear clownish because the translations don’t capture what they were speaking to, either literally, in terms of the dictator being a good speechwriter, or contextually, in terms of the viewer not really being familiar with the cultural context that animates a particular popular political reaction. I think this applies even if you nominally speak the same language as the dictator but don’t share their culture.
I really enjoyed this. A related thought concerns a possible reason why more debate doesn’t happen. When rationalist-style thinkers debate, especially in public, it feels a bit high stakes. There is pressure to demonstrate good epistemic standards, even though no one can define a good basis set for that. This goes doubly for anyone who feels like they have a respectable position or are well regarded: there is a lot of downside risk to them engaging in debate and little upside.

I think the thing that breaks this is actually pretty simple, and the ‘sorry’ command concept helps. Two moves need to be socially free. First, choosing whether or not to debate at all (this avoids the problem where a person mostly wants to debate only when they’re in the mood and about the thing they are interested in, but doesn’t want to defend a position against arbitrary objections they may have answered lots of times before). Second, saying ‘actually, some of my beliefs in this area are cached sorries, so I reserve the right to not have perfect epistemics here already; and even if you refute specific parts of my argument, we might disagree on whether that is a smoking gun, so I can go away and think about it and I don’t have to publicly update on it.’ Together these derisk engaging in a friendly, yet still adversarial, form of debate.
If we believe that people doing a lot of this play fighting will on average increase the volume and quality of EA output both through direct discovery of more bugs in arguments and in providing more training opportunity, then maybe it should be a named thing like Crocker’s rules? Like people can say ‘I’m open to debating X, but I declare Kid Gloves’ or something. (What might be a good name for this?)
I think how the ‘middle class’ (a relative measure) of the USA is doing is fairly uninteresting overall. I think most meaningful progress at the grand scale (decades to centuries) is in how fast the bottom is getting pulled up and how high the very top end (bleeding-edge researchers) can go. Shuffling in the middle results in much wailing and gnashing of teeth but doesn’t move the needle much. The middle’s main impact is just voting for dumb stuff that harms the top and bottom.
> stated that from 2000 to 2010, nearly 80,000 patients were involved in clinical trials based on research that was later retracted.
We can’t know whether this is a good or bad number without context, e.g. what fraction of all patients enrolled in clinical trials over that decade it represents.
Fantastic! I like everything about this post except its length: I wish it were longer, as I think there is a ton to learn from your experience.
A lot of people are willing to try new things right now. Rapid prototyping of online EA meetups could lead to a permanently better ability to do remote collaboration. This helps cut against co-location, a key constraint in matching problems.
Yes, that’s the concern. Asking me what projects I consider status quo is the exact same move as before. Being status quo is low status, so the conversation seems unlikely to evolve in a fruitful direction if we take that tack. I think institutions tend to slide towards attractors where the surrounding discourse norms are ‘reasonable and defensible’ from within a certain frame while undermining criticisms of the frame in ways that make people who point it out seem like they are being unreasonable. This is how larger, older foundations calcify and stop getting things done, as the natural tendency of an org is to insulate itself from the sharp changes that being in close feedback with the world necessitates.
The biggest risk seems to be in the hotel manager position. My guess is that the learning curve, and the ongoing maintenance costs/time, of running a 17-person hotel are being underestimated.
Hi, I’m a CS student who is planning on earning to give. John Maxwell and I recently started a business (http://www.mealsquares.com) in order to try to improve on the default situation of software developer salaries. We figure we can accomplish several things with the entrepreneur route:
- Give pre-tax dollars instead of post-tax salary dollars (and hopefully more of them, of course).
- Spend more spare time working on effective things, for example helping other EAs lower the frictional costs that prevent them from being maximally effective.
- Encourage the creation of other EA startups; we think there is some low-hanging fruit in this space (post on this forthcoming).
John and I are both highly interested in X-risk mitigation, which seems to suffer from a tragedy-of-the-commons coordination problem.
First, doing philosophy publicly is hard and therefore rare. It cuts against Ra-shaped incentives. Much appreciation for the effort that went into this.
>he thinks the world is metaphorically more made of liquids than solids.
Damn, the convo ended just as it was getting to the good part. I really like this sentence and suspect that thinking like this remains a big untapped source of generating sharper cruxes between researchers. Most of our reasoning is secretly analogical, with deductive and inductive reasoning back-filled to fit what our parallel processing already thinks is the correct shape for an answer to take. If we go back to the idea of security mindset, the representation one tends to use will be made up of components, and one’s type system for uncertainty will be uncertainty over those components varying. So the sorts of things your representation uses as building blocks will be the kinds of uncertainty you have an easier time thinking about and managing. Going upstream in this way should resolve a bunch of downstream tangles, since the generators for the shape/direction/magnitude of the updates (this is an example of such a choice that might impact how I think about the problem) will be clearer.
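As a toy illustration of the building-blocks point, a hypothetical sketch in Python (the class and field names are invented for this example, not taken from the conversation): the same belief update decomposed into different components makes different uncertainties easy to express.

```python
from dataclasses import dataclass

# Two decompositions of the same belief update. Which uncertainties
# are easy to represent depends on the components chosen.

@dataclass
class CartesianUpdate:
    # Uncertainty attaches to each coordinate independently; "which
    # way" and "how big" are entangled across dx and dy.
    dx: float
    dy: float
    dx_sigma: float
    dy_sigma: float

@dataclass
class PolarUpdate:
    # Same update, but direction and magnitude are separate
    # components, so you can be confident about the direction of an
    # update while remaining unsure about its size (or vice versa).
    magnitude: float
    direction_radians: float
    magnitude_sigma: float
    direction_sigma: float
```

Disagreements about downstream updates can then be restated upstream as disagreements about which decomposition to use.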
This gets at a way of thinking about metaphilosophy. We can ask what more general class of problems AI safety is an instance of, and maybe recover some features of the space. I like the capability amplification frame because it’s useful as a toy problem to think about random subsets of human capabilities getting amplified, to think about the non-random ways capabilities have been amplified in the past, and what sorts of incentive gradients might be present for capability amplification besides just the AI research landscape one.
Root out maximizers within yourself. Even ‘doing the most good.’ Maximizer processes are cancer, trying to convert the universe into copies of themselves. But this destroys anything that the maximizing was for.