Physician—general internal medicine. Interested in progress, existential risk and epistemology. Did some earning to give, but now have my doubts. Heavier on ideas than execution.
astupple
Hello,
I would love some feedback on what I’m calling an “EA Idea Sounding Board”
I’m thinking of a call-in show and/or a message board, where EAs suggest ideas to someone with experience in the EA landscape, perhaps an advisor at 80,000 Hours. It might go something like this:
An 80,000 Hours advisor takes calls from EAs who pitch their ideas for anything EA-related: a donation drive, a new cause area, a startup. The advisor hears out the idea, then reframes and refines it to show both where it is promising and where it seems to miss the mark. At the end, they work out an assessment and a plan. The overall assessment might fall into categories like:
-”Back to the Drawing Board” (Rethink/research these major limitations: _)
-”Worth Pursuing” (You’re onto something here; develop these parts: _)
-”Ready for Prime Time” (Well researched/planned; concrete next steps: _)
These calls could be recorded, with the option of editing and distributing them as podcast episodes. I see lots of potential value:
Showcase good ideas
Demonstrate rationality in progress
Highlight the distortions introduced by cognitive biases
Demonstrate the general principles and values of EA
Offer updates on the latest EA topics
Enhance networking.
Not every call would be broadcast, only the very best ones. Part of the incentive for developing a good idea would be seeing whether you can generate a discussion interesting enough to get published. The best part, though, may be that even weak ideas could be very valuable to publish, because they would demonstrate loose thinking and provide good examples of constructive criticism.
Alternatively, this could be distributed on a message board or forum of some kind. Perhaps after the discussion with the 80,000 Hours advisor, the person pitching the idea would write up a summary of the dialog, highlighting the original idea, the general principles, the cognitive biases or weak elements, the strengths of the idea, and the final assessment. This summary could be posted for the community to review. To streamline this, a template could be created ahead of time which forms the basis of the idea discussion. After the discussion, the template is edited to reflect the content of the discussion and made ready for posting. Certain tags could be added, such as a request for someone to weigh in if they know of research in this area, or someone to fund the idea. The 80,000 Hours advisor could affix an overall rating of how promising they think this idea is and what needs to be done to make it more promising.
I suspect a more neglected aspect of polarization is the degree to which the left (which I identify with) hates the right for being bigots, or seeming bigots (I agree with Christian Kleineidam below). This is literally the same mechanism of prejudice and hatred, with the same damaging polarization, just for different reasons.
There’s much more energy devoted to addressing alt-right polarization than the not-even-radical left (many of my friends profess hatred of Trump voters qua Trump voters, and it gives me the same pit-of-the-stomach feeling I get when I see blatant racism). Hence, addressing the left is probably more neglected (I’m unsure how you’d quantify this, but it seems pretty evident).
The trouble, I find, is that the left’s prejudice and hatred seem more complex and harder to fix. In some ways, the bigots are easier to flip toward reason (anecdotes about befriending racists, families changing when their kids come out, etc.). Have you ever tried to show a passionate liberal that maybe they’ve gone too far in writing off massive swaths of society as bigots? In my experience, just bringing it up challenges the friendship.
I think polarization is incredibly bad and that there are neglected areas within it, but the neglectedness seems to be outweighed by intractability.
1. The Singularity is Near (Changed everything for me; it made me quit my job and go to med school. I’ve since purchased it for many people, but I no longer do. Instead, I’ve been sending people copies of Homo Deus by Yuval Noah Harari: broader scope, more sociology, psychology, and ethics.)
2. The Selfish Gene (I think this moored me to reality more closely than Steven Pinker’s work.)
3. The Black Swan (Thinking, Fast and Slow, Freakonomics, Predictably Irrational, etc. are probably better explications of irrationality, and Taleb is a pretty clear victim of his own criticisms, but his style really shook me and I think it is the best for changing minds.)
4. Waking Up (A careful reading of Ken Wilber has been most influential for me, but I don’t recommend him because he needs a very skeptical eye; I’ve been lucky. Waking Up does most of the same work without getting lost in the rabbit hole.)
5. Doing Good Better (Not a shocker, but it really is an accessible slam dunk.)
Like me, I suspect many EAs do a lot of “micro-advising” of friends and younger colleagues. (In medicine, this happens almost daily.) I know I’m an amateur, and I do my best to direct people to the available resources, but it seems like some basic pointers on how to give casual advice could be helpful.
Alternatively, I see the value in a higher activation energy for potentially reachable advisees: if they truly are considering adjusting their careers, then they’ll take the time to look at the official EA material.
Nonetheless, it seems like even this advice to amateurs like me could be helpful: “Give your best casual advice. If things look promising, point them to official EA content.”
Can a Transparent Idea Directory reduce transaction costs of new ideas?
Thanks for the thoughtful comments, agree almost completely, particularly your closing points.
My main quibble is with the comparison of talent vs. ideas as bottlenecks, where you say talent is 80% of the problem compared to ideas at 20%. I certainly agree that lots of weak ideas pose problems, but the trouble with this comparison is that the first step to recruiting more talent will be an idea. So, in a sense, the talent gap IS an idea gap. In fact, aside from blind luck, every improvement on what we have will first be an idea. Perhaps we shouldn’t think of ideas in opposition to anything, but instead work to maximize them (and keep the bad ones out of the way). Every gap has an idea component, essentially waiting for a better idea for how to close it.
Additionally, having high-yield, impactful ideas on hand could attract talent that might otherwise see EA as a bunch of airy-headed idealists. Finally, if talent rather than ideas is the true bottleneck, then it’s all the more important to make sure talent gets connected with the best ideas.
Minor point: regarding weak ideas, I think there is some value in people seeing (a) what makes bad ideas bad and (b) whether a particular idea has already been floated, thereby cutting down on redundancy.
While I completely see what you’re saying, at the risk of sounding obtuse, I think the opposite of your opener may be true.
“People who do things are not, in general, idea constrained”
The contrary of this statement may be the fundamental point of EA (or at least a variant of it): people who do things in general (outside of EA) tend to act on bad ideas. In fact, EA is more about the ideas underlying what we do than it is about the doing itself. Millions of affluent people are doing things (going to school, working, upgrading their cars and homes, giving to charity) without examining the underlying ideas. EA’s success lies in getting doers to adopt its ideas: it’s creating a pool of doers who use EA ideas instead of conventional wisdom.
Perhaps there are two classes of doers: those already in the EA community who “get it,” and those outside who are just plugging away at life. When I think of talent gaps, I think they can be filled by (A) EA community members developing skills, and (B) recruiting skilled people to join the community. Group A probably doesn’t need good ideas, because they’ve already accepted the ideas of our favorite thinkers; the marginal benefit of even better ideas is small. Instead, group A is better off simply getting down to the hard work of growing talent. But group B is laboring under bad ideas, and for many, it might not take much at all to get them to swap those bad ideas for EA ideas. My guess is that, to grow talent, it is easier to convert doers from group B than to optimize doers in group A (which is certainly not to say group A shouldn’t do the hard work of optimizing their talent).
There is an odd circularity here: I think I just argued myself out of my original stance. I seem to have concluded that we shouldn’t focus on the ideas of the EA community (which was my original intention) and instead should focus on methods of recruiting.
Maybe I’m arguing that we should develop recruiting ideas?
Also: any suggestions for good formal discussions of the philosophy and sociology of ideas (beyond the slightly nauseating pop-business literature)? Where Good Ideas Come From by Steven Johnson is excellent, but not philosophically rigorous.
Malcolm Ocean: fantastic, thanks!
Interesting. It sounds like you’re suggesting a taxonomy of ideas. Some ideas warrant simple experiments (in this case, a simple experiment would be to review the various EA threads and enter the proposed ideas in a table online), others warrant further research (like some of the questions raised by your global warming example), etc. Am I describing this right? I’m guessing this must have been done; any ideas on where to look?
Perhaps it’s worthwhile to review existing analyses of the question “What makes an idea productive?” Ultimately, this could result in a one-pager on what a good idea is, how to develop it, and how (and when, and to whom) to pitch it.
This is fantastically helpful, thank you so much for taking the time.
Makes me ponder the value of an “EA Curator.” There’s such an overwhelming amount of mind-bending content in the EA universe and its adjacent possible. This list of podcasts clearly only scratches the surface, yet I find myself wondering how I’m going to fit this in with the dozens of other podcast episodes, audiobooks, and print books I have on my plate, let alone other modes of discovery (and worse, how this at some point impinges on the time I have to do actual work on ideas that are so important).
Many EAs have lists of books… perhaps there could be an EA Reddit thread simply for voting up or down EA-inspiring books, articles, blog posts, podcast episodes, etc.?
Or, just a list of EA lists? Rob Wiblin’s list of podcasts indexed along with anyone else’s podcast list? Bonus for a method to vote individual lists up or down?
I had this same problem and finally cracked it (navigating the iOS podcast world stumps me).
Step 1: In the iOS Podcasts app, tap “Search” in the lower right and enter “econtalk.”
Step 2: The app populates the archives going back to 2006. Tap the year you’re looking for, such as 2007, and scroll to “Weingast on Violence, Power and a Theory of Everything.”
Step 3: Tap the three dots to the upper right of the episode and choose “Download Episode.”
Step 4: Repeat for all the other archived episodes you want.
Step 5: Now for the trick: to locate your downloaded episodes, go to the app’s main menu and tap “My Podcasts” at the bottom. Disregard the first appearance of “EconTalk,” and instead scroll down past all of the podcasts you subscribe to. The archived talks you’ve downloaded will appear near the bottom, aggregated by year.
Painful. I’m sure there’s a smarter way to do it, maybe Dominik’s suggestion below, but this should work for you.
How could it be that ideas are progressively harder to find AND we waited so long for the bicycle? How can we know how many undiscovered bicycles, i.e. low-hanging fruit, are out there?
It seems that as progress progresses and the adjacent possible expands, the number of undiscovered bicycles within easy reach expands too.
Yes, but what I’m getting at is: how do we know there’s a limited number of low-hanging fruit? As we make progress, don’t previously high-hanging fruit come into reach? AND, more progress opens more markets and fields.
It seems to me low-hanging fruit is a bad analogy, because there’s no way to know the number of undiscovered fruit out there. Perhaps it’s infinite. Or perhaps it INCREASES the more we figure out.
My two cents: stagnation isn’t due to the supply of good ideas waiting to be discovered; it’s due to the stifling of free and open exploration by norms that promote the institutionalization of discovery.
The danger of nuclear war is greater than it has ever been. Why donating to and supporting Back from the Brink is an effective response to this threat
Refuting longtermism with Fermat’s Last Theorem
He was incentivized to decide whether to quit or to persevere (at the cost of other opportunities). For accuracy, all he needed was “likely enough to be worth it.” And yet, at the moment when it should have been most evident what this likelihood was, he was so far off in his estimate that he almost quit.
Imagine if a good EA stopped him in his moment of despair and encouraged him, with all the tools available, to create the most accurate estimate, I bet he’d still consider quitting. He might even be more convinced that it’s hopeless.
I’d take the bet, but the feeling that inclines me toward the affirmative says nothing about the actual state of the science and engineering. Even if I spend many hours researching the current state of the field, this will only change the feeling in my mind. I can assign that feeling a probability, tell others that it is “roughly informed,” and enroll in Phil Tetlock’s forecasting challenge. But none of this tells us anything about the currently unknown discoveries that need to be made in order to bring about cold fusion.
Imagine asking Andrew Wiles, on the morning of his discovery, whether he wanted to bet that a solution would be found that afternoon. Given his despair, he might have taken 100x against. And this subjective sense of things would indeed be well-informed; he could talk to us for hours about why his approach doesn’t work. And we’d come away convinced: it’s hopeless. But those feelings of hopelessness, unlikelihood, and despair have nothing to do with the math.
Estimating what remains to be discovered for a breakthrough is like trying to measure a gap but not knowing where to place the other end of the ruler.
But such work would undoubtedly produce unanticipated and destabilizing discoveries. You can’t grow knowledge in foreseeable ways, with only foreseeable consequences.
It’s not that “it happened this one time with Wiles, where he really knew a topic and was also way off in his estimate, and that’s how it goes.” The Wiles example shows that we are always in his shoes when contemplating the yet-to-be-discovered: we are completely in the dark. It’s not that he didn’t know; it’s that he COULDN’T know, and neither could anyone else who hadn’t made the discovery.
I love this idea, so many spin-offs come to mind, though as you describe, reaching the scale to reliably quantify the impact appears difficult.
I wonder if a way to boost followup and engagement could be to ask the recipients to donate the value of the book itself to an effective charity? “This book cost $15, if you find it interesting, can you give $15 to AMF?”
It’s still a bit tricky to track actual donations… maybe set up a simple webpage for book recipients to donate to AMF. You could create two groups: one that gets the book and the website link, the other that gets the book and a specific ask to donate at least the value of the book.
Another thought is having a book fair, or tabling at an event. You could have a stack of free books and an internet device where people could donate on the spot in exchange for a book. You could compare the number who took the book for free vs. those who took it and donated.