I think the following rules, to supplement EA’s discussion norms, would make the debate better. CB, would you agree to them?
No misquotes.
No editing or deleting messages.
Clarifying details: Any text in quotation marks, or blockquoted, should be a 100% exact quote (and not misleading, taken out of context, or misattributed) if a reasonable reader might expect it to be a (direct, literal) quote. Basically, use copy/paste for quotes and then don’t change them. Losing formatting (e.g. italics or links) when quoting is OK because, as far as I know, EA’s software has no reasonable way to quote with formatting (though some text editors, like Ulysses, can help).
This was a good debate.
A bit long to read, but I tend to agree that we should focus more on methodology. Excellent epistemology depends on an excellent methodology, I believe.
I agree completely with the point Elliot Temple raised about errors. We should fix typos for aesthetics and ease of reading, but small errors should be corrected as soon as possible.
New posts on this Forum that are not upvoted much should surface for more views, so that content not currently being discussed and debated gets more engagement.
I disagree with multiple things, but let’s focus on bias. If millions of people used your approach (as written, not actually doing all the same things as you), do you think that would work well, or would bias be a widespread problem? In other words, if lots of people say things like “I already try to do all you said in my mind” and “I feel like I have a track record of changing my viewpoint”, what sort of overall results do you expect?
Well, while I feel that this way of doing things gives me, at least, higher-quality results compared to before, I don’t really know if it’s suited to how millions of people think.
Bias would probably be a problem—but I have trouble seeing how to fix that in a systematic way. I’ve read a lot about biases, so I try to be aware of them: when I see a pattern of thought in my brain that matches a bias, I try to compensate for it.
However, the way I work wouldn’t really match how most people usually do things. There is stuff that the brain does that I find really difficult to tackle on a wide scale.
For instance, I don’t really see how to change:
The tendency to reject really scary and frightening information that challenges deeply how we see the world and elicits really negative feelings. I can personally handle such feelings (not sure why), but for many people this would be too overwhelming, and I understand why.
Examples include research on collapse, wild animal suffering, or dealing with the fact that our industrial civilization has a net negative impact on the world when you include factory farming.
There is also some stuff that can really threaten our sense of identity.
The fact that most people make decisions based on their senses (how they feel and see the world around them), and less based on abstract thoughts. It makes sense, since we evolved in the natural world, but it means most people don’t act on threats that are distant in time and space.
It’s why tackling climate change before seeing negative consequences around us is extremely hard—it’s also why there was more concern about climate after heatwaves.
The fact that you need a lot of time to do the research I mentioned, and most people don’t have the time since they have much more pressing matters to address (food, providing for their family, handling daily life).
The fact that challenging the consensus is hard to do. As this article puts it, “consensus is our tribal glue”. Acknowledging something very different from the consensus (e.g. that there are limits to growth) means rejecting not only the familiar but also something that may have embodied our status, our past efforts, our hopes and even our collective mythology.
Now, all of this stuff makes sense from an evolutionary perspective—we didn’t evolve to find truth, although some weird people try to get there. But I don’t see a way to get millions of people to change their approach there (let alone design debate rules or institutions that would enforce that).
I still think that using the method I try to apply would provide better results overall (certainly not optimal, but better). But I don’t really know how to make this way of thinking widespread—I actually don’t even think it’s possible.
But I don’t see a way to get millions of people to change their approach there (let alone design debate rules or institutions that would enforce that).
My initial focus is on getting you to make a change (to use some written rationality policies), or more broadly getting a small number of interested people to change who post on rationality-related internet forums. Maybe it could spread from there but I’m not concerned about spreading it to the masses until after I figure out how to spread it to tens of people and then see how well it works out for them.
The tendency to reject really scary and frightening information that challenges deeply how we see the world and elicits really negative feelings. I can personally handle such feelings (not sure why), but for many people this would be too overwhelming, and I understand why.
I have experience with other people saying this kind of thing about personally being able to avoid bias. This applies not only to the population in general, but also to the kind of people who post on EA and have read some books and articles about being unbiased. In other words, I find that many EA-type people think they are significantly less biased than most people, but I think most of them have major biases they aren’t seeing.
I have both theoretical arguments and practical experiences telling me that most of them, even at EA, are mistaken about themselves. Statistically, these kinds of claims are usually wrong. Do you agree or disagree?
(I’m counting partial bias as being wrong – many of those people are less biased than average, but there are still significant bias issues. It’s a significant problem rather than a non-problem, so I count them as mistaken.)
Based on the information I have about you, from my perspective, I should not trust you about your lack of bias. I should assign > 50% probability to you being mistaken. Do you agree or disagree?
Sure, it’s very possible that I am biased. It’s very hard not to be.
And I’ve never really seen good advice on how to avoid bias besides ‘read about bias so you can be aware of it when it happens’.
Which is why I try to get some feedback. When faced with criticism, I try to reach the point where every item of criticism gets resolved in one of two ways:
Either I have conceded that my view was not correct on some of the points,
or the other person has no counterpoint to what I said.
Not easy, though.
However, if I’m biased, I just want to be told in what way, and on which specific points, with examples. Otherwise I cannot improve.
And I’ve never really seen good advice on how to avoid bias besides ‘read about bias so you can be aware of it when it happens’.
I have developed several other pieces of advice about how to overcome bias. The one I’ve been talking about is to pre-commit to written rationality policies and then have transparency when following them. A debate policy is one example but I don’t think it’s that hard to come up with others. Here’s one I just made up: “Read one article per week by someone from a rival tribe”. You could write that policy down, in public, and then follow it, and post each article you read so that there’s transparency: observers can see that you’re following it.
(You could get more extreme transparency by livestreaming yourself reading the articles, but probably you could just post links to them when you read them and most people would believe you. Also it’d be pretty hard to fool yourself about whether or not you read the articles. And it’s mostly just the people who are fooling themselves with their bias who I think matter. The people who are purposefully, consciously lying are a small minority who I don’t want to worry about.)
Getting back on topic: when I suggested written rationality policies as another way to help deal with bias, no one from EA wanted to do it (so far), nor did anyone give an argument that refutes it and explains why it’s a bad idea, nor did anyone share alternative ways to solve the same problems that they claim are better.
You said:
As for me personally, I’m not sure that I will use it now—as I feel like I agree with the points you make, I’m less busy than you so I answer to everything even if I don’t agree with it, and I already try to do all you said in my mind.
(which might sound like from the outside exactly like bias- but I feel like I have a track record of changing my viewpoint on complicated topics as I got better information, even for some core questions like “Is industrial civilization good?” or “Is capitalism good?”).
What I’m trying to say is that I don’t think doing all this stuff in your mind is a good enough approach, and that I think you should use some written policies. That advice isn’t just for other people who are worse at rationality. I think it’s a good idea for you.
Also, btw, even if you were 100% unbiased, I would still recommend using written rationality policies. They can set a good example for others and they can help you create a good reputation by persuading/showing people that you’re approaching things rationally (so they can see for themselves instead of trusting you).
Just a guess: I think one reason people in EA are not totally convinced by the written policy you propose is that, from an outside perspective, it’s not really clear how doing that really changes things. Now, I’m sure that from your own perspective, it really had an impact for you. Which is great!
But for outsiders the benefits aren’t obvious. Your debate policy, for instance, appears useful for busy people who don’t know when to stop debating and have many people proposing debates to them, but who are otherwise doing well. Ok, but very few people can relate to that.
Maybe something that you could improve in your articles (although I must admit I just read 2 of your EA forum posts and your debate policy, plus 2-3 others) is to give examples of policies that really appear to make a difference, even from the outside. You could even give an example of how you went from “biased” to “less biased”. People love stories and examples.
It’d be good to have “templates”: debate policies that you propose and that others can just adopt on the spot. Maybe you’ve already written such an article, but I didn’t see it in what I read, so it should be more prominent.
In my opinion, the most valuable line you wrote in your last comment was “Read one article per week by someone from a rival tribe”. It’s direct, I can use it right now, and I can see the point. If you propose straightforward stuff like that to people right away, I’m sure more people could relate to your suggestions.
Anyway, do you have a link with examples of debate policies that in your opinion alleviate bias? I’ll try to apply that.
Our perspectives and ways of thinking are very different. I find it confusing that you value the examples more than the concepts. And I find it confusing that you ask for more examples instead of just thinking of some yourself. I guess you can’t? Which, to me, indicates that you didn’t understand the concepts involved. But you don’t seem to be aiming to understand the concepts better.
I don’t think anyone will adopt my suggested policies without understanding the concepts, but I could be wrong. I’m also not sure it’s a good idea to adopt policies without understanding the concepts behind them. If you don’t understand the concepts well, then you don’t really understand their purpose, and therefore are likely to do a lot of things which defeat the purpose. Also, you can’t correctly judge if the policy is good without understanding the conceptual reasoning that leads to the policy. And you can’t tell if you’re using a policy right if you don’t understand its purpose well, which is a conceptual issue.
My arguments in favor of policies are conceptual, not about the concretes of specific policies. If someone doesn’t understand the concepts (like the rule of law), and therefore doesn’t understand my arguments, then why would they like or want the policies? Some policies might happen to fit with some pre-existing way of thinking they have, but overall it mostly just won’t work.
And if people systematically ignore ideas they can’t easily use right away without changing their conceptual framework much, and favor ideas that are easy to practice immediately within their current conceptual framework, then that is a huge systematic bias that will prevent people from considering, debating or adopting new, better concepts. It’s a bias favoring the status quo, the already known, the similar, etc. That’s in addition to being a bias against abstractions and concepts, which I think are necessary to being a very effective thinker.
Trying to explain another way, there is the “teach a man to fish” parable. And you seem to want me to give you fish (specific policies) instead of caring about my explanations of how to get your own fish.
Why do I like examples, and do I think it’s a good idea to add more?
It’s because, with my way of thinking, I have trouble reasoning in purely abstract terms. In my mind, I build a mental map based on stuff that exists in the real world: energy, materials, nature, relations between people, institutions, emotions, pleasure and suffering, etc. I can manipulate this stuff in my mind and scale it up or down, but it has to start from something I can recognize in the real world first. Concepts don’t stick as much—they’re too abstract, too blurry.
That’s why I have trouble with legal lingo or long equations. The language used is often too remote from reality. I can understand concepts, but first I have to see how they apply to the real world—like a direct example of what a law will actually do to a person.
This is why I like to have examples. You are telling me that the concept of debate policy is sound. I can understand that—and I think I understand the theory behind what you are saying. But I have no idea how to put that into practice, because what you say is not linked to actual actions I can take.
To continue the “teach a man to fish” parable, it’s not that I want you to just give me a fish. I want you to show me what a fish looks like, to show me different types of fish so I can learn to recognize them (and then, eventually, catch them).
This is the first time I’ve encountered the concept of a “debate policy”, and the only example I have, your own debate policy, is not suited to what I do. So I’d like to see other examples of such policies, and examples of how that would actually play out in a conversation.
I think your argument would be more persuasive with that.
Do you consider that a difference in style or a weakness? If it’s a weakness, is it super important or only somewhat important?
Are you trying to change it? Do you think it can be changed?
To continue the “teach a man to fish” parable, it’s not that I want you to just give me a fish. I want you to show me what a fish looks like, to show me different types of fish so I can learn to recognize them (and then, eventually, catch them).
That was helpful for understanding your perspective.
I’m concerned that most people on EA are too intolerant or uncurious to talk to people with large differences in perspective. The result is basically that if they don’t see the value of something quickly, and they also won’t debate, then there’s no way to tell them about it. The result of that is that EA keeps a bunch of biases and errors, unnecessarily, because it’s not open to some types of criticism. I appreciate that you’re being more friendly and open-minded than others. Unfortunately, I don’t think posting examples of rationality policies will be persuasive to most people who don’t currently have goals like being open to debate more effectively. Most of them seem content to dismiss me and take the risk that they’re in the wrong and that, due to their actions, they are preventing the disagreement from being resolved. They don’t seem to understand or mind the risk of betting on being right about important issues they ignore some criticism about (with their careers and millions of dollars used less effectively if they’re wrong). Unless they actually want policies to address that risk – unless it’s a problem they want to solve – then I don’t think example solutions will work. Disagreements about goals have to be dealt with before methods of achieving those goals, I think. Does that perspective on some of the difficulties (for my project of reforming EA) make sense to you?
“I have trouble reasoning in purely abstract terms”
Do you consider that a difference in style or a weakness? If it’s a weakness, is it super important or only somewhat important? Are you trying to change it? Do you think it can be changed?
I see it as both a weakness and a strength. A weakness in the sense that it’s hard to do stuff that requires complex equations with abstract terms. This includes, for instance, most of physics, post-graduate maths, optics, mechanics, chemistry, or whole sections of economics like finance or accounting. I don’t think it can really be changed. I can do stuff like that, but it’s hard, abstract, sluggish, demotivating, and I don’t stick with it long. This is why I usually never do calculations myself.
But I’m still interested in understanding how the world works. So this forces me to find ways to understand all this stuff by mapping how it applies in the real world.
For instance, I have trouble understanding explanations of economics as done in finance, with maths everywhere, and shares and credit and stuff like that. But these are layers of abstraction over the real world. So I try to look directly at the economy from a biophysical perspective. For instance, seeing money as a claim on goods and services, which require materials and energy, meaning money is a claim on natural resources. Or seeing debt as a promise of future goods and services—meaning we run into problems if debt grows faster than the economy. This also means I skip all the weird hypotheses many economists make, like perfect markets, infinite substitutability, prices as indicators of scarcity, and stuff like that.
This is why I see it as a strength: since I have trouble understanding the math-heavy stuff, I have to find ways to express how all of it applies to reality in simpler terms, which is actually the end goal. It’s clearer in my head, and also more engaging when discussing it with most people.
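To make the debt point concrete, here’s a toy calculation (all the numbers are made up for illustration, not real data): if debt compounds faster than the economy, the debt-to-GDP ratio climbs without limit.

```python
# Toy illustration with made-up rates: debt compounds at 5%/year while
# the economy grows at 2%/year, so the debt-to-GDP ratio rises forever.
debt, gdp = 100.0, 100.0  # start at a 100% debt-to-GDP ratio

for year in range(51):
    if year % 10 == 0:
        print(f"year {year:2d}: debt/GDP = {debt / gdp:.2f}")
    debt *= 1.05  # assumed debt growth rate
    gdp *= 1.02   # assumed economic growth rate
```

After 50 years the ratio has more than quadrupled. That’s the kind of simple check I mean by looking at the economy biophysically.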
I’m concerned that most people on EA are too intolerant or uncurious to talk to people with large differences in perspective.
Oh, yes, to a certain degree, like most people. But less than most people in my opinion. It’s just that change takes time and effort, and more importantly, a way to be persuasive. It won’t affect everyone, but a portion might be interested.
Does that perspective on some of the difficulties (for my project of reforming EA) make sense to you?
I think most people in EA would absolutely agree with the goal “reducing bias”, or “being more right” or stuff like that, at least in theory. But I think what they would really like is a straightforward way of doing that, with proven results.
Basically, to see that your approach works. Right now they have no way of knowing whether what you propose provides good results. People tend to ignore problems for which there is no good solution, so, in addition to saying that something is conceptually important, you have to provide the solution.
It’s a bit like overthrowing capitalism: in theory, this might be very important (as capitalism has failed time and time again to switch to anything ecologically sustainable—but you can debate that), but since there is no credible pathway for how to do that with our current means, most people turn away from it.
That’s why I recommend examples of policies. It’s much easier to sell something that I can use right away. Selling something that I have to build up from scratch, with uncertain results, is much less attractive. You might be interested in that link.
You might also like this Astral Codex Ten post: general criticism usually doesn’t trigger any change afterwards, while specifying particular points that we can change has more potential, because we see better what to do (and what we shouldn’t have done).
I think all types of abstract, conceptual, logical or mathematical thinking are learnable skills which are a significant part of what learning about rationality involves. As usual, I have arguments and I’m open to debate.
I have put substantial effort into teaching some of this stuff, e.g. by building from sentence grammar trees (focused on understanding a sentence) to paragraph trees (focused on understanding relationships between sentences) to higher level trees (e.g. about relationships between paragraphs). There are many things people could do to practice and get better at things. I’ve found few people want to try very persistently though. Lots of people keep looking around for things where they can have some sort of immediate success and avoid or give up on stuff that would take weeks (let alone months or years). Also a lot of people focus their learning mostly on school subjects or stuff related to their career.
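As a rough sketch of the tree idea (the labels below are invented for illustration, not taken from my actual materials), the same parent/child structure works at every level, from words in a sentence up to paragraphs in an essay:

```python
# Illustrative sketch: one node type covers grammar trees, paragraph
# trees, and higher-level trees; only the labels change.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str  # a word, a sentence summary, or a section title
    children: list["Node"] = field(default_factory=list)

    def show(self, depth: int = 0) -> None:
        print("  " * depth + self.label)
        for child in self.children:
            child.show(depth + 1)

# A paragraph tree: the main sentence on top, supporting sentences below.
paragraph = Node("Main claim sentence", [
    Node("Sentence giving a reason"),
    Node("Sentence giving an example", [Node("Clarifying detail")]),
])
paragraph.show()
```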
I’m concerned that most people on EA are too intolerant or uncurious to talk to people with large differences in perspective.
Oh, yes, to a certain degree, like most people. But less than most people in my opinion.
I don’t disagree with that. But unfortunately I don’t think the level of tolerance, while above average, is enough for many of them to deal with me. My biggest concern, though, is that moderators will censor or ban me if I’m too unpopular for too long. That is how most forums work and EA doesn’t have adequate written policies to clearly differentiate itself or prevent that. I’ve seen nothing acknowledging that problem, discussing the upsides and temptations, and stating how they avoid it while avoiding the downsides of leaving people uncensored and unbanned. Also, EA does enforce various norms, many of which are quite non-specific (e.g. civility), and it’s not that hard to make an excuse about someone violating norms and then get rid of them. People commonly do that kind of thing without quoting a single example, and sometimes without even (inaccurately) paraphrasing any examples. And if someone writes a lot of things, you can often cherry pick a quote or two which is potentially offensive, especially out of the long discussion context it comes from.
Things like downvotes can be early warning signs of harsher measures. If someone does the whole Feynman thing and doesn’t care what other people think, and ignores downvotes, people tend to escalate. They were downvoting for a reason. If they can’t socially pressure you into changing with downvotes, they’ll commonly try other ways to get what they want.

On a related note, I was disappointed when I found out that both Reddit and Hacker News don’t just let users vote content to the front page and leave it at that. Moderators control what’s on the front page significantly. When the voting plus algorithm gets a result they like, they leave it alone. When they don’t like the result, they manually make changes. I originally naively thought that people setting up a voting system would treat it like an explicit written policy guarantee – whatever is voted up the most should be on top (according to a fair algorithm that also decays upvotes based on age). But actually there are lots of unwritten, hidden rules and people aren’t just happy to accept the outcome of voting. (Note: Even negative karma posts sometimes get too much visibility on small forums or subreddits, thus motivating people to suppress them further because they aren’t satisfied with the algorithm’s results. Some people aren’t like “Anyone can see it’s got −10 karma and then make a decision about whether to trust the voters or investigate the outlier or what.” Some people are intolerant and want to suppress stuff they dislike.)
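For comparison, the kind of fully explicit rule I naively expected is easy to write down. Here’s a sketch based on a commonly cited approximation of Hacker News’s ranking formula (the exact constants are my assumption; the point is just that the whole rule could be public):

```python
# Sketch of a transparent, age-decayed ranking rule. It mirrors a commonly
# cited approximation of Hacker News's ranking; the constants (gravity 1.8,
# the +2 hour offset) are assumptions for illustration.
def rank_score(votes: int, age_hours: float, gravity: float = 1.8) -> float:
    return (votes - 1) / ((age_hours + 2) ** gravity)

posts = [
    ("fresh post, few votes", 10, 1.0),
    ("popular but old", 200, 48.0),
    ("negative-karma outlier", -10, 5.0),
]
for title, votes, age in sorted(posts, key=lambda p: -rank_score(p[1], p[2])):
    print(f"{rank_score(votes, age):8.3f}  {title}")
```

If that were the entire policy, anyone could check why a −10 post ranks last instead of guessing about hidden moderation.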
I don’t know somewhere else better to go though. And my own forum is too small.
That’s why I recommend examples of policies.
I will post more examples. I have multiple essays in progress.
Broadly, if EA is a place where you can come to compete with others at marketing your ideas to get social status and popularity, that is a huge problem. That is not a rationality forum. That’s a status hierarchy like all the others. A rationality forum must have mechanisms for unpopular ideas to get attention, to disincentivize social climbing behaviors, to help enable people to stand up to, resist or call out social pressures, etc. It should have design features to help attention get allocated in other ways besides whatever is conventionally appealing (or attention grabbing) to people that marketing focuses on.
One of the big things I think EA is missing – and I have the same complaint about basically everyone else (again it’s not a way EA is worse) – is anyone who takes responsibility for answering criticism. No one in particular feels responsible for seeing that anyone answers criticism or questions. Stuff can just be ignored and if that turns out to be a mistake, it’s no one’s fault, no one is to blame, it was no one’s job to have avoided that outcome. And there’s no attempt to organize debate.

I think a lot of debate happens anyway but it’s systematically biased to be about sub-issues instead of questioning people’s premises like I do. Most people learn stuff (or specialize in it) based on some premises they don’t study that much, and then they only want to have debates and address criticism that treats those premises as givens like they’re used to, but if you challenge their fundamental premises then they don’t know what to do, don’t like it, and won’t engage. And the lack of anyone having responsibility for anything, combined with people not wanting to deal with fundamental challenges, results in basically EA being fundamentally wrong about some issues and staying that way. People tend not to even try to learn a subject in terms of all levels of abstraction, from the initial premises to the final details, so then they won’t debate the other parts because they can’t, which is a big problem when it’s widespread.

E.g. all claims about animal welfare, AI alignment, or clean water interventions depend in some way on epistemology. Most people who know something about factory farms do not know enough to defend their epistemological premises in debate. Even if they do know some epistemology, it’s probably just Bayesian epistemology and they aren’t in a position to debate with a Popperian about fundamental issues like whether induction works at all, and they haven’t read Popper, and they don’t want to read Popper, and they don’t know of any literature which refutes Popper that they can endorse, and they don’t know of any expert on their side who has read Popper and can debate the matter competently … but somehow that’s OK with them instead of seeming awful. Certainly almost everyone who cares about factory farms would just be confused instead of thinking “omg, thanks for bringing this up, I will start reading Popper now”. And of course Popperian disagreements are just one example of many. And even if Popper is totally right about epistemology and Bayes is wrong, what difference does that make to factory farming? That is a complex matter and it’d take a lot of work to make all the updates, and a lot of the relevance is indirect and requires complicated chains of reasoning to get from the more fundamental subject to the less fundamental one. But there would very likely be many updates.
But I think what they would really like is a straightforward way of doing that, with proven results.
It’s too much to ask for. We live in an inadequate society as Yudkowsky would say. Rationality stuff is really, really broken. People should be happy and eager to embark on speculative rationality projects that involve lots of hard work for no guaranteed results – because the status quo is so bad and intolerable that they really want to try for better. Anyone who won’t do that has some kind of major disagreement with not only me but also, IMO, Yudkowsky.
Basically, to see that your approach works. Right now they have no way of knowing whether what you propose provides good results. People tend to ignore problems for which there is no good solution, so, in addition to saying that something is conceptually important, you have to provide the solution.
One way to see that my approach works is that I will win every single debate including while making unexpected, counter-intuitive claims and challenging widely held EA beliefs. But people mostly won’t debate so it’s hard to demonstrate that. Also even if people began debates, they would mostly want to talk about concrete subjects like nutrition or poverty, not about debate methodology. But debating debate methodology basically has to come first, followed by debating epistemology, because the other stuff is downstream of that. If people are reasonable enough and acknowledge their weaknesses and inabilities you can skip a lot of stuff and still have a useful discussion, but what will end up happening with most people is they make around one basic error per paragraph or more, and when you try to point one out they make two more when responding, so it becomes an exponential mess which they will never untangle. They have to improve their skills and fundamentals, or be very aware of their ignorance (like some young children sorta are), before they can debate hard stuff effectively. But that’s work.

By basic errors I mean things like writing something ambiguous, misreading something, forgetting something relevant that they read or wrote recently, using a biased framing for an issue, logical errors, mathematical errors, factual errors, grammar errors, not answering questions, or writing something different than what they meant. In a world where almost everyone makes those types of errors around once a paragraph or more, in addition to being biased, and also not wanting to debate … it’s hard. Also people frequently try to write complex stuff, on purpose, despite lacking the skill to handle that complexity, so they just make messes.
The other way to see it works, besides debating me, is to consider it conceptually. It has reasoning. As best I know, there are criticisms of alternatives and no known refutations of my claims. If anyone knows otherwise they are welcome to speak up. But that’d require things like reviewing the field, understanding what I’m saying, etc. Which gets into issues of how people allocate attention and what happens when no one even tries to refute something because a whole group of people all won’t allocate attention to it and there’s no leader who takes responsibility for either engaging with it or delegating.
Well, that was more than enough for now, so I’ll just stop here. I have a lot of things I’d be interested in talking about if anyone was willing, and I appreciate that you’re talking with me. I could keep writing more, but I already wrote 4600 words before these 1800, so I really need to stop now.
Wow, that was really interesting. Let me answer that.
One of the big things I think EA is missing – and I have the same complaint about basically everyone else (again it’s not a way EA is worse) – is anyone who takes responsibility for answering criticism. No one in particular feels responsible for seeing that anyone answers criticism or questions. Stuff can just be ignored and if that turns out to be a mistake, it’s no one’s fault, no one is to blame, it was no one’s job to have avoided that outcome.
Ok, this is a very good claim. I find it a very useful insight. Since it’s “no one’s job” and everything is decentralized, it’s hard for useful feedback to reach the people who could use it. And there are no negative consequences for being wrong about some stuff, so it keeps going.
I really have trouble seeing how to fix that, however (are there some movements out there where this is the case?).
But it’s worth making a post about it, I think. In the form of “EA should...”.
I think all types of abstract, conceptual, logical or mathematical thinking are learnable skills which are a significant part of what learning about rationality involves.
Yes, it can be learned—and I had to learn that in engineering school. But I didn’t like it. As I said, I found doing that sluggish, boring and not motivating. I can do it if I am forced to, but this makes me lose motivation—and keeping my motivation is very important for me to stay active. If learning about energy were purely abstract thinking, I simply wouldn’t bother, even if it’s important. So I prefer to play to my strong suits—that’s more sustainable for me.
A rationality forum must have mechanisms for unpopular ideas to get attention, to disincentivize social climbing behaviors, to help enable people to stand up to, resist or call out social pressures, etc.
It’s true that this should be the case, I agree. However I am not certain the EA forum is a “rationality forum” per se. Rationality is important there, but it’s not a place where you debate rationality itself.
Less Wrong is a rationality forum. There are people debating rationality and stuff like that. So very abstract discussions about whether Bayes is good make sense there. Have you tried posting on Less Wrong? Maybe your content would receive more interest there?
The EA Forum, however, feels like a place more directly linked to action. You propose stuff related to action itself, like new causes, you give status updates, you explain why the prioritization of some stuff should change… Some of it can be abstract, like discussions on longtermism, but there is a general vibe of “Ok, what do we do about this information? How can it help us act better?”.
For instance, I had some useful feedback about my energy post: one person said it didn’t totally fit the expected content of the forum. People here don’t expect broad reports about entire topics (no matter how important).
Instead, what he suggested was to make smaller posts, each about one specific point (like “EA models of the future are missing a scenario where we fail at the energy transition”, with the causal reasoning). What’s important here is that there is only one matter at hand, with something actionable (we should do that specific thing, and here is why it could help us do good better).
But unfortunately I don’t think the level of tolerance, while above average, is enough for many of them to deal with me. My biggest concern, though, is that moderators will censor or ban me if I’m too unpopular for too long.
I’ve never really heard of anyone here being banned unless they wrote some really bad stuff (like accusing someone of malevolence—we shouldn’t accuse people of bad faith or malevolence here). So I wouldn’t worry too much about that.
Even browsing through your history, I didn’t find that much stuff that was heavily downvoted (you had one post at −10 disagreement, but it didn’t affect your karma, which was at +2—I can think of very few forums that let people disagree without downvoting).
I made a little list of feedback based on what I read in your posts. You are free to use it or not; I’m just listing stuff that came to mind—I’ve only read a sample of your writings, so it might not apply to everything.
All posts should try to answer the question “How can we do good better?”.
Right now, it isn’t necessarily obvious how your posts answer this question. It might feel off-topic (although I understand why you think it’s on-topic).
You said that you won every debate, and that your way of doing things works, but I have no way of knowing that from the outside.
One interesting thing to publish would be an example of “how could your reasoning method improve the way we’re currently doing stuff in EA?”. For instance, you said Popper could improve how animal activists work. You could provide an example of something specific that could be improved in animal welfare advocacy by using the Popper method, showing why it is superior.
Don’t try to sell a tool or a solution itself—show that you get better results this way (using examples). If it works, then some people will try to use the solution.
There’s too much to read, so people don’t have extensive time to engage with everything. Try to be succinct.
One of your posts spent 22 minutes saying that people shouldn’t misquote. It’s a rather obvious conclusion that can be laid out in 3 minutes tops. I think some people read that as a rant.
Use examples showing why the topic is important (or even stories). It allows you to link your arguments to something that exists.
You can think with purely abstract stuff—but most people are not like that. A useful point to keep in mind is you are not your audience. What works for you doesn’t work for most other people. So adapting to other reasoning types is useful.
Make specific points that are actionable.
I agree that, in theory, rational people should spend lots of time fixing very difficult problems with very uncertain payoffs (like speculative rationality projects). But that’s not how things work. Our time and motivation are limited, so this is probably an unreasonable assumption to make.
Assume instead that they have limited time, and judge stuff they read by the criteria of “How is this useful to me?”. If they don’t see something they can do with your information, they won’t act on it.
As for me, right now, I understand your line of thinking about biases and the importance of having better rationality, but I still fail to see what I can do with this information. Chances are it will float in some part of my mind, but I won’t act on it when I should.
Improve the readability of your posts. There is a lot of text, it’s hard to get the structure, and it’s hard to skim through.
This post provides good suggestions. “Assume your audience is smart, but has limited bandwidth”
Use bullet points, and headings, and bolding. They are good.
Make people reach their own conclusions—ask questions.
You said that people tend not to know what to do when you challenge one of their personal premises. Which is normal. If their core beliefs are challenged, people tend to get defensive.
However, that usually doesn’t happen if they reach the conclusion by themselves. People learn much better when they are asked questions and have to spell out the conclusion themselves. This is why the Socratic method is of interest. So ask questions.
Antagonizing people is easy, even by accident. I’m not saying you are doing that, but it’s still very important, so I add that just in case. It’s important not to make people feel on the defensive, and not using accusatory tones. It’s important to try to understand why the other thinks that way, showing you get that and agree to some extent, but still suggest improvements.
A good book on that topic is How to Win Friends and Influence People by Dale Carnegie—I don’t like the title, but the content is still very useful. Plus, it works.
One of the main concerns you appear to have is that EA could be better at doing rationality. It could have better conclusions, and better premises. I agree! But it’s up to us to find ways to do that. Rationality is about finding the best way to adapt to reality.
What follows logically, then, is the question: what is the most effective way of making EA better? I don’t have a good answer to that yet, but that’s what I will try to answer. And if I have to learn about stuff like communication or psychology to find ways to be more effective, well, I will have to do that then.
Yes, it can be learned—and I had to learn that in engineering school. But I didn’t like it. As I said, I found doing that sluggish, boring and not motivating. I can do it if I am forced to, but this makes me lose motivation—and keeping my motivation is very important for me to stay active. If learning about energy were purely abstract thinking, I simply wouldn’t bother, even if it’s important. So I prefer to play to my strong suits—that’s more sustainable for me.
Not gonna debate this right now (unless maybe you wanted to focus on this topic instead of others) but I wanted to clarify: When I said it’s learnable, I meant learnable in a way that you like it, don’t have motivation problems, aren’t bored, it isn’t sluggish, everything works well. Those things you talk about are serious problems – they mean something (fixable) is going wrong. That’s what I think.
I made a little list of feedback based on what I read in your posts.
Thanks. I appreciate work people do that facilitates me getting along with more EAs better, so that I can better share potentially valuable ideas with EA.
You said that you won every debate, and that your way of doing things works, but I have no way of knowing that from the outside.
Yeah I don’t expect anyone to trust that or to look through tens of thousands of pages of discussion history (which is publicly available FWIW). And I don’t know of any way to summarize past debates that will be very persuasive. Instead, all I really want, is that at least one person from EA is willing to debate, and if I get a good outcome, then a second person should become willing to debate, and so on. And e.g. if I get to 5 good debate outcomes with EA people then a lot more people ought to start paying some attention, considering my ideas, etc. It should be possible to get attention from EA people by a series of debates without doing marketing, making friends, or other social climbing. And starting with one at a time is fine but I shouldn’t have to go through hundreds of debates one at a time to persuade hundreds of EAs.
I think that’s a reasonable thing to ask for even if I had no past debate history. But I don’t know of any communities (besides my own) that actually offer it. I think that’s one of the major problems with the world which matters more than a lot of the causes EA works on. Imagine how much more easily EA could do huge amounts of good if just 10% of the charities and large companies were open to debate, and EAs could go win debates with them and then they’d actually change stuff.
You could provide an example of something specific that could be improved in animal welfare advocacy by using the Popper method, showing why it is superior.
I don’t have any quick win for that. Just a potential very very long debate involving learning a ton of ideas which could potentially lead to EAs changing their mind about some of these beliefs. I have long, complicated arguments regarding other EA topics too, such as AI alignment (which again depends significantly on epistemology, so Popper is relevant). I’ve been interested in talking about AI alignment for years but I don’t know any way to get anyone on the other side of the AI alignment debate to engage with me seriously.
Don’t try to sell a tool or a solution itself—show that you get better results this way (using examples). If it works, then some people will try to use the solution.
I often get results that I consider better, but which other people would evaluate differently, or wouldn’t know how to evaluate, or wouldn’t be able to replicate without learning a lot of the background knowledge I have. When people have different ideas, it often means the way of evaluating outcomes itself has to be debated/discussed – which partly means talking about concepts, abstractions, philosophy, etc. And then the specific evaluations can require a lot of discussion and debate too. So you can’t just show an outcome – there has to be substantial discussion for people to understand.
One of your posts spent 22 minutes saying that people shouldn’t misquote. It’s a rather obvious conclusion that can be laid out in 3 minutes tops. I think some people read that as a rant.
I’m highly confident that EAers broadly disagree with me on that topic, which is why I wrote that article. It’s not obvious. It’s controversial. And I believe it’s an ongoing, major problem on the forum that is not being solved.
It’s related to another article I’m considering writing, which would claim basically that raising intellectual standards would significantly improve EA’s effectiveness. Widespread misquoting, plus widespread not really caring about or minding misquotes, is one example of EA having intellectual standards that are too low. Low intellectual standards have negative consequences for having accurate views about the world and figuring out the right conclusions about various causes. And they also make it extremely hard to have productive debates about hard issues, especially when there’s significant culture clash or even unfriendliness.
In general, you need either friendliness or high standards to have a productive discussion or debate. It’s super hard with neither. And friendliness towards critics with significant outsider/heresy ideas is rare in general. I think EA has more of that friendliness than is typical, but not nearly enough to replace high intellectual standards when dealing with major differences in ideas.
Antagonizing people is easy, even by accident. I’m not saying you are doing that, but it’s still very important, so I add that just in case.
I know that I often antagonize people by accident. I’m not going to deny that or feel defensive about it. It’s a topic I’m happy to talk about openly, but IME other people often don’t want to. I have sometimes been accused of being mean, at which point I ask for quotes, at which point they usually don’t want to provide any quotes, or occasionally provide a quote they don’t want to analyze. Anyway it’s a difficult problem which I have worked on.
What follows logically, then, is what is the most effective way of making EA better?I don’t have a good answer to that yet, but that ’s what I will try to answer.
I don’t have a plan that I particularly expect to work, but I have a few things to try. One plan is getting people to debate or, failing that, to talk about issues like why debating matters. Another plan is to get a handful of people to take an interest, discuss stuff at length, learn more about my ideas, and then help change things. Another plan (that I’ve already been working on for 20 years) is to write good stuff – at EA or even just at my own websites – and maybe something good will happen as a result.
I think I’m aware of a bunch of problems and difficulties that you aren’t familiar with, which make the problem even harder. For example, I have objections to a lot of the psychology and marketing stuff you mention. But anyway, to summarize, I know something about debating issues rationally but less about getting anyone to like me or listen. One of the main problems is social hierarchies, and in very short I think any plan involving social climbing is the wrong plan. Eliezer Yudkowsky also has a lot of negative things to say about social hierarchies but unfortunately I don’t see that reflected in the EA or LW communities – I fear that no one figured out much about how to turn criticism of social hierarchies into action to actually create different types of communities.
Also, when you have conclusions that rely on different background knowledge than your audience has, it’s very hard to explain them in short ways, which is how people want and expect information, while also making them rationally persuasive (which requires explaining a lot of things people don’t already know, or else they should not find it persuasive without debating, discussing or studying it first to find out more).
“I know that I often antagonize people by accident.”
I think something that could help (maybe) is making the other person feel understood. Show that you understand where they come from, that what they say really makes sense for them, but that you have found another way of seeing things that also makes sense. Direct accusations of doing stuff poorly rarely work, and come off as judgemental. It’s better if you want people to listen (have you read How to Win Friends and Influence People by Dale Carnegie? Not perfect, but it gives some valuable insight). (Not sure I’m doing that with you, but you don’t seem to need it ^^)
“I’m highly confident that EAers broadly disagree with me on [the topic of misquoting], which is why I wrote that article.”
Still, 22 minutes is way too long. I read it for 5 minutes and did not feel it was a valuable use of my time—most of it was on the analogy with “deadnaming”, but I think that derailed from the topic. It also greatly needed structure, like an executive summary, or a layout like: 1) here’s an example of a misquote leading to a bad outcome, 2) misquoting in general poses problems, 3) the EA Forum needs to enforce rules against misquotes (and here’s how).
“When I said it’s learnable, I meant learnable in a way that you like it, don’t have motivation problems, aren’t bored, it isn’t sluggish, everything works well.”
Wow, this means you could take an entire class of people, including ones who have trouble with maths (with, say, complex equations), and you’d be able to teach them to do maths in ways they like? That would be very impressive! I’d like to learn more—do you have sources on that?
“When I said it’s learnable, I meant learnable in a way that you like it, don’t have motivation problems, aren’t bored, it isn’t sluggish, everything works well.”
Wow, this means you could take an entire class of people, including ones who have trouble with maths (with, say, complex equations), and you’d be able to teach them to do maths in ways they like? That would be very impressive! I’d like to learn more—do you have sources on that?
I have multiple types of writing (and videos) related to this:
educational and skill-building materials (e.g. grammar trees, text analysis or tutoring videos)
writing about how learning works (e.g. practice and mastery)
writing about epistemology – key philosophical concepts behind the other stuff
writing about why some opposing views (like genetic IQ) are mistaken
I’ve been developing and debating these ideas for many years, and I don’t know of any refutations or counter-examples to my claims, but I’m not popular/influential and have not gotten very many people to try my ideas much.
In terms of the subject matter itself, math is one of the better starting points. However, people often have some other stuff that gets in the way like issues with procrastination, motivation, project management, sleep schedule, “laziness”, planning ahead, time preference, resource budgeting (including mental energy), self-awareness, emotions, drug use (including caffeine, alcohol or nicotine), or clashes between their conscious ideas and intuitions/subconscious ideas. These things can be disruptive to math learning, so they may need to be addressed first. In other words, if one is conflicted about learning math – if part of them wants to and part doesn’t – then they may need to deal with that before studying math. There are also a lot of people who are mentally tired most of the time and they need to improve that situation rather than undertake a new project involving lots of thinking.
Also most current educational materials for math, like most topics, are not very good. It takes significant skill or help to deal with that.
There is an issue where, basically, most people don’t believe me that I have important knowledge and won’t listen. Initial skepticism is totally reasonable, but I think what should happen next, from at least a few people, is a truth-seeking process like a debate using rational methods, instead of just ignoring something on the assumption it’s probably wrong, with no attempt to identify any error. That way people can find errors in my ideas, or not, and either way someone can learn something.
Sounds like quite the challenge to learn maths! I can understand why “you need to be really motivated, and to allocate a lot of time and resources, and to avoid coffee and alcohol and cigarettes, and to solve your problems with sleep and procrastination and emotions in order to learn maths” leads to not many people really learning maths!
I wouldn’t count on many people learning these skills in such a context.
And I thought the issue was only that the educational material was poor.
Ok, then I’m not sure learning maths is the most valuable use of my time right now. Especially since I mostly aggregate the work of other experts and I let them do the research and the maths in my stead.
(Although I’d still be interested in the links, in case that proves necessary for my research at some point in the future. Maybe the “how learning works” material could be of interest too.)
Ok, all of this is interesting. Sorry for the late answer—I got caught up watching the FTX debacle, in which I lost an ongoing project.
To summarize, I know something about debating issues rationally but less about getting anyone to like me or listen
I’m going to focus on that here.
This is related to why I was so late in answering: the longer the exchange is, the more you have to reply to. This means that the cost of answering, in time and brain resources, gets higher, lowering the probability of an answer. I think this is a reason why many people stop debating at some point.
A useful thing I try to keep in mind is that the brain tries to save energy. It can save energy by automating tasks (habits), by using shortcuts (heuristics), and by avoiding strong conclusions that would lead to a large reorganization in the way it currently does things (for instance, changing core beliefs and methods of reasoning). This avoidance can take the form of finding rationalizations to stuff it already does, or denial.
Of course, it’s not just about energy, since the brain can change its structure if there is a good reason. Motivation is a crucial part of people discussing anything—but for motivation you need a reason to stay motivated. And it’s really not obvious what the motivation is when discussing abstract methodology. What would that reason be?
Most of the time, the reason is direct feedback that it’s doing things wrong, and negative consequences if it doesn’t change. But we don’t have this feedback during an abstract discussion on methodology and epistemics.
There are no examples of feedback in the style of “wow, the way that guy does things really looks better”, as you said.
Social validation doesn’t go our way either here.
You’re not at the top of a social hierarchy.
Plus, we’re not among friends, and we’re remote in time and space (the point you made about debates being more conclusive when there is friendliness or high standards was really good, by the way).
Now, getting better and feeling right about something can be motivating to some people (like us). But if there appears to be no good pathway for me to get better, I’ll give up on the conversation, since my brain will see other stuff to do as more appealing (not right now, of course, but at some point).
To prevent that, for me, the reward would be a concrete way to improve how I do things. I can agree with you that we (I) don’t have high enough standards for high-quality discourse, but that doesn’t tell me what to do. My brain cannot change if you don’t point to something specific I can apply (like a method or a rule you can enforce). Debate policies may be a start, but they won’t do if I have no idea what they look like.
We usually don’t learn by having more theoretical knowledge, although that’s often necessary—most of the time, theoretical knowledge doesn’t influence action by itself (think of “treat others as you would treat yourself”). But the kind of knowledge that really sticks and influences action comes from practice—from trying stuff and seeing for yourself how it works. Having the methodological skills you talked about worked for you, so now you try to push them forward. This makes sense. But I can do the same only by trying and testing.
So, what could you provide your debate partners that would be attractive enough to keep them in the debate? I’m afraid that having extremely long discussions about theoretical stuff with no clear reward may be too much to expect.
“For example, I have objections to a lot of the psychology and marketing stuff you mention.”
Now I’m interested. Do you have data that would refute what I said, or that you think would work better?
I’m afraid that having extremely long discussions about theoretical stuff with no clear reward may be too much to expect.
I don’t mind switching to saying one short thing at a time if you prefer. I find people often don’t prefer it, e.g. b/c dozens of short messages seem like too much. In my experience, people tend to stop discussing after a limited number of back-and-forths regardless of how long they are.
Ok, I understand—so if length isn’t the biggest problem, I guess what might cause more of an issue is that the topic is about “theoretical stuff with no clear reward”.
So I guess the main challenge is about solving that. Questions like: How can I show that this theoretical stuff can be useful in the real world? What is the reason people might have to be interested in engaging with me? If I can only have a limited number of replies, how do I make the most of that time, and what are the most valuable ideas, practices and concepts that I can push for?
Questions like: How can I show that this theoretical stuff can be useful in the real world?
I have answers to this and various other things, but I don’t have short, convincing answers that work from premises most people already share. The difficulty is that too much background knowledge is different. My ideas make more sense and can be explained in shorter ways if someone knows a lot about Karl Popper’s ideas. My Popperian background and perspective clashes with the Bayesian perspectives here, and it’s not mainstream either. (There are no large Popperian communities comparable to LW or EA to go talk to instead.)
The lack of enough shared premises is also, in my experience, one of the main reasons people don’t like to debate with me. People usually don’t want to rethink fundamental issues and actually don’t know how to. If you go to a chemist and say “I disagree with the physics premises that all your chemistry ideas are based on”, they maybe won’t know how to, or want to, debate physics with you. People mostly want to talk about stuff within their field that they know about, not talk about premises that some other type of person would know about. The obvious solution to this is talk to philosophers, but unfortunately philosophy is one of the worst and most broken fields and there’s basically no one reasonable to talk to and almost no one doing reasonable work. Because philosophy is so broken, people should stop trusting it and getting premises from it. Everything else is downstream of philosophy, so it’s hurting EA and everything else. But this is a rather abstract issue which, so far, I haven’t been able to get many people to care much about.
I could phrase it using more specifics but then people will disagree with me. E.g. “induction is wrong so...” will get denials that induction is wrong. (Induction is one of the main things Popper criticized. I don’t think any philosopher has ever given a reasonable rebuttal to defend induction. I’ve gone through a lot of literature on that issue and asked a lot of people.) The people who deny induction is wrong consistently want to take next steps that I think are the wrong approach, such as debating induction without using literature references or ignoring the issue. Whereas I think the next step should basically be to review the literature instead of making ad hoc arguments. But that’s work. I’ve done that work but people don’t want to trust my results (which is fine) and also don’t want to do the work themselves, which leaves it difficult to make progress.
Have you tried presenting stuff in a visual way? Like breaking down the steps of your (different) reasoning in a diagram, in order to show why you reach a different conclusion on a specific topic EA is working on.
For instance, let’s say one conclusion you have is “EAs interested in animal welfare should do X”. You could present stuff this way: [Argument A] + [Argument B] → [I use my way of estimating things] → [Conclusion X].
Maybe this could help.
unfortunately philosophy is one of the worst and most broken fields and there’s basically no one reasonable to talk to and almost no one doing reasonable work. Because philosophy is so broken, people should stop trusting it and getting premises from it.
Huhu, can’t say I disagree. Most of the time I really have trouble seeing what I can get from the field of philosophy in terms of practical advice that works, not just ideas. (Although saying “there’s no one reasonable to talk to in field XXX” would flag you as a judgemental person nobody should talk to, so be careful throwing judgements around like that.)
But it’s very hard to say “this is bad” about something without proposing something better that people can turn to instead. And despite exchanging with you, I still don’t picture that “better” thing to turn to.
For instance, one of the (many) reasons the anti-capitalism movement is absolutely failing is not because capitalism is good (it’s pretty clear it’s leading us to environmental destruction) or because people support it (there have been surveys in France showing that a majority of people think we need to get out of the myth of infinite growth). It has a lot to do with how hard it is to actually picture alternatives to this system, how hard it is to put forward these alternatives, and how hard it is to implement them. Nothing can change if I can’t picture ways of doing things differently.
judge public intellectuals by how they handle debate, and judge ideas by the current objective state of the debate
read and engage with some other philosophers (e.g. Popper, Goldratt and myself)
actually write down what’s wrong with the bad philosophers in a clear way instead of just disliking them (this will facilitate debating and reaching conclusions about which ones are actually good)
investigate what philosophical premises you hold, and their sources, and reconsider them
There are sub-steps, e.g. to raise intellectual standards people need to improve their ability to read, write and analyze text, and practice that until a significantly higher skill level and effectiveness is intuitive/easy. That can be broken down into smaller steps such as learning grammar, learning to make sentence tree diagrams, learning to make paragraph tree diagrams, learning to make multi-paragraph tree diagrams, etc.
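To give a rough idea (this is a toy sketch of my own, not a worked example from any particular text), a sentence tree diagram for “The quick fox jumps over the lazy dog” might put the verb at the root and attach each word to the word it modifies:

    jumps
    ├── fox (subject)
    │   ├── The (article)
    │   └── quick (adjective)
    └── over (preposition)
        └── dog (object of the preposition)
            ├── the (article)
            └── lazy (adjective)

A paragraph tree is analogous one level up: the paragraph’s main point is the root and each supporting sentence attaches to the claim it supports.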
I have a forum people can join and plenty of writing and videos which include actionable suggestions about steps to take. I’ve also proposed things that I think people can picture, like having all arguments addressed in truth-seeking and time-efficient ways instead of ignored. If that were universal, it would have consequences such as making it possible to go to a charity or company, tell them some ideas, make some arguments, and then, if you’re right, they probably change. If 10% of charities were open to changing due to rational argument, it’d enable a lot of resources to be used more effectively.
BTW, I don’t think it’s a good idea to have much confidence in your political opinions (or spend much time or effort on them) without doing those other kinds of activities first.
Ah, this is more like it: a list of stuff to do. Good!
Now that I see it in that format, maybe an interesting EA forum post would be to use the list above, and provide links for each of them. You could redirect each item to the best source you have found or produced on this topic. I feel it would be easier to convince people to adopt better rationality practices if they have a list of how to do that.
Well, maybe not everyone, but I am drawing conclusions from my personal case: you seem to have some interesting techniques in store, but personally right now I just don’t see how to acquire them—so the best links you have for that could help greatly.
This would centralize the information in one spot (that you can redirect people to in future debates and works).
An unrelated note: I liked the post on your forum about the damage big companies are doing. I don’t really understand why you think the damage they do is not compatible with capitalism—I don’t see anything in the definition of capitalism that would preclude such an outcome. But it was an interesting post.
I’m a bit more skeptical about your post on how to judge the ability of experts, however. If having a debate policy were a common practice, and it was widely known that people who refuse one have something to hide, then it would work. But right now such advice doesn’t seem that useful, because very few people have such a debate policy—so you can’t distinguish between people who have something to hide and people who would be ok with the concept if they had heard about it. I don’t see such a practice becoming mainstream for the next few decades.
So in the absence of that, how can I really judge which experts are reliable?
I’d like to judge by openness in debates, but it’s not clear to me how to get this information quickly. Especially when I’m seeing an expert for the first time.
For instance, let’s take someone like Nate Hagens. How would you go about judging his reliability?
Anyway, I have plenty more things I could try. I have plenty to say. And I know there’s plenty of room for improvement in my stuff including regarding organization. I will keep posting things at EA for now. Even if I stop, I’ll keep posting at my own sites. Even if no one listens, it doesn’t matter so much; I like figuring out and writing about these things; it’s my favorite activity.
An unrelated note: I liked the post on your forum about the damage big companies are doing.
FYI, it’s hard for me to know what post you mean without a link or title because I have thousands of posts, and I often have multiple posts about the same topic.
I don’t really understand why you think the damage they do is not compatible with capitalism—I don’t see anything in the definition of capitalism that would preclude such an outcome.
The definition of capitalism involves a free market where the initiation of force (including fraud) is prohibited. Today, fraud is pretty widespread at large companies. Also, many versions of capitalism allow the government to use force, but they do not allow the government to meddle in the economy and give advantages to some companies over others which are derived from the government’s use of force (so some companies are, indirectly via the government, using force against competitors). Those are just two examples (of many).
(I may not reply further about capitalism or anything political, but I thought that would be short and maybe helpful.)
so you can’t distinguish between people who have something to hide and people who would be ok with the concept if they had heard about it.
You can tell them about the debate policy concept and see how they react. You can also look at whether they respond to criticisms of their work. You can also make a tree of the field and look at whether that expert is contributing important nodes to it or not.
I don’t see such a practice becoming mainstream for the next few decades.
I think it could become important, widespread and influential in a few years if it had a few thousand initial supporters. I think getting even 100 initial supporters is the biggest obstacle, then turning that into a bigger group is second. Then once you have a bigger group that can be vocal enough in online discussions, they can get noticed by popular intellectuals and bring up debate policies to them and get responses of some kinds. Then you just need one famous guy to like the idea and it can get a lot more attention and it will then be possible to say “X has a debate policy; why don’t you?” And I can imagine tons of fans bringing that up in comment sections persistently for many of the popular online intellectuals. It’s easy to imagine fans of e.g. Jordan Peterson bugging him about it endlessly until he does it.
I think the reason that doesn’t happen is that most people don’t actually seem to want it, like it or care, so getting to even 100 supporters of the idea is very hard. The issue IMO is the masses resisting, rejecting or not caring about the idea (of the few who see it, most dislike or ignore it), including at EA, for reasons I don’t understand well enough.
For instance, let’s take someone like Nate Hagens. How would you go about judging his reliability?
I glanced at the table of contents and saw mention of Malthus. That’s a topic I know about, so I could read that section and be in a pretty good position to evaluate it or catch errors. Finding a section where I have expertise and checking that is a useful technique.
There’s a fairly common thing where people read the newspaper talking about their field and they are like “wow it’s so bad. this is so amateurish and full of obvious errors”. Then they read the newspaper on any other topic and believe the quality is decent. It isn’t. You should expect the correctness of the parts you know less about to probably be similar to the part you know a lot about.
At a glance at the Malthus section, the book seems to be on the same side as Malthus, which I disagree with. So a specific thing I’d look for is whether the book brings up and tries to address some of the arguments on my side that I regard as important. If it ignores the side of the debate I favor, and doesn’t have any criticisms of anything I believe, that’d be bad. I did a text search for “Godwin” and there are no results. (Godwin is a classical liberal political philosopher from the same time period as Malthus who I like a lot. He wrote a book about why Malthus was wrong.) There are also no results for “Burke” and no mention of Adam Smith (nor Turgot, Bastiat, Condorcet, Mises, Rothbard, Hayek). I see it as a potentially bad sign to look at old thinkers/writers only to bring up one who is on your side without talking about other ideas from the time period including disagreements and competing viewpoints. It can indicate bias to cherry-pick one past thinker to bring up.
That’s inconclusive. Maybe it gives fair summaries of rival viewpoints and criticizes them; I didn’t look enough to actually know. I don’t want to spend more time and energy on this right now (also I dislike the format and would want to download a copy of the book to read it more). I think it gives you some idea about ways to approach this – methods – even though I didn’t actually do much. Also, in my experience, the majority of books like this will fail at fact checking at least once if you check five random cites, so that would be worth checking if you care about whether the facts in the book are trustworthy.
Ok—however, while this is better, this list is still very long, and quite daunting. It’s good as an index, but not as a “here’s the top priority stuff”.
I think a question you should ask yourself is “If I can only have a limited number of exchanges with people, and they have a limited time, what do I want them to learn?”. And then just mention a few things that are the best/most useful stuff you have in store.
This way people get a sample of what you can offer, and then they may be like “oh ok this might be useful, maybe I’ll dig more into that”.
Mentioning “read the entire work of this guy” or “check my entire forum” is probably not something people will readily act on, because from the outside I have no way of knowing if this is a good use of my time. It would take too much effort just to check. So I need a sample that tells me “hey, that’s interesting” and pushes me to go onward.
So having a list of 1) actionable advice, 2) with the best stuff to redirect people to, would be useful.
You can also look at whether they respond to criticisms of their work.
Same question—how do I check that? There’s no “answers” section under scientific papers or books (except for some rare ones). They could have answered the criticism somewhere—how do I check that quickly?
For instance, from what I read, Nate Hagens did take into account the classic points put forward against the claims of Malthus (although he didn’t really quote many names). But it’s all over the book—so there’s no quick way of checking that.
I think the debate policy could become important, widespread and influential in a few years if it had a few thousand initial supporters. [...] I think the reason that doesn’t happen is that most people don’t actually seem to want it, like it or care, so getting to even 100 supporters of the idea is very hard. The issue IMO is the masses resisting, rejecting or not caring about the idea (of the few who see it, most dislike or ignore it), including at EA, for reasons I don’t understand well enough.
I think one reason most people are not interested is that they don’t feel concerned by the idea. I don’t feel concerned by it. It feels like it could work for public intellectuals, but everybody else has no use for it (maybe they’re wrong, but it doesn’t feel like it). And public intellectuals are a hard-to-reach public.
It’s also not obvious what the benefits of the idea would be. I understand there are benefits, but there’s no visible result to show for them, which makes it less attractive. And even if there were debates following this policy, it’s not guaranteed this would change the state of the debate: many papers have been shown to be non-replicable, but they are still widely cited since the rebuttals have not been publicized as much.
I get that this would be really useful if many prominent experts used it—because you could reach out to them and they’d have to answer.
I think I’m going to have to quit writing anything substantive at EA due to the license change, so if you want to keep discussing with me I think you’ll have to join my forum. That sucks but I don’t see a better option.
Ok—I subscribed to the forum but I don’t know how to reply to the comment you linked to.
I’ll answer here.
Why should anyone believe me about the quality or importance of anything I say, or be interested to keep going past reading one or two things? Because they can’t point out any errors so far.
Interesting, but I don’t know if this is the right criterion. One thing is, I can’t point to an error you made because I can’t evaluate your claims. Our discussion was on abstract points of methodology, not facts or stuff you can verify—so of course I can’t point to an error, because there is no real result to check.
Now, I know I should keep an open mind, which I do, especially since I can’t point to errors in the reasoning itself. But it’s hard to believe things I can’t verify and see for myself.
Which is why I keep asking for stuff like examples and concrete things. It’s easier to grasp these and to verify them.
If you get ideas from public intellectuals who are doing rationality wrong, then you are in trouble too, not just them. You need to do rationality things right yourself and/or find thought leaders who are doing things right. So it is each individual’s problem even if they aren’t a public intellectual.
It’s really not obvious to anyone that “not having a debate policy” is “doing rationality wrong”. Especially when the concept itself is so uncommon. If this is the criterion, I really don’t know who is doing rationality right (but then again, I don’t really know that anyway). Also, most people do not get challenged to debates. Even EAs. So it makes sense that they think such a concept is not for them.
Just to test, you’ll be happy to know I adopted a debate policy! We’ll see what results that provides in 10 years.
I’m more interested in enabling someone to become a great thinker by a large effort, not in offering some quick wins.
Ah, ok. I see where we differ here.
I try to have the most impact I can in the world, so I judge what I do by “what positive impact did this have?”. As such, quick wins that can target a larger public have a larger impact, and a higher chance of changing things, so that’s what I decided to focus on. Which is why this seems more important to me.
But it appears that you have a different goal in mind—you seek high-level discussions with like-minded individuals. I can understand that.
Same for the CC BY license. I know I’d have less impact if I left the forum, and what I write is there with the goal of being shared anyway, so I don’t really care about that.
Same for the CC BY license. I know I’d have less impact if I left the forum, and what I write is there with the goal of being shared anyway, so I don’t really care about that.
I have more drafts to go through so there will be more posts soon.
If you or anyone else thinks that any of them should be on the EA forum, you can post them at EA as link posts. In general, I don’t plan to link post my own stuff at EA going forward, for several reasons, but if even one person thinks it would add much value to EA, they are welcome to do it.
Ok—I subscribed to the forum but I don’t know how to reply to the comment you linked to.
To post on my forum, you have to pay $20 (once, not recurring). I know the communication on this isn’t amazing (Discourse has limited options) though there should be a banner and some info about it in a few places, but I know sometimes people still don’t see it. There’s a subscribe button on the home page but it’s in a menu on mobile instead of directly visible. It takes you to https://discuss.criticalfallibilism.com/s and then the payment flow is with a standard plugin that uses Stripe.
If it’s a financial burden for you, I can give you free access.
If you can afford it, then I’ll have to ask you to pay, because my general policy is if people value a discussion with me less than $20 then I shouldn’t talk with them. I skip that policy when I go participate at other communities, but I’m quitting the EA forum now.
Ok—I thought the $20 was for making posts; I didn’t think it was for answering.
I don’t think I will pay $20 because all the money I earn beyond my basic needs is going to charities.
I can understand the CC BY issue, if you’ve had problems with it in the past. If you think you can have more impact by retaining property over what you write, then this is what you should do.
I don’t think I will pay $20 because all the money I earn beyond my basic needs is going to charities.
If $20 got you even a 1% chance to find out that much of your money and effort is going to the wrong charities and causes, wouldn’t that be a good deal? Error correction is high value.
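To spell out the arithmetic behind that (a minimal sketch with made-up numbers: the donation total, the 1% chance and the recovery fraction are illustrative assumptions, not figures from this discussion):

    # All numbers here are hypothetical, for illustration only.
    future_donations = 10_000    # assumed future donations, in dollars
    p_find_error = 0.01          # assumed chance of discovering a major misallocation
    value_recovered = 1.0        # assume corrected donations go from ~0 impact to full impact

    expected_gain = future_donations * p_find_error * value_recovered
    print(expected_gain)         # 100.0, i.e. $100 of expected impact vs. a $20 cost

Under these assumptions the $20 pays for itself whenever future_donations * p_find_error exceeds 20; the point is the shape of the argument, not the specific numbers.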
I think what EA is doing by getting people to donate that much (all above basic needs) is extremely harmful to people like you. I’d believe that even if I didn’t also believe that the majority of EA causes and efforts were counter-productive.
There’s something really problematic about thinking a cause is so important that you’ll make large personal sacrifices for it, but not being willing to do much to pursue potential error correction. EA has a lot of people who will go to great lengths to help their causes – they just are so sure they’re right(?) that they don’t seem to think debating critics is very important. It’s weird. If you think every dollar you donate is a big deal, you should also think every tiny bit of risk reduction and error correction is a big deal. Those things are scarcer than dollars and can easily have larger impacts. But I come here and say I think EA is wrong about important issues, and I want to debate, and I ask if EA has any organized debate methods or even just individuals who’d be happy to debate much. And the answer was no and also no one seems to think that’s very bad or risky. That shows a widespread lack of respect for the risk of being wrong about causes that people are investing all their money above basic needs in, and a disinterest in criticism.
Anyway, if you find my ideas implausible and not worth pursuing or debating, or still don’t really value my time more than the time of the next guy you could talk with instead, then we should part ways.
Sorry—I exaggerated a bit. I do not donate everything above my basic needs—still quite a good chunk, but not everything.
I try to spend quite some time on error correction (and sometimes buy books instead of getting them from a library), but in this realm I am still weighing that against, say, the impact I could obtain by donating to an animal charity instead. But I’m ready to do some spending if I feel there’s a good chance to learn more and improve.
The problem here is rather that I am not sure subscribing to this forum will really allow me to improve.
I absolutely agree with your claim that EA lacks organized debate methods, and could improve on fighting against bias. I could probably improve on that too, I think. I can agree with the “lacking methodology”.
However, to actually improve, I need practical advice on how to improve. Or an example: for instance, seeing a debate showing that a specific claim that’s very important in EA is wrong (for instance, that donating to charities doing corporate outreach against factory farming is impactful), and seeing the methodology that led to that conclusion.
I want to point out that criticism of what exists currently is important but not enough—the way I personally work is that I need to see something better in order to update correctly. Then I can be inspired by that better approach.
For instance, I read your criticism of The Scout Mindset—it’s interesting, there are good points, for instance that the examples she gives could be really biased. But what would add even more value to your post is recommending a book which does the same thing but better (so basically, a book about how to get better at updating how we view the world, written in a clear, streamlined way, with examples and practical advice—just more rigorous).
I really like to improve. But I need practical stuff for that—and I asked for it and still feel you didn’t answer that (besides adopting a debate policy—you also made a list of actions, but with no links to go deeper).
I fear it could prove difficult for you to spread your ideas even further without a greater focus on that part.
But I come here and say I think EA is wrong about important issues
By the way, have you made claims about EA being wrong in its list of priorities? You have done so on methodology—which is important, but not the most engaging topic, so few people interacted with it (which is too bad). But have you tried making more specific claims, like “EA is wrong about putting effort into factory farming”?
Oh, I had written a full answer in your curi.us debate space, but it says I need an account (it’s weird that the “post public answer” box appears even when I don’t have an account).
I think I’ll take up your offer of access to the forum just for a few months, please.
Oh, and thanks for the concern you’re showing me, that’s kind :)
OK, I gave CF forum posting access to your account.
You’re right that I should make the curi.us comment section clearer than the current small-print note. If you lost the text of what you wrote, I should be able to retrieve it for you from logs.
Ok, this is a very good claim. I find that a very useful insight. Since it’s “no one’s job” and everything is decentralized, it’s hard for useful feedback to reach the people who could use it. And there are no negative consequences for being wrong about some stuff, so it keeps going.
I really have trouble seeing how to fix that, however (are there some movements out there that have managed this?).
I haven’t read the rest yet and might not get to it today, but I’ll give a comment on this.
I think the solution is that anyone (who cares a lot) can take personal, individual responsibility for addressing criticism. I do that with my own tiny movement. There is no need to divide up responsibility. Multiple people can separately do this at once.
Isn’t that too much work? To the extent that anyone else does something useful, it’s less work. If the movement is tiny, then it’s a ton of work. If the movement is pretty big, and a lot of people do useful tasks, then it’s less work.
If you take responsibility, you don’t have to do everything yourself. You can delegate to anyone willing to do stuff. You just have to monitor their work. If you endorse it and it’s wrong, you were wrong. If it’s partly wrong and partly useful, you can endorse just part of it and specify which part.
You also don’t even have to delegate everything. People may do stuff without you asking them to. You can take responsibility even if you’re not seen as a leader and have zero people who will do tasks at your request. What you do is figure out what usable, endorsable essays and other materials the movement has, and figure out what’s missing, and fill in whatever’s missing to your satisfaction. With a larger movement, many people’s opinion would be that nothing crucial is missing, so their initial workload is merely reviewing what exists, learning about it, getting a kind of organized overview of it figured out, and being satisfied.
When critics come along and want to debate, maybe someone will answer them in a way you consider satisfactory. Or maybe not. If you take responsibility for these ideas and think they’re true, then you should monitor to see that critics get answered in ways you’re content with. If there are any gaps where debate or answers aren’t happening to your satisfaction, then you need to fill in that gap (or, in the alternative, admit there is a gap, and say unfortunately you’re too busy to deal with the whole gap, so not all criticism has been answered and debated yet, so you aren’t confident you’re right).
To fill in a gap where some critic is raising an issue, there are two basic scenarios, the easy and hard one:
Easy scenario: the issue is addressed somewhere. The critic just needs to be provided with a link or cite. Sometimes a little bit of extra text is needed to explain how the generic information in the article answers the specific information the critic brings up. I’ve called this bridging material. It’s a type of personalization or customization. There are a lot of cases where a paragraph of customization can add a lot to a cite/link. You also may need to specify which part of a link/cite is relevant rather than the whole thing, and may need to give disclaimers/exclusions to go with the link/cite.
All of that is pretty fast and a pretty low amount of work. And it’s something that people can contribute to a movement without being geniuses or great essay writers or anything like that. People who just read and liked a lot of a movement’s materials can help by sharing the right links in the right places. If a movement has a lot of literature, then this will probably address over 80% of criticism.
Hard scenario: the issue is not already addressed somewhere. New ideas/arguments/literature are needed. This is more work. If this comes up a bunch and it’s too much work for you, and others won’t help enough, then the movement isn’t really fully fleshed out and you shouldn’t be confident about being right.
This is how I do stuff with my own tiny philosophy movement. I monitor for issues that should be addressed but no one else addresses. I have some fans who will sometimes provide links to existing essays so I don’t have to. I occasionally delegate something, though not very often. Since my movement is tiny I don’t expect a ton of help, but on the other hand I can refer to the writing of dead authors to answer a lot of issues. I try to learn from and build on some authors I found that I think did good work. It’d be really problematic to try to do everything myself from scratch. If I tried to reinvent the wheel, I’d almost certainly come up with worse ideas than already exist. Instead I try to find and understand the best existing ideas and then add some extra insights, changes and reorganization.
And I have a debate policy, so if a critic will neither use nor criticize my debate policy (and someone has linked him to it), then I don’t think I need to answer his criticism (unless I actually think he has a good point, in which case I’d want to address that point even if it has no advocates who are smart, reasonable or willing to debate). I have a forum where critics can come and talk about my philosophy. EA has a forum too, which is one of the main reasons I’m talking to EA at all. Not enough movements, groups or individuals even have forums (which IMO is a major problem with the world).
(I do not count Facebook groups, subreddits, Twitter, Discords, or Slacks as forums. Social media and chatrooms are different than forums. Comment sections on blogs, substacks, news articles, etc., also aren’t proper forums. Having an actual forum matters IMO. Examples of forum software include Discourse, phpBB and Google Groups. I view forums as a bit of a remnant of the old internet that has lost a ton of popularity to social media. I think LW/EA partly have forums because the community started before smartphones got so popular.)
Ok, very well. I’m not sure it’s reasonable to expect everyone to take responsibility for changing on a topic: changing requires effort and time, and it’s not realistic to expect everyone to do all of that.
In an ideal world we’d look into everything by ourselves, but in reality we just don’t have time to dig into everything. But this links to motivation, which is the topic of the other response.
Ok, very well. I’m not sure it’s reasonable to expect everyone to take responsibility for changing on a topic: changing requires effort and time, and it’s not realistic to expect everyone to do all of that.
I don’t expect everyone to do it. I expect more than zero people to do it.
Or, if it is zero people, then I expect people to acknowledge a serious, urgent problem, and to appreciate me pointing it out, and to stop assuming their group/side is right about things which no one in their group/side (including themselves) will take responsibility for the correctness of.
Skimming and other ways of reducing reading can work well and I’ve been interested in them for a long time. Getting better at reading helps too (I’ve read over 400,000 words in a day, so 10,000 doesn’t seem like such a daunting journey to me). But ignoring arguments, when no one on your side has identified any error, is problematic. So I suggest people should often reply to the first error (if no one else already did that in a way you find acceptable). That makes progress possible in ways that silence doesn’t.
If you think the length and organization of writing is itself an error that is making engaging unreasonably burdensome, then that is the first error that you’ve identified, and you could say that instead of saying nothing. At that point there are ways for problem solving and progress to happen, e.g. the author (or anyone who agrees with him) could give a counter-argument, a rewrite, or a summary (particularly if you identify a specific area of interest – then they could summarize just the part you care about).
I recently posted about replying to the first error:
It’s particularly important to do this with stuff which criticizes your ideas – which claims you’re wrong about something important and impactful – so it’s highly relevant to you.
If you think the length and organization of writing is itself an error that is making engaging unreasonably burdensome, then that is the first error that you’ve identified, and you could say that instead of saying nothing.
This is a good point. I just think that most people are not even aware that this is an option (admitting you didn’t read everything but still want to engage isn’t obvious in our way of doing things).
I recently posted about replying to the first error:
I read your post on long articles—it provides some really useful insights, so thanks for that. I still think it could be a bit more attractive to readers (summary, bullet points, more titles and sections, bolding, examples, maybe 3 minutes shorter), but it was worth reading. The fact you said “don’t stop reading unless you spotted an error” helped too ^^
Attracting readers is a different activity than truth seeking. Articles should be evaluated primarily by whether people can refute what the article says or not. If I avoid errors that anyone knows about, then I’ve done a great job. A rational forum should be able to notice that, value it and engage with it, without me doing anything extra to get attention.
Truth seeking and attracting typical readers are different skills. People usually aren’t great at both. A community that emphasizes and rewards attracting will tend to get issues wrong and alienate rational people.
I got to a major, motivating point (“a bias where long criticism is frequently ignored”) in the third sentence. If someone is unable to recognize that as something to care about, or gets bored before getting that far, then I don’t think they’re the right audience for me. They could also find out about “Method: Reply to the First Important Error” by reading the bullet point outline.
I read far worse writing all the time. It’s not a big deal. Readers should be flexible and tolerant, learn to skim as desired, etc. They should also pick up on less prominent quality signals like clarity.
Any time I spend on polishing means less writing and research. I write or edit daily. I used to edit/polish less and publish more, and I still think that might have been better. There are tradeoffs. I now have a few hundred thousand unpublished words awaiting editing, including over 30,000 words in EA-related drafts since I started posting here.
I’m also more concerned with attracting especially smart, knowledgeable, high-effort readers than attracting a large number of readers. Put another way, the things you’re asking for are not how I decide what articles or authors to read.
Anyway, I appreciate the feedback. I intentionally added some summary to some articles recently, which I viewed as similar to an abstract from an academic paper. I’m not necessarily against that kind of thing, but I do have concerns to take into account.
I must admit that I am aiming for a different approach: writing stuff adapted to human psychology.
I don’t go from postulates like “Articles should be evaluated primarily by whether people can refute what the article says or not” or “Readers should be flexible and tolerant, learn to skim as desired, etc.” It would be very nice if people did that. But our brains, although they can learn that to some extent with good educational methods and the right incentives, just didn’t really evolve for doing stuff like that, so I don’t expect people to do it.
Reading text which is long, abstract, dry, remote from our daily environments, and with no direct human interaction is possible, but it’s akin to swimming against the current: if there’s a good reason to do it, I will, but it will be much harder. And I need to know what I can get out of it—with a decent probability.
I guess that’s one reason people tend to ignore what science says: it’s boring. It has a “reader-deterring style” as one paper puts it.
I really recommend this paper by Ugo Bardi that explains why this contributes to the decline of science:
The human mind has limits. So, how to make a mass of concepts available outside the specific fields that produced them? One option is to make them “mind-sized”. It implies breaking down complex ideas into sub-units that can be easily digested.
Science is, after all, a human enterprise and it has to be understood in human terms, otherwise it becomes a baroque accumulation of decorative items. [...]
Scientific production and communication cannot be seen as separate tasks: they are one and the same thing.
The brain is better at processing stuff that is concrete. Visual stuff like pictures. Metaphors. Examples. Bullet points and bolding. There’s a much better chance that people read things that the brain can process easily—and it’s useful even for your readers who are able to read dry stuff.
I think you’re mistaken about evolutionary psychology and brains, but I don’t know how to correct you (and many other people similar to you) because your approach is not optimized for debate and (boring!?) scholarship like mine. That is one of many topics where I’d have some things to say if people changed their debate methodology, scholarly standards, etc. (I already tried debating this topic and many others in the past, but I found that it didn’t work well enough and I identified issues like debate methodology as the root cause of various failures.)
I also agree with and already (try to) do some of what you say. I have lots of material breaking things into smaller parts and making it easier to learn. But there are difficulties, e.g. when the parts are small then the value from each one individually (usually) becomes small too. To get a big result people have to learn many small parts and combine them, which can be hard and require persistence and project management. You’re not really saying anything new to me, which is fine, but FYI I already know about additional difficulties which it’s harder to find answers for.
The brain is better at processing stuff that is concrete. Visual stuff like pictures.
I’m personally not a very visual thinker and I’m good at abstract thinking. This reads to me as denying my lived experience or forgetting that other types of people exist. If you had said that the majority of people like pictures, then I could have agreed with you. It’s not that big a deal – I’m used to ignoring comments that assume I don’t exist or make general statements about what people are like which do not apply to me. I’m not going to get offended and stop talking to you over it. But I thought it was relevant enough to mention.
I think you’re mistaken about evolutionary psychology and brains, but I don’t know how to correct you (and many other people similar to you) because your approach is not optimized for debate
I’m actually interested in that—if you have found sources and documents that provide a better picture of how brains work, I’d be glad to see them. The way I work in debate is that if you provide something that explains the world in a better way than my current explanation, then I’ll use it.
I’m personally not a very visual thinker and I’m good at abstract thinking. This reads to me as denying my lived experience or forgetting that other types of people exist.
Ok, I didn’t mean that everybody is like that, I was making a generalization. Sorry you took it that way. What I had in mind was that when you see something happening in front of you, it sticks much better than reading about it.
I’m actually interested in that—if you have found sources and documents that provide a better picture of how brains work, I’d be glad to see them. The way I work in debate is that if you provide something that explains the world in a better way than my current explanation, then I’ll use it.
I have already tried telling people about evolutionary psychology and many other topics that they are interested in.
I determined that it mostly doesn’t work due to incorrect debate methodology, lack of intellectual skills (e.g. tree-making skills or any alternative to accomplish the same organizational purposes), too-low intellectual standards (like being dismissive of “small” errors instead of thinking errors merit post mortems), lack of persistence, quitting mid-discussion without explanation (often due to bias against claims you’re losing to in debate), poor project management, getting emotional, lack of background knowledge, lack of willingness to get new background knowledge mid-discussion, unwillingness to proceed in small, organized steps, imprecision, etc.
Hence I’ve focused, as a priority, on topics which I believe are basically necessary prerequisite issues before dealing with the other stuff productively.
In other words, I determined that standard, widespread, common sense norms for rationality and debate are inadequate to reach true conclusions about evolutionary psychology, AGI, animal welfare, capitalism, what charity interventions should be pursued, and so on. The meta and methodological issues need to be dealt with first. And people’s disinterest in those issues and resistance to dealing with them is a sign of irrationality and bias – it’s part of the problem.
So I don’t want to attempt to discuss evolutionary psychology with you because I don’t think it will work well due to those other issues. I don’t think you will discuss such a complex, hard issue in a way that will actually lead to a correct conclusion, even if that requires e.g. reading books and practicing skills as part of the process (which I suspect it would require). Like you’ll make an inductivist or justificationist argument, and then I’ll mention that Popper refuted that, and then to resolve the issue we’ll need a whole sub-discussion where you engage with Popper in a way capable of reaching an accurate conclusion. That will lead to some alternatives like you could read and study Popper, or you could review the literature for Popper critics who already did that who you could endorse, or you could argue that Popper is actually irrelevant, or there are other options but none are particularly easy. And there can be many layers of sub-issues, like most people should significantly improve their reading skills before it’s reasonable to try to read a lot of complex literature and expect to find the truth (rather than doing it more for practice), and people should improve their grammar skills before expecting to write clear enough statements in debates, and people should improve their math and logic skills before expecting to actually get much right in debates, and people should improve their introspection skills before expecting to make reasonably unbiased claims in debates (and also so they can more accurately monitor when they’re defensive or emotional).
I tried, many times, starting with an object level issue, discussing it until a few errors happened, and then trying to pivot the discussion to the issues which caused and/or prevented correction of those errors. I tried using an initial discussion as a demonstration that the meta problems actually exist, that the debate won’t work and will be full of errors, etc. I found basically that no one ever wanted to pivot to the meta topic. Having a few errors pointed out did not open their eyes to a bigger picture problem. One of the typical responses is doing a quick, superficial “fix” for each error and then wanting to move on without thinking about root causes, what process caused the error, what other errors the same process would cause, etc.
Sorry you took it that way.
This is an archetypical non-apology that puts blame on the person you’re speaking to. It’s a well known stereotype of how to do fake apologies. If you picked up this speech pattern by accident because it’s a common pattern that you’ve heard a lot, and you don’t realize what it means, then I wanted to warn you because you’ll have a high chance of offending people by apologizing this way. I think maybe it’s an accident here because I didn’t get a hostile vibe from you in the rest; this one sentence doesn’t fit well. It’s also an inaccurate sentence since I didn’t take it that way. I said how it reads. I spoke directly about interpretations rather than simply having one interpretation I took for granted and replied based on. I showed awareness that it could be read, interpreted or intended in multiple ways. I was helpfully letting you know about a problem rather than being offended.
I feel like we are starting to hit a dead-end here, which is a pity since I really want to learn stuff.
The problem is:
I am interested in learning concrete stuff to improve the way I think about the world
You point out that methodology and better norms for rationality and debate are necessary to get a productive conversation (which I can agree with, to some extent)
Except I have no way of knowing that your conclusions are better than mine. It’s entirely possible that yours are better—you spent a lot of time on this. But I just don’t have the motivation to do the many, many prerequisites you asked for, unless I’ve seen from experience that they provide better results.
This is the show don’t tell problem: you’ve told me you’ve got better conclusions (which is possible). But you’ve not shown me that. I need to see that from experience.
I may be motivated to spend some time on improving rationality norms, and change my conclusions. But not without a (little) debate on some concrete stuff that would help me see that I can improve.
How about challenging my conclusion that energy depletion is a problem neglected by many, and that we’re starting to hit limits to growth? We could do that in the other post you pointed to.
This is an archetypical non-apology that puts blame on the person you’re speaking to.
True. It was a mistake on my part. It’s just that the sentence “I’m used to ignoring comments that assume I don’t exist” felt a bit passive-aggressive, so I got passive-aggressive as well on that.
It’s not very rational. I shouldn’t have done that, you’re right.
How about challenging my conclusion that energy depletion is a problem neglected by many, and that we’re starting to hit limits to growth?
OK, as a kind of demonstration, I will try engaging about this some, and I will even skip over asking about why this issue is an important priority compared to alternative issues.
First question: What thinkers/ideas have you read that disagree with you, and what did you do to address them and conclude that they’re wrong?
First, most of what I’m saying deeply challenges what is usually said about energy, resources or the economy.
So the ideas that disagree with me are the established consensus, which is why I’m already familiar with the counter-arguments usually put forward against energy depletion:
We’ve heard about it earlier and didn’t “run out”
Prices will increase gradually
Technology will improve and solve the problem
We can have a bigger economy and less energy
We’ll just adapt
So in my post I tried my best to address these points by explaining why ecological economists and other experts on energy and resources think they won’t solve the problem (and I’m in the process of writing a post focused more explicitly on addressing these counter-arguments).
I also read some more advanced arguments against what these experts said (debates with Richard Heinberg, articles criticizing Jean-Marc Jancovici). But each time I’ve seen limits to the reasoning. For instance, in what was said against the Limits to Growth report (it turns out most criticism didn’t address the core points of the report).
I’m not aware of any major thinker who is fluent on the topic of energy and its relationship with the economy and optimistic about it. However, the most knowledgeable person on this that I found was Dave Denkenberger, director of ALLFED, and we had a lot of exchanges, where he put forward some solid criticism of what I said. For some of what I wrote, I had to change my mind. For some other stuff, I had to check the literature, and I found limits that he didn’t take into account (like on investment). This was interesting (and we still do not agree, which I find weird). But I tried my best to find reviewers who could criticize what I said.
Preliminaries:
I think the following rules, to supplement EA’s discussion norms, would make the debate better. CB, would you agree to them?
No misquotes.
No editing or deleting messages.
Clarifying details: Any text in quotation marks, or blockquoted, should be a 100% exact quote (and not misleading, taken out of context, or misattributed) if a reasonable reader might expect it to be a (direct, literal) quote. Basically, use copy/paste for quotes and then don’t change them. Losing formatting (e.g. italics or links) when quoting is OK because, as far as I know, EA’s software has no reasonable way to quote with formatting (though some text editors, like Ulysses, can help).
Ok, this is fine by me. I didn’t know you could create posts for debates, good to know :)
Maybe I should add you as a co-author to the post so that you get notifications for replies to it? Is that OK?
Oh, yes, good idea
This was a good debate. A bit long to read, but I tend to agree that we should focus more on methodology. Excellent epistemology depends on an excellent methodology, I believe. I agree completely about the point raised by Elliot Temple about errors. We should fix typos for aesthetics and ease of reading, but small errors should be corrected as soon as possible. New posts in this Forum that are not upvoted much should surface for more views, so we could expect more engagement with content not currently being discussed and debated.
I disagree with multiple things but let’s focus on bias. If millions of people used your approach (as written, not actually doing all the same things as you), do you think that would work well, or would bias be a widespread problem? In other words, if lots of people say things like “I already try to do all you said in my mind” and “I feel like I have a track record of changing my viewpoint”, what sort of overall results do you expect?
Well, while I feel that this way of doing things gives, at least for me, higher quality results compared to before, I don’t really know if it’s suited to how millions of people think.
Bias would probably be a problem—but I have trouble seeing how to fix that in a systematic way. I’ve read a lot about bias so I try to be aware of these: when I see a pattern of thought in my brain that matches a bias, I try to compensate for that.
However, the way I work wouldn’t really match how most people usually do things. There is stuff that the brain does that I find really difficult to tackle on a wide scale.
For instance, I don’t really see how to change:
The tendency to reject really scary and frightening information that challenges deeply how we see the world and elicits really negative feelings. I can personally handle such feelings (not sure why), but for many people this would be too overwhelming, and I understand why.
Examples include research on collapse, wild animal suffering, or dealing with the fact that our industrial civilization has a net negative impact on the world when you include factory farming.
There is also some stuff that can really threaten our sense of identity.
The fact that most people make decisions based on their senses (how they feel and see the world around them), and less based on abstract thoughts. It makes sense since we evolved in the natural world, but it means most people don’t act on threats that are distant in time and space.
It’s why tackling climate change before seeing negative consequences around us is extremely hard—it’s also why there was more concern about climate after heatwaves.
The fact that you need a lot of time to do the research I mentioned, and most people don’t have the time since they have much more pressing matters to address (food, providing for their family, handling daily life).
The fact that challenging the consensus is hard to do. As this article puts it, “consensus is our tribal glue”. Acknowledging something very different from the consensus (e.g. that there are limits to growth) means rejecting not only the familiar but something that may have embodied our status, past efforts, our hopes and even our collective mythology.
Now, all of this stuff makes sense from an evolutionary perspective—we didn’t evolve to find truth, although some weird people try to get there. But I don’t see a way to get millions of people to change their approach there (let alone design debate rules or institutions that would enforce that).
I still think that using the method I try to apply would provide better results overall (certainly not optimal, but better). But I don’t really know how to make this way of thinking widespread—I actually don’t even think it’s possible.
My initial focus is on getting you to make a change (to use some written rationality policies), or more broadly getting a small number of interested people to change who post on rationality-related internet forums. Maybe it could spread from there but I’m not concerned about spreading it to the masses until after I figure out how to spread it to tens of people and then see how well it works out for them.
I have experience with other people saying this kind of thing about personally being able to avoid bias. This applies not only with the population in general, but also with the kind of people who post on EA and have read some books and articles about being unbiased. In other words, I find that many EA type people think that they are significantly less biased than most people, but I think most of them have major biases they aren’t seeing.
I have both theoretical arguments and practical experiences telling me that most of them, even at EA, are mistaken about themselves. Statistically, these kinds of claims are usually wrong. Do you agree or disagree?
(I’m counting partial bias as being wrong – many of those people are less biased than average, but there are still significant bias issues. It’s a significant problem rather than a non-problem, so I count them as mistaken.)
Based on the information I have about you, from my perspective, I should not trust you about your lack of bias. I should assign > 50% probability to you being mistaken. Do you agree or disagree?
Sure, it’s very possible that I am biased. It’s very hard not to be.
And I’ve never really seen good advice on how to avoid bias besides ‘read about bias so you can be aware of it when it happens’.
Which is why I try to get some feedback. When faced with criticism, I try to reach the point where every item of criticism gets resolved in one of two ways:
Either I have conceded that my view was not correct on some point,
Or the other person has no counterpoint to what I said.
Not easy, though.
However, if I’m biased, I just want to be told in what way, and on what specific points with examples. Otherwise I cannot improve.
I have developed several other pieces of advice about how to overcome bias. The one I’ve been talking about is pre-commit to written rationality policies and then have transparency when following them. A debate policy is one example but I don’t think it’s that hard to come up with others. Here’s one I just made up: “Read one article per week by someone from a rival tribe”. You could write that policy down, in public, and then follow it, and post each article you read so that there’s transparency: observers can see that you’re following it.
(You could get more extreme transparency by livestreaming yourself reading the articles, but probably you could just post links to them when you read them and most people would believe you. Also it’d be pretty hard to fool yourself about whether or not you read the articles. And it’s mostly just the people who are fooling themselves with their bias who I think matter. The people who are purposefully, consciously lying are a small minority who I don’t want to worry about.)
Getting back on topic: when I suggested written rationality policies as another way to help deal with bias, no one from EA wanted to do it (so far), nor did anyone give an argument that refutes it and explains why it’s a bad idea, nor did anyone share alternative ways to solve the same problems that they claim are better.
You said:
What I’m trying to say is that I don’t think doing all this stuff in your mind is a good enough approach, and that I think you should use some written policies. That advice isn’t just for other people who are worse at rationality. I think it’s a good idea for you.
Also, btw, even if you were 100% unbiased, I would still recommend using written rationality policies. They can set a good example for others and they can help you create a good reputation by persuading/showing people that you’re approaching things rationally (so they can see for themselves instead of trusting you).
OK, I may give it a try then.
Just a guess: I think one reason people in EA are not totally convinced by the written policies you propose is that, from an outside perspective, it’s not really clear how doing that actually changes things. Now, I’m sure that from your own perspective, it really had an impact for you. Which is great!
But for outsiders the benefits aren’t obvious. Your debate policy, for instance, seems useful for busy people who don’t know when to stop debating and who get many debate requests, but are otherwise doing well. OK, but very few people can relate to that.
Maybe something you could improve in your articles (although I must admit I’ve only read 2 of your EA forum posts, your debate policy, and 2-3 others) is to give examples of policies that really appear to make a difference, even from the outside. You could even give an example of how you went from “biased” to “less biased”. People love stories and examples.
It’d be good to have “templates”: debate policies that you propose and that others can adopt on the spot. Maybe you already wrote such an article, but I didn’t see it in what I read, so it should be more prominent.
In my opinion, the most valuable line you wrote in your last comment was “Read one article per week by someone from a rival tribe”. It’s direct, I can use it right now, and I can see the point. If you propose straightforward stuff like that right away, I’m sure more people could relate to your suggestions.
Anyway, do you have a link with examples of debate policies that in your opinion alleviate bias? I’ll try to apply that.
Our perspectives and ways of thinking are very different. I find it confusing that you value the examples more than the concepts. And I find it confusing that you ask for more examples instead of just thinking of some yourself. I guess you can’t? Which, to me, indicates that you didn’t understand the concepts involved. But you don’t seem to be aiming to understand the concepts better.
I don’t think anyone will adopt my suggested policies without understanding the concepts, but I could be wrong. I’m also not sure it’s a good idea to adopt policies without understanding the concepts behind them. If you don’t understand the concepts well, then you don’t really understand their purpose, and therefore are likely to do a lot of things which defeat the purpose. Also, you can’t correctly judge if the policy is good without understanding the conceptual reasoning that leads to the policy. And you can’t tell if you’re using a policy right if you don’t understand its purpose well, which is a conceptual issue.
My arguments in favor of policies are conceptual, not about the concretes of specific policies. If someone doesn’t understand the concepts (like the rule of law), and therefore doesn’t understand my arguments, then why would they like or want the policies? Some policies might happen to fit with some pre-existing way of thinking they have, but overall it mostly just won’t work.
And if people systematically ignore ideas they can’t easily use right away without changing their conceptual framework much, and favor ideas that are easy to practice immediately within their current conceptual framework, then that is a huge systematic bias that will prevent people from considering, debating or adopting new, better concepts. It’s a bias favoring the status quo, the already known, the similar, etc. That’s in addition to being a bias against abstractions and concepts, which I think are necessary to being a very effective thinker.
Trying to explain another way, there is the “teach a man to fish” parable. And you seem to want me to give you fish (specific policies) instead of caring about my explanations of how to get your own fish.
Our perspectives are indeed very different.
Why do I like examples, and do I think it’s a good idea to add more?
It’s because, with my way of thinking, I have trouble reasoning in purely abstract terms. In my mind, I build a mental map based on stuff that exists in the real world: energy, materials, nature, relations between people, institutions, emotions, pleasure and suffering, etc. I can manipulate this stuff in my mind and scale it up or down, but it has to start from something I can first recognize in the real world. Concepts don’t stick as much—they’re too abstract, too blurry.
That’s why I have trouble with legal lingo or long equations. The language used is often too remote from reality. I can understand concepts, but first I have to see how they apply to the real world—like a direct example of what a law will actually do to a person.
This is why I like to have examples. You are telling me that the concept of debate policy is sound. I can understand that—and I think I understand the theory behind what you are saying. But I have no idea how to put that into practice, because what you say is not linked to actual actions I can take.
To continue the “teach a man to fish” parable, it’s not that I want you to just give me a fish. I want you to show me what a fish looks like, and show me different types of fish so I can learn to recognize them (and then, eventually, catch them).
This is the first time I’ve encountered the concept of a “debate policy”, and the only example I have, your own debate policy, is not suited to what I do. So I’d like to see other examples of such policies, and examples of how they would actually play out in a conversation.
I think your argument would be more persuasive with that.
Thanks for sharing. I am curious to know more:
Do you consider that a difference in style or a weakness? If it’s a weakness, is it super important or only somewhat important?
Are you trying to change it? Do you think it can be changed?
That was helpful for understanding your perspective.
I’m concerned that most people on EA are too intolerant or incurious to talk to people with large differences in perspective. The result is basically that if they don’t see the value of something quickly, and they also won’t debate, then there’s no way to tell it to them. The result of that is that EA keeps a bunch of biases and errors, unnecessarily, because it’s not open to some types of criticism. I appreciate that you’re being more friendly and open minded than others. Unfortunately, I don’t think posting examples of rationality policies will be persuasive to most people who don’t currently have goals like being open to debate more effectively. Most of them seem content to dismiss me and take the risk that they’re in the wrong and that, due to their actions, they are preventing the disagreement from being resolved. They don’t seem to understand or mind the risk of betting on being right about important issues they ignore some criticism about (with their careers and millions of dollars used less effectively if they’re wrong). Unless they actually want policies to address that risk – unless it’s a problem they want to solve – then I don’t think example solutions will work. Disagreements about goals have to be dealt with before methods of achieving those goals, I think. Does that perspective on some of the difficulties (for my project of reforming EA) make sense to you?
I see it as both a weakness and a strength. A weakness in the sense that it’s hard for me to do stuff that requires complex equations with abstract terms. This includes, for instance, most of physics, post-graduate maths, optics, mechanics, chemistry, and sections of economics like finance or accounting. I don’t think it can really be changed. I can do stuff like that, but it’s hard, abstract, sluggish and demotivating, and I don’t stick with it long. This is why I usually never do calculations myself.
But I’m still interested in understanding how the world works. So this forces me to find ways to understand all this stuff by mapping how it applies to the real world.
For instance, I have trouble understanding explanations of economics as it’s done in finance, with maths everywhere and shares and credit and stuff like that. But these are layers of abstraction over the real world. So I try to look directly at the economy from a biophysical perspective. For instance, seeing money as a claim on goods and services, which require materials and energy, meaning money is a claim on natural resources. Or seeing debt as a promise of future goods and services—meaning we run into problems if debt grows faster than the economy. This also means I skip all the weird assumptions many economists make, like perfect markets, infinite substitutability, or prices as indicators of scarcity.
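(A minimal sketch of that last point, with made-up rates: the 5% debt growth and 2% economic growth below are assumptions for illustration, not real data.)

```python
# Toy model: when debt compounds faster than the economy,
# the debt-to-economy ratio rises without bound.
debt, gdp = 100.0, 100.0  # start equal, arbitrary units

for year in range(51):
    if year % 10 == 0:
        print(f"year {year:2d}: debt/GDP = {debt / gdp:.2f}")
    debt *= 1.05  # assumed 5% yearly debt growth
    gdp *= 1.02   # assumed 2% yearly growth in goods and services

# After 50 years the ratio has roughly quadrupled: each unit of real
# output is claimed by about four times as many promises as before.
```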
This is why I also see it as a strength: since I have trouble understanding the math-heavy stuff, I have to find ways to express how all of it applies to reality in simpler terms, which is actually the end goal. It’s clearer in my head, and also more engaging when discussing things with most people.
Oh, yes, to a certain degree, like most people. But less than most people, in my opinion. It’s just that change takes time and effort, and more importantly, a way to be persuasive. It won’t affect everyone, but a portion might be interested.
I think most people in EA would absolutely agree with the goal “reducing bias”, or “being more right” or stuff like that, at least in theory. But I think what they would really like is a straightforward way of doing that, with proven results.
Basically, to see that your approach works. Right now they have no way of knowing whether what you propose provides good results. People tend to ignore problems for which there is no good solution, so, in addition to saying that something is conceptually important, you have to provide the solution.
It’s a bit like overthrowing capitalism: in theory, this might be very important (capitalism has failed time and time again to switch to anything ecologically sustainable—but you can debate that), but since there is no credible pathway for doing it with our current means, most people turn away from it.
That’s why I recommend examples of policies. It’s much easier to sell something that I can use right away. Selling something that I have to build up from scratch, with uncertain results, is much less attractive. You might be interested in that link.
You might also like this Astral Codex post: general criticism usually doesn’t trigger any change afterwards, whereas pointing out specific things that can be changed has more potential, because we see better what to do (and what we shouldn’t have done).
I think all types of abstract, conceptual, logical or mathematical thinking are learnable skills which are a significant part of what learning about rationality involves. As usual, I have arguments and I’m open to debate.
I have put substantial effort into teaching some of this stuff, e.g. by building from sentence grammar trees (focused on understanding a sentence) to paragraph trees (focused on understanding relationships between sentences) to higher level trees (e.g. about relationships between paragraphs). There are many things people could do to practice and get better. I’ve found few people want to try very persistently, though. Lots of people keep looking around for things where they can have some sort of immediate success, and avoid or give up on stuff that would take weeks (let alone months or years). Also, a lot of people focus their learning mostly on school subjects or stuff related to their career.
I don’t disagree with that. But unfortunately I don’t think the level of tolerance, while above average, is enough for many of them to deal with me. My biggest concern, though, is that moderators will censor or ban me if I’m too unpopular for too long. That is how most forums work and EA doesn’t have adequate written policies to clearly differentiate itself or prevent that. I’ve seen nothing acknowledging that problem, discussing the upsides and temptations, and stating how they avoid it while avoiding the downsides of leaving people uncensored and unbanned. Also, EA does enforce various norms, many of which are quite non-specific (e.g. civility), and it’s not that hard to make an excuse about someone violating norms and then get rid of them. People commonly do that kind of thing without quoting a single example, and sometimes without even (inaccurately) paraphrasing any examples. And if someone writes a lot of things, you can often cherry pick a quote or two which is potentially offensive, especially out of the long discussion context it comes from.
Things like downvotes can be early warning signs of harsher measures. If someone does the whole Feynman thing and doesn’t care what other people think, and ignores downvotes, people tend to escalate. They were downvoting for a reason. If they can’t socially pressure you into changing with downvotes, they’ll commonly try other ways to get what they want.

On a related note, I was disappointed when I found out that both Reddit and Hacker News don’t just let users vote content to the front page and leave it at that. Moderators significantly control what’s on the front page. When the voting plus algorithm gets a result they like, they leave it alone. When they don’t like the result, they manually make changes. I originally, naively thought that people setting up a voting system would treat it like an explicit written policy guarantee – whatever is voted up the most should be on top (according to a fair algorithm that also decays upvotes based on age). But actually there are lots of unwritten, hidden rules, and people aren’t just happy to accept the outcome of voting.

(Note: Even negative karma posts sometimes get too much visibility on small forums or subreddits, thus motivating people to suppress them further because they aren’t satisfied with the algorithm’s results. Some people aren’t like “Anyone can see it’s got −10 karma and can then make a decision about whether to trust the voters or investigate the outlier or what.” Some people are intolerant and want to suppress stuff they dislike.)
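(For concreteness, here’s a rough sketch of the kind of age-decaying ranking formula I mean. The shape and constants are my assumptions for illustration, not any site’s actual published algorithm.)

```python
def rank_score(upvotes: int, age_hours: float, gravity: float = 1.8) -> float:
    """Votes push a post up; age pulls it down. Higher gravity = faster decay.

    Illustrative guess only -- the real Reddit/HN ranking code is partly
    hidden and manually overridden, which is the point: an explicit
    written formula like this would act as a policy guarantee.
    """
    return (upvotes - 1) / (age_hours + 2) ** gravity

# Under a transparent formula, a -10 karma post simply ranks at the bottom;
# everyone can see why, with no hidden manual suppression needed.
posts = {"fresh, 50 votes": rank_score(50, 1),
         "two days old, 200 votes": rank_score(200, 48),
         "negative karma": rank_score(-10, 1)}
for name, score in sorted(posts.items(), key=lambda kv: -kv[1]):
    print(f"{score:8.3f}  {name}")
```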
I don’t know somewhere else better to go though. And my own forum is too small.
I will post more examples. I have multiple essays in progress.
Broadly, if EA is a place where you can come to compete with others at marketing your ideas to get social status and popularity, that is a huge problem. That is not a rationality forum. That’s a status hierarchy like all the others. A rationality forum must have mechanisms for unpopular ideas to get attention, to disincentivize social climbing behaviors, to help enable people to stand up to, resist or call out social pressures, etc. It should have design features to help attention get allocated in other ways besides whatever is conventionally appealing (or attention grabbing) to people that marketing focuses on.
One of the big things I think EA is missing – and I have the same complaint about basically everyone else (again it’s not a way EA is worse) – is anyone who takes responsibility for answering criticism. No one in particular feels responsible for seeing that anyone answers criticism or questions. Stuff can just be ignored, and if that turns out to be a mistake, it’s no one’s fault, no one is to blame, it was no one’s job to have avoided that outcome.

And there’s no attempt to organize debate. I think a lot of debate happens anyway, but it’s systematically biased to be about sub-issues instead of questioning people’s premises like I do. Most people learn stuff (or specialize in it) based on some premises they don’t study that much, and then they only want to have debates and address criticism that treats those premises as givens, like they’re used to; but if you challenge their fundamental premises, they don’t know what to do, don’t like it, and won’t engage. And the lack of anyone having responsibility for anything, combined with people not wanting to deal with fundamental challenges, results in EA being fundamentally wrong about some issues and staying that way. People tend not to even try to learn a subject in terms of all levels of abstraction, from the initial premises to the final details, so then they won’t debate the other parts because they can’t, which is a big problem when it’s widespread.

E.g. all claims about animal welfare, AI alignment, or clean water interventions depend in some way on epistemology. Most people who know something about factory farms do not know enough to defend their epistemological premises in debate. Even if they do know some epistemology, it’s probably just Bayesian epistemology, and they aren’t in a position to debate with a Popperian about fundamental issues like whether induction works at all; and they haven’t read Popper, and they don’t want to read Popper, and they don’t know of any literature which refutes Popper that they can endorse, and they don’t know of any expert on their side who has read Popper and can debate the matter competently … but somehow that’s OK with them instead of seeming awful. Certainly almost everyone who cares about factory farms would just be confused instead of thinking “omg, thanks for bringing this up, I will start reading Popper now”. And of course Popperian disagreements are just one example of many. And even if Popper is totally right about epistemology and Bayes is wrong, what difference does that make to factory farming? That is a complex matter, and it’d take a lot of work to make all the updates; a lot of the relevance is indirect and requires complicated chains of reasoning to get from the more fundamental subject to the less fundamental one. But there would very likely be many updates.
It’s too much to ask for. We live in an inadequate society as Yudkowsky would say. Rationality stuff is really, really broken. People should be happy and eager to embark on speculative rationality projects that involve lots of hard work for no guaranteed results – because the status quo is so bad and intolerable that they really want to try for better. Anyone who won’t do that has some kind of major disagreement with not only me but also, IMO, Yudkowsky.
One way to see that my approach works is that I will win every single debate, including while making unexpected, counter-intuitive claims and challenging widely held EA beliefs. But people mostly won’t debate, so it’s hard to demonstrate that. Also, even if people began debates, they would mostly want to talk about concrete subjects like nutrition or poverty, not about debate methodology. But debating debate methodology basically has to come first, followed by debating epistemology, because the other stuff is downstream of that. If people are reasonable enough and acknowledge their weaknesses and inabilities, you can skip a lot of stuff and still have a useful discussion; but what will end up happening with most people is they make around one basic error per paragraph or more, and when you try to point one out they make two more when responding, so it becomes an exponential mess which they will never untangle. They have to improve their skills and fundamentals, or be very aware of their ignorance (like some young children sorta are), before they can debate hard stuff effectively. But that’s work.

By basic errors I mean things like writing something ambiguous, misreading something, forgetting something relevant that they read or wrote recently, using a biased framing for an issue, logical errors, mathematical errors, factual errors, grammar errors, not answering questions, or writing something different than what they meant. In a world where almost everyone makes those types of errors around once a paragraph or more, in addition to being biased, and also not wanting to debate … it’s hard. Also, people frequently try to write complex stuff, on purpose, despite lacking the skill to handle that complexity, so they just make messes.
The other way to see it works, besides debating me, is to consider it conceptually. It has reasoning. As best I know, there are criticisms of alternatives and no known refutations of my claims. If anyone knows otherwise they are welcome to speak up. But that’d require things like reviewing the field, understanding what I’m saying, etc. Which gets into issues of how people allocate attention and what happens when no one even tries to refute something because a whole group of people all won’t allocate attention to it and there’s no leader who takes responsibility for either engaging with it or delegating.
Well, that was more than enough for now, so I’ll just stop here. I have a lot of things I’d be interested in talking about if anyone was willing, and I appreciate that you’re talking with me. I could keep writing more, but I already wrote 4600 words before these 1800, so I really need to stop now.
Wow, that was really interesting. Let me answer that.
Ok, this is a very good claim. I find it a very useful insight. Since it’s “no one’s job” and everything is decentralized, it’s hard for useful feedback to reach the people who could use it. And there are no negative consequences for being wrong about stuff, so it keeps going.
I really have trouble seeing how to fix that, however (are there some movements out there where this is the case?).
But it’s worth making a post about it, I think. In the form of “EA should...”.
Yes, it can be learned—and I had to learn it in engineering school. But I didn’t like it. As I said, I found doing it sluggish, boring and demotivating. I can do it if I am forced to, but it makes me lose motivation—and keeping my motivation is very important for me to stay active. If learning about energy were purely abstract thinking, I simply wouldn’t bother, even if it’s important. So I prefer to play to my strengths—that’s more sustainable for me.
It’s true that this should be the case, I agree. However, I am not certain the EA Forum is a “rationality forum” per se. Rationality is important there, but it’s not a place where you debate rationality itself.
Less Wrong is a rationality forum. There are people debating rationality and stuff like that, so very abstract discussions about whether Bayes is good make sense there. Have you tried posting on Less Wrong? Maybe your content would receive more interest there?
The EA Forum, however, feels like a place more directly linked to action. You propose stuff related to action, like new causes, you give status updates, you explain why the prioritization of some stuff should change… Some of it can be abstract, like discussions on longtermism, but there is a general vibe of “OK, what do we do about this information? How can it help us act better?”.
For instance, I had some useful feedback about my energy post: one person said it wasn’t totally fitting the expected content of the forum. People here don’t expect broad reports about entire topics (no matter how important).
Instead, what he suggested was to make smaller posts, each about one specific point (like “EA models of the future are missing a scenario where we fail at the energy transition”, with the causal reasoning). What’s important here is that there is only one matter at hand, with something actionable (we should do this specific thing, and here is why it could help us do good better).
I’ve never really heard of anyone here being banned unless they wrote some really bad stuff (like accusing someone of malevolence—we shouldn’t accuse people of bad faith or malevolence here). So I wouldn’t worry too much about that.
Even browsing your history, I didn’t find that much stuff that was heavily downvoted (you had one post at −10 disagreement, but it didn’t affect your karma, which was at +2; I can think of very few forums that register disagreement without downvoting).
I made a little list of feedback based on what I read in your posts. You are free to use it or not; I’m just listing stuff that came to mind—I’ve only read a sample of your writings, so it might not apply to everything.
All posts should try to answer the question “How can we do good better?”.
Right now, it isn’t necessarily obvious how your posts answer this question. It might feel off-topic (although I understand why you think it’s on-topic).
You said that you won every debate, and that your way of doing things works, but I have no way of knowing that from the outside.
One interesting thing to publish would be an example of “how could your reasoning method improve the way we’re currently doing stuff in EA?”. For instance, you said Popper could improve how animal activists work. You could provide an example of something specific that could be improved in animal welfare advocacy by using the Popperian method, showing why it is superior.
Don’t try to sell a tool or a solution by itself—show that you get better results this way (using examples). If it works, then some people will try the solution.
There’s too much to read, and people don’t have extensive time to engage with everything. Try to be succinct.
One of your posts takes 22 minutes to say that people shouldn’t misquote. That’s a rather obvious conclusion that could be presented in 3 minutes tops. I think some people read it as a rant.
Use examples showing why the topic is important (or even stories). It lets you link your arguments to something that exists.
You can think with purely abstract stuff—but most people are not like that. A useful point to keep in mind is that you are not your audience. What works for you doesn’t work for most other people. So adapting to other reasoning styles is useful.
Make specific points that are actionable.
I agree that in theory, rational people should spend lots of time fixing very difficult problems with very uncertain payoffs (like speculative rationality projects). But that’s not how things work. Our time and motivation are limited, so this is probably an unreasonable assumption.
Assume instead that they have limited time and judge what they read by the criterion “How is this useful to me?”. If they don’t see something they can do with your information, they won’t act on it.
As for me, right now, I understand your line of thinking about biases and the importance of having better rationality, but I still fail to see what I can do with this information. Chances are that it will float in some part of my mind, but I won’t act on it when I should.
Improve the readability of your posts. There is a lot of text, it’s hard to see the structure, and it’s hard to skim.
This post provides good suggestions. “Assume your audience is smart, but has limited bandwidth”
Use bullet points, headings, and bolding. They are good.
Make people reach their own conclusions—ask questions.
You said that people tend not to know what to do when you challenge one of their personal premises. That’s normal. When their core beliefs are challenged, people tend to get defensive.
However, that doesn’t usually happen if they reach the conclusion by themselves. People learn much better when they are asked questions and have to spell out the conclusion themselves. This is why the Socratic method is of interest. So ask questions.
Antagonizing people is easy, even by accident. I’m not saying you are doing that, but it’s still very important, so I’m adding it just in case. It’s important not to put people on the defensive, and not to use an accusatory tone. It’s important to try to understand why the other person thinks that way, to show that you get it and agree to some extent, but still suggest improvements.
A good book on that topic is How to Win Friends and Influence People by Dale Carnegie—I don’t like the title, but the content is still very useful. Plus, it works.
One of the main concerns you appear to have is that EA could be better at doing rationality. It could have better conclusions, and better premises. I agree! But it’s up to us to find ways to do that. Rationality is about finding the best way to adapt to reality.
What follows logically, then, is: what is the most effective way of making EA better? I don’t have a good answer to that yet, but that’s what I will try to answer. And if I have to learn about stuff like communication or psychology to find ways to be more effective, well, I will have to do that.
Not gonna debate this right now (unless maybe you wanted to focus on this topic instead of others), but I wanted to clarify: when I said it’s learnable, I meant learnable in a way where you like it, don’t have motivation problems, aren’t bored, it isn’t sluggish, and everything works well. Those things you talk about are serious problems – they mean something (fixable) is going wrong. That’s what I think.
Thanks. I appreciate work people do that facilitates me getting along with more EAs better, so that I can better share potentially valuable ideas with EA.
Yeah, I don’t expect anyone to trust that or to look through tens of thousands of pages of discussion history (which is publicly available FWIW). And I don’t know of any way to summarize past debates that would be very persuasive. Instead, all I really want is for at least one person from EA to be willing to debate; and if I get a good outcome, then a second person should become willing to debate, and so on. And e.g. if I get to 5 good debate outcomes with EA people, then a lot more people ought to start paying some attention, considering my ideas, etc. It should be possible to get attention from EA people through a series of debates, without doing marketing, making friends, or other social climbing. And starting with one at a time is fine, but I shouldn’t have to go through hundreds of debates one at a time to persuade hundreds of EAs.
I think that’s a reasonable thing to ask for even if I had no past debate history. But I don’t know of any communities (besides my own) that actually offer it. I think that’s one of the major problems with the world which matters more than a lot of the causes EA works on. Imagine how much more easily EA could do huge amounts of good if just 10% of the charities and large companies were open to debate, and EAs could go win debates with them and then they’d actually change stuff.
I don’t have any quick win for that. Just a potential very very long debate involving learning a ton of ideas which could potentially lead to EAs changing their mind about some of these beliefs. I have long, complicated arguments regarding other EA topics too, such as AI alignment (which again depends significantly on epistemology, so Popper is relevant). I’ve been interested in talking about AI alignment for years but I don’t know any way to get anyone on the other side of the AI alignment debate to engage with me seriously.
I often get results that I consider better, but which other people would evaluate differently, or wouldn’t know how to evaluate, or wouldn’t be able to replicate without learning a lot of the background knowledge I have. When people have different ideas, it often means the way of evaluating outcomes itself has to be debated/discussed – which partly means talking about concepts, abstractions, philosophy, etc. And then the specific evaluations can require a lot of discussion and debate too. So you can’t just show an outcome – there has to be substantial discussion for people to understand.
I’m highly confident that EAers broadly disagree with me on that topic, which is why I wrote that article. It’s not obvious. It’s controversial. And I believe it’s an ongoing, major problem on the forum that is not being solved.
It’s related to another article I’m considering writing, which would claim basically that raising intellectual standards would significantly improve EA’s effectiveness. Widespread misquoting, plus widespread not really caring about or minding misquotes, is one example of EA having intellectual standards that are too low. Low intellectual standards have negative consequences for having accurate views about the world and figuring out the right conclusions about various causes. And they also make it extremely hard to have productive debates about hard issues, especially when there’s significant culture clash or even unfriendliness.
In general, you need either friendliness or high standards to have a productive discussion or debate. It’s super hard with neither. And friendliness towards critics with significant outsider/heresy ideas is rare in general. I think EA has more of that friendliness than typical, but not nearly enough to replace high intellectual standards when dealing with major differences in ideas.
I know that I often antagonize people by accident. I’m not going to deny that or feel defensive about it. It’s a topic I’m happy to talk about openly, but IME other people often don’t want to. I have sometimes been accused of being mean, at which point I ask for quotes, at which point they usually don’t want to provide any quotes, or occasionally provide a quote they don’t want to analyze. Anyway it’s a difficult problem which I have worked on.
I don’t have a plan that I particularly expect to work, but I have a few things to try. One plan is getting people to debate or, failing that, to talk about issues like why debating matters. Another plan is to get a handful of people to take an interest, discuss stuff at length, learn more about my ideas, and then help change things. Another plan (that I’ve already been working on for 20 years) is to write good stuff – at EA or even just at my own websites – and maybe something good will happen as a result.
I think I’m aware of a bunch of problems and difficulties that you aren’t familiar with, which make the problem even harder. For example, I have objections to a lot of the psychology and marketing stuff you mention. But anyway, to summarize, I know something about debating issues rationally but less about getting anyone to like me or listen. One of the main problems is social hierarchies, and in very short I think any plan involving social climbing is the wrong plan. Eliezer Yudkowsky also has a lot of negative things to say about social hierarchies but unfortunately I don’t see that reflected in the EA or LW communities – I fear that no one figured out much about how to turn criticism of social hierarchies into action to actually create different types of communities.
Also, when you have conclusions that rely on different background knowledge than your audience has, it’s very hard to explain them in the short form people want and expect, while also making them rationally persuasive (which requires explaining a lot of things people don’t already know, or else they should not find it persuasive without debating, discussing or studying it first to find out more).
On a few other points:
I think something that could help (maybe) is making the other person feel understood. Showing that you understand where they come from, that what they say really makes sense from their perspective, but that you have found some other way of seeing things that also makes sense.
Direct accusations of doing stuff poorly rarely work, and come off as judgemental. It’s better to let people reach the conclusion themselves (have you read How to Win Friends by Dale Carnegie? Not perfect, but it gives some valuable insight).
(Not sure I’m doing that with you, but you don’t seem to need it ^^)
Still, 22 minutes is way too long. I read it for 5 minutes and did not feel it was a valuable use of my time—most of it was on the analogy with “deadnaming”, which I think derailed from the topic. It also greatly needed structure, like an executive summary, or: 1) here’s an example of a misquote leading to a bad outcome, 2) misquoting in general poses problems, 3) the EA Forum needs to enforce rules against misquotes (and here’s how).
Wow, this means you could take an entire class of people, including ones who have trouble with maths (with, say, complex equations), and you’d be able to teach them to do maths in ways they like? That would be very impressive! I’d like to learn more; do you have sources on that?
I have multiple types of writing (and videos) related to this:
educational and skill-building materials (e.g. grammar trees, text analysis or tutoring videos)
writing about how learning works (e.g. practice and mastery)
writing about epistemology – key philosophical concepts behind the other stuff
writing about why some opposing views (like genetic IQ) are mistaken
I’ve been developing and debating these ideas for many years, and I don’t know of any refutations or counter-examples to my claims, but I’m not popular/influential and have not gotten very many people to try my ideas much.
In terms of the subject matter itself, math is one of the better starting points. However, people often have some other stuff that gets in the way like issues with procrastination, motivation, project management, sleep schedule, “laziness”, planning ahead, time preference, resource budgeting (including mental energy), self-awareness, emotions, drug use (including caffeine, alcohol or nicotine), or clashes between their conscious ideas and intuitions/subconscious ideas. These things can be disruptive to math learning, so they may need to be addressed first. In other words, if one is conflicted about learning math – if part of them wants to and part doesn’t – then they may need to deal with that before studying math. There are also a lot of people who are mentally tired most of the time and they need to improve that situation rather than undertake a new project involving lots of thinking.
Also most current educational materials for math, like most topics, are not very good. It takes significant skill or help to deal with that.
There is an issue where, basically, most people don’t believe me when I say I have important knowledge, and they won’t listen. Initial skepticism is totally reasonable, but I think what should happen next, from at least a few people, is a truth-seeking process like a debate using rational methods, instead of just ignoring something on the assumption it’s probably wrong, with no attempt to identify any error. That way people can find errors in my ideas, or not, and either way someone can learn something.
Sounds like quite the challenge to learn maths! I can understand why “you need to be really motivated, allocate a lot of time and resources, avoid coffee, alcohol and cigarettes, and solve your sleep, procrastination and emotional problems in order to learn maths” leads to not many people really learning maths!
I wouldn’t count on many people learning these skills in such a context.
And I thought the issue was only that the educational material was poor.
Ok, then I’m not sure learning maths is the most valuable use of my time right now. Especially since I mostly aggregate the work of other experts and I let them do the research and the maths in my stead.
(Although I’d still be interested in the links, in case they prove necessary for my research at some point in the future. Maybe the “how learning works” material could be of interest too.)
Each individual thing is a solvable problem. But, yes, I don’t expect many people to solve a long list of problems. But I still claim it is possible.
Here’s some of the info https://criticalfallibilism.com/practice-and-mastery/
Thanks!
Ok, all of this is interesting. Sorry for the late answer—I got caught up in watching the FTX debacle, where I lost an ongoing project.
I’m going to focus on that here.
This is related to why I was so late in answering: the longer the exchange is, the more you have to reply to. This means the cost of answering, in time and brain resources, gets higher, lowering the probability of an answer. I think this is a reason why many people stop debating at some point.
A useful thing I try to keep in mind is that the brain tries to save energy. It can save energy by automating tasks (habits), by using shortcuts (heuristics), and by avoiding strong conclusions that would lead to a large reorganization in the way it currently does things (for instance, changing core beliefs and methods of reasoning). This avoidance can take the form of finding rationalizations to stuff it already does, or denial.
Of course, it’s not just about energy, since the brain can change its structure if there is a good reason. Motivation is a crucial part of people discussing anything—but for motivation you need a reason to stay motivated. And it’s really not obvious what the motivation is when discussing abstract methodology. What would that reason be?
Most of the time, the reason is direct feedback that it’s doing things wrong, and negative consequences if it doesn’t change. But we don’t get this feedback during an abstract discussion on methodology and epistemics.
There are no examples of feedback in the style of “wow, the way that guy does things really looks better”, as you said.
Social validation doesn’t go our way either here.
You’re not at the top of a social hierarchy.
Plus, we’re not friends, and we’re remote in time and space (the point you made about debates being more conclusive where there is friendliness or high standards was really good, by the way).
Now, getting better and feeling right about something can be motivating to some people (like us). But if there appears to be no good pathway for me to get better, I’ll give up on the conversation, since my brain will see other stuff to do as more appealing (not right now, of course, but at some point).
To prevent that, for me, the reward would be a concrete way to improve how I do things. I can agree with you that we (I) don’t have high enough standards for high-quality discourse, but that doesn’t tell me what to do. My brain cannot change if you don’t point to something specific I can apply (like a method or a rule you can enforce). Debate policies may be a start, but they won’t do if I have no idea what they look like.
We usually don’t learn just by having more theoretical knowledge, although that’s often necessary—most of the time, theoretical knowledge doesn’t influence action (think of “treat others as you would treat yourself”). The kind of knowledge that really sticks and influences action comes from practice—from trying stuff and seeing for yourself how it works. Having the methodological skills you talked about worked for you, so now you try to push them forward. This makes sense. But I can do the same only by trying and testing.
So, what could you provide your debate partners that would be attractive enough to keep them in the debate? I’m afraid that having extremely long discussions about theoretical stuff with no clear reward may be too much to expect.
Now I’m interested. Do you have data that would refute what I said, or that you think would work better?
I don’t mind switching to saying one short thing at a time if you prefer. I find people often don’t prefer it, e.g. b/c dozens of short messages seems like too much. In my experience, people tend to stop discussing after a limited number of back-and-forths regardless of how long they are.
Ok, I understand—so if length isn’t the biggest problem, I guess what might cause more of an issue is that the topic is “theoretical stuff with no clear reward”.
So I guess the main challenge is solving that. Questions like: How can I show that this theoretical stuff can be useful in the real world? What reason might people have to be interested in engaging with me? If I can only have a limited number of replies, how do I make the most of that time, and what are the most valuable ideas, practices and concepts I can push for?
I have answers to this and various other things, but I don’t have short, convincing answers that work with pre-existing shared premises with most people. The difficulty is too much background knowledge is different. My ideas make more sense and can be explained in shorter ways if someone knows a lot about Karl Popper’s ideas. My Popperian background and perspective clashes with the Bayesian perspectives here and it’s not mainstream either. (There are no large Popperian communities comparable to LW or EA to go talk to instead.)
The lack of enough shared premises is also, in my experience, one of the main reasons people don’t like to debate with me. People usually don’t want to rethink fundamental issues and actually don’t know how to. If you go to a chemist and say “I disagree with the physics premises that all your chemistry ideas are based on”, they maybe won’t know how to, or want to, debate physics with you. People mostly want to talk about stuff within their field that they know about, not talk about premises that some other type of person would know about. The obvious solution to this is talk to philosophers, but unfortunately philosophy is one of the worst and most broken fields and there’s basically no one reasonable to talk to and almost no one doing reasonable work. Because philosophy is so broken, people should stop trusting it and getting premises from it. Everything else is downstream of philosophy, so it’s hurting EA and everything else. But this is a rather abstract issue which, so far, I haven’t been able to get many people to care much about.
I could phrase it using more specifics but then people will disagree with me. E.g. “induction is wrong so...” will get denials that induction is wrong. (Induction is one of the main things Popper criticized. I don’t think any philosopher has ever given a reasonable rebuttal to defend induction. I’ve gone through a lot of literature on that issue and asked a lot of people.) The people who deny induction is wrong consistently want to take next steps that I think are the wrong approach, such as debating induction without using literature references or ignoring the issue. Whereas I think the next step should basically be to review the literature instead of making ad hoc arguments. But that’s work. I’ve done that work but people don’t want to trust my results (which is fine) and also don’t want to do the work themselves, which leaves it difficult to make progress.
Hmm, this is complicated indeed.
Have you tried putting stuff in a visual way? Like breaking down the steps of your (different) reasoning in a diagram, in order to show why you reach a different conclusion on a specific topic than EA does.
For instance, let’s say one conclusion you have is “EAs interested in animal welfare should do X”. You could present it this way: [Argument A] + [Argument B] → [I use my way of estimating things] → [Conclusion X].
Maybe this could help.
Heh, can’t say I disagree: I really have trouble seeing what I can get from the field of philosophy most of the time, in terms of practical advice that works on how to improve, not just ideas. (Although saying “there’s no one reasonable to talk to in the field XXX” would flag you as a judgemental person nobody should talk to, so be careful throwing judgements around like that.)
But it’s very hard to say “this is bad” about something without proposing something better that people can turn to instead. And despite exchanging with you, I still can’t picture that “better” thing to turn to.
For instance, one of the (many) reasons the anti-capitalism movement is failing is not that capitalism is good (it’s pretty clear it’s leading us to environmental destruction) or that people support it (there have been surveys in France showing that a majority of people think we need to get out of the myth of infinite growth). It has a lot to do with how hard it is to actually picture alternatives to this system, how hard it is to put those alternatives forward, and how hard it is to implement them. Nothing can change if I can’t picture ways of doing things differently.
The alternatives are things like:
raise intellectual standards
have debate policies
use rational debate to reject lots of bad ideas
judge public intellectuals by how they handle debate, and judge ideas by the current objective state of the debate
read and engage with some other philosophers (e.g. Popper, Goldratt and myself)
actually write down what’s wrong with the bad philosophers in a clear way instead of just disliking them (this will facilitate debating and reaching conclusions about which ones are actually good)
investigate what philosophical premises you hold, and their sources, and reconsider them
There are sub-steps, e.g. to raise intellectual standards people need to improve their ability to read, write and analyze text, and practice that until a significantly higher skill level and effectiveness is intuitive/easy. That can be broken down into smaller steps such as learning grammar, learning to make sentence tree diagrams, learning to make paragraph tree diagrams, learning to make multi-paragraph tree diagrams, etc.
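(As a toy illustration only: a paragraph tree can be represented as nested data, where each child sentence supports or elaborates on its parent. The sentences and structure here are made up for the example, not taken from my actual materials.)

```python
# Hypothetical paragraph tree: the main-claim sentence at the root,
# supporting sentences as children.
paragraph_tree = {
    "sentence": "EA should raise its intellectual standards.",
    "children": [
        {"sentence": "Misquoting is widespread and mostly goes unminded.",
         "children": []},
        {"sentence": "Low standards make hard debates unproductive.",
         "children": [
             {"sentence": "Basic errors compound faster than they get fixed.",
              "children": []},
         ]},
    ],
}

def show(node, depth=0):
    """Print the tree, indenting children under their parents."""
    print("  " * depth + node["sentence"])
    for child in node["children"]:
        show(child, depth + 1)

show(paragraph_tree)
```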
I have a forum people can join, and plenty of writing and videos which include actionable suggestions about steps to take. I’ve also proposed things that I think people can picture, like having all arguments addressed in truth-seeking and time-efficient ways instead of ignored. If that were universal, it would be possible to go to a charity or company, tell them some ideas, make some arguments, and then, if you’re right, they’d probably change. If 10% of charities were open to changing due to rational argument, it’d enable a lot of resources to be used more effectively.
BTW, I don’t think it’s a good idea to have much confidence in your political opinions (or spend much time or effort on them) without doing those other kinds of activities first.
Ah, this is more like it: a list of stuff to do. Good!
Now that I see it in that format, maybe an interesting EA Forum post would be to use the list above and provide links for each item. You could point each item to the best source you have found or produced on that topic. I feel it would be easier to convince people to adopt better rationality practices if they have a list of how to do that.
Well, maybe not everyone, but I am drawing conclusions from my personal case: you seem to have some interesting techniques in store, but personally, right now, I just don’t see how to acquire them—so the best links you have for that could help greatly.
This would centralize the information in one spot (that you can redirect people to in future debates and works).
An unrelated note: I liked the post on your forum about the damage big companies are doing. I don’t really understand why you think the damage they do is not compatible with capitalism—I don’t see anything in the definition of capitalism that would preclude such an outcome. But it was an interesting post.
I’m a bit more skeptical about your post on how to judge the ability of experts, however. If having a debate policy were a common practice, and it were notorious that people who refuse them have something to hide, then it would work. But right now such advice doesn’t seem that useful, because very few people have a debate policy—so you can’t distinguish between people who have something to hide and people who would be fine with the concept if they had heard about it. I don’t see such a practice becoming mainstream in the next few decades.
So in the absence of that, how can I really judge which experts are reliable?
I’d like to judge by openness in debates, but it’s not clear to me how to get this information quickly. Especially when I’m seeing an expert for the first time.
For instance, let’s take someone like Nate Hagens. How would you go to judge his reliability?
I have tried many centralizing or organizing things. Here’s an example of one which has gotten almost no response or interest: https://www.elliottemple.com/reason-and-morality/
Anyway, I have plenty more things I could try. I have plenty to say. And I know there’s plenty of room for improvement in my stuff including regarding organization. I will keep posting things at EA for now. Even if I stop, I’ll keep posting at my own sites. Even if no one listens, it doesn’t matter so much; I like figuring out and writing about these things; it’s my favorite activity.
FYI, it’s hard for me to know what post you mean without a link or title because I have thousands of posts, and I often have multiple posts about the same topic.
The definition of capitalism involves a free market where the initiation of force (including fraud) is prohibited. Today, fraud is pretty widespread at large companies. Also, many versions of capitalism allow the government to use force, but they do not allow the government to meddle in the economy and give advantages to some companies over others which are derived from the government’s use of force (so some companies are, indirectly via the government, using force against competitors). Those are just two examples (of many).
(I may not reply further about capitalism or anything political, but I thought that would be short and maybe helpful.)
You can tell them about the debate policy concept and see how they react. You can also look at whether they respond to criticisms of their work. You can also make a tree of the field and look at whether that expert is contributing important nodes to it or not.
I think it could become important, widespread and influential in a few years if it had a few thousand initial supporters. I think getting even 100 initial supporters is the biggest obstacle, then turning that into a bigger group is second. Then once you have a bigger group that can be vocal enough in online discussions, they can get noticed by popular intellectuals and bring up debate policies to them and get responses of some kinds. Then you just need one famous guy to like the idea and it can get a lot more attention and it will then be possible to say “X has a debate policy; why don’t you?” And I can imagine tons of fans bringing that up in comment sections persistently for many of the popular online intellectuals. It’s easy to imagine fans of e.g. Jordan Peterson bugging him about it endlessly until he does it.
I think the reason that doesn’t happen is that most people don’t actually seem to want it, like it or care, so getting to even 100 supporters of the idea is very hard. The issue IMO is the masses resisting, rejecting or not caring about the idea (of the few who see it, most dislike or ignore it), including at EA, for reasons I don’t understand well enough.
I glanced at the table of contents and saw mention of Malthus. That’s a topic I know about, so I could read that section and be in a pretty good position to evaluate it or catch errors. Finding a section where I have expertise and checking that is a useful technique.
There’s a fairly common thing where people read the newspaper talking about their field and they are like “wow it’s so bad. this is so amateurish and full of obvious errors”. Then they read the newspaper on any other topic and believe the quality is decent. It isn’t. You should expect the correctness of the parts you know less about to probably be similar to the part you know a lot about.
At a glance at the Malthus section, the book seems to be on the same side as Malthus, which I disagree with. So a specific thing I’d look for is whether the book brings up and tries to address some of the arguments on my side that I regard as important. If it ignores the side of the debate I favor, and doesn’t have any criticisms of anything I believe, that’d be bad. I did a text search for “Godwin” and there are no results. (Godwin is a classical liberal political philosopher from the same time period as Malthus whom I like a lot. He wrote a book about why Malthus was wrong.) There are also no results for “Burke” and no mention of Adam Smith (nor Turgot, Bastiat, Condorcet, Mises, Rothbard, Hayek). I see it as a potentially bad sign to look at old thinkers/writers only to bring up one who is on your side, without talking about other ideas from the time period, including disagreements and competing viewpoints. It can indicate bias to cherry-pick one past thinker to bring up.
That’s inconclusive. Maybe it gives fair summaries of rival viewpoints and criticizes them; I didn’t look enough to actually know. I don’t want to spend more time and energy on this right now (also I dislike the format and would want to download a copy of the book to read it more). I think it gives you some idea about ways to approach this – methods – even though I didn’t actually do much. Also, in my experience, the majority of books like this will fail at fact checking at least once if you check five random cites, so that would be worth checking if you care about whether the facts in the book are trustworthy.
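As a minimal illustration of the text-search step, here is a sketch in Python (the file path is hypothetical and the name list is just an example; you’d substitute the book text and the rival thinkers relevant to your topic):

```python
# Minimal sketch: scan a book's plain text for mentions of rival thinkers.
# "book.txt" is a hypothetical path; the name list is just an example.
import re

RIVAL_THINKERS = [
    "Godwin", "Burke", "Adam Smith", "Turgot",
    "Bastiat", "Condorcet", "Mises", "Rothbard", "Hayek",
]

def count_mentions(path: str, names: list[str]) -> dict[str, int]:
    """Count whole-word, case-insensitive mentions of each name."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # Word boundaries avoid false positives (e.g. "Mises" in "promises").
    return {
        name: len(re.findall(rf"\b{re.escape(name)}\b", text, re.IGNORECASE))
        for name in names
    }

for name, n in count_mentions("book.txt", RIVAL_THINKERS).items():
    print(f"{name}: {n}" + ("" if n else "  <- never mentioned"))
```

A result of all zeros wouldn’t prove bias, but it would support the cherry-picking concern above.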
Ok—however, while this is better, this list is still very long, and quite daunting. It’s good as an index, but not as a “here’s the top priority stuff”.
I think a question you should ask yourself is “If I can only have a limited number of exchanges with people, and they have a limited time, what do I want them to learn?”. And then just mention a few things that are the best/most useful stuff you have in store.
This way people get a sample of what you can offer, and then they may be like “oh ok this might be useful, maybe I’ll dig more into that”.
Mentioning “read the entire work of this guy” or “check my entire forum” is probably not something people will readily act on, because from the outside I have no way of knowing if this is a good use of my time. It would take too much effort just to check. So I need a sample that tells me “hey, that’s interesting” and pushes me to go further.
So having a list of 1) actionable advice, 2) the best stuff you have, to redirect people to, would be useful.
Same question—how do I check that? There’s no “answers” section under scientific papers or books. The authors could have answered the criticism somewhere else—how do I check that quickly?
For instance, from what I read, Nate Hagens did take into account the classic points put forward against the claims of Malthus (although he didn’t really quote many names). But it’s all over the book—so there’s no quick way of checking that.
I think one reason most people are not interested is that they don’t feel concerned by the idea. I don’t feel concerned by it. It feels like it could work for public intellectuals, but everybody else has no use for it (maybe they’re wrong, but it doesn’t feel like it). And public intellectuals are a hard-to-reach audience.
It’s also not obvious what the benefits of the idea would be. I understand there are benefits, but there’s no visible result you can point to, which makes it less attractive. And even if there were debates following this policy, it’s not guaranteed this would change the state of the debate: many papers have been shown to be non-replicable, but they are still widely cited since the rebuttals have not been publicized as much.
I get that this would be really useful if many prominent experts used it—because you could reach out to them and they’d have to answer.
I replied but I deleted it after finding out about the new CC BY license requirement. You can read my reply where I’d mirrored it at https://discuss.criticalfallibilism.com/t/rational-debate-methodology-at-effective-altruism/1510/40?u=elliot
I think I’m going to have to quit writing anything substantive at EA due to the license change, so if you want to keep discussing with me I think you’ll have to join my forum. That sucks but I don’t see a better option.
Ok—I subscribed to the forum but I don’t know how to reply to the comment you linked to.
I’ll answer here.
Interesting, but I don’t know if this is the right criterion. One thing is, I can’t point to an error you made because I can’t evaluate your claims. Our discussion was on abstract points of methodology, not facts or stuff you can verify—so of course I can’t point to an error, because there is no real result to check.
Now I know I should keep an open mind, which I do, especially since I can’t point to errors in the reasoning itself. But it’s hard to believe things I can’t verify and see for myself.
Which is why I keep asking for stuff like examples and concrete things. It’s easier to grasp these and to verify them.
It’s really not obvious to anyone that “not having a debate policy” is “doing rationality wrong”. Especially when the concept itself is so uncommon. If this is the criterion, I really don’t know who is doing rationality right (but then again, I don’t really know who is doing rationality right).
Then again, most people do not get challenged into debates. Even EAs. So it makes sense that they think such a concept is not for them.
Just to test, you’ll be happy to know I adopted a debate policy! We’ll see what results that provides in 10 years.
Ah, ok. I see where we differ here.
I try to have the most impact I can in the world, so I judge what I do by “what positive impact did this have?” As such, quick wins that can target a larger public have a larger impact, and a higher chance of changing things, so I decided to focus on those. Which is why this seems more important to me.
But it appears that you have a different goal in mind—you seek high-level discussions with like-minded individuals. I can understand that.
Same for the CC BY license. I know I’d have less impact if I left the forum, and what I write is there with the goal of being shared anyway, so I don’t really care about that.
I just put up 7 more EA-related articles at https://curi.us. The best way to find all my EA-related articles is https://curi.us/2529-effective-altruism-related-articles
I have more drafts to go through so there will be more posts soon.
If you or anyone else thinks that any of them should be on the EA forum, you can post them at EA as link posts. In general, I don’t plan to link post my own stuff at EA going forward, for several reasons, but if even one person thinks it would add much value to EA, they are welcome to do it.
To post on my forum, you have to pay $20 (once, not recurring). I know the communication on this isn’t amazing (Discourse has limited options) though there should be a banner and some info about it in a few places, but I know sometimes people still don’t see it. There’s a subscribe button on the home page but it’s in a menu on mobile instead of directly visible. It takes you to https://discuss.criticalfallibilism.com/s and then the payment flow is with a standard plugin that uses Stripe.
If it’s a financial burden for you, I can give you free access.
If you can afford it, then I’ll have to ask you to pay, because my general policy is if people value a discussion with me less than $20 then I shouldn’t talk with them. I skip that policy when I go participate at other communities, but I’m quitting the EA forum now.
I also just wrote more about my issues with the CC BY license at https://forum.effectivealtruism.org/posts/WEAXu8yTt5XbKq4wJ/ignoring-small-errors?commentId=Z7Nh36x3brvzC3Jpm
Ok—I thought the $20 was for making posts; I didn’t think it applied to replies.
I don’t think I will pay $20 because all the money I earn beyond my basic needs is going to charities.
I can understand the CC BY issue, if you’ve had problems with it in the past. If you think you can have more impact by retaining ownership of what you write, then that’s what you should do.
If $20 got you even a 1% chance to find out that much of your money and effort is going to the wrong charities and causes, wouldn’t that be a good deal? Error correction is high value.
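As a rough worked version of that arithmetic (the donation figure is hypothetical, and treating a redirected donation as fully recovered value is a simplification):

$$0.01 \times \$5{,}000 = \$50$$

That is, for someone donating $5,000 a year, a 1% chance of catching a misdirected donation is worth about $50 per year of future donations in expectation, already more than the one-time $20.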
I think what EA is doing by getting people to donate that much (all above basic needs) is extremely harmful to people like you. I’d believe that even if I didn’t also believe that the majority of EA causes and efforts were counter-productive.
There’s something really problematic about thinking a cause is so important that you’ll make large personal sacrifices for it, but not being willing to do much to pursue potential error correction. EA has a lot of people who will go to great lengths to help their causes – they just are so sure they’re right(?) that they don’t seem to think debating critics is very important. It’s weird. If you think every dollar you donate is a big deal, you should also think every tiny bit of risk reduction and error correction is a big deal. Those things are scarcer than dollars and can easily have larger impacts. But I come here and say I think EA is wrong about important issues, and I want to debate, and I ask if EA has any organized debate methods or even just individuals who’d be happy to debate much. And the answer was no and also no one seems to think that’s very bad or risky. That shows a widespread lack of respect for the risk of being wrong about causes that people are investing all their money above basic needs in, and a disinterest in criticism.
Anyway, if you find my ideas implausible and not worth pursuing or debating, or still don’t really value my time more than the time of the next guy you could talk with instead, then we should part ways.
Sorry—I exaggerated a bit. I do not donate everything above my basic needs—still quite a good chunk, but not everything.
I try to spend quite some time on error correction (and sometimes buy books instead of getting them from a library), but in this realm I am still weighing that against, say, the impact I could have by donating to an animal charity instead. But I’m ready to do some spending if I feel there’s a good chance to learn more and improve.
The problem here is rather that I am not sure subscribing to this forum will really allow me to improve.
I absolutely agree with your claim that EA lacks an organized debate method, and could improve on fighting against bias. I could probably improve on that too, I think. I can agree with the “lacking methodology” point.
However, to actually improve, I need practical advice on how to improve. Or an example: for instance, seeing a debate which shows that a specific claim very important in EA is not impactful (for instance, that donating to charities that do corporate outreach against factory farming is not impactful), and seeing the methodology that led to this conclusion.
I want to point out that criticism of what exists currently is important but not enough—the way I personally work is that I need to see something better in order to update correctly. Then I can be inspired by that better approach.
For instance, I read your criticism of The Scout Mindset—it’s interesting, there are good points, for instance that the examples she gives could be really biased. But what would add even more value to your post is recommending a book which does the same thing but better (so basically, a book about how to get better at updating how we view the world, written in a clear, streamlined way, with examples and practical advice—just more rigorous).
I really like to improve. But I need practical stuff for that—and I asked for it and still feel you didn’t answer that (besides adopting a debate policy—you also made a list of actions, but with no links to go deeper).
I fear it could prove difficult for you to spread your ideas even further without a greater focus on that part.
By the way, have you made claims about EA being wrong in its list of priorities? You have done so on methodology—which is important, but not the most engaging topic, so few people interacted with it (which is too bad). But have you tried to make more specific claims, like “EA is wrong to put effort into factory farming”?
I don’t want to CC BY license my replies, so here are links. I don’t want to reply this way in general and may not do it again.
https://discuss.criticalfallibilism.com/t/elliot-temple-and-corentin-biteau-discussion/1543/5?u=elliot
https://discuss.criticalfallibilism.com/t/elliot-temple-and-corentin-biteau-discussion/1543/6?u=elliot
Oh, I had written a full answer in your curi.us debate space, but it says I need an account (it’s weird that the “post public answer” box appears even if I don’t have an account).
I think I’ll take up your offer to have access to the forum just for a few months, please.
Oh, and thanks for the concern you’re showing me, that’s kind :)
OK, I gave CF forum posting access to your account.
You’re right that I should make the curi.us comment section clearer than the current small-print note. If you lost the text of what you wrote, I should be able to retrieve it for you from logs.
Also regarding evaluating a book, I just did 4 demonstrations for EA at https://forum.effectivealtruism.org/posts/yKd7Co5LznH4BE54t/game-i-find-three-errors-in-your-favorite-text and 3 of them include a screen recording of my whole process.
I haven’t read the rest yet and might not get to it today, but I’ll give a comment on this.
I think the solution is that anyone (who cares a lot) can take personal, individual responsibility for addressing criticism. I do that with my own tiny movement. There is no need to divide up responsibility. Multiple people can separately do this at once.
Isn’t that too much work? To the extent that anyone else does something useful, it’s less work. If the movement is tiny, then it’s a ton of work. If the movement is pretty big, and a lot of people do useful tasks, then it’s less work.
If you take responsibility, you don’t have to do everything yourself. You can delegate to anyone willing to do stuff. You just have to monitor their work. If you endorse it and it’s wrong, you were wrong. If it’s partly wrong and partly useful, you can endorse just part of it and specify which part.
You also don’t even have to delegate everything. People may do stuff without you asking them to. You can take responsibility even if you’re not seen as a leader and have zero people who will do tasks at your request. What you do is figure out what usable, endorsable essays and other materials the movement has, and figure out what’s missing, and fill in whatever’s missing to your satisfaction. With a larger movement, many people’s opinion would be that nothing crucial is missing, so their initial workload is merely reviewing what exists, learning about it, getting a kind of organized overview of it figured out, and being satisfied.
When critics come along and want to debate, maybe someone will answer them in a way you consider satisfactory. Or maybe not. If you take responsibility for these ideas and think they’re true, then you should monitor to see that critics get answered in ways you’re content with. If there are any gaps where debate or answers aren’t happening to your satisfaction, then you need to fill in that gap (or, in the alternative, admit there is a gap, and say unfortunately you’re too busy to deal with the whole gap, so not all criticism has been answered and debated yet, so you aren’t confident you’re right).
To fill in a gap where some critic is raising an issue, there are two basic scenarios, the easy and hard one:
Easy scenario: the issue is addressed somewhere. The critic just needs to be provided with a link or cite. Sometimes a little bit of extra text is needed to explain how the generic information in the article answers the specific information the critic brings up. I’ve called this bridging material. It’s a type of personalization or customization. There are a lot of cases where a paragraph of customization can add a lot to a cite/link. You also may need to specify which part of a link/cite is relevant rather than the whole thing, and may need to give disclaimers/exclusions to go with the link/cite.
All of that is pretty fast and a pretty low amount of work. And it’s something that people can contribute to a movement without being geniuses or great essay writers or anything like that. People who just read and liked a lot of a movement’s materials can help by sharing the right links in the right places. If a movement has a lot of literature, then this will probably address over 80% of criticism.
Hard scenario: the issue is not already addressed somewhere. New ideas/arguments/literature are needed. This is more work. If this comes up a bunch and it’s too much work for you, and others won’t help enough, then the movement isn’t really fully fleshed out and you shouldn’t be confident about being right.
This is how I do stuff with my own tiny philosophy movement. I monitor for issues that should be addressed but no one else addresses. I have some fans who will sometimes provide links to existing essays so I don’t have to. I occasionally delegate something, though not very often. Since my movement is tiny I don’t expect a ton of help, but on the other hand I can refer to the writing of dead authors to answer a lot of issues. I try to learn from and build on some authors I found that I think did good work. It’d be really problematic to try to do everything myself from scratch. If I tried to reinvent the wheel, I’d almost certainly come up with worse ideas than already exist. Instead I try to find and understand the best existing ideas and then add some extra insights, changes and reorganization.
And I have a debate policy, so if a critic will neither use nor criticize my debate policy (and someone has linked him to it), then I don’t think I need to answer his criticism (unless I actually think he has a good point, in which case I’d want to address that point even if it has no advocates who are smart, reasonable or willing to debate). I have a forum where critics can come and talk about my philosophy. EA has a forum too, which is one of the main reasons I’m talking to EA at all. Not enough movements, groups or individuals even have forums (which IMO is a major problem with the world).
(I do not count Facebook groups, subreddits, Twitter, Discords, or Slacks as forums. Social media and chatrooms are different from forums. Comment sections on blogs, Substacks, news articles, etc., also aren’t proper forums. Having an actual forum matters IMO. Examples of forum software include Discourse, phpBB and Google Groups. I view forums as a bit of a remnant of the old internet that has lost a ton of popularity to social media. I think LW/EA partly have forums because the community started before smartphones got so popular.)
Ok, very well. I’m not sure it’s reasonable to expect everyone to take responsibility for changing on a topic: changing requires effort and time, and it’s not realistic to expect everyone to do all of that.
In an ideal world we’d look into everything by ourselves, but in reality we just don’t have time to dig into everything. But this links to motivation, which is the topic of the other response.
On a related note, have you read this post? It may be interesting: https://www.cold-takes.com/honesty-about-reading/
I don’t expect everyone to do it. I expect more than zero people to do it.
Or, if it is zero people, then I expect people to acknowledge a serious, urgent problem, and to appreciate me pointing it out, and to stop assuming their group/side is right about things which no one in their group/side (including themselves) will take responsibility for the correctness of.
Skimming and other ways of reducing reading can work well and I’ve been interested in them for a long time. Getting better at reading helps too (I’ve read over 400,000 words in a day, so 10,000 doesn’t seem like such a daunting journey to me). But ignoring arguments, when no one on your side has identified any error, is problematic. So I suggest people should often reply to the first error (if no one else already did that in a way you find acceptable). That makes progress possible in ways that silence doesn’t.
If you think the length and organization of writing is itself an error that is making engaging unreasonably burdensome, then that is the first error that you’ve identified, and you could say that instead of saying nothing. At that point there are ways for problem solving and progress to happen, e.g. the author (or anyone who agrees with him) could give a counter-argument, a rewrite, or a summary (particularly if you identify a specific area of interest – then they could summarize just the part you care about).
I recently posted about replying to the first error:
https://forum.effectivealtruism.org/posts/iBXdjXR9cwur8qpwc/critically-engaging-with-long-articles
It’s particularly important to do this with stuff which criticizes your ideas – which claims you’re wrong about something important and impactful – so it’s highly relevant to you.
This is a good point—I just think that most people are not even aware that this is an option (admitting you didn’t read everything but still want to engage isn’t obvious in our way of doing things).
I read your post on long articles—it provides some really useful insights, so thanks for that. I still think it could be a bit more attractive to readers (summary, bullet points, more titles and sections, bolding, examples, maybe 3 minutes shorter), but it was worth reading. The fact you said “don’t stop reading unless you spotted an error” helped too ^^
Attracting readers is a different activity than truth seeking. Articles should be evaluated primarily by whether people can refute what the article says or not. If I avoid errors that anyone knows about, then I’ve done a great job. A rational forum should be able to notice that, value it and engage with it, without me doing anything extra to get attention.
Truth seeking and attracting typical readers are different skills. People usually aren’t great at both. A community that emphasizes and rewards attracting will tend to get issues wrong and alienate rational people.
I got to a major, motivating point (“a bias where long criticism is frequently ignored”) in the third sentence. If someone is unable to recognize that as something to care about, or gets bored before getting that far, then I don’t think they’re the right audience for me. They could also find out about “Method: Reply to the First Important Error” by reading the bullet point outline.
I read far worse writing all the time. It’s not a big deal. Readers should be flexible and tolerant, learn to skim as desired, etc. They should also pick up on less prominent quality signals like clarity.
Any time I spend on polishing means less writing and research. I write or edit daily. I used to edit/polish less and publish more, and I still think that might have been better. There are tradeoffs. I now have a few hundred thousand unpublished words awaiting editing, including over 30,000 words in EA-related drafts since I started posting here.
I’m also more concerned with attracting especially smart, knowledgeable, high-effort readers than attracting a large number of readers. Put another way, the things you’re asking for are not how I decide what articles or authors to read.
Anyway, I appreciate the feedback. I intentionally added some summary to some articles recently, which I viewed as similar to an abstract from an academic paper. I’m not necessarily against that kind of thing, but I do have concerns to take into account.
Oh, ok. I understand your approach better now.
I must admit that I am trying to aim for a different approach: writing stuff adapted to human psychology.
I don’t start from postulates like “Articles should be evaluated primarily by whether people can refute what the article says or not” or “Readers should be flexible and tolerant, learn to skim as desired, etc.” It would be very nice if people did that. But our brains, although they can learn that to some extent with good educational methods and the right incentives, just didn’t really evolve for doing stuff like that, so I don’t expect people to do it.
Reading text which is long, abstract, dry, remote from our daily environments, and with no direct human interaction, is possible, but it is akin to swimming against the current: if there’s a good reason to do it, I will, but it will be much harder. And I need to know what I can get out of it—with a serious probability.
I guess that’s one reason people tend to ignore what science says: it’s boring. It has a “reader-deterring style” as one paper puts it.
I really recommend this paper by Ugo Bardi that explains why that contributes to the decline of science:
The brain is better at processing stuff that is concrete: visual stuff like pictures, metaphors, examples, bullet points and bolding. There’s a much better chance that people read things the brain can process easily—and it’s useful even for your readers who are able to read dry stuff.
I think you’re mistaken about evolutionary psychology and brains, but I don’t know how to correct you (and many other people similar to you) because your approach is not optimized for debate and (boring!?) scholarship like mine. That is one of many topics where I’d have some things to say if people changed their debate methodology, scholarly standards, etc. (I already tried debating this topic and many others in the past, but I found that it didn’t work well enough and I identified issues like debate methodology as the root cause of various failures.)
I also agree with and already (try to) do some of what you say. I have lots of material breaking things into smaller parts and making it easier to learn. But there are difficulties, e.g. when the parts are small then the value from each one individually (usually) becomes small too. To get a big result people have to learn many small parts and combine them, which can be hard and require persistence and project management. You’re not really saying anything new to me, which is fine, but FYI I already know about additional difficulties which it’s harder to find answers for.
I’m personally not a very visual thinker and I’m good at abstract thinking. This reads to me as denying my lived experience or forgetting that other types of people exist. If you had said that the majority of people like pictures, then I could have agreed with you. It’s not that big a deal – I’m used to ignoring comments that assume I don’t exist or make general statements about what people are like which do not apply to me. I’m not going to get offended and stop talking to you over it. But I thought it was relevant enough to mention.
I’m actually interested in that—if you have found sources and documents that provide a better picture of how brains work, I’d like to see them. The way I work in debate is that if you provide something that explains the world better than my current explanation, then I’ll use it.
Ok, I didn’t mean that everybody is like that; I was making a generalization. Sorry you took it that way. What I had in mind was that when you see something happening in front of you, it sticks much better than reading about it.
I have already tried telling people about evolutionary psychology and many other topics that they are interested in.
I determined that it mostly doesn’t work due to incorrect debate methodology, lack of intellectual skills (e.g. tree-making skills or any alternative to accomplish the same organizational purposes), too-low intellectual standards (like being dismissive of “small” errors instead of thinking errors merit post mortems), lack of persistence, quitting mid-discussion without explanation (often due to bias against claims you’re losing to in debate), poor project management, getting emotional, lack of background knowledge, lack of willingness to get new background knowledge mid-discussion, unwillingness to proceed in small, organized steps, imprecision, etc.
Hence I’ve prioritized the topics which I believe are necessary prerequisites to dealing with the other stuff productively.
In other words, I determined that standard, widespread, common sense norms for rationality and debate are inadequate to reach true conclusions about evolutionary psychology, AGI, animal welfare, capitalism, what charity interventions should be pursued, and so on. The meta and methodological issues need to be dealt with first. And people’s disinterest in those issues and resistance to dealing with them is a sign of irrationality and bias – it’s part of the problem.
So I don’t want to attempt to discuss evolutionary psychology with you because I don’t think it will work well due to those other issues. I don’t think you will discuss such a complex, hard issue in a way that will actually lead to a correct conclusion, even if that requires e.g. reading books and practicing skills as part of the process (which I suspect it would require). Like you’ll make an inductivist or justificationist argument, and then I’ll mention that Popper refuted that, and then to resolve the issue we’ll need a whole sub-discussion where you engage with Popper in a way capable of reaching an accurate conclusion. That will lead to some alternatives like you could read and study Popper, or you could review the literature for Popper critics who already did that who you could endorse, or you could argue that Popper is actually irrelevant, or there are other options but none are particularly easy. And there can be many layers of sub-issues, like most people should significantly improve their reading skills before it’s reasonable to try to read a lot of complex literature and expect to find the truth (rather than doing it more for practice), and people should improve their grammar skills before expecting to write clear enough statements in debates, and people should improve their math and logic skills before expecting to actually get much right in debates, and people should improve their introspection skills before expecting to make reasonably unbiased claims in debates (and also so they can more accurately monitor when they’re defensive or emotional).
I tried, many times, starting with an object level issue, discussing it until a few errors happened, and then trying to pivot the discussion to the issues which caused and/or prevented correction of those errors. I tried using an initial discussion as a demonstration that the meta problems actually exist, that the debate won’t work and will be full of errors, etc. I found basically that no one ever wanted to pivot to the meta topic. Having a few errors pointed out did not open their eyes to a bigger picture problem. One of the typical responses is doing a quick, superficial “fix” for each error and then wanting to move on without thinking about root causes, what process caused the error, what other errors the same process would cause, etc.
This is an archetypical non-apology that puts blame on the person you’re speaking to. It’s a well known stereotype of how to do fake apologies. If you picked up this speech pattern by accident because it’s a common pattern that you’ve heard a lot, and you don’t realize what it means, then I wanted to warn you because you’ll have a high chance of offending people by apologizing this way. I think maybe it’s an accident here because I didn’t get a hostile vibe from you in the rest; this one sentence doesn’t fit well. It’s also an inaccurate sentence since I didn’t take it that way. I said how it reads. I spoke directly about interpretations rather than simply having one interpretation I took for granted and replied based on. I showed awareness that it could be read, interpreted or intended in multiple ways. I was helpfully letting you know about a problem rather than being offended.
I feel like we are starting to hit a dead-end here, which is a pity since I really want to learn stuff.
The problem is:
I am interested in learning concrete stuff to improve the way I think about the world
You point out that methodology and better norms for rationality and debate are necessary to get a productive conversation (which I can agree with, to some extent)
Except I have no way of knowing that your conclusions are better than mine. It’s entirely possible that yours are better—you spent a lot of time on this. But I just don’t have the motivation to do the many, many prerequisites you asked for, unless I’ve seen from experience that they provide better results.
This is the show don’t tell problem: you’ve told me you’ve got better conclusions (which is possible). But you’ve not shown me that. I need to see that from experience.
I may be motivated to spend some time on improving rationality norms, and change my conclusions. But not without a (little) debate on some concrete stuff that would help understand that I can improve.
How about challenging my conclusion that energy depletion is a problem neglected by many, and that we’re starting to hit limits to growth? We could do that in the other post you pointed to.
True. It was a mistake on my part. It’s just that the sentence “I’m used to ignoring comments that assume I don’t exist” felt a bit passive-aggressive, so I got passive-aggressive as well about that.
It’s not very rational. I shouldn’t have done that, you’re right.
OK, as a kind of demonstration, I will try engaging about this some, and I will even skip over asking about why this issue is an important priority compared to alternative issues.
First question: What thinkers/ideas have you read that disagree with you, and what did you do to address them and conclude that they’re wrong?
Ok, interesting question.
First, most of what I’m saying challenges deeply what is usually said about energy, resources or the economy.
So the ideas that disagree with me are the established consensus, which is why I’m already familiar with the counter-arguments usually put forward against energy depletion:
We’ve heard about it earlier and didn’t “run out”
Prices will increase gradually
Technology will improve and solve the problem
We can have a bigger economy and less energy
We’ll just adapt
So in my post I tried my best to address these points by explaining why ecological economists and other experts on energy and resources think they won’t solve the problem (and I’m in the process of writing a post more focused on addressing these counter-arguments explicitly).
I also read some more advanced arguments against what these experts said (debates with Richard Heinberg, articles criticizing Jean-Marc Jancovici). But each time I’ve seen limits to the reasoning. For instance, what was said against the Limits to Growth report (it turns out most criticism didn’t address the core points of the report).
I’m not aware of any major thinker who is fluent on the topic of energy and its relationship with the economy, and optimistic about it. However, the most knowledgeable person about this that I found was Dave Denkenberger, director of ALLFED, and we had a lot of exchanges, where he raised some solid criticisms of what I said. For some of what I wrote, I had to change my mind. For some other stuff, I checked the literature and found limits that he didn’t take into account (like on investment). This was interesting (and we still do not agree, which I find weird). But I tried my best to find reviewers who could criticize what I said.
Is this a typo? I don’t understand.
Oh yeah that was a typo, sorry. I fixed it.