I read the whole post. Thanks for your work. It is extensive. I will revisit it. More than once. You cite a comment of mine, a listing of my cringy ideas. That’s fine, but my last name is spelled “Scales” not “Scale”. :)
About scout mindset and group epistemics in EA
No. Scout mindset is not an EA problem. Scout and soldier mindsets partition the space of mindsets and differ in how much they prioritize truth-seeking. To reject scout mindset is to accept soldier mindset.
Scout mindset is intellectual honesty. Soldier mindset is not. Intellectual honesty aids epistemic rationality. Individual epistemic rationality remains valuable, whether in service of group epistemics or not. Scout mindset is a keeper. EA suffers from soldier mindset, as you repeatedly identified but not by name. Soldier mindset hinders group epistemics.
We are lucky. Julia Galef has a “grab them by the lapel and shake them” interest in intellectual honesty. EA needs scout mindset.
Focus on scout mindset supports individual epistemics. Yes.
scout mindset
critical thinking skills
information access
research training
domain expertise
epistemic challenges
All those remain desirable.
Epistemic status
EAs support epistemic status announcements to serve group epistemics. Any thoughts on epistemic status? Did I miss that in your post?
Moral uncertainty
Moral uncertainty is not an everyday problem. Or rather: remove selfish rationalizations, and it won’t be. Then revisit whatever uncertainty remains, I suppose.
Integrity
Integrity combines:
intellectual honesty
introspective efficacy
interpersonal honesty
behavioral self-correction
assess → plan → act looping efficacy
Personal abilities bound those behaviors. So do situations. For example, constantly changing preconditions of actions bound integrity. Another bound is your interest in interpersonal honesty. It’s quite a lever to move yourself through life, but it can cost you.
Common-sense morality is deceptively simple
Common-sense morality? Not much eventually qualifies. Situations complicate action options. Beliefs complicate altruistic goals. Ignorance complicates option selection. Internal moral conflicts reveal selfish and altruistic values. Selfishness vs altruism is common-sense moral uncertainty.
Forum karma changes
Yes. Let’s see that work.
Allow alternate karma scoring. One person one vote. As a default setting.
Allow karma-ignoring display. On the homepage. Of posts. And latest comments. As a setting.
Allow hide all karma. As a setting.
Leave current settings as an alternate.
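The difference between the current weighted scheme and one-person-one-vote can be sketched briefly. This is a hypothetical illustration, not the Forum’s actual schema or code: vote records, field names, and strengths are invented for the example.

```python
# Hypothetical sketch of two karma-scoring modes.
# Data shapes are illustrative, not the Forum's real implementation.

def weighted_score(votes):
    """Current-style karma: each vote carries a voter-specific strength."""
    return sum(v["strength"] for v in votes)

def one_person_one_vote(votes):
    """Alternate scoring: every voter counts equally, +1 or -1."""
    return sum(1 if v["strength"] > 0 else -1 for v in votes)

votes = [
    {"voter": "a", "strength": 2},   # high-karma user, strong upvote
    {"voter": "b", "strength": 1},   # ordinary upvote
    {"voter": "c", "strength": -4},  # high-karma user, strong downvote
]

print(weighted_score(votes))      # -1: strong votes dominate
print(one_person_one_vote(votes)) #  1: majority of voters approve
```

The same votes yield opposite signs under the two modes, which is why offering the alternate scoring as a setting would be informative.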
Diversifying funding sources and broader considerations
Tech could face lost profits in the near future. “Subprime Attention Crisis” by Tim Hwang suggests why: an unregulated ad bubble will gut Silicon Valley. KTLO (keeping the lights on) will cost more, percentage-wise. Money will flow to productivity growth without employment growth.
Explore income, savings, credit, bankruptcy and unemployment trends. Understand the implications. Consumer information will be increasingly worthless. The consumer class is shrinking. Covid’s UBI bumped up Tech and US consumer earnings temporarily. US poverty worsened. Economic figures now mute reality. Nevertheless, the US economic future trends negatively for the majority.
“Opportunity zones” will be a predictive indicator despite distorted economic data, if they ever become reality. There are earlier indicators. Discover some.
Financial bubbles will pop, plausibly simultaneously. Many projects will evaporate. Tech’s ad bubble will cost the industry a lot.
Conclusion
Thanks again for the post. I will explore the external links you gave.
I offered one suggestion (among others) in a red team last year: to prefer beliefs to credences. Bayesianism has a context alongside other inference methods. IBT seems unhelpful, however. It is what I advocate against, but I didn’t have a name for it.
Would improved appetite regulation, drug aversion, and kinesthetic homeostasis please our plausible ASI overlords? I wonder. How would you all feel about being averse to alcohol, disliking pot, and indifferent to chocolate? The book “Sodium Hunger: The Search for a Salty Taste” reminds me that cravings can have a benefit, in some contexts. However, drugs like alcohol, pot, and chocolate would plausibly get no ASI sympathy. Would the threat of intelligent, benevolent ASI that take away interest in popular drugs (e.g., through bodily control of us) be enough to halt AI development? Such a genuine threat might defeat the billionaire-aligned incentives behind AI development.
By the way, would EAs enjoy installing sewage and drinking water systems in small US towns 20-30 years from now? I am reminded of “The End Of Work” by Jeremy Rifkin. Effective altruism will be needed from NGOs working in the US, I suspect.
Great fun post!