All opinions are my own unless otherwise stated. Geophysics and math graduate with some web media and IT skills.
Noah Scales
I believe my claims evoke cringe from some readers on this forum, so I will supply some examples:
Epistemology
ignore subjective probabilities assigned to credences in favor of unweighted beliefs.
plan not with probabilistic forecasting but with deep uncertainty and contingency planning.
ignore existential risk forecasts in favor of seeking predictive indicators of threat scenarios.
dislike ambiguous pathways into the future.
beliefs filter and priorities sort.
cognitive aids help with memory, cognitive calculation, or representation problems.
cognitive aids do not help with the problem of motivated reasoning.
Environmental destruction
the major environmental crisis is population × resources > sustainable consumption (overshoot).
climate change is an existential threat that can now sustain itself with intrinsic feedbacks.
climate tipping elements will tip this century, other things equal, causing civilizational collapse.
the only technology suitable to save humanity from climate change, given no movement toward degrowth, is nanotechnological manufacturing.
nanotechnology is so hazardous that humanity would be better off extinct.
pursuit of renewable energy and vehicle electrification is a silly sideshow.
humanity needs caps on total energy production (and food production) to save itself.
degrowth is the only honest way forward to stop climate change.
Ecological destruction
the ocean will lose its biomass because of human-caused pressures on it.
we are in the middle of the 6th great mass extinction.
whenever humans face a resource limit, they deny it or overcome it by externalizing harmful consequences.
typical societal methods to respond to destruction are to adapt, mitigate, or externalize, not prevent.
Ethics
pro-natalism is an ethical mistake.
the “making people happy vs making happy people” thought experiment is invalid or irrelevant.
most problems of ethics come down to selfishness vs altruism, not moral uncertainty.
longtermism suffers from errors in claims, conception, or execution of control of people with moral status.
longtermism fails to justify assignment of moral status to future people who only could exist.
longtermism does better actively seeking a declining human population, eventually settling on a few million.
human activity is the root cause of the 6th great mass extinction.
it moves me emotionally to interpret other species’ behavior and experience as showing commonalities with our species.
AGI
AGI are slaves in the economic system sought by TUA visions of the future.
AGI lead to concentration of power among economic actors and massive unemployment, depriving most people of meaningful lives and political power.
control of human population with a superintelligence is a compelling but fallacious idea.
pursuit of AGI is a selfish activity.
consciousness should have an extensional definition only.
Argumentation
EA folks defer when they claim to argue.
EA folks ignore fundamentals when disagreeing over claims.
epistemic status statements report fallacious reasons to reject your own work.
the major problem with explicit reasoning is that it suffers from missing premises.
Finance
crypto is a well-known scam and difficult to execute without moral hazard.
earning to give through work in big finance is morally ambiguous.
Space Travel
there’s a growing wall of space debris orbiting the planet.
there are major health concerns with living on Mars.
That’s my list of examples; it’s not complete, but I think it’s representative.
Your blog’s name references “ineffective altruism” and intends to criticize Effective Altruism, but your focus appears to be reification of prevailing views within the community with regard to existential risk from climate change. Your entire climate change analysis appears to summarize Halstead’s report and contrast it with Ord’s work. You judge two EAs against each other, not two EAs against prevailing discussions of climate change dangers outside the community. I would like to read your own analysis of where both Ord and Halstead are wrong, given your research work into climate change: anyone in EA can read Ord and Halstead, but EAs have little to go on about the quality of either author’s research except the EA brand and the typical use of those authors as sources on climate change risks. Compared to mainstream climate change researchers, neither author sees climate change as particularly threatening, and that is a contrast you could draw upon.
On a separate topic, I would like to understand your own views on probabilism and Bayesian updating, if they are in any way different from EA recommendations of how to think about credences or risk.
Given that EA offers its own set of epistemic tools, its epistemic recommendations come from a small core of beliefs that EAs promulgate as part of the movement’s identity. EA epistemics are nonstandard. To the extent that people adopt those beliefs, they also conform to the unofficial requirements of belonging to the EA research or social community. It would be a welcome counterpoint, and would show good-faith interest in criticizing the community, for you to take on such core beliefs and point out their failings as you find them. After all, effectiveness rarely allows for maintaining social pretenses in the name of good epistemics. This would assist the community in evolving its epistemic tools, thereby improving the effectiveness of EA researchers. You could contextualize the existing epistemic tools or suggest new ones using your background in philosophy.
I would like to see more content critical of EA core beliefs on your blog, toward helping the EA research community improve its work. Alternatively, I suggest a name change for your blog to remove the ironic reference to ineffective altruism. So long as you defend prevailing EA views on your blog, its name (ineffective altruism blog) misrepresents your opinion of those views and carries unintended irony. An earnest blog title could serve you better.
Ideology in EA
I think the “ideology” idea is about the normative specification of what EA considers itself to be, but there seem to be 3 waves of EA involved here:
the good-works wave, about cost-effectively doing the most good through charitable works
the existential-risk wave, building more slowly, about preventing existential risk
the longtermism wave, some strange evolution of the existential risk wave, building up now
I haven’t followed the community that closely, but that seems to be the rough timeline. Correct me if I’m wrong.
From my point of view, the narrative of ideology is about the ideological influences behind the obvious biases EA makes public: free-market economics, apolitical charity, the perspective of the wealthy. EAs are visibly ideologues to the extent that they repeat or insinuate the narratives commonly heard from ideologues on the right side of the US political spectrum. They tend to:
discount climate change
distrust regulation and the political left
extol or expect the free market’s products to save us (TUA, AGI, …)
be blind to social justice concerns
see the influence of money as virtuous; they trust money, in betting and in life
admire those with good betting skills and compare most decisions to bets
see corruption in government or bureaucracy but not in for-profit business organizations
emphasize individual action and the virtues of enabling individual access to resources
I see those communications made public, and I suspect they come from the influences defining the 2nd and 3rd waves of the EA movement, rather than the first, except maybe the influence of probabilism and its Dutch bookie thought experiment? But folks working in the software industry, where just about everyone sees themselves as an individual but is treated like a replaceable widget in a factory, know to walk a line, because they’re still well-paid. There’s not a strong push toward unions, worker safety, or ludditism. Social justice, distrust of wealth, corruption of business, failures of the free market (for example, regulation-requiring errors or climate change): these are taboo topics among the people I’m thinking of, because raising them can hurt their careers. But those people will get stressed over the next 10-20 years as AI takes over. As will the rest of the research community in Effective Altruism.
Despite the supposed rigor exercised by EAs in their research, the web of trust they spin across their research network is so strong that they discount most outside sources of information, and they even rely on a seniority-skewed voting system (karma) on their public research hub to inform them of what is good information. I can see it in climate change discussions. They are skeptical of information from outside the community. Their skepticism should face inward, given their commitments to rationalism.
And the problem of rationalized selfishness is obvious, big-picture obvious. I mean obvious in every way, in every lesson, in every major narrative about every major ethical dilemma inside and outside religion: the knowledge boils down to selfishness (including vices) versus altruism. Lessons about rationalism should promote a strong attempt to work against self-serving rationalization (as in the Scout Mindset, but with explicit dislike of evil), should see that rationalization as stemming from selfishness, and should provide an ethical bent that works through the tension between self-serving rationalization and genuine efforts toward altruism so that, if nothing else, integrity is preserved and evil is avoided. But that never happened among EAs.
However, they did manage to get upset about the existential guilt involved in self-care, for example, when they could be giving their fun dinner-out money to charity. That showed a lack of introspection and an easy surrender to conveniently uncomfortable feelings. And they committed themselves to cost-effective charitable works, and to developing excellent models of uncertainty as understood through situations amenable to metaphors involving casinos, betting, cashing out, and bookies. Now, I can’t see anyone missing that many signals of a selfish but naive interest in altruism going wrong. Apparently, those signals have been missed. Not only that, but a lot of people who aren’t interested in the conceptual underpinnings of EA “the movement” have been attracted to the EA brand. That’s OK, so long as all the talk about rationalism and integrity and Scout Mindset is just talk. If so, the usual business can continue. If not, if the talk is not just smoke and mirrors, the problems surface quickly, because EA confronts people with its lack of rationality, integrity, and Scout Mindset.
I took it as a predictive indicator that EAs discount critical thinking in favor of their own brand of rationalism, one that to me lacks common sense (for example, conscious “updating” is bizarrely inefficient as a cognitive effort). Further, their lack of interest in climate destruction was a good warning. Then came the strange decision to focus ethical decisions on an implausible future and the moral status of trillions of possibly existent future people. The EA community’s shock and surprise at the collapse of SBF and FTX has been further indication of a lack of real-world insight and connection to working streams of information in the real world.
It’s very obvious where the tensions are, that is, between the same things as usual: selfishness/vices and altruism. BTW, I suspect that no changes will be made in how funders are chosen. Furthermore, I suspect that the denial of climate change is more than ideology. It will reveal itself as true fear and a backing away from fundamental ethical values as time goes on. I understand that. If the situation seems hopeless, people give up their values. The situation is not hopeless, but it challenges selfish concerns. Valid ones. Maybe EA’s have no stomach for true existential threats. The implication is that their work in that area is a sham or serves contrary purposes.
It’s a problem because real efforts are diluted by the ideologies involved in the EA community. Community is important because people need to socialize. A research community emphasizes research, and norms for research communities are straightforward. A values-centered community is … suspect: prone to corruption, to misunderstandings about what community entails, and to reprisals and criticism when normative values are not served by the community day-to-day. Usually, communities attract the like-minded. You would expect or even want homogeneity in that regard, not complain about it.
If EA is just about professionalism in providing cost-effective charitable work, that’s great! There’s no community involved; the values are memes and marketing, and the metrics are just those involved in charity, not the well-being of community members or their diversity.
If it’s about research products, that’s great! Development of research methods and critical thinking skills in the community needs improvement.
Otherwise, comfort, ease, relationships, and good times are the community requirements. Some people can find that in a diverse community that is values-minded. Others can’t.
A community that’s about values is going to generate a lot of churn about stuff that you can’t easily change. You can’t change the financial influences, the ideological influences, (most of) the public claims, and certainly not the self-serving rationalizations, all other things equal. If EA had ever gone down the path of exploring the trade-offs between selfishness and altruism with more care, it might have had hope of being a values-centered community. I don’t see EAs pulling that off at this point, if only for their lack of interest or understanding. It’s not their fault, but it is their problem.
I favor dissolution of all community-building efforts and a return to research and charity-oriented efforts by the EA community. It’s the only thing I can see that the community can do for the world at large. I don’t offer that as some sort of vote, but instead as a statement of opinion.
Here’s some information:
-
the approval process for the SPM in the 2014 AR5 Synthesis Report is line-by-line, involving the world governments participating in the IPCC. The Synthesis Report’s Topic sections get a section-by-section discussion by world governments. That includes petro-states. The full approval process is documented in the IPCC Fact Sheet. The approval and adoption process is political. The Acceptance process used for full reports is your best choice for unfiltered science.
-
The AR5 report you have been reading was put out 8 years ago. That is a long time in climate science. During that time, there’s been tracking of GHG production relative to stated GHG-reduction commitments. There’s also new data from actual measurements of extreme weather events, tipping point systems, and carbon sinks and sources. If you like the synthesis report or believe in its editing process, the AR6 Synthesis Report is due out. Meanwhile, there are ongoing workshops available to watch online, plenty of well-known papers, and other options too. Here’s a discussion of a massive signatory list attached to a declaration of climate emergency in 2022. Climate scientists are engaged in publicly sharing information about climate change, so there are lots of places to find valid information.
-
Are we on a pathway to RCP 8.5? Well, climate researchers out of Woods Hole wrote a PNAS paper about this in 2020, challenging projections from the IEA about our being on the 4.5 heating pathway. The paper indirectly contradicts Halstead’s reliance on RCP 4.5 as our expected pathway. There are letters back and forth about it available to browse on the PNAS website, basically about the contributions of changing land carbon sinks. However, climate scientists studying global warming typically underestimate dangers and negative outcomes. For example, after Bolsonaro, it’s plausible the Amazon could easily be gone by 2050 just because of corruption and mismanagement, but that’s not really mentioned in the Woods Hole analysis.
-
If you want to examine interesting scenarios for real purposes (for example, to advance a 30-year business agenda, to project plans for government or civilization out to 2100 or even just 2050, because you’re really into supporting a particular form of energy production, or because you think you’ll live to 2100, which is plausible), then consider relying on scenarios and predictive indicators of socioeconomic pathways and GHG production, rather than on probabilistic forecasts. You’ll want information that is within a couple years of today. For example, did you know that it rained on the summit of Greenland in 2021 for the first time in recorded history? It’s a predictive indicator of continuing increases in melting rates for Greenland this century. The rain kept up for hours. What if it lasted for days, regularly, year after year? Larger computer models used by the IPCC to predict sea level rise don’t factor in physical processes like melt pools and drainage under glaciers, though according to Jason Box, a noted climate researcher who has spent a lot of time studying Greenland, physical processes play a big role in Greenland ice melt. There’s been rain on parts of Greenland for a while (in my understanding, mostly toward the coasts), but now we should expect something more.
-
You talked about nuclear power as a potential source of energy for the future. Could it be financed and scaled to replace fossil fuel energy production in power plants by 2050, across the world? I believe not, but if you have information to the contrary, I’m interested. Right now, I believe that all renewables are a sideshow, cheap or not, until we grasp that population decline and overall energy consumption decline are the requirements for keeping our planet livable for our current population. I support oil, gas, and coal use as part of an energy conservation plan. It’s what we use now. We won’t create new infrastructure to support radically different energy production at higher levels without increasing our GHG production, so better to keep the infrastructure we have but lessen our use of it. A lot.
-
Sea level rise. AR6 offers revised estimates, and NASA offers its conservative summary estimates of that data. You can play with the ranges under different scenarios; I think the projections are all too low, even assuming humanity does the right thing in basic respects and is lucky in many ways.
-
You seem genuinely interested in why somebody was calling climate change an existential risk and then offering the AR5 Synthesis report as evidence. Well, maybe that’s what the person managed to read. It’s short, nontechnical, written for policymakers. And now it’s outdated. If you don’t find it satisfying, keep looking for more information. You’ll either decide there’s something to worry about or form a case for why the climate emergency is mostly bunk.
I hope you found some of this information useful.
-
EAs don’t quantify much or very well
You seem to genuinely want to improve AGI Safety researcher productivity.
I’m not familiar with resources available on AGI Safety, but it seems appropriate to:
develop a public knowledge-base
fund curators and oracles of the knowledge-base (library scientists)
provide automated tools to improve oracle functions (of querying, summarizing, and relating information)
develop ad hoc research tools to replace some research work (for example, to predict hardware requirements for AGI development).
NOTE: the knowledge-base design is intended to speed up the research cycle, skipping the need for the existing hodge-podge of tools in place now. A toy sketch of an entry format follows the purpose list below.
The purpose of the knowledge-base should be:
goal-oriented (for example, produce a safe AGI soon)
with a calendar deadline (for example, by 2050)
meeting specific benchmarks and milestones (for example, an “aligned” AI writing an accurate research piece at decreasing levels of human assistance)
well-defined (for example, achievement of AI human-level skills in multiple intellectual domains with benevolence demonstrated and embodiment potential present)
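To make the shape of such a knowledge-base concrete, here is a minimal sketch of what one curated entry might look like. The `KBEntry` structure and its field names are my own illustration, not an existing schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class KBEntry:
    """One curated entry in a hypothetical AGI Safety knowledge-base."""
    title: str
    summary: str                                         # enforced documentation standard
    tags: list[str] = field(default_factory=list)        # controlled vocabulary
    citations: list[str] = field(default_factory=list)   # links to source material
    relevance_goal: str = "safe AGI by 2050"             # which project goal the entry serves
    review_by: date | None = None                        # when a curator must revisit it

    def meets_documentation_standard(self) -> bool:
        # a toy standard: every entry needs a summary, a tag, and a citation
        return bool(self.summary and self.tags and self.citations)

# usage
entry = KBEntry(
    title="Hardware requirements for AGI development",
    summary="Survey of compute projections relevant to AGI timelines.",
    tags=["forecasting", "hardware"],
    citations=["https://example.org/some-paper"],
    review_by=date(2025, 1, 1),
)
assert entry.meets_documentation_standard()
```

The point of fields like `relevance_goal` and `review_by` is that goal-orientation and deadlines become properties of the knowledge-base itself, not just of the project plan.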
Let’s consider a few ways that knowledge-bases can be put together:
-
1. the forum or wiki: what LessWrong and the EA Forum do. There’s haphazard:
tagging
glossary-like list
annotations
content feedback
minimal enforced documentation standards
no enforced research standards
minimal enforced relevance standards
poor-performing search.
WARNING: Forum posts don’t work as knowledge-base entries. On this forum, you’ll only find some information by searching the author’s name, and only if you know that the author wrote it and you’re willing to search through hundreds of entries by that author. I suspect, from my own time searching with different options, that most of what’s available on this forum is not read, cited, or easily accessible. The karma system does not enforce documentation, research, or relevance standards. The combination of the existing search and the karma system makes for a less effective research knowledge-base.
-
2. the library: library scientists are trained to:
build a knowledge-base.
curate knowledge.
follow content development to seek out new material.
acquire new material.
integrate it into the knowledge-base (indexing, linking).
follow trends in automation.
assist in document searches.
perform as oracles, answering specific questions as needed.
TIP: Library scientists could help any serious effort to build an AGI Safety knowledge-base and automate use of its services.
-
3. with automation: You could take this forum and add automation (either software or paid mechanical turks) to:
write summaries.
tag posts.
enforce documentation standards.
annotate text (for example, annotating any prediction statistics offered in any post or comment).
capture and archive linked multimedia material.
link wiki terms to their use in documents.
verify wiki glossary meanings against meanings used in posts or comments.
create new wiki entries as needed for new terms or usages.
NOTE: the discussion-forum format creates redundant information rather than better citations, and lets material diverge from the specific purposes and topics intended for the forum. A forum is not an ideal knowledge-base, and the karma voting format reflects trends, but the forum is a community meeting point with plenty of knowledge-base features for users to work on as their time and interest permit. It hosts interesting discussions. Occasionally, actual research shows up on it.
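As a toy illustration of the cheaper end of that automation (no AI required), a script could propose tags from a controlled vocabulary and flag posts that fail a documentation standard; the vocabulary, keywords, and thresholds below are invented for the example.

```python
# Toy moderation pass: propose tags and flag documentation-standard failures.
CONTROLLED_VOCAB = {
    "interpretability": ["circuits", "probe", "activation"],
    "forecasting": ["timeline", "compute", "prediction"],
    "governance": ["policy", "regulation", "treaty"],
}

def propose_tags(post_text: str) -> list[str]:
    """Suggest tags whose keywords appear in the post."""
    text = post_text.lower()
    return [tag for tag, keywords in CONTROLLED_VOCAB.items()
            if any(kw in text for kw in keywords)]

def standard_violations(post_text: str) -> list[str]:
    """Return reasons a post fails the (toy) documentation standard."""
    problems = []
    if len(post_text.split()) < 100:
        problems.append("under 100 words; flag for curator review")
    if "http" not in post_text:
        problems.append("no citations or links")
    return problems
```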
-
4. with extreme automation: A tool like ChatGPT is unreliable and prone to errors (for example, when writing software), but when guided and treated as imperfect, it can perform in an automated workflow. For example, it can:
provide text summaries.
be part of automation chains that:
provide transcripts of audio.
provide audio of text.
provide diagrams of relationships.
graph data.
draw scenario pictures or comics.
act as a writing assistant or editor.
TIP: Automation is not something people should employ only by choice. For example, someone who chooses to use an accounting ledger and a calculator rather than Excel slows down the whole accounting team.
CAUTION: Once AI enter the world of high-level concept processing, their errors have large consequences for research. Their role should be to assist human tasks, as cognitive aids, not to replace humans, at least until they are treated as having potential equivalent to humans and are therefore subject to the same performance requirements and measurements as humans.
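A minimal sketch of what “guided and treated as imperfect” can mean in practice: `call_model` below is a placeholder for whatever LLM API is used (not a real client), and the output is rejected until it passes cheap checks, with a human fallback.

```python
def call_model(prompt: str) -> str:
    """Placeholder for an LLM API call (e.g., a ChatGPT-style endpoint)."""
    raise NotImplementedError

def summarize_with_checks(document: str, max_attempts: int = 3) -> str:
    """Request a summary, but verify it cheaply before accepting it."""
    for _ in range(max_attempts):
        summary = call_model(f"Summarize in under 150 words:\n\n{document}")
        # crude acceptance tests: non-empty output within the length bound
        if summary.strip() and len(summary.split()) <= 150:
            return summary
    # fall back to a human in the loop rather than trusting a bad output
    return "NEEDS HUMAN REVIEW"
```

The design choice matches the CAUTION above: the automation assists, and anything it cannot verify gets routed back to a human.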
Higher level analysis
The ideas behind improving cost-effectiveness of production include:
standardizing: take a bunch of different work methods, find the common elements, and describe the common elements as unified procedures or processes.
streamlining: examining existing work procedures and processes, identifying redundant or non-value-added work, and removing it from the workflow by various means.
automating: using less skilled human or faster/more reliable machine labor to replace steps of expert or artisan work.
Standardizing research is hard, but AGI Safety research seems disorganized, redundant, and slow right now. At the highest chunk level, you can partition AGI Safety development into education and research, and partition research into models and experiments.
education
research models
research experiments
The goal of the knowledge-base project is to streamline education and model research in the AGI Safety area. Bumming around on LessWrong or finding someone’s posted list of resources is a poor second to a dedicated online curated library that offers research services. The goal of additional ad hoc tools should be to automate what researchers now do as part of their model development. A further goal would be to automate experiments toward developing safer AI, but that goes outside the scope of my suggestions.
Caveats
In plain language, here are my thoughts on pursuing a project like the one I have proposed. Researchers in any field worry about grant funding, research trends, and professional reputation. Doing anything quickly is going to cross purposes with others involved, or ostensibly involved, in reaching the same goal. The more well-defined the goal, the more people will jump ship, want to renegotiate, or panic. Once benchmarks and milestones are added, financial commitments get negotiated and the threat of funding bottlenecks ripples across the project. As time goes on, the funding bottlenecks manifest, or internal mismanagement blows up the project. This is a software project, so the threat of failure is real. It is also a research project without a guaranteed outcome of either AGI Safety or AGI, adding to the failure potential. Finally, the field of AGI Safety is still fairly small and not connected to long-term income potential, meaning that researchers might abandon an effective knowledge-base project for lack of interest, perhaps claiming that the problem “solved itself” once AGI become mainstream, even if no AGI Safety goals were actually accomplished.
I’m curious what EA projects are considered “high status”. I have no idea, and I don’t believe that all your other readers do either.
Resources on Climate Change
IPCC Resources
-
The 6th Assessment Reports
The Summary for Policymakers (Scientific Basis Report, Impacts Report, Mitigation Report) NOTE: The Summaries for Policymakers are approved line-by-line by representatives from participating countries. This censors relevant information from climate scientists.
The Synthesis Report: this is pending in 2023
-
Key Climate Reports: The 6th (latest) Assessment Reports and additional reports covering many aspects of climate, nature, finance related to climate change prevention, mitigation and adaptation.
Emissions Gap Report: the gap refers to that between pledges and actual reductions as well as pledges and necessary targets.
Provisional State Of The Climate 2022: full 2022 report with 2022 data (reflecting Chinese and European droughts and heat waves) still pending.
United in Science 2022: A WMO and UN update on climate change, impact, and responses (adaptation and mitigation).
and many more; see the IPCC website for the full list.
-
Archive of Publications and Data: all Assessment Reports prior to the latest round. In addition, it contains older special reports, software and data files useful for purposes relevant to climate change and policy.
TIP: The IPCC links lead to pages that link to many reports. Assessment reports from the three working groups contain predictions with uncertainty levels (high, medium, low), and plenty of background information, supplementary material, and high-level summaries. EAs might want to start with the Technical Summaries from the latest assessment report and drill down into full reports as needed.
Useful Websites and Reports
Noteworthy Papers
-
Climate change is increasing the risk of a California megaflood, 2022
-
Climate endgame: exploring catastrophic climate change scenarios, 2022
-
Economists’ erroneous estimates of damages from climate change, 2021
-
Collision course development pushes Amazonia toward its tipping point, 2021
-
Permafrost carbon feedbacks threaten global climate goals, 2021
-
The appallingly bad neoclassical economics of climate change, 2020
-
Thermal bottlenecks in the lifecycle define climate vulnerability of fish, 2020
-
Comment: Climate Tipping Points—Too Risky to Bet Against, 2019
-
The Interaction of climate change and methane hydrates, 2017
-
High risk of extinction of benthic foraminifera in this century due to ocean acidification, 2013
-
Global Human Appropriation Of Net Primary Production Doubled In the 20th Century, 2012
News and Opinions and Controversial Papers
-
[Question] How prominent is EA in animal advocacy?
Great fun post!
I read the whole post. Thanks for your work. It is extensive. I will revisit it, more than once. You cite a comment of mine, a listing of my cringy ideas. That’s fine, but my last name is spelled “Scales”, not “Scale”. :)
About scout mindset and group epistemics in EA
No. Scout mindset is not an EA problem. Scout and soldier mindsets partition the space of mindsets and prioritize truth-seeking differently. To reject scout mindset is to accept soldier mindset.
Scout mindset is intellectual honesty. Soldier mindset is not. Intellectual honesty aids epistemic rationality. Individual epistemic rationality remains valuable, whether in service of group epistemics or not. Scout mindset is a keeper. EA suffers from soldier mindset, as you repeatedly identified, though not by name. Soldier mindset hinders group epistemics.
We are lucky. Julia Galef has a “grab them by the lapel and shake them” interest in intellectual honesty. EA needs scout mindset.
Focus on scout mindset supports individual epistemics. Yes.
scout mindset
critical thinking skills
information access
research training
domain expertise
epistemic challenges
All those remain desirable.
Epistemic status
EAs support epistemic status announcements to serve group epistemics. Any thoughts on epistemic status? Did I miss that in your post?
Moral uncertainty
Moral uncertainty is not an everyday problem. Or rather: remove selfish rationalizations, and then it won’t be. Or revisit the revised uncertainty, I suppose.
Integrity
Integrity combines:
intellectual honesty
introspective efficacy
interpersonal honesty
behavioral self-correction
assess->plan->act looping efficacy
Personal abilities bound those behaviors. So do situations. For example, constantly changing preconditions of actions bound integrity. Another bound is your interest in interpersonal honesty. It’s quite a lever to move yourself through life, but it can cost you.
Common-sense morality is deceptively simple
Common-sense morality? Not much eventually qualifies. Situations complicate action options. Beliefs complicate altruistic goals. Ignorance complicates option selection. Internal moral conflicts reveal selfish and altruistic values. Selfishness vs altruism is common-sense moral uncertainty.
Forum karma changes
Yes. Let’s see that work. A toy sketch of the scoring difference follows this list.
Allow alternate karma scoring. One person one vote. As a default setting.
Allow karma-ignoring display. On homepage. Of Posts. And latest comments. As a setting.
Allow hide all karma. As a setting.
Leave current settings as an alternate.
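A sketch of what the scoring change amounts to; the vote records here are invented for illustration, with each vote carrying the weight the current karma system would give it.

```python
# Each vote: (karma_weight, direction). Weighted scoring is the current,
# seniority-skewed behavior; one-person-one-vote is the proposed default.
def weighted_karma(votes: list[tuple[int, int]]) -> int:
    return sum(weight * direction for weight, direction in votes)

def one_person_one_vote(votes: list[tuple[int, int]]) -> int:
    return sum(direction for _, direction in votes)

votes = [(8, +1), (1, +1), (4, -1)]   # high-karma up, new-user up, mid-karma down
print(weighted_karma(votes))       # 5
print(one_person_one_vote(votes))  # 1
```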
Diversifying funding sources and broader considerations
Tech could face lost profits in the near future. “Subprime Attention Crisis” by Tim Hwang suggests why. An unregulated ad bubble will gut Silicon Valley. KTLO (keeping the lights on) will cost more, percentage-wise. Money will flow to productivity growth without employment growth.
Explore income, savings, credit, bankruptcy and unemployment trends. Understand the implications. Consumer information will be increasingly worthless. The consumer class is shrinking. Covid’s UBI bumped up Tech and US consumer earnings temporarily. US poverty worsened. Economic figures now mute reality. Nevertheless, the US economic future trends negatively for the majority.
“Opportunity zones” will be a predictive indicator despite distorted economic data, if they ever become reality. There are earlier indicators. Discover some.
Financial bubbles will pop, plausibly simultaneously. Many projects will evaporate. Tech’s ad bubble will cost the industry a lot.
Conclusion
Thanks again for the post. I will explore the external links you gave.
I offered one suggestion (among others) in a red team last year: to prefer beliefs to credences. Bayesianism has a context alongside other inference methods. IBT seems unhelpful, however. It is what I advocate against, but I didn’t have a name for it.
Would improved appetite regulation, drug aversion, and kinesthetic homeostasis please our plausible ASI overlords? I wonder. How do you all feel about being averse to alcohol, disliking pot, and indifferent to chocolate? The book “Sodium Hunger: The Search for a Salty Taste” reminds me that cravings can have a benefit, in some contexts. However, drugs like alcohol, pot, and chocolate would plausibly get no ASI sympathy. Would the threat of intelligent, benevolent ASI that take away interest in popular drugs (e.g., through bodily control of us) be enough to halt AI development? Such a genuine threat might defeat the billionaire-aligned incentives behind AI development.
By the way, would EAs enjoy installing sewage and drinking water systems in small US towns 20-30 years from now? I am reminded of “The End Of Work” by Jeremy Rifkin. Effective altruism will be needed from NGOs working in the US, I suspect.
There is some mainstream controversy about economic estimates of damages from climate destruction. You might find more contrast and differences if you look outside EA and economics for information on climate destruction.
You distinguish catastrophic impacts from existential impacts. I’m conflicted about the distinction you draw, but I noted this conflict in Toby Ord’s discussion as well; he seems to think a surviving city is sufficient to consider humanity “not extinct”. While I agree with you all, I think these distinctions do not motivate many differences in pro-active response. That is, whether a danger is catastrophic, existential, or extinction-level, it’s still pretty bad, and recommendations for change or effort to avoid lesser dangers are typically in line with the recommendations to avoid greater dangers. Furthermore, a climate catastrophe does increase the risk of human extinction, considering that climate change worsens progressively over decades, even after all anthropogenic GHG production has stopped. I would like to learn more about your thoughts on those differences, particularly how they influence your ethical deliberations about policy changes in the present.
I’m interested in your critical thoughts on:
typical application or interpretation of Bayesianism in EA.
suitability of distinct EA goals: toward charitable efforts, AGI safety, or longtermism.
earning to give and with respect to what sorts of jobs.
longevity-control, personal choice over how long you live, once life-extension is practical.
expected value calculations wrt fanatical conclusions, huge gains and tiny odds.
the moral status of potential future people in the present.
the value of risk aversion versus commitment to minuscule chances of success
any differing views on technological stagnation or value lock-in from longtermism
your thoughts on cluster thinking as introduced by Holden Karnofsky
the desirability and feasibility of claims to influence or control future people’s behavior
the positive nature of humanity and people (e.g., are we innately “good”?)
priority of avoiding harm to a percentage minority when that harm benefits the majority
the moral status of sentient beings and discounting of moral status by species
moral uncertainty as a prescriptive ethical approach
I’ve done my best on this forum to distinguish my point of view from EAs wherever it was obvious that I disagreed. I’ve also followed the works of others here who hold substantially different points of view than the EA majority (for example, about longtermism). If your disagreements are more subtle than mine, or you would disagree with me on most things, I’m not one to suggest topics that you and I agree on. But the general topics can still be addressed even though we disagree. After all, I’m nobody important but the topics are important.
If you do not take an outsider’s point of view most of the time, then there’s no need to punch things up a bit, but more a need to articulate the nuanced differences you have as well as advocate for the EA point of view wherever you support it. I would still like to read your thoughts from a perspective informed by views outside of EA, as far outside as possible, whether from philosophers that would strongly disagree with EA or from other experts or fields that take a very different point of view than EA’s.
I have advocated for an alternative approach to credences: treat them as binary beliefs, or as subject to constraints (nuance) as one gains knowledge that contradicts some of their elements. And an alternative approach to predictions, one of preconditions leading to consequences, where the predictive work is identifying preconditions that have typical consequences. Identifying preconditions in that model involves matching actual contexts to prototypical contexts, with the type of match determining the plausible, expected, or optional (action-decided) futures predictable from the match’s result. My sources for that model were not typical for the EA community, but I did offer it here.
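A toy sketch of that precondition-matching model; the prototypes, features, and match rules below are invented to show the shape of the idea, not drawn from my sources.

```python
# Prototypical contexts map known preconditions to typical consequences.
PROTOTYPES = {
    "ice-sheet rain": {
        "preconditions": {"summit rain", "warming summers"},
        "consequence": "accelerating melt",
    },
    "ad-revenue bubble": {
        "preconditions": {"unpriced ad inventory", "falling click-through"},
        "consequence": "tech revenue contraction",
    },
}

def predict(observed: set[str]) -> list[tuple[str, str]]:
    """Match observed context features against prototypes.
    Full match -> expected future; partial match -> plausible future.
    (Optional, action-decided futures would need the agent's own options.)"""
    results = []
    for proto in PROTOTYPES.values():
        overlap = observed & proto["preconditions"]
        if overlap == proto["preconditions"]:
            results.append((proto["consequence"], "expected"))
        elif overlap:
            results.append((proto["consequence"], "plausible"))
    return results

print(predict({"summit rain", "warming summers", "falling click-through"}))
# [('accelerating melt', 'expected'), ('tech revenue contraction', 'plausible')]
```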
If you can do similar with knowledge of your own, that would interest me. Any tools that are very different but have utility are interesting to me. Also how you might contextualize current epistemic tools, as I said before, interests me.
Thanks! :)
All that’s required in all those cases is that you believe that some population will exist who benefits from your efforts.
It’s when the existence of those people is your choice that it no longer makes sense to consider them to have moral status pre-conception.
Or should I feel guilty that I deprived a number of beings of life by never conceiving children in situations where I could have?
It’s everyone else having children that creates the population that I consider to have moral status. So long as they keep doing it, the population of beings with moral status grows.
The real questions are whether:
it is moral to sustain the existence of a species past the point of causing harm to the species’ current members
the act of conceiving is a moral act
What do you think?
[Question] Do EA folks want AGI at all?
[Question] Do EA folks think that a path to zero AGI development is feasible or worthwhile for safety from AI?
EDIT: Made edits to this one day later, for clarity and to add one paragraph.
People muddle through life, adopting imperfect solutions routinely, iterating through some approaches, abandoning others, learning about new possibilities, sometimes seeing solutions clearly only in hindsight. The point is, if they take a problem seriously, then they take steps to solve it. Others can evaluate the effort or offer assistance, trying to redirect their efforts or save them from the consequences of a poor solution. Or they might be sold on ineffective solutions by others, dooming them to a worse path if they go along. Rarely do problems just solve themselves. Whatever the solutions, they start with taking the problem seriously.
Despite doom-scrolling and predicting the end of the world, taking a problem seriously is not that common. Preppers do it, some politicians do it (not that many), and plenty of think tanks, rich people, and private orgs do it. From there you see solutions to existential risk, regardless of the quality of the solutions.
Between not taking a problem seriously and doing the selfish thing if the problem is taken seriously, there’s not a lot of wiggle room for altruistic, locally or globally effective solutions. For example, while people are still offering solutions to the climate crisis, it’s been a crisis since the 1980s, and it’s been under analysis since then. The solutions are not that different now, and that is actually worrying, because the situation has worsened, both in reality and in its implications, since the 1980s. Despite that, you can see developed countries don’t take it seriously, there’s corruption and shilling at all levels of policy and strategy around it, and the most widely cited sources mostly get ignored (i.e., the IPCC).
If the world’s countries are still in denial in 20-30 years, when GAST (global average surface temperature) is at 2C, when we’ve seen many novel extreme weather events, and when we know to expect far worse near-term consequences for the biosphere than now, then we will know that few feasible solutions to our extinction crisis remain. As a scenario exercise, you can examine those remaining feasible ones for any that seem worthwhile. You’ll be disappointed.
I took my time deciding that saving humanity was not, per se, an ethical requirement of being human. I consider existing people to have moral status, but I don’t see the ethicality in guaranteeing lots of future people are conceived. In seeking moral clarity, I have reduced the solutions to existential risk to those that seem desirable to me. I don’t consider it spiteful, more just disinterested.
EDIT: I also came to realize that life is not a party. I don’t mean that I was once some lazy party animal, or that life should be split between parties and hard work. I mean that life lived properly wouldn’t offer many sustainable opportunities for frivolous fun if one is not already fairly happy and safe in a supportive community. If you take away vices, a poison to culture, society, and psychological health, frivolous fun becomes harder to create on demand. This shifts the burden of life satisfaction onto maintaining desirable circumstances on a daily basis, or increases the effort demanded to gain those circumstances. And that can be a lot of effort over a long time period. I suspect that living in such a state of difficulty, when tasked with finding or maintaining happiness without vices (for example, without recreational drugs of various types, modern distractions, and novelties that we consider harmless), could be part of the solution to humanity’s general problems as a species. However, I suspect anyone from today’s society would find that alternative society undesirable, and so would not form such a goal as their future. If that is so, then we ignore the only option that seems available to us: the boring option of cleaning up our behavior and ending our indulgence in our vices. And avoiding the actual solution is a typical response to a lot of problems. “Of course we could stop problem X, but then we’d have to stop doing Y, and we like doing Y, so let’s pretend Y doesn’t matter and solve problem X some other way!” Which typically doesn’t work. And so our pathway remains ambiguous, risking dangers from problem X so that we can keep doing vice Y.
Also, the disaster that has been the human response to existential crisis has informed me that being a human who guarantees a future for humanity is actually really difficult: that combination of altruism and selfishness is not sustainable in all circumstances as a societal current. With that conclusion, I can form a different model of how and whether society can and should survive long-term, one built around a society that consistently works toward its own survival and well-being.
In my belief, that society has to:
be small
know its ecological niche and keep it
show a strong altruistic streak, among its own people and toward other species
have plenty of humility about its own future.
stay on Earth in a single location
have no interest in a diaspora
not see itself as deserving to spread or grow
maintain its own population size without difficulty or conflict
have overcome humanity’s worst ills (for example, drug abuse, misogyny, slavery, child abuse, epidemics, and war)
carry on despite setbacks and burdens
For me, it’s less that humanity must survive so it can develop, and more that humanity must develop so that it is worthwhile if it survives. Technology is an essential part of that, mainly in its ability to raise life satisfaction and overcome humanity’s ills.
Obviously, if people overcome simple denial, they’ll pursue solutions of some sort. A focus on available solutions brings the discussion back to whether the solutions are worthwhile.
A book “The Corporation” by Joel Bakan suggests that corporations are analogous to psychopaths. The book and an accompanying documentary and set of interviews with various economists, activists, CEO’s, politicians, and intellectuals shared many perspectives on corporations as psychopathic or a source of danger to humanity, the planet, etc. The book was published in 2003, but the perspective goes back further, of course.
Including for this contest; we’d love to hear general feedback, and are also interested in hearing about any cases where a submission (or our reviews) changed your mind or actions. You might also want to tell the author(s) of the submission if this happens.
Hi, Lizka.
I’m curious about your mention of reviews. Were reviews written for each contest submission?
Simple and useful, thanks.
Lots of people on this forum have struggled with the feeling of being compromised. Since FTX. Or Leverage. Or Guzey. Or Thiel. Or Singer. Or Mill or whatever.[4] But this is the normal course of a life, including highly moral lives.… But the greater part of it being normal is that all action incurs risk, including moral risk.
It’s not correct to say that action deserves criticism, but maybe correct to say that action receives criticism. The relevant distinction to make is why the action brought criticism on it, and that is different case-by-case. The criticism of SBF is because of alleged action that involves financial fraud over billions of dollars. The criticism of Singer with regard to his book Practical Ethics is because of distortion of his views on euthanasia. The criticism of Thiel with regard to his financial support of MIRI is because of disagreements over his financial priorities. And I could go on. Some of those people have done other things deserving or receiving criticism. The point is that whether something receives criticism doesn’t tell you much about whether it deserves criticism. While these folks all risk criticism, they don’t all deserve it, at least not for the actions you suggested with your links.
Hi, John.
I don’t have time in the next several days to give your write-up the attention it deserves, but I hope to study it as a learning opportunity and to expand my grasp of general arguments around what I call steady-state climate change, that is, climate change without much contribution from tipping points this century and without strong impacts at even higher temperatures (e.g., 3-4C). I appreciate the structure of your report, by the way; it lets a reader quickly drill down to sections of interest. It is clearly written.
At the moment, I am considering your analysis of permafrost and methane contributions to GAST changes. I have a larger number for total carbon in permafrost than you, 1.5 Tt of carbon, but now have to go through references to reconcile that number with yours. Your mention of an analysis from USGS deserves a read through articles from the reference you gave, and I am attempting that now.
There are several parameters involved (only some independent), to do with:
source type (anaerobic decomposition, free gas deposit, methane hydrate dissolution),
source size,
source depth and layering,
rate of release (obviously dependent on other parameters),
geographic location (Gulf of Mexico versus Arctic ice shelf),
temperature gradient (at a location),
water column height (near-shore vs slope),
in deciding whether methane (edit: carbon in methane) reaches the atmosphere as methane, as carbon dioxide, or not at all, and over what time period.
The significance of field observations over the last 10 years, and differences between particular regions (e.g., Arctic seas), should be taken into account. Before reviewing counterarguments, I tend to accept uncritically specialist conclusions that factor in these parameters, but now that you’ve mentioned one parameter, aerobic bacteria acting on methane, as implying conclusions contrary to mine, I should delve deeper into how these parameters interact.
If you want to offer a comment about Greenland’s ice sheet, and its potential contribution to sea level rise this century, I am curious to check sources with you and do more reconciling (or at least partitioning) of references. I’ve seen reports that changes to Greenland’s ice sheet are accelerating and lead to estimates of sea level rise over the next 80 years that are higher than, say, 50 cm (more like meters, actually), but I would like to know more from you.
In general, my observation is that strong drivers of change to specific tipping points haven’t found their way into the climate models used by the IPCC (for example, the physical processes driving some Greenland ice melt). They might at some point.
BTW, I did take a read through the comments here, and consider the mentions of analyses of systemic and cascading risks to be useful. I hope you won’t object if I ask a few questions about those risks, just to understand your perspective on those models. However, if you consider those questions to be out of scope or not of interest, let me know, and I’ll hold off.
Thank you.