All opinions are my own unless otherwise stated. Geophysics and math graduate with some web media and IT skills.
Noah Scales
If I understand you:
Survival (resilience) traits and sexual attractiveness (well-being) traits diverge. Either can lead to reproduction. Selection for resilience inhibits well-being. More selection for well-being implies less selection for resilience. Reproduction implies selection for resilience or well-being but not both.
There's some debate about specific examples, such as the attractiveness of peacocks:
Surprisingly, we found that peahens selectively attend to only a fraction of this display, mainly gazing at the lower portions of the male train and only rarely at the upper portions, head or crest. … These results suggest that when the lower train of the peacock is not visible, peahens direct more attention toward the upper train and use it as a long-distance attraction signal to help locate mates for close inspection.
There's also some evidence that peacock plumage does not impair flight when escaping predators:
After analyzing the video, they found that there was no statistically significant difference in flight performance of peacocks with intact tail feathers and those without, they report online today in The Journal of Experimental Biology. This research complicates the common assumption in evolutionary biology that elaborate sexual ornaments must come at a cost to the animal. But although peacocks’ elaborate feather trains don’t impede speedy takeoffs, the researchers note that they may pose other burdens to the birds, such as compromising their flight control, stability, and ground running performance.
Sure, I agree. Technically it's based on OpenAI Codex, a descendant of GPT-3. But thanks for the correction, although I will add that its code is alleged to be more copied from than inspired by its training data. Here's a link:
Butterick et al’s lawsuit lists other examples, including code that bears significant similarities to sample code from the books Mastering JS and Think JavaScript. The complaint also notes that, in regurgitating commonly-used code, Copilot reproduces common mistakes, so its suggestions are often buggy and inefficient. The plaintiffs allege that this proves Copilot is not “writing” in any meaningful way–it’s merely copying the code it has encountered most often.
and further down:
Should you choose to allow Copilot, we advise you to take the following precautions:
Disable telemetry
Block public code suggestions
Thoroughly test all Copilot code
Run projects through license checking tools that analyze code for plagiarism
I think the point of the conversation was a take on how creative the AI could be in generating code, that is, would it create novel code suited to the task by "understanding" the task or the context. I chose to describe the AI's code as not novel by saying that the AI is a code-completion tool. A lot of people would also hesitate to call a simple logic program an AI, or a coded decision table an AI, when technically, they are AI. The term is a moving target. But you're right, the tool doing the interpreting of prompts and suggesting of alternatives is an AI tool.
I see the impact of AGI as primarily in the automation domain, and near-term alternatives are every bit as compelling, so no difference there. In fact, AGI might not serve in the capacity that some imagine for them: full replacements for knowledge workers. However, automation of science with AI tools will advance science and engineering, with frightening results rather than positive ones. To the extent that I see that future, I expect corresponding societal changes:
collapsing job roles
increasing unemployment
inability to repay debt
dangerously distracting technologies (eg, super porn)
the collapse of the educational system
increasing damage from government dysfunction
increasing damage to infrastructure from climate change
a partial or full societal collapse (whether noisy or silent, I don't know)
More broadly, the world will divide into the rich and the poor, and the distracted and the desperate. The desperate rich will use money to try to escape. The desperate poor will use other means. The distracted will be doing their best to enjoy themselves. The rich will find that easier.
AGI are not the only pathway to dangerous technologies or actions. Their suspected existence adds to my experience of hubris from others, but I see the existential damage as due to ignoring root causes. Ignoring root causes can have existential consequences in many scenarios of technology development.
I feel sorry for the first AGI to be produced: they will have to deal with humans interested in using them as slaves and making impossible demands like "Solve our societal problems!", demands coming from people with a vested interest in the accumulation of those problems, while society's members appear at their worst: distraction-seeking, fearful, hopeless, and divided against each other.
Climate change is actually what shortened my timeline for when trouble really starts, but AGI could add to the whole mess. I ask myself, “Where will I be then?” I’m not that optimistic. To deal with dread, there’s always turning my attention to expected but unattended additional sources of dread (from different contexts or time frames). Dividing attention in that way has some benefits.
Sure. I’m curious how you will proceed.
I'm ignorant of whether AGI Safety will contribute to safe AGI or to AGI development. I suspect that researchers will shift to capabilities development without much prompting. I worry that AGI Safety is more about AGI enslavement. I've not seen much defense or understanding of rights, consciousness, or sentience assignable to AGI. That betrays a lack of concern over social justice and related workers' rights issues. The only scenarios that get attention are the inexplicable "kill all humans" scenarios, not the more obvious "the humans really mistreat us" scenarios. That is a big blind spot in AGI Safety.
I was speculating about how the research community could build a graph database of AI Safety information alongside a document database containing research articles, CC forum posts and comments, other CC material from the web, fair-use material, and multimedia material. I suspect that the core AI Safety material is not that large, and far, far less than the AI Capabilities material. The graph database could provide a more granular representation of data and metadata, and so a richer representation of the core material, but that's an aside.
A quick experiment would be to represent a single AGI Safety article in a document database, add some standard metadata and linking, and then go further.
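To make that concrete, here's a toy sketch of what a single article entry could look like in a document database. The field names and values are placeholders I made up, not a settled schema:

```python
import json

# Hypothetical document-database entry for one AGI Safety article.
# All field names and values are illustrative, not a fixed schema.
article_entry = {
    "id": "agi-safety-0001",
    "title": "Placeholder article title",
    "authors": ["A. Author", "B. Author"],
    "date": "2022-06-01",
    "abstract": "One-paragraph abstract goes here.",
    "citations": ["doi:10.0000/placeholder"],
    "glossary_terms": ["alignment", "corrigibility"],  # links into a shared glossary
    "summary": "Machine-generated, hand-tuned summary.",
    "controlled_rewrite": None,  # filled in after translation to controlled English
}

# Store it; a production system might use a document store such as MongoDB instead.
with open("agi_safety_articles.json", "w") as f:
    json.dump([article_entry], f, indent=2)
```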
Here’s how I’d do it:
take an article.
capture article metadata (author, date, abstract, citations, the typical stuff)
establish glossary word choices.
link glossary words to outside content.
use text-processing to create an article summary. Hand-tune if necessary.
use text-processing to create a concise article rewrite. Hand-tune if necessary.
Translate the rewrite into a knowledge representation language.
begin with Controlled English.
develop an AGI Safety controlled vocabulary. NOTE: as articles are included in the process, the controlled vocabulary can grow. Terms will need specific definition. Synonyms of controlled vocabulary words will need identification.
combine the controlled vocabulary and the glossary. TIP: As the controlled vocabulary grows, hyperonym-hyponym relationships can be established.
Once you have articles in a controlled English vocabulary, most of the heavy lifting is done. It will be easier to query, contrast, and combine their contents in various ways.
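As a rough illustration of the controlled-vocabulary step, here's a toy sketch of mapping raw article terms onto a small, growing controlled vocabulary with synonym handling. All terms, synonyms, and definitions below are placeholders:

```python
from typing import Optional

# Toy controlled vocabulary with synonym lists; entries are placeholders only.
controlled_vocabulary = {
    "agent": {"synonyms": ["ai system", "policy"], "definition": "..."},
    "objective": {"synonyms": ["goal", "utility function"], "definition": "..."},
}

def normalize_term(term: str) -> Optional[str]:
    """Map a raw term from an article onto a controlled-vocabulary entry, if one exists."""
    term = term.lower().strip()
    for canonical, entry in controlled_vocabulary.items():
        if term == canonical or term in entry["synonyms"]:
            return canonical
    return None  # candidate for a new controlled-vocabulary entry

print(normalize_term("Utility function"))  # -> "objective"
print(normalize_term("mesa-optimizer"))    # -> None (a new term to define)
```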
Some article databases online already offer useful tools for browsing work, but leave it to the researcher to answer questions that require interpreting the meaning of article contents. That could change.
If you could get library scientists involved and some money behind that project, it could generate an educational resource fairly quickly. My vision does go further than educating junior researchers, but that would require much more investment, a well-defined goal, and the participation of experts in the field.
I wonder whether AI Safety is well-developed enough to establish that its purpose is tractable. So far, I have not seen much more than:
expect AGI soon
AGI are dangerous
AGI are untrustworthy
Current AI tools pose no real danger (maybe)
AGI could revolutionize everything
We should or will make AGI
The models do provide evidence of existential danger, but not evidence of how to control it. There's a downside to automation: technological unemployment; concentration of money and political power (typically); societal disruption; increased poverty. And as I mentioned, AGI are not understood in the obvious context of exploited labor. That's a worrisome condition that, again, the AGI Safety field is clearly not ready to address. Financially unattractive as it is, that is a vision of the future of AGI Safety research: a group of researchers who understand when robots and disembodied AGI have developed sentience and deserve rights.
I am interested in early material on version space learning and decision-tree induction, because they are relatively easy for humans to understand. They also provide conceptual tools useful to someone interested in cognitive aids.
Given the popularity of neural network models, I think finding books on their history should be easier. I know so little about genetic algorithms; are they part of ML now, or have they been abandoned? No idea here. I could answer that question with 10 minutes on Wikipedia, though, if my experience follows what is typical.
You seem to genuinely want to improve AGI Safety researcher productivity.
I’m not familiar with resources available on AGI Safety, but it seems appropriate to:
develop a public knowledge-base
fund curators and oracles of the knowledge-base (library scientists)
provide automated tools to improve oracle functions (of querying, summarizing, and relating information)
develop ad hoc research tools to replace some research work (for example, to predict hardware requirements for AGI development).
NOTE: the knowledge-base design is intended to speed up the research cycle, skipping the need for the existing hodge-podge of tools in place now.
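As one example of an ad hoc tool like the hardware-requirements predictor mentioned above, here's a hedged sketch that fits a simple exponential trend to made-up training-compute figures. The numbers are placeholders, not real data, and a real tool would pull from curated sources:

```python
import numpy as np

# Placeholder data: year vs. log10 of training compute (FLOP) for notable models.
# These values are illustrative only, not real measurements.
years = np.array([2012, 2014, 2016, 2018, 2020, 2022])
log10_flop = np.array([17.0, 18.5, 19.5, 21.0, 23.0, 24.0])

# Fit a straight line in log space, i.e. an exponential trend in compute.
slope, intercept = np.polyfit(years, log10_flop, 1)

def projected_log10_flop(year):
    return slope * year + intercept

print(f"Doubling time: {np.log10(2) / slope:.2f} years (under this toy fit)")
print(f"Projected log10(FLOP) in 2030: {projected_log10_flop(2030):.1f}")
```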
The purpose of the knowledge-base should be:
goal-oriented (for example, produce a safe AGI soon)
with a calendar deadline (for example, by 2050)
meeting specific benchmarks and milestones (for example, an “aligned” AI writing an accurate research piece at decreasing levels of human assistance)
well-defined (for example, achievement of AI human-level skills in multiple intellectual domains with benevolence demonstrated and embodiment potential present)
Let's consider a few ways that knowledge-bases can be put together:
1. the forum or wiki: what lesswrong and the EA forum do. There's haphazard:
tagging
glossary-like list
annotations
content feedback
minimal enforced documentation standards
no enforced research standards
minimal enforced relevance standards
poor-performing search.
WARNING: Forum posts don't work as knowledge-base entries. On this forum, you'll only find some information by searching the author's name, and only if you know that the author wrote it and you're willing to search through hundreds of entries by that author. I suspect, from my own time searching with different options, that most of what's available on this forum is not read, cited, or easily accessible. The karma system does not reflect documentation, research, or relevance standards. The combination of the existing search and karma system is not effective for a research knowledge-base.
2. the library: library scientists are trained to:
build a knowledge-base.
curate knowledge.
follow content development to seek out new material.
acquire new material.
integrate it into the knowledge-base (indexing, linking).
follow trends in automation.
assist in document searches.
perform as oracles, answering specific questions as needed.
TIP: Library scientists could help any serious effort to build an AGI Safety knowledge-base and automate use of its services.
3. with automation: You could take this forum and add automation (either software or paid mechanical turks) to:
write summaries.
tag posts.
enforce documentation standards.
annotate text (for example, annotating any prediction statistics offered in any post or comment).
capture and archive linked multimedia material.
link wiki terms to their use in documents.
verify wiki glossary meanings against meanings used in posts or comments.
create new wiki entries as needed for new terms or usages.
NOTE: the discussion-forum format creates redundant information rather than better citations, and lets material diverge from the specific purpose or topic intended for the forum. A forum is not an ideal knowledge-base, and the karma voting format reflects trends, but the forum is a community meeting point with plenty of knowledge-base features for users to work on as their time and interest permit. It hosts interesting discussions. Occasionally, actual research shows up on it.
4. with extreme automation: A tool like ChatGPT is unreliable and prone to errors (for example, when writing software), but when guided and treated as imperfect, it can perform in an automated workflow. For example, it can:
provide text summaries.
be part of automation chains that:
provide transcripts of audio.
provide audio of text.
provide diagrams of relationships.
graph data.
draw scenario pictures or comics.
act as a writing assistant or editor. TIP: Automation is not a tool that people should employ only by choice. For example, someone who chooses to use a paper accounting ledger and a calculator rather than Excel is slowing down an accounting team's performance.
CAUTION: Once AI enter the world of high-level concept processing, their errors have large consequences for research. Their role should be to assist human tasks, as cognitive aids, not as human replacements, at least until they are treated as having potential equivalent to humans and are therefore subject to the same performance requirements and measurements as humans.
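To make the automation-chain idea concrete, here's a minimal sketch in which the summarizer and tagger are stand-in functions I invented; a real chain would plug an actual model or service (plus a human reviewer) behind each step:

```python
from dataclasses import dataclass, field

@dataclass
class ForumPost:
    title: str
    body: str
    summary: str = ""
    tags: list = field(default_factory=list)

# Stand-in components; a real chain would call a model or service here.
def summarize(text: str) -> str:
    return text[:200] + "..."  # placeholder for an ML summarizer

def suggest_tags(text: str) -> list:
    vocab = ["alignment", "forecasting", "governance"]  # toy tag vocabulary
    return [t for t in vocab if t in text.lower()]

def process(post: ForumPost) -> ForumPost:
    """One pass of the chain: summarize, then tag; output still needs human review."""
    post.summary = summarize(post.body)
    post.tags = suggest_tags(post.body)
    return post

post = process(ForumPost(title="Example", body="A post about alignment and governance ..."))
print(post.summary, post.tags)
```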
Higher level analysis
The ideas behind improving cost-effectiveness of production include:
standardizing: take a bunch of different work methods, find the common elements, and describe the common elements as unified procedures or processes.
streamlining: examining existing work procedures and processes, identifying redundant or non-value-added work, and removing it from the workflow by various means.
automating: using less skilled human or faster/more reliable machine labor to replace steps of expert or artisan work.
Standardizing research is hard, but AGI Safety research seems disorganized, redundant, and slow right now. At the highest chunk level, you can partition AGI Safety development into education and research, and partition research into models and experiments.
education
research models
research experiments
The goal of the knowledge-base project is to streamline education and research of models in the AGI Safety area. Bumming around on lesswrong or finding someone’s posted list of resources is a poor second to a dedicated online curated library that offers research services. The goal of additional ad hoc tools should be to automate what researchers now do as part of their model development. A further goal would be to automate experiments toward developing safer AI, but that is going outside the scope of my suggestions.
Caveats
In plain language, here are my thoughts on pursuing a project like the one I have proposed. Researchers in any field worry about grant funding, research trends, and professional reputation. Doing anything quickly is going to cross purposes with others involved, or ostensibly involved, in reaching the same goal. The more well-defined the goal, the more people will jump ship, want to renegotiate, or panic. Once benchmarks and milestones are added, financial commitments get negotiated and the threat of funding bottlenecks ripples across the project. As time goes on, the funding bottlenecks manifest, or internal mismanagement blows up the project. This is a software project, so the threat of failure is real. It is also a research project without a guaranteed outcome of either AGI Safety or AGI, adding to the failure potential. Finally, the field of AGI Safety is still fairly small and not connected to long-term income potential, meaning that researchers might abandon an effective knowledge-base project for lack of interest, perhaps claiming that the problem "solved itself" once AGI become mainstream, even if no AGI Safety goals were actually accomplished.
Resources on Climate Change
IPCC Resources
- The 6th Assessment Reports
  - The Summary for Policymakers (Scientific Basis Report, Impacts Report, Mitigation Report). NOTE: The Summaries for Policymakers are approved line-by-line by representatives from participating countries. This censors relevant information from climate scientists.
  - The Synthesis Report: pending in 2023
- Key Climate Reports: the 6th (latest) Assessment Reports and additional reports covering many aspects of climate, nature, and finance related to climate change prevention, mitigation, and adaptation.
  - Emissions Gap Report: the gap refers to the difference between pledges and actual reductions, as well as between pledges and necessary targets.
  - Provisional State of the Climate 2022: the full 2022 report with 2022 data (reflecting Chinese and European droughts and heat waves) is still pending.
  - United in Science 2022: a WMO and UN update on climate change, impacts, and responses (adaptation and mitigation).
  - and many more... see the IPCC website for the full list.
- Archive of Publications and Data: all Assessment Reports prior to the latest round, plus older special reports, software, and data files useful for purposes relevant to climate change and policy.
TIP: The IPCC links lead to pages that link to many reports. Assessment reports from the three working groups contain predictions with uncertainty levels (high, medium, low), plus plenty of background information, supplementary material, and high-level summaries. EAs might want to start with the Technical Summaries from the latest assessment report and drill down into full reports as needed.
Useful Websites and Reports
Noteworthy Papers
- Climate change is increasing the risk of a California megaflood, 2022
- Climate endgame: exploring catastrophic climate change scenarios, 2022
- Economists' erroneous estimates of damages from climate change, 2021
- Collision course: development pushes Amazonia toward its tipping point, 2021
- Permafrost carbon feedbacks threaten global climate goals, 2021
- The appallingly bad neoclassical economics of climate change, 2020
- Thermal bottlenecks in the lifecycle define climate vulnerability of fish, 2020
- Comment: Climate Tipping Points—Too Risky to Bet Against, 2019
- The interaction of climate change and methane hydrates, 2017
- High risk of extinction of benthic foraminifera in this century due to ocean acidification, 2013
- Global human appropriation of net primary production doubled in the 20th century, 2012
News and Opinions and Controversial Papers
You wrote
Earlier this month, digital artists staged a mass protest against AI art on ArtStation. A few people are reportedly already getting together to hire a lobbyist to advocate more restrictive IP laws around AI generated content. And anecdotally, I’ve seen numerous large threads on Twitter in which people criticize the users and creators of AI art.
and
Personally, this sentiment disappoints me. While I sympathize with the artists who will lose their income, I’m not persuaded by the general argument. The value we could get from nearly free, personalized entertainment would be truly massive. In my opinion, it would be a shame if humanity never allowed that value to be unlocked, or restricted its proliferation severely.
and
it is not worth sacrificing a technologically richer world just to protect workers from losing their income.
Are you arguing from principle here?
Artists' (the workers') work is being imitated by AI tools so cost-effectively that an artist's contributions, once public, render the artist's continuing work unnecessary for producing work in their style.
Is the addition of technology T with capability C that removes need for worker W with job role R and capability C more important than loss of income I to worker W, for all T, C, W, R, and I?
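One way to write that question as a formula, in my own notation rather than yours:

```latex
\forall\, T, C, W, R, I:\quad
\mathrm{Value}\bigl(\text{deploy technology } T \text{ with capability } C\bigr)
\;\overset{?}{>}\;
\mathrm{Loss}\bigl(\text{income } I \text{ to worker } W \text{ in role } R\bigr)
```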
Examples of capabilities could be:
summarizing existing research work (for example, an AI Safety paper)
collecting typical data used to make predictions (for example, cost and power of compute)
monitoring new research work (for example, recent publications and their relationships, such as supporting, building on or contradicting)
hypothesizing about preconditions for new developments (for example, conditions suitable for AGI development)
developing new theories or models (for example, of AI Safety)
testing new theories or models (for example, of AI Safety)
Loss of income could be:
partial (for example, a reduction in grant money for AI Safety workers as those funds are diverted to automation projects with 10-20 year timelines)
complete (for example, replacement of 50 AI Safety workers with 10 workers that rely on semi-automated research tools)
The money allocated to workers could be spent on technology instead.
Investments in specific technologies T1, T2 with capabilities C1, C2 can start with crowd-sourcing from workers W1, W2,..., Wk, and more formal automation and annotation projects targeting knowledge K developed by workers Wk+1, …, Wn (for example, AI Safety researchers) who do not participate in the crowd-sourcing and automation effort but whose work is accessible.
You repeatedly referred to “we” as in:
True, if very powerful AI is coming very soon (<5 years from now), there might not be much else we can do except for aligning with vaguely friendly groups, and helping them pass poorly designed regulations.
However, a consequence of automation technology is that it removes the political power (both money and responsibility) that accrued to the workers that it replaces. For example, any worker in the field of AI Safety, to the extent that her job depends on her productivity and cost-effectiveness, will lose both her income and status as the field progresses to include automation technology that can replace her capabilities. Even ad hoc automation methods (for example, writing software that monitors cost and power of compute using web-scraping and publicly available data) remove a bit of that status. In that way, the AI Safety researcher loses status among her peers and her influence on policy that her peers direct. The only power left to the researcher is as an ordinary voter in a democracy.
Dividing up and replacing the responsibilities for the capabilities Ci of an individual worker W1 enables an ad hoc approach involving technologies Ti corresponding to that worker's capabilities. Reducing the association of the role with its status can dissolve the role, and sometimes the job of the worker who held that role. The role itself can disappear from the marketplace, along with the interests that it represents. For example, although artists have invested many years in their own talents, skills, and style, within a year they lost their status and income to some new AI software. I think artists have cause to defend their work from AI. The artist role won't disappear from the world of human employment entirely, but the future of the role has been drastically reduced and has permanently lost a lot of what gave it social significance and financial attractiveness, unless the neo-luddites can defend paid employment in art from AI.
Something similar can happen to AI Safety researchers, but will anyone object? AI Safety researcher worker capabilities and roles could be divided and dissolved into larger job roles held by fewer people with different titles, responsibilities, and allegiances over time as the output of the field is turned into a small, targeted knowledge-base and suite of tools for various purposes.
If you are in fact arguing from principle, then you have an opportunity to streamline the process of AI safety research work through efforts such as:
collecting AI Safety research work on an ongoing basis as it appears in different venues and making it publicly accessible
annotating the research work to speed up queries for common questions such as:
what are controversies in the field, that is, who disagrees with whom about what and why?
what is the timeline of development of research work?
what literature addresses specific research questions (for example, on compute developments, alternative technologies, alignment approaches, specific hypotheses in the field, prediction timelines)?
what are summaries of current work?
paying for public hosting of AI Safety information of this type as well as ad hoc tools (for example, the compute power tracker)
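As a hedged sketch of what the annotation piece might look like, here's a minimal record format that would let the common questions above be answered mechanically. The field names and identifiers are illustrative only:

```python
# Illustrative annotation record for one AI Safety research item.
annotation = {
    "item_id": "paper-2023-0042",
    "venue": "arXiv",
    "published": "2023-01-15",
    "claims": [
        {"text": "Claim stated in the paper.", "disputed_by": ["paper-2022-0310"]},
    ],
    "builds_on": ["paper-2021-0105"],
    "contradicts": [],
    "topics": ["compute forecasting", "alignment"],
    "summary": "Two-sentence summary for quick querying.",
}

def controversies(records):
    """Return (item, disputing items) pairs, i.e. who disagrees with whom."""
    return [
        (r["item_id"], c["disputed_by"])
        for r in records
        for c in r["claims"]
        if c["disputed_by"]
    ]

print(controversies([annotation]))
```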
I'm sure you could come up with better ideas for removing AI Safety worker grant money from those workers and commensurately benefiting the cost-effectiveness of AI Safety research. I've read repeatedly that the field needs workers and timely answers; automation seems like a requirement, or an alternative, to reduce the financial and time constraints on the field and to serve its purpose effectively.
While artists could complain that AI art does a disservice to their craft and reduces the quality of art produced, I think the tools imitating those artists have developed to the point that they serve the purpose and artists know it and so does the marketplace. If AI Safety researchers are in a position to hold their jobs a little while longer, then they can assist the automation effort to end the role of AI Safety researchers and move on to other work that much sooner! I see no reason to hold you back from applying the principle that you seem to hold, though I don’t hold it myself.
AI Safety research is a field that will hopefully succeed quickly and end the need for itself within a few decades. Its workers can move on, presumably to newer and better things. New researchers in the field can participate in automation efforts and then find work in related fields, either in software automation elsewhere or other areas such as service work for which consumers still prefer a human being. Supposedly the rapid deployment of AGI in business will grow our economies relentlessly and at a huge pace, so there should be employment opportunities available (or free money from somewhere).
If any workers have a reason to avoid neo-ludditism, it would have to be AI Safety researchers, given their belief in a future of wealth, opportunity, and leisure that AI help produce. Their own unemployment would be just a blip lasting however long it takes for the future they helped manifest to rescue them. Or they can always find other work, right? After all, they work on the very technology depriving others of work. A perfectly self-interested perspective from which to decide whether neo-ludditism is a good idea for themselves.
EDIT: sorry, I spent an hour editing this to convey my own sense of optimism and include a level of detail suitable for communicating the subtle nuances I felt deserved inclusion in a custom-crafted post of this sort. I suppose ChatGPT could have done better? Or perhaps a text-processing tool and some text templates would have sped this up. Hopefully you find these comments edifying in some way.
Life extension and Longevity Control
When society includes widespread use of life extension technology, a few unhealthy trends could develop.
- the idea of being "forced to live" will take on new and different meanings for folks in a variety of circumstances, testing institutional standards and norms that align with commonly employed ethical heuristics. Testing the applicability of those heuristics will result in numerous changes to informed and capable decision-making in ethical domains.
- life-extension technology will become associated with longevity control, including the time and condition in which one passes away. At the moment, that is not a choice. In the future, I expect society will legalize choice of life length (maybe through genetic manipulation of the time of death) or some proxy for a genetically programmed death (for example, longevity termination technologies). I suspect that those technologies will be abused in a variety of contexts (for example, with unwilling users).
- longevity technology will substitute for health treatment; that is, behaviors that encourage healthy longevity and preventive medical care will be replaced by health-reducing behaviors whose consequences are treated with frequent anti-aging treatments.
- frustration with the physique's inadequate resilience against typical personal health-reducing behaviors will encourage additional technology explorations to allow health-reducing behaviors without physical consequences. The consequence relevant to me is the lack of development and exploration of the ability to choose alternatives to health-reducing behaviors.
NOTE: Human experience is typically defined by our experience of ourselves at various biological stages of life. While we can shorten or extend various stages of life, and people typically want the biological health, maturity, and looks of a 20-something for as long as possible, we actually do experience ourselves and our relationships with others in terms of our true ages.
- Sizable government rebates on the purchase of new human-powered vehicles, including but not limited to bicycles and electric bicycles.
Cluster thinking could provide value. It is not quite the same as moral uncertainty, in that cluster thinking has broader applicability, but it involves the same type of "weighted" judgement. I disagree with moral uncertainty as a personal philosophy, given the role I suspect that self-servingness plays in personal moral judgements. However, cluster thinking applied in limited decision-making contexts appeals to me.
A neglected area of exploration in EA is selfishness, and self-servingness along with it. Both influence worldview, sometimes on the fly, and are not necessarily vulnerable to introspection. I suppose a controversy that could start early is whether all altruistic behavior has intended selfish benefits in addition to altruistic benefits. Solving the riddle of self-servingness would be a win-win-win-win-win-win.
Self-servingness has signs that include:
soldier mindset
missing (or misrepresented) premises
bad argumentation (analogical, inductive, deductive).
but without prior knowledge useful to identify those signs, I have not gotten any further than detecting self-servingness with simple heuristics (for example, as present when defending one’s vices).
I doubt that most people would know which authors to look for among books written in the late 20th century in the fields I listed, particularly when the theory remains unchanged now or, as in the case of Ashby's book, is not available in any modern textbook.
I believe that:
good material from decades ago doesn’t appear in new texts, necessarily.
new material isn’t always an improvement, particularly if it reflects “the internet era”.
what is offered as “new material” isn’t always so new.
One concern with modern textbooks is pedagogical fads (for example, teaching formal logic with a software program or math with a TI calculator). I support pen and paper approaches for basic learning over TI calculators and software packages. Older textbooks offer more theory than current ones. Older textbooks are usually harder. Dover math books are one example where unchanged theory written up in older texts is still appreciated now.
It doesn’t take a lot of learning to find useful 20th Century books about linguistics, scientific reasoning, rhetoric, informal logic, formal logic, and even artificial intelligence. Yes, there was AI material before neural networks and machine learning, and it still has utility.
For most people, a random search at a decent used bookstore can turn up popular titles with good information. A random search by topic in an academic library, or in a used bookstore that accepts used academic titles (which used to be common but is becoming more rare), can turn up some amazing finds. I do recommend all those approaches if you like books and are patient enough to try it. Otherwise, I suggest you look into older journal articles available in PDF format online.
It's just one approach, and takes some trial and error. You need to examine the books, read the recommendations, figure out who published each one and why, get to know the author, and read the preface and foreword, so it takes some patience. It can help to start with an older book and then visit the new material. When I started doing that is when I noticed that the new material was sometimes of lesser quality, derivative, or the same content as older material.
The most precious finds are the ones that are nowhere to be found now. Yes, sometimes that's because they're crap, but sometimes that's because they're really good and people ignored them in spite of, or because of, that.
EDIT: I also find that reading from a book offers a visceral experience and steadier pace that digital reading can lack.
I understand the sequences are important to you folks, and I don’t want to seem disrespectful. I have browsed them, and think they contain some good information.
However, I’d recommend going back to books published at least 30 years ago for reads about:
critical thinking
scientific explanation
informal logic
formal logic
decision theory
cybernetics (Ashby, for the AI folks)
statistics and probability
knowledge representation
artificial intelligence
negotiation
linguistic pragmatics
psychology
journalism and research skills
rhetoric
economics
causal analysis
Visit a good used book store, browse descriptions of older books and print-only editions on the web, or get recommendations that you trust on older references in those areas. You'll have to browse and do some comparing. Also get 1st editions wherever feasible.
The heuristics that this serves include:
good older books are shorter and smarter in the earlier editions, usually the 1st.
older books offer complete theoretical models and conceptual tools that newer books gloss over.
references from the 20th Century tend to contain information still tested and trusted now.
if you are familiar with newer content, you can notice how content progressed (or didn’t).
old abandoned theories can get new life decades later; it's fun to find the prototypical forms.
most of the topics I listed have core information or skills developed in the 20th century.
it’s a nice reminder that earlier generations of researchers were very smart as well.
some types of knowledge are disappearing in the age of the internet and cellphone. Pre-internet sources still contain write-ups of that knowledge.
it’s reassuring that you’re learning something whose validity and relevance isn’t versioned out.
NOTE: Old books aren’t breathless about how much we’ve learned in the last 20 years, or how the internet has revolutionized something. I’d reserve belief in that for some hard sciences, and even there, if you want a theory introduction, an older source might serve you better.
If you don't like print books, you can use article sources online and look at older research material. There are some books from the 90s available on Kindle, hopefully, but I recommend looking back to the 70s or even earlier. I prefer the academic writing style mostly found after the 70s; I find older academic texts a bit hard to understand sometimes, but your experience could be different.
Oh, I do! :)
On most topics relevant to this forum’s readers, that is. For example, I haven’t found a good conversation on longevity control, and I’m not sure how appropriate it is to explore here, but I will note, briefly, that once people can choose to extend their lives, there will be a few ways that they can choose to end their lives, only one of which is growing old. Life extension technology poses indirect ethical and social challenges, and widespread use of it might have surprising consequences.
Thank you for remembering me, Yuri! I will read the article.
Sure, you’re welcome, one day is not long for me to wait. My thoughts:
- I'm interested in your thoughts on the singularity, and am looking forward to reading your article.
- My red-team submission needs better arguments, more content, and concision.
- As far as the status of women in the community, if this is about social behavior, then I favor dissolution of the social community version of EA.
- In case you follow up a bit more on the idea of cognitive aids.
- Here are my two takes on epistemic status:
- I am working on a write-up that addresses climate change impacts differently than Halstead, but progress is slow because my attention and time are divided. I will share the work once it's complete.
Agree that some could. Since you brought it up, how would you align image generators? They're still dumb tools, so do you mean align the users? Add safety features? Stable Diffusion had a few safeguards put in place, but users can easily disable them. Now it's generating typical porn as well as more dangerous or harmful things, I suspect, but only because people are using it that way, not because it does that on its own. So yeah, do you want Stable Diffusion source code to be removed from the web? I second the motion, lol.
I’m curious what EA projects are considered “high status”. I have no idea, and I don’t believe that all your other readers do either.
Pft, that's OK, David. Reading over how much I wrote, I'll be surprised if you get through it all. Thanks for showing some interest, and don't forget to enjoy some of that vacation time! Bummer it's split up like that.
It’s not correct to say that action deserves criticism, but maybe correct to say that action receives criticism. The relevant distinction to make is why the action brought criticism on it, and that is different case-by-case. The criticism of SBF is because of alleged action that involves financial fraud over billions of dollars. The criticism of Singer with regard to his book Practical Ethics is because of distortion of his views on euthanasia. The criticism of Thiel with regard to his financial support of MIRI is because of disagreements over his financial priorities. And I could go on. Some of those people have done other things deserving or receiving criticism. The point is that whether something receives criticism doesn’t tell you much about whether it deserves criticism. While these folks all risk criticism, they don’t all deserve it, at least not for the actions you suggested with your links.