Just to note: I have a COI in commenting on this subject.
I strong-downvoted your comment, as it reads to me as making bold claims while providing little supporting evidence. References to “lots of people in this area” could be considered an instance of the bandwagon fallacy.
ElliotJDavies
As you write:
The result will be a singularity, understood as a fundamental discontinuity in human history beyond which our fate depends largely on how we interact with artificial agents
The discontinuity is a result of humans no longer being the smartest agents in the world, and no longer being in control of our own fate. After this point, we’ve passed an event horizon beyond which the outcome is almost entirely unforeseeable.
If you have accelerating growth that isn’t sustained for very long, you get something like population growth from 1800-2000
If, after surpassing humans, intelligence “grows” exponentially for another 200 years, do you not think we’ve passed an event horizon? I certainly do!
If not: using the metric of single-agent intelligence (i.e. not the sum of intelligence across a group of agents), at what point during an exponential growth curve that intersects human-level intelligence would you define the event horizon as being crossed?
I feel this claim is disconnected from the definition of the singularity given in the paper:
The singularity hypothesis begins with the supposition that artificial agents will gain the ability to improve their own intelligence. From there, it is claimed that the intelligence of artificial agents will grow at a rapidly accelerating rate, producing an intelligence explosion in which artificial agents quickly become orders of magnitude more intelligent than their human creators. The result will be a singularity, understood as a fundamental discontinuity in human history beyond which our fate depends largely on how we interact with artificial agents
Further in the paper you write:
The singularity hypothesis posits a sustained period of accelerating growth in the general intelligence of artificial agents.
[Emphasis mine]. I can’t see a reference for either the original definition or for the later addition of “sustained”.
Intelligence Explosion: For a sustained period
[...]
Extraordinary claims require extraordinary evidence: Proposing that exponential or hyperbolic growth will occur for a prolonged period
[Emphasis mine]. I’m not sure why “prolonged period” and “sustained” were used here.
I am also not sure what is meant by “prolonged period”: 5 years? 100 years?
Whatever the answer to the above, why do you believe this would be required?
Just to help nail down the crux here, I don’t see why more than a few days of an intelligence explosion is required for a singularity event.
Circuits’ energy requirements have massively increased—increasing costs and overheating.[6]
I’m not sure I understand this claim, and I can’t see that it’s supported by the cited paper.
Is the claim that energy costs have increased faster than computation? This would be cruxy, but it would also be incorrect.
EAGxNordics
The joy in righteousness
This is a new one to me! Interesting!
One crux with the idea of using morality to motivate behaviour (e.g. abolitionism) is the assumption that it needs to be completely grassroots. The argument often becomes: did slavery end because everyone found it to be morally bad, or because economic factors etc. fundamentally changed the country?
It becomes much more plausible that morality played an important role when you modify the claim: slavery ended because a group of important people realised it was morally wrong, and displayed moral leadership in changing the laws.
While I don’t think that was inappropriate, it seems fair to give Owen at least some lead time to prepare a statement of his perspective on the matter.
I think you’re right about this, and have changed my mind.
I would generally view reaching out to a reasonable number of active Forum participants individually as not brigading. This is less likely to create a sufficient mass effect to mislead observers about the community’s range of views.
I think about it this way: if a post were written critically about me, I would expect 5-10% of the people who know me in the community to see it, and 0.5% to comment. If I reached out to everyone I have ever been friendly with, I expect these numbers would be 50% and 5%, respectively. In other words, there would be 10x more comments defending me if I reach out to friends than if I don’t.
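To make the back-of-envelope arithmetic explicit, here is a minimal sketch. The percentages are the hypothetical figures from the comment above; the community size is an assumed illustrative number, not data:

```python
# Hypothetical figures only: illustrating the comment-volume effect described above.
community_size = 1000  # assumed number of people in the community who know the author

# Unsolicited: ~5-10% see the post, ~0.5% comment.
unsolicited_commenters = community_size * 0.005  # 5 people

# After reaching out to friends: ~50% see it, ~5% comment.
solicited_commenters = community_size * 0.05  # 50 people

# Ratio of defending comments with outreach vs. without.
print(solicited_commenters / unsolicited_commenters)  # → 10.0
```

The absolute community size cancels out of the ratio; only the difference in comment rates (0.5% vs. 5%) drives the 10x figure.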
I think for independent observers reading the comments in the fictional scenario above, it’s useful to know whether the comments were unsolicited or not.
A small group of strong-vote wielding users can significantly effect the course of discussion on the Forum through their voting.
Totally agree. Why would the same not be true of comments?
I wrote a report for CE on an AMR idea; the cost-effectiveness analyses of which will be released soon and I will post here when they are!
Hey Akhil, is there any update here?
Astroturfing and troll farms are different from friends and people on your side saying their opinion
This is correct. What I am talking about is brigading.
Astroturfing and troll farms are only similar in the mechanism behind their ability to distort public opinion. That mechanism is: people are influenced by the tone and volume of the comments they read.
Are you saying you’re against people being allowed to tell their friends and supporters about something they consider to be unethical and encouraging them to vote and comment according to their conscience?
Yes, this is brigading. There are things you can do to mitigate the brigading effect, for example: (1) begin comments with “I am here from …” or “This post was shared by...”; (2) commenters acknowledge, when asked, that the post was shared with them.
Take your case (i.e. Ben Pace’s post on Nonlinear): neither (1) nor (2) was done. In fact, I found myself needing to comment alluding to this effect, after I confirmed this was the case with one of your collaborators.
Why would it be bad if he was given advance warning about this report?
Some people (to be completely frank, like yourself) will use advance notice to schedule their friends, fans, and colleagues to write defensive comments. A high concentration of these types of comments can distort the quality of the conversation. This is commonly referred to as brigading.
This strategy is so effective that foreign governments have set up “troll farms”, and companies have set up “astroturfing” operations, to benefit from degrading the quality of certain conversations on the internet.
Also, it does say in the document that Owen was given advanced notice. His document says that he saw the draft and disagreed with aspects of it that they didn’t address in the post.
I would draw a distinction between giving someone a read of a draft ahead of time, and actively communicating the date and time something will be posted.
Edit: Added third paragraph, changed wording on first sentence of second paragraph.
There have been some complaints from a banned EA Forum user that the timing of this post, and the timing of comments that bolster the character of Owen, were coordinated. While I think it’s unlikely this is the case, I would love to see the following:
- Confirmation from OP (@EV UK Board) that Owen was not given advance warning of the posting of this report. Or, if he was, some discussion of the potential issues with doing so.
- Some further discussion in the EA Forum team, and perhaps rules set, on coordinated posting (AKA “brigading”).
In the business context, you could imagine a recruiter having the option to buy a booth at a university specialising in the area the company is working in vs. buying one at a broad career fair of a top university. While the specialised university may bring more people that have trained in and are specialised in your area, you might still go for the top university as talent there might have overall greater potential, has the ability to more easily pivot or can contribute in more general areas like leadership, entrepreneurship, communications or similar.
I think this is a spot-on analogy, and something we’ve discussed a lot in our group.
Meta note: I’m not going to spend much more time on nonlinear threads, since I think it’s among the poorer uses of my time. With this in mind, I hope people don’t take unilateral actions (e.g. deanonymizing Chloe or Alice) after discussing in this thread, because I suspect at this point threads like these filter for specific people and are less representative of the EA community as a whole.
Oh thanks for flagging, I will retract it now
As we later received more screenshots, it seems like we actually received definitive confirmation that the conversation on that date did indeed not result in Alice getting food.
I’m waiting for Ben, or someone else, to make a table of claims, counter-claims, and what the evidence shows, because Nonlinear providing evidence that doesn’t support their claims seems to be a common occurrence.
Just to give a new example, Kat screenshots herself replying “mediating! Appreciate people not talking to loud on the way back [...]” here, to provide evidence supporting that there was not a substantial discussion. However, I can only interpret the use of “mediating!” as indicating that there was in fact a substantial amount of discussion at play.
Edit: Retracted, as correctly pointed out by @Sean_o_h: I read “meditation” as “mediation”.
Excited to hear both of these announcements!
This is great!