Fwiw I didn’t downvote this comment, though I would guess the downvotes were based on the somewhat personal remarks/rhetoric. I’m also finding it hard to parse some of what you say.
A system or pattern or general belief that leads to a defect or plausible potential defect (even if there is some benefit to it), and even if this defect is abstract or somewhat disagreed upon.
This still leaves a lot of room for subjective interpretation, but in the interests of good faith, I’ll give what I believe is a fairly clear example from my own recent investigations: it seems that somewhere between 20% and 80% of the EA community believes that the orthogonality thesis shows that AI is extremely likely to wipe us all out. This is based on a drastic misreading of an often-cited 10-year-old paper, which is publicly available for any EA to check.
Another odd belief, albeit one which seems more muddled than mistaken, is the role of neglectedness in ‘ITN’ reasoning. What we ultimately care about is the amount of good done per resource unit, i.e., roughly, <importance>*<tractability>. Neglectedness is just a heuristic for estimating tractability absent more precise methods. Perhaps it’s a heuristic with interesting mathematical properties, but it’s not a separate factor, as it’s often presented. For example, in 80k’s new climate change profile, they cite ‘not neglected’ as one of the two main arguments against working on it. I find this quite disappointing: all it gives us is a weak a priori probabilistic inference which is totally insensitive to the kinds of things the money has been spent on and to the scale of the problem, and which tells us much less about tractability than we could learn by looking directly at the best opportunities to contribute to the field, as Founders Pledge did.
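To spell out what I mean, here is the rough factor decomposition I have in mind (my own sketch of the standard 80k-style framing, not a quote of their exact definitions):

\[
\frac{\text{good done}}{\text{extra resources}}
\;=\;
\underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{importance}}
\;\times\;
\underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}}
\;\times\;
\underbrace{\frac{\%\text{ increase in resources}}{\text{extra resources}}}_{\text{neglectedness}\,\approx\,1/\text{current resources}}
\]

If you estimate marginal tractability directly, by looking at the actual opportunities, that third factor is already folded in; kept separate, all it adds is an a priori diminishing-returns assumption over total resources spent, which is exactly the weak inference I am complaining about.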
Also, it seems like you are close to implicating literally any belief?
I don’t know why you conclude this. I specified ‘belief shared widely among EAs and not among intelligent people in general’. That is a very small subset of beliefs, albeit a fairly large subset of EA ones. And I do think we should be very cautious about a karma system that biases towards promoting those views.
You are right. My mindset writing this comment was bad: I remember thinking the reply seemed vague and general, and I reacted harshly. That was unnecessary and wrong.
This still leaves a lot of room for subjective interpretation, but in the interests of good faith, I’ll give what I believe is a fairly clear example from my own recent investigations: it seems that somewhere between 20% and 80% of the EA community believes that the orthogonality thesis shows that AI is extremely likely to wipe us all out. This is based on a drastic misreading of an often-cited 10-year-old paper, which is publicly available for any EA to check.
I do not know the details of the orthogonality thesis and can’t speak to this very specific claim (this is not at all a refutation of you; I am just literally clueless and can’t comment on something I don’t understand).
To be both truthful and agreeable: it’s clear that EA beliefs about AI safety come from EAs following the opinions of a group of experts. This just comes from people’s outright statements.
In reality, those experts are not the majority of people working in AI, and it’s unclear exactly how EA would update or change its mind.
Furthermore, I see things like the passage below that, without further context, could be wild violations of “epistemic norms”, or just of common sense.
For background, I believe this person is interviewing or speaking to researchers in AI, some of whom are world experts. Below is how they seem to represent their processes and mindset when communicating with these experts.
One of my models about community-building in general is that there’s many types of people, some who will be markedly more sympathetic to AI safety arguments than others, and saying the same things that would convince an EA to someone whose values don’t align will not be fruitful. A second model is that older people who are established in their careers will have more formalized world models and will be more resistant to change. This means that changing one’s mind requires much more of a dialogue and integration of ideas into a world model than with younger people. The thing I want to say overall: I think changing minds takes more careful, individual-focused or individual-type-focused effort than would be expected initially.
I think one’s attitude as an interviewer matters a lot for outcomes. Like in therapy, which is also about changing beliefs and behaviors, I think the relationship between the two people substantially influences openness to discussion, separate from the persuasiveness of the arguments. I also suspect interviewers might have to be decently “in-group” to have these conversations with interviewees. However, I expect that that in-group-ness could take many forms: college students working under a professor in their school (I hear this works for the AltProtein space), graduate students (faculty frequently do report their research being guided by their graduate students) or colleagues. In any case, I think the following probably helped my case as an interviewer: I typically come across as noticeably friendly (also AFAB), decently-versed in AI and safety arguments, and with status markers. (Though this was not a university-associated project, I’m a postdoc at Stanford who did some AI work at UC Berkeley).
The person who wrote the above is concerned about image, PR, and things like initial conditions, and this is entirely justified, reasonable, and prudent for any EA intervention or belief. They also come across as conscientious, intellectually modest, and highly thoughtful, altruistic, and principled.
However, at the same time, at least from their writing above, their entire attitude seems to be based on conversion. Yet their conversations are not with students or laypeople such as important public figures, but with the actual experts in AI.
So if you’re speaking with the experts in AI while adopting the attitude that they are pre-converts whose beliefs you need to work around, one read of this is that you are cutting off criticism and outside thought. On this ungenerous view, the fact that you have to be so careful is a further red flag; that is an issue in itself.
For context, in any intervention, getting the opinion of experts, or updating from them, is sort of the whole game (maybe once you’re at “GiveWell levels” and working with dozens of experts it’s different, but even then I’m not sure; EA has updated heavily on cultured meat from almost a single expert).