In general, I think it’s much better to try to have a community conflict internally before having it externally. This doesn’t really apply to criminal behaviour or sexual abuse. I am centrally talking about disagreements, eg the Bostrom stuff, the fallout around the FTX stuff, the Nonlinear stuff, and now this Manifest stuff.
Why do I think this?
If I want to credibly signal I will listen and obey norms, it seems better to start with a small discourse escalation rather than a large one. Starting a community discussion on Twitter is like jumping straight to a shooting war.
Many external locations (eg Twitter, the press) have very skewed norms/incentives compared to the forum, and so many parties can feel like they are the victim. I find that when multiple parties feel they are weaker and victimised, that is likely to cause escalation.
Many spaces have less affordance for editing comments, seeing who agrees with whom, or having a respected mutual party say “whoa, hold up there”.
It is hard to say “I will abide by the community sentiment” if I have already started the discussion elsewhere in order to shame people. And if I don’t intend to abide by the community sentiment, why am I trying to manage a community conflict in the first place? I might as well just jump straight to shaming.
It is hard to say “I am open to changing my mind” if I have set up the conflict in a way that leads to shaming if the other person doesn’t change theirs. It’s like holding a gun to someone’s head and saying that this is just a friendly discussion.
I desire reconciliation. I have hurt people in this community and been hurt by them, in both cases to the point of tears and sleepless nights. But still I would prefer reconciliation and growth over an escalating conflict.
Conflict is often negative sum, so let’s try to make it as little negative-sum as possible.
Probably a good chunk of it is church norms, centred around 1 Corinthians 6[2]. I don’t really endorse this, but I think it’s good to be clear about why I think things.
Personal examples:
Last year I didn’t like that Hanania was a main speaker at Manifest (iirc), so I went to their Discord and said so. I then made some votes. The median user agreed with me and so Hanania didn’t speak. I doubt you heard about this, because I did it on the Manifold Discord. I hardly tweeted about it or anything. This, and the fact that I said I wouldn’t, created a safe space to have the discussion, and I largely got what I wanted.
You might think this comment is directed at a specific person, but I bet you are wrong. I dislike this behaviour when it is done by at least 3 different parties that I can think of.
If any of you has a dispute with another, do you dare to take it before the ungodly for judgment instead of before the Lord’s people? 2 Or do you not know that the Lord’s people will judge the world? And if you are to judge the world, are you not competent to judge trivial cases? 3 Do you not know that we will judge angels? How much more the things of this life! 4 Therefore, if you have disputes about such matters, do you ask for a ruling from those whose way of life is scorned in the church? 5 I say this to shame you. Is it possible that there is nobody among you wise enough to judge a dispute between believers? 6 But instead, one brother takes another to court—and this in front of unbelievers!
7 The very fact that you have lawsuits among you means you have been completely defeated already. Why not rather be wronged? Why not rather be cheated?
I agree with the caveat that certain kinds of more reasonable discussion can’t happen on the forum because the forum is where people are fighting.
For instance, because of the controversy I’ve been thinking a lot recently about antiracism: what would effective antiracism look like? What lessons can we take from civil rights, and what do we have to contribute (cool ideas on how to leapfrog past or fix education gaps? discourse norms that can facilitate hard but productive discussions about racism? advocating for literal reparations?)? I deleted a shortform I was writing on this because I think people would not engage with it positively, and I suspect I am missing the point somehow. I suspect people actually just want to fight, and the point is to be angry.
On the meta level, I have been pretty frustrated (with both sides, though not equally) by the manner in which some people are arguing, the types of arguments they use, and the motivations they seem to have. I think in some ways it is better to complain about that off the forum. It’s worse for feedback, but that’s also a good thing, because the cycle of righteous rage does not continue on the forum. And you get different perspectives.
(I wonder if a crux here is that you have a lot of Twitter followers and I don’t. If you tweet you are speaking to an audience; if I tweet I am speaking to weird internet friends.)
So I sort of agree, though depending on the topic I think it could quickly get a lot of eyes on it. I would prefer to discuss most things that are controversial/personal somewhere other than Twitter.
I feel like I want 80k to do more cause prioritisation if they are gonna direct so many people. Seems like 5 years ago they had their whole ranking thing which was easy to check. Now I am less confident in the quality of work that is directing lots of people in a certain direction.
Idk, many of the people they are directing would just do something kinda random which an 80k rec easily beats. I’d guess the number of people for whom 80k makes their plans worse in an absolute sense is kind of low and those people are likely to course correct.
Otoh, I do think people/orgs in general should consider doing more strategy/cause prio research, and if 80k were like “we want to triple the size of our research team to work out the ideal marginal talent allocation across longtermist interventions” that seems extremely exciting to me. But I don’t think 80k are currently being irresponsible (not that you explicitly said that, for some reason I got a bit of that vibe from your post).
80k could be much better than nothing and yet still missing out on a lot of potential impact, so I think your first paragraph doesn’t refute the point.
I agree with this, and have another tangential issue, which might be part of why the cause prioritisation seems unclear? Their website seems confusing and overloaded to me.
Compare Giving What We Can’s page, which has good branding and simple language. IMO the 80,000 Hours page has too much text and too much going on on the front page. Bring both websites up on your phone and judge for yourself.
These are the front page of EA for many people, so they are pretty important. These websites aren’t really for most of us; they are for fresh people, so they need to be punchy, straightforward and attractive. After clicking a couple of pages in, things can get heavier.
My understanding is that 80k have done a bunch of A/B testing which suggested their current design outcompetes ~most others (presumably in terms of click-throughs / amount of time users spend on key pages).
You might not like it, but this is what peak performance looks like.
I hope I’m wrong and this is the deal; that would be an excellent approach. It would be interesting to see what the other designs they tested were, but obviously I won’t see them.
I know of at least 1 NDA from an EA org silencing someone from discussing bad behaviour that happened at that org. Should EA orgs be in the practice of making people sign such NDAs?
“Chesterton’s TAP” is the most rationalist buzzword thing I’ve ever heard LOL, but I am putting together that what Chana said is that she’d like there to be some way for people to automatically notice (the trigger action pattern) when they might be adopting an abnormal/atypical governance plan and then reconsider whether the “normal” governance plan may be that way for a good reason even if we don’t immediately know what that reason is (the Chesterton’s fence)?
I have no idea, but would like to! With things like “organizational structure” and “nonprofit governance”, I really want to understand the reference class (even if everyone in the reference class does stupid bad things and we want to do something different).
Strongly agree that moving forward we should steer away from such organizational structures; much better that something bad is aired publicly before it has a chance to become malignant
Some things I don’t think I’ve seen around FTX, which are probably due to the investigation, but which still seem worth noting. Please correct me if these things have been said.
I haven’t seen anyone at the FTXFF acknowledge fault for negligence in not noticing that a defunct phone company (North Dimension) was paying out their grants.
This isn’t hugely judgemental from me; I think I’d have made this mistake too, but I would like it acknowledged at some point.
The FTX Foundation grants were funded via transfers from a variety of bank accounts, including North Dimension-8738 and Alameda-4456 (Primary Deposit Accounts), as well as Alameda-4464 and FTX Trading-9018
I haven’t seen anyone at CEA acknowledge that they ran an investigation in 2019-2020 on someone who would turn out to be one of the largest fraudsters in the world and failed to turn up anything despite seemingly a number of flags.
I remain confused
As I’ve written elsewhere, I haven’t seen engagement on this point from one of the Time articles, which I find relatively credible:
“Bouscal recalled speaking to Mac Aulay immediately after one of Mac Aulay’s conversations with MacAskill in late 2018. “Will basically took Sam’s side,” said Bouscal, who recalls waiting with Mac Aulay in the Stockholm airport while she was on the phone. (Bouscal and Mac Aulay had once dated; though no longer romantically involved, they remain close friends.) “Will basically threatened Tara,” Bouscal recalls. “I remember my impression being that Will was taking a pretty hostile stance here and that he was just believing Sam’s side of the story, which made no sense to me.””
My comment on the above: “While other things may have been bigger errors, this one seems most sort of ‘out of character’ or ‘bad normsy’. And I know Naia well enough that this moves me a lot, even though it seems so out of character for [Will] (maybe 30% that this is a broadly accurate account). This causes me consternation. I don’t understand, and I think if this happened it was really bad, and behaviour like it should not happen from any powerful EAs (or any EAs, frankly).”
I haven’t read too much into this and am probably missing something.
Why do you think FTXFF was receiving grants via North Dimension? The brief googling I did only mentioned North Dimension in the context of FTX customers sending funds to FTX (specifically this SEC complaint). I could easily have missed something.
Grants were being made to grantees out of North Dimension’s account—at least one grant recipient confirmed receiving one on the Forum (would have to search for that). The trustee’s second interim report shows that FTXFF grants were being paid out of similar accounts that received customer funds.
It’s unclear to me whether FTX Philanthropy (the actual 501c3) ever had any meaningful assets to its name, or whether (m)any of the grants even flowed through accounts that it had ownership of.
Certainly very concerning. Two possible mitigations though:
Any finding of negligence would only apply to those with duties or oversight responsibilities relating to operations. It’s not every employee or volunteer’s responsibility to be a compliance detective for the entire organization.
It’s plausible that people made some due diligence efforts that were unsuccessful because they were fed false information and/or relied on corrupt experts (like “Attorney-1” in the second interim trustee report). E.g., if they were told by Legal that this had been signed off on and that it was necessary for tax reasons, it’s hard to criticize a non-lawyer too much for accepting that. Or more simply, they could have been told that all grants were made out of various internal accounts containing only corporate monies (again, with some tax-related justification that donating non-US profits through a US charity would be disadvantageous).
Feels like we’ve had about 3 months since the FTX collapse with no kind of leadership comment. Uh, that feels bad. I mean, I’m all for “give cold takes”, but how long are we talking?
I am pretty sure there is no strong legal reason for people to not talk at this point. Not like totally confident but I do feel like I’ve talked to some people with legal expertise and they thought it would probably be fine to talk, in addition to my already bullish model.
I often see people thinking that this is brigading or something, when actually most people just don’t want to write a response; they either like or dislike something.
If it were up to me I might suggest an anonymous “I don’t know” button and an anonymous “this is poorly framed” button.
When I used to run a lot of Facebook polls, it was overwhelmingly men who wrote answers, but if there were options to vote, the gender split was much more even. My hypothesis was that a kind of argumentative (usually male) person tended to enjoy writing long responses more. And so blocking lower-effort/less antagonistic/more anonymous responses meant I heard more from this kind of person.
I don’t know if that is true on the forum, but I would guess that the higher effort it is to respond the more selective the responses become in some direction. I guess I’d ask if you think that the people spending the most effort are likely to be the most informed. In my experience, they aren’t.
More broadly I think it would be good if the forum optionally took some information about users (location, income, gender, cause area, etc) and on answers with more than, say, 10 votes displayed some kind of breakdown. I imagine it would sometimes be interesting to find out how exactly agreement and disagreement cut on different issues.
edit: More broadly I think it would be good if the forum tried to find clusters and patterns in votes, perhaps allowing users to self-nominate categories and then showing how categories split once there were enough votes. I’m a little wary of the forum deciding what categories are important and embedding that, but I’d like to see if an opinion was mainly liked by longtermists, women, etc.
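To make the clustering idea a bit more concrete, here is a minimal sketch in Python, assuming the forum could export an anonymised user-by-post vote matrix. The data, cluster count and column layout are all made up for illustration; no such export actually exists, and in practice the clusters would need self-nominated category labels to be interpretable.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical anonymised vote matrix: rows are users, columns are posts,
# entries are -1 (disagree), 0 (no vote), +1 (agree).
rng = np.random.default_rng(0)
votes = rng.choice([-1, 0, 1], size=(200, 40))

# Group users whose voting patterns look similar.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(votes)

# For one post, show how each cluster split -- the kind of breakdown
# ("mainly liked by longtermists, women, etc.") described above, except that
# here the clusters are unlabeled.
post = 7
for c in range(3):
    cluster_votes = votes[labels == c, post]
    print(f"cluster {c}: mean vote {cluster_votes.mean():+.2f} (n={len(cluster_votes)})")
```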
Also I think it’s good to be able to anonymously express unpopular views. For most of human history it’s been unpopular to express support for LGBT+, the rights of women, animals. But if anonymous systems had existed we might have seen more support for such views. Likewise, pushing back against powerful people is easier if you can do it anonymously.
It seems like we could use the new reactions for some of this. At the moment they’re all positive but there could be some negative ones. And we’d want to be able to put the reactions on top level posts (which seems good anyway).
I think that it is generally fine to vote without explanations, but it would be nice to know why people are disagreeing or disliking something. Two scenarios come to mind:
If I write a comment that doesn’t make any claim/argument/proposal and it gets downvotes, I’m unclear what those downvotes mean.
If I make a post with a claim/argument/proposal and it gets downvoted without any comments, it isn’t clear what aspect of the post people have a problem with.
I remember writing in a comment several months ago about how I think that theft from an individual isn’t justified even if many people benefit from it, and multiple people disagreed without continuing the conversation. So I don’t know why they disagreed, or what part of the argument they thought was wrong. Maybe I made a simple mistake, but nobody was willing to point it out.
I also think that you raise good points regarding demographics and the willingness of different groups of people to voice their perspectives.
I agree it would be nice to know, but in every case someone has decided they do want to vote but don’t want to comment. Sometimes I try and cajole an answer, but ultimately I’m glad they gave me any information at all.
If anyone who disagrees with me on the Manifest stuff considers themselves inside the EA movement, I’d like to have some discussions with a focus on consensus-building, ie we chat in DMs and then both report some statements we agreed on and some we specifically disagreed on.
The EA forum should not seek to have opinions on non-EA events. I don’t mean individual EAs shouldn’t have opinions; I mean that as a group we shouldn’t seek to judge individual events. I don’t think we’re very good at it.
I don’t like Hanania’s behaviour either and am a little wary of systems where norm-breaking behaviour gives extra power, such as being endlessly edgy. But I will take those complaints to the Manifold community internally.
EAGs are welcome to invite or disinvite whoever CEA likes. Maybe one day I’ll complain. But do I want EAGs to invite a load of Manifest’s edgiest speakers? Not particularly.
It is fine for there to be spaces with discussion that I find ugly. If people want to go to these events, that’s up to them.
I dislike having unresolved conflicts which ossify into an inability to talk about things. Someone once told me that the couples who stay together are either great at settling disputes or almost never fight. We fight a bit and we aren’t great at settling it. I guess I’d like us to fight less (say we aren’t interested in conflicty posts) or to get better at making up (come to consensus afterwards, grow and change)
Only 1-6% of attendees at Manifest had issues along eugenicsy lines in the feedback forms. I don’t think this is worth a huge change.
I would imagine it’s worth $10mns to avoid EA becoming a space full of people who fearmonger based on the races, genders or sexualities of others. I don’t think that’s very likely.
To me, current systems for taxing discussion of eugenics seem fine. There is the odd post that gets downvoted. If it were good and convincing it would be upvoted. So far it hasn’t been. Seems fine. I am not scared of bad arguments [1]
Black people are probably not avoiding Manifest because of these speakers; that theory doesn’t seem to hold up for tech, rationalism[2], EA or several other communities.
I don’t know what people want when they point at “distancing EA from rationalism”
Manifest was fun for me, and it and several other events I went to in the bay felt like I let out a breath that I never knew I was holding. I am pretty careful what I say about you all sometimes and it’s tiring. I guess that’s true for some of you too. It was nice (and surprisingly un-edgy for me) to be in a space where I didn’t have to worry about offending people a lot. I enjoy having spaces where I feel safer.
There is a tradeoff between feeling safe and expression. I would have more time for some proposals if people acknowledged the costs they are putting on others. Even small costs, even costs I would willingly pay are still costs and to have that be unmentionable feels gaslighty.
There are some incentives in this community to be upset about things and to be blunt in response. Both of these things seem bad. I’d prefer incentives towards working together to figure out how the world is and implement the most effective morally agreeable changes per unit resource. This requires some truthseeking, but probably not the maximal amount, and some kindness, but probably not the maximal amount.
LessWrong doesn’t have any significant discussion of eugenics either. As I (weakly) understand it they kicked many posters off who wanted to talk about such things.
Nathan, could you summarize/clarify for us readers what your views are? (or link to whatever comment or document has those views?) I suspect that I agree with you on a majority of aspects and disagree on a minority, but I’m not clear on what your views are.
I’d be interested to see some sort of informal and exploratory ‘working group’ on inclusion-type stuff within EA, and have a small group conversation once a month or so, but I’m not sure if there are many (any?) people other than me that would be interested in having discussions and trying to figure out some actions/solutions/improvements.[1]
^ We had something like this for talent pipelines and hiring (it was High Impact Talent Ecosystem, and it was somehow connected to or organized by SuccessIf, but I’m not clear on exactly what the relationship was), but after a few months the organizer stopped and I’m not clear on why. In fact, I’m vaguely considering picking up the baton and starting some kind of a monthly discussion group about talent pipelines, coaching/developing talent, etc.
One limitation here: you have a view about Manifest. Your interlocutor would have a different view. But how do we know if those views are actually representative of major groupings?
My hunch is that, if equipped with a mind probe, we would find at least two major axes with several meaningfully different viewpoints on each axis. Overall, I’d predict that I would find at least four sizable clusters, probably five to seven.
Beyond that, yes you are likely right, but I don’t know how to have that discussion better. I tried using polls and upvoted quotes as a springboard in this post (Truth-seeking vs Influence-seeking—a narrower discussion) but people didn’t really bite there.
Suggestions welcome.
It is kind of exhausting to keep trying to find ways to get better samples of the discourse, without a sense that people will eventually go “oh yeah this convinces me”. If I were more confident I would have more energy for it.
I don’t think those were most of the questions I was looking for, though. This isn’t a criticism: running the poll early risks missing important cruxes and fault lines that haven’t been found yet; running it late means that much of the discussion has already happened.
There are also tradeoffs between viewpoints.xyz being accessible (=better sampling) and the data being rich enough. Limiting it to short answer stems with a binary response (plus an ambiguous “skip”) lends itself to identifying two major “camps” more easily than clusters within those camps. In general, expanding to five-point Likert scales would help, as would some sort of branching.
For example, I’d want to know—conditional on “Manifest did wrong here” / “the platforming was inappropriate”—what factors were more or less important to the respondent’s judgment. On a 1-5 scale, how important do you find [your view that the organizers did not distance themselves from the problematic viewpoints / the fit between the problematic viewpoints and a conference for the forecasting community / an absence of evidence that special guests with far-left or at least mainstream viewpoints on the topic were solicited / whatever]. And: how much would the following facts or considerations, if true, change your response to a hypothetical situation like the Manifest conference? Again, you can’t get how much on a binary response.
Maybe all that points to polling being more of a post-dialogue event, and accepting that we would choose discussants based on past history & early reactions. For example, I would have moderately high confidence that user X would represent a stance close to a particular pole on most issues, while I would represent a stance that codes as “~ moderately progressive by EA Forum standards.”
Often it feels like I can never please people on this forum. I think the poll is significantly better than no poll.
"I think the poll is significantly better than no poll."
Yeah, I agree with that! I don’t find it inconsistent with the idea that the reasonable trade-offs you made between various characteristics in the data-collection process make the data you got not a good match for the purposes I would like data for. They are good data for people interested in the answer to certain other questions. No one can build a (practical) poll for all possible use cases, just as no one can build a (reasonably priced) car that is both very energy-efficient and has major towing/hauling chops.
As useful as viewpoints.xyz is, I will mention that for maybe 50% or 60% of the questions, my reaction was “it depends.” I suppose you can’t really get around that unless the person creating the questions spends much more time to carefully craft them (which sort of defeats the purpose of a quick-and-dirty poll), or unless you do interviews (which are of course much more costly). I do think there is value in the quick-and-dirty MVP version, but its usefulness has a pretty noticeable upper bound.
Sam Harris takes Giving What We Can pledge for himself and for his meditation company “Waking Up”
Harris references MacAskill and Ord as having been central to his thinking and talks about Effective Altruism and existential risk. He publicly pledges 10% of his own income and 10% of the profit from Waking Up. He will also create a series of lessons on his meditation and education app around altruism and effectiveness.
Harris has 1.4M Twitter followers and is a famed humanist and New Atheist. The Waking Up app has over 500k downloads on Android, so I guess over 1 million overall.
Harris is a marmite figure—in my experience people love him or hate him.
It is good that he has done this.
Newswise, it seems to me this is more likely to impact the behavior of his listeners, who are likely to be well-disposed to him, as will the courses on his app. This is a significant but currently low-profile announcement.
I don’t think I’d go spreading this around more generally; many don’t like Harris, and for those who don’t like him, it could be easy to see EA as more of the same (callous superior progressivism).
In the low probability (5%?) event that EA gains traction in that space of the web (generally called the Intellectual Dark Web—don’t blame me, I don’t make the rules), I would urge caution for EA speakers who might get pulled into polarising discussions which would leave some groups feeling EA ideas are “not for them”.
This seems quite likely given EA Survey data where, amongst people who indicated they first heard of EA from a Podcast and indicated which podcast, Sam Harris’ strongly dominated all other podcasts.
More speculatively, we might try to compare these numbers to people hearing about EA from other categories. For example, by any measure, the number of people in the EA Survey who first heard about EA from Sam Harris’ podcast specifically is several times the number who heard about EA from Vox’s Future Perfect. As a lower bound, 4x more people specifically mentioned Sam Harris in their comment than selected Future Perfect, but this is probably dramatically undercounting Harris, since not everyone who selected Podcast wrote a comment that could be identified with a specific podcast. Unfortunately, I don’t know the relative audience size of Future Perfect posts vs Sam Harris’ EA podcasts specifically, but that could be used to give a rough sense of how well the different audiences respond.
Notably, Harris has interviewed several figures associated with EA; Ferriss only did MacAskill, while Harris has had MacAskill, Ord, Yudkowsky, and perhaps others.
This is true, although for whatever reason the responses to the podcast question seemed very heavily dominated by references to MacAskill.
This is the graph from our original post, showing every commonly mentioned category, not just the host (categories are not mutually exclusive). I’m not sure what explains why MacAskill really heavily dominated the Podcast category, while Singer heavily dominated the TED Talk category.
An alternate stance on moderation (from @Habryka.)
This is from this comment responding to this post about there being too many bans on LessWrong. Note how LessWrong is less moderated than here in that it (I guess) responds to individual posts less often, but more moderated in that it (I guess) rate-limits people more without giving reasons.
I found it thought provoking. I’d recommend reading it.
Thanks for making this post!
One of the reasons why I like rate-limits instead of bans is that it allows people to complain about the rate-limiting and to participate in discussion on their own posts (so seeing a harsh rate-limit of something like “1 comment per 3 days” is not equivalent to a general ban from LessWrong, but should be more interpreted as “please comment primarily on your own posts”, though of course it shares many important properties of a ban).
This is a pretty opposite approach to the EA forum which favours bans.
Things that seem most important to bring up in terms of moderation philosophy:
Moderation on LessWrong does not depend on effort
“Another thing I’ve noticed is that almost all the users are trying. They are trying to use rationality, trying to understand what’s been written here, trying to apply Bayes’ rule or understand AI. Even some of the users with negative karma are trying, just having more difficulty.”
Just because someone is genuinely trying to contribute to LessWrong, does not mean LessWrong is a good place for them. LessWrong has a particular culture, with particular standards and particular interests, and I think many people, even if they are genuinely trying, don’t fit well within that culture and those standards.
In making rate-limiting decisions like this I don’t pay much attention to whether the user in question is “genuinely trying ” to contribute to LW, I am mostly just evaluating the effects I see their actions having on the quality of the discussions happening on the site, and the quality of the ideas they are contributing.
Motivation and goals are of course a relevant component to model, but that mostly pushes in the opposite direction, in that if I have someone who seems to be making great contributions, and I learn they aren’t even trying, then that makes me more excited, since there is upside if they do become more motivated in the future.
I sense this is quite different to the EA forum too. I can’t imagine a mod saying I don’t pay much attention to whether the user in question is “genuinely trying”. I find this honesty pretty stark. Feels like a thing moderators aren’t allowed to say. “We don’t like the quality of your comments and we don’t think you can improve”.
Signal to Noise ratio is important
Thomas and Elizabeth pointed this out already, but just because someone’s comments don’t seem actively bad, doesn’t mean I don’t want to limit their ability to contribute. We do a lot of things on LW to improve the signal to noise ratio of content on the site, and one of those things is to reduce the amount of noise, even if the mean of what we remove looks not actively harmful.
We of course also do other things than to remove some of the lower signal content to improve the signal to noise ratio. Voting does a lot, how we sort the frontpage does a lot, subscriptions and notification systems do a lot. But rate-limiting is also a tool I use for the same purpose.
Old users are owed explanations, new users are (mostly) not
I think if you’ve been around for a while on LessWrong, and I decide to rate-limit you, then I think it makes sense for me to make some time to argue with you about that, and give you the opportunity to convince me that I am wrong. But if you are new, and haven’t invested a lot in the site, then I think I owe you relatively little.
I think in doing the above rate-limits, we did not do enough to give established users the affordance to push back and argue with us about them. I do think most of these users are relatively recent or are users we’ve been very straightforward with since shortly after they started commenting that we don’t think they are breaking even on their contributions to the site (like the OP Gerald Monroe, with whom we had 3 separate conversations over the past few months), and for those I don’t think we owe them much of an explanation. LessWrong is a walled garden.
You do not by default have the right to be here, and I don’t want to, and cannot, accept the burden of explaining to everyone who wants to be here but who I don’t want here, why I am making my decisions. As such a moderation principle that we’ve been aspiring to for quite a while is to let new users know as early as possible if we think them being on the site is unlikely to work out, so that if you have been around for a while you can feel stable, and also so that you don’t invest in something that will end up being taken away from you.
Feedback helps a bit, especially if you are young, but usually doesn’t
Maybe there are other people who are much better at giving feedback and helping people grow as commenters, but my personal experience is that giving users feedback, especially the second or third time, rarely tends to substantially improve things.
I think this sucks. I would much rather be in a world where the usual reasons why I think someone isn’t positively contributing to LessWrong were of the type that a short conversation could clear up and fix, but it alas does not appear so, and after having spent many hundreds of hours over the years giving people individualized feedback, I don’t really think “give people specific and detailed feedback” is a viable moderation strategy, at least more than once or twice per user. I recognize that this can feel unfair on the receiving end, and I also feel sad about it.
I do think the one exception here is if people are young or are non-native English speakers. Do let me know if you are in your teens or you are a non-native English speaker who is still learning the language. People do really get a lot better at communication between the ages of 14-22, and people’s English does get substantially better over time, and this helps with all kinds of communication issues.
Again this is very blunt but I’m not sure it’s wrong.
We consider legibility, but it’s only a relatively small input into our moderation decisions
It is valuable and a precious public good to make it easy to know which actions you take will cause you to end up being removed from a space. However, that legibility also comes at great cost, especially in social contexts. Every clear and bright-line rule you outline will have people budding right up against it, and de-facto, in my experience, moderation of social spaces like LessWrong is not the kind of thing you can do while being legible in the way that for example modern courts aim to be legible.
As such, we don’t have laws. If anything we have something like case-law which gets established as individual moderation disputes arise, which we then use as guidelines for future decisions, but also a huge fraction of our moderation decisions are downstream of complicated models we formed about what kind of conversations and interactions work on LessWrong, and what role we want LessWrong to play in the broader world, and those shift and change as new evidence comes in and the world changes.
I do ultimately still try pretty hard to give people guidelines and to draw lines that help people feel secure in their relationship to LessWrong, and I care a lot about this, but at the end of the day I will still make many from-the-outside-arbitrary-seeming-decisions in order to keep LessWrong the precious walled garden that it is.
I try really hard to not build an ideological echo chamber
When making moderation decisions, it’s always at the top of my mind whether I am tempted to make a decision one way or another because they disagree with me on some object-level issue. I try pretty hard to not have that affect my decisions, and as a result have what feels to me a subjectively substantially higher standard for rate-limiting or banning people who disagree with me, than for people who agree with me. I think this is reflected in the decisions above.
I do feel comfortable judging people on the methodologies and abstract principles that they seem to use to arrive at their conclusions. LessWrong has a specific epistemology, and I care about protecting that. If you are primarily trying to…
argue from authority,
don’t like speaking in probabilistic terms,
aren’t comfortable holding multiple conflicting models in your head at the same time,
or are averse to breaking things down into mechanistic and reductionist terms,
then LW is probably not for you, and I feel fine with that. I feel comfortable reducing the visibility or volume of content on the site that is in conflict with these epistemological principles (of course this list isn’t exhaustive, in-general the LW sequences are the best pointer towards the epistemological foundations of the site).
It feels cringe to read that basically if I don’t get the Sequences, LessWrong might rate-limit me. But it is good to be open about it. I don’t think the EA forum’s core philosophy is as easily expressed.
If you see me or other LW moderators fail to judge people on epistemological principles but instead see us directly rate-limiting or banning users on the basis of object-level opinions that even if they seem wrong seem to have been arrived at via relatively sane principles, then I do really think you should complain and push back at us. I see my mandate as head of LW to only extend towards enforcing what seems to me the shared epistemological foundation of LW, and to not have the mandate to enforce my own object-level beliefs on the participants of this site.
Now some more comments on the object-level:
I overall feel good about rate-limiting everyone on the above list. I think it will probably make the conversations on the site go better and make more people contribute to the site.
Us doing more extensive rate-limiting is an experiment, and we will see how it goes. As kave said in the other response to this post, the rule that suggested these specific rate-limits does not seem like it has an amazing track record, though I currently endorse it as something that calls things to my attention (among many other heuristics).
Also, if anyone reading this is worried about being rate-limited or banned in the future, feel free to reach out to me or other moderators on Intercom. I am generally happy to give people direct and frank feedback about their contributions to the site, as well as how likely I am to take future moderator actions. Uncertainty is costly, and I think it’s worth a lot of my time to help people understand to what degree investing in LessWrong makes sense for them.
"This is a pretty opposite approach to the EA forum which favours bans."
If you remove ones for site-integrity reasons (spamming DMs, ban evasion, vote manipulation), bans are fairly uncommon. In contrast, it sounds like LW does do some bans of early-stage users (cf. the disclaimer on this list), which could be cutting off users with a high risk of problematic behavior before it fully blossoms. Reading further, it seems like the stuff that triggers a rate limit at LW usually triggers no action, private counseling, or downvoting here.
As for more general moderation philosophy, I think the EA Forum has an unusual relationship to the broader EA community that makes the moderation approach outlined above a significantly worse fit for the Forum than for LW. As a practical matter, the Forum is the ~semi-official forum for the effective altruism movement. Organizations post official announcements here as a primary means of publishing them, but rarely on (say) the effectivealtruism subreddit. Posting certain content here is seen as a way of whistleblowing to the broader community as a whole. Major decisionmakers are known to read and even participate in the Forum.
In contrast (although I am not an LW user or a member of the broader rationality community), it seems to me that the LW forum doesn’t have this particular relationship to a real-world community. One could say that the LW forum is the official online instantiation of the LessWrong community (which is not limited to being an online community, but that’s a major part of it). In that case, we have something somewhat like the (made-up) Roman Catholic Forum (RCF) that is moderated by designees of the Pope. Since the Pope is the authoritative source on what makes something legitimately Roman Catholic, it’s appropriate for his designees to employ a heavier hand in deciding what posts and posters are in or out of bounds at the RCF. But CEA/EVF have—rightfully—mostly disowned any idea that they (or any other specific entity) decide what is or isn’t a valid or correct way to practice effective altruism.
One could also say that the LW forum is an online instantiation of the broader rationality community. That would be somewhat akin to John and Jane’s (made up) Baptist Forum (JJBF) that is moderated by John and Jane. One of the core tenets of Baptist polity is that there are no centralized, authoritative arbiters of faith and practice. So JJBF is just one of many places that Baptists and their critics can go to discuss Baptist topics. It’s appropriate for John and Jane to employ a heavier hand in deciding what posts and posters are in or out of bounds at the JJBF because there are plenty of other, similar places for them to go. JJBF isn’t anything special. But as noted above, that isn’t really true of the EA Forum because of its ~semi-official status in a real-world social movement.
It’s ironic that—in my mind—either a broader or narrower conception of what LW is would justify tighter content-based moderation practices, while those are harder to justify in the in-between place that the EA Forum occupies. I think the mods here do a good job handling this awkward place for the most part by enforcing viewpoint-neutral rules like civility and letting the community manage most things through the semi-democratic karma method (although I would be somewhat more willing to remove certain content than they are).
This also roughly matches my impression. I do think I would prefer the EA community to either go towards more centralized governance or less centralized governance in the relevant way, but I agree that given how things are, the EA Forum team has less leeway with moderation than the LW team.
"But CEA/EVF have—rightfully—mostly disowned any idea that they (or any other specific entity) decide what is or isn’t a valid or correct way to practice effective altruism."
Apart from choosing who can attend their conferences (which are the de facto place that many community members meet), writing their intro to EA, managing the effective altruism website and offering criticism of specific members’ behaviour.
Seems like they are the de facto people who decide what is or isn’t a valid way to practice effective altruism. If anything, more so than the LessWrong team (or maybe rationalists are just inherently unmanageable).
I agree on the ironic point though. I think you might assume that the EA forum would moderate more than LW, but that doesn’t seem to be the case.
Status note: This comment is written by me and reflects my views. I ran it past the other moderators, but they might have major disagreements with it.
I agree with a lot of Jason’s view here. The EA community is indeed much bigger than the EA Forum, and the Forum would serve its role as an online locus much less well if we used moderation action to police the epistemic practices of its participants.
I don’t actually think this is that bad. I think it is a strength of the EA community that it is large enough and has sufficiently many worldviews that any central discussion space is going to be a bit of a mishmash of epistemologies.[1]
Some corresponding ways this viewpoint causes me to be reluctant to apply Habryka’s philosophy:[2]
Something like a judicial process is much more important to me. We try much harder than my read of LessWrong to apply rules consistently. We have the Forum Norms doc and our public history of cases forms something much closer to a legal code + case law than LW has. Obviously we’re far away from what would meet a judicial standard, but I view much of my work through that lens. Also notable is that all nontrivial moderation decisions get one or two moderators to second the proposal.
Related both to the epistemic diversity, and the above, I am much more reluctant to rely on my personal judgement about whether someone is a positive contributor to the discussion. I still do have those opinions, but am much more likely to use my power as a regular user to karma-vote on the content.
Some points of agreement:
"Old users are owed explanations, new users are (mostly) not"
Agreed. We are much more likely to make judgement calls in cases of new users. And much less likely to invest time in explaining the decision. We are still much less likely to ban new users than LessWrong. (Which, to be clear, I don’t think would have been tenable on LessWrong when they instituted their current policies, which was after the launch of GPT-4 and a giant influx of low quality content.)
"I try really hard to not build an ideological echo chamber"
Most of the work I do as a moderator is reading reports and recommending no official action. I have the internal experience of mostly fighting others to keep the Forum an open platform. Obviously that is a compatible experience with overmoderating the Forum into an echo chamber, but I will at least bring this up as a strong point of philosophical agreement.
Final points:
I do think we could potentially give more “near-ban” rate limits, such as the 1 comment/3 days. The main benefit I see of this is that it allows the user to write content disagreeing with their ban.
Controversial point! Maybe if everyone adopted my own epistemic practices the community would be better off. It would certainly gain in the ability to communicate smoothly with itself, and would probably spend less effort pulling in opposite directions as a result, but I think the size constraints and/or deference to authority that would be required would not be worth it.
"I do think we could potentially give more “near-ban” rate limits, such as the 1 comment/3 days. The main benefit I see of this is that it allows the user to write content disagreeing with their ban."
I think the banned individual should almost always get at least one final statement to disagree with the ban after its pronouncement. Even the Romulans allowed (will allow?) that. Absent unusual circumstances, I think they—and not the mods—should get the last word, so I would also allow a single reply if the mods responded to the final statement.
More generally, I’d be interested in ~”civility probation,” under which a problematic poster could be placed for ~three months as an option they could choose as an alternative to a 2-4 week outright ban. Under civility probation, any “probation officer” (trusted non-mod users) would be empowered to remove content too close to the civility line and optionally temp-ban the user for a cooling-off period of 48 hours. The theory of impact comes from the criminology literature, which tells us that speed and certainty of sanction are more effective than severity. If the mods later determined after full deliberation that the second comment actually violated the rules in a way that crossed the action threshold, then they could activate the withheld 2-4 week ban for the first offense and/or impose a new suspension for the new one.
We are seeing more of this in the criminal system—swift but moderate “intermediate sanctions” for things like failing a drug test, as opposed to doing little about probation violations until things reach a certain threshold and then going to the judge to revoke probation and send the offender away for at least several months. As far as due process, the theory is that the offender received their due process (consideration by a judge, right to presumption of innocence overcome only by proof beyond a reasonable doubt) in the proceedings that led to the imposition of probation in the first place.
How are we going to deal emotionally with the first big newspaper attack against EA?
EA is pretty powerful in terms of impact and funding.
It seems only a matter of time before there is a really nasty article written about the community or a key figure.
Last year the NYT wrote a hit piece on Scott Alexander, and while it was cool that he defended himself, I think he and the rationalist community overreacted and looked bad.
I would like us to avoid this.
If someone writes a hit piece about the community, Givewell, Will MacAskill etc, how are we going to avoid a kneejerk reaction that makes everything worse?
I suggest if and when this happens:
individuals largely don’t respond publicly unless they are very confident they can do so in a way that leads to deescalation.
articles exist to get clicks. It’s worth someone (not necessarily me or you) responding to an article in the NYT, but if, say, a niche commentator goes after someone, fewer people will hear it if we let it go.
let the comms professionals deal with it. All EA orgs and big players have comms professionals. They can defend themselves.
if we must respond (we often needn’t) we should adopt a stance of grace, curiosity and humility. Why do they think these things are true? What would convince us?
Personally I hate being attacked and am liable to feel defensive and respond badly. I assume you are no different. I’d like to think about this so that if and when it happens we can avoid embarrassing ourselves and the things we care about.
Yeah, I think the community response to the NYT piece was counterproductive, and I’ve also been dismayed at how much people in the community feel the need to respond to smaller hit pieces, effectively signal boosting them, instead of just ignoring them. I generally think people shouldn’t engage with public attacks unless they have training in comms (and even then, sometimes the best response is just ignoring).
Debate weeks every other week and we vote on what the topic is.
I think if the forum had a defined topic (especially) in advance, I would be more motivated to read a number of posts on that topic.
One of the benefits of the culture war posts is that we are all thinking about the same thing. If we did that on topics perhaps with dialogues from experts, that would be good and on a useful topic.
A crux for me at the moment is whether we can shape debate weeks in a way which leads to deep rather than shallow engagement. If we were to run debate weeks more often, I’d (currently) want to see them causing people to change their mind, have useful conversations, etc… It’s something I’ll be looking closely at when we do a post-mortem on this debate week experiment.
Also, every other week seems prima facie a bit burdensome for un-interested users. Additionally, I want top-down content to only be a part of the Forum. I wouldn’t want to over-shepherd discussion and end up with less wide-ranging and good quality posts.
Happy to explore other ways to integrate polls etc if people like them and they lead to good discussions though.
Hi Nathan! I like suggestions and would like to see more suggestions. But I don’t know what the theory of change is for the forum, so I find it hard to look at your suggestion and see if it maps onto the theory of change.
Re this: “One of the benefits of the culture war posts is that we are all thinking about the same thing.”
I’d be surprised if 5% of EAs spent more than 5 minutes thinking about this topic and 20% of forum readers spent more than 5 minutes thinking about it. I’d be surprised if there were more than 100 unique commenters on posts related to that topic. Why does this matter? Well, prioritising a minority of subject-matter interested people over the remaining majority could be a good way to shrink your audience.
Why is shrinking audience bad? If this forum focused more on EA topics and some people left I am not sure that would be bad. I guess it would be slightly good on expectation.
And to be clear I mean if we focused on “are AIs deserving of moral value” “what % of money should be spent on animal welfare”
I agree that there’s a lot of advantage of occasionally bringing a critical mass of attention to certain topics where this moves the community’s understanding forward vs. just hoping we end up naturally having the most important conversations.
Weird idea: What if some forum members were chosen as “jurors”, and their job is to read everything written during the debate week, possibly ask questions, and try to come to a conclusion?
I’m not that interested in AI welfare myself, but I might become interested if such “jurors” who recorded their opinion before and after made a big update in favor of paying attention to it.
To keep the jury relatively neutral, I would offer people the chance to sign up to “be a juror during the first week of August”, before the topic for the first week of August is actually known.
Thanks Nathan! People seem to like it so we might use it again in the future. If you or anyone else has feedback that might improve the next iteration of it, please let us know! You can comment here or just dm.
But I think there’s work to do on the display of the aggregate.
I imagine there should probably be a table somewhere at least (a list of each person and what they say).
This might show a distribution, above.
There must be some way to just not have the icons overlap with each other like this. Like, use a second dimension, just to list them. Maybe use a wheat plot? I think strip plots and swarm plots could also be options.
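For what it’s worth, a minimal sketch of the swarm-plot idea using seaborn, with made-up vote positions standing in for the real data (I don’t know what the Forum’s stack looks like, so this is only to illustrate the layout, not a proposed implementation):

```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

# Made-up 1-D vote positions on a -100 (disagree) to +100 (agree) slider.
rng = np.random.default_rng(1)
positions = np.clip(rng.normal(loc=20, scale=40, size=120), -100, 100)

# A swarm plot nudges overlapping points along a second axis so each
# individual vote stays visible instead of piling up on one line.
ax = sns.swarmplot(x=positions, size=4)
ax.set_xlim(-100, 100)
ax.set_xlabel("disagree  <->  agree")
plt.show()
```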
Really appreciate all the feedback and suggestions! This is definitely more votes than we expected. 😅
I implemented a hover-over based on @Agnes Stenlund’s designs in this PR, though our deployment is currently blocked (by something unrelated), so I’m not sure how long it will take to make it to the live site.
I may not have time to make further changes to the poll results UI this week, but please keep the comments coming—if we decide to run another debate or poll event, then we will iterate on the UI and take your feedback into account.
The orange line above the circles makes it look like there’s a similar number of people at the extreme left and the extreme right, which doesn’t seem to be the case
I don’t think it would help much for this question, but I could imagine using this feature for future questions in which the ability to answer anonymously would be important. (One might limit this to users with a certain amount of karma to prevent brigading.)
I note some of my confusion that might have been shared by others. I initially had thought that the option from users was between binary “agree” and “disagree” and thought the method by which a user could choose was by dragging to one side or another. I see now that this would signify maximal agreement/disagreement, although maybe users like me might have done so in error. Perhaps something that could indicate this more clearly would be helpful to others.
Thanks Brad, I didn’t foresee that! (Agree-react to Brad’s comment if you experienced the same thing.) Would it have helped if we had marked increments along the slider? Like the below, but prettier? (Our designer is on holiday.)
Yeah, if there were markers like “neutral”, “slightly agree”, “moderately agree”, “strongly agree”, etc. that might make it clearer.
After the decision by the user registers, a visual display that states something like “you’ve indicated that you strongly agree with the statement X. Redrag if this does not reflect your view or if something changes your mind and check out where the rest of the community falls on this question by clicking here.”
I’d love to hear more from the disagree reactors. They should feel very free to dm. I’m excited to experiment more with interactive features in the future, so critiques are especially useful now!
I am not confident that another FTX level crisis is less likely to happen, other than that we might all say “oh this feels a bit like FTX”.
Changes:
Board swaps. Yeah maybe good, though many of the people who left were very experienced. And it’s not clear whether there are due diligence people (which seems to be what was missing).
Orgs being spun out of EV and EV being shuttered. I mean, maybe good though feels like it’s swung too far. Many mature orgs should run on their own, but small orgs do have many replicable features.
More talking about honesty. Not really sure this was the problem. The issue wasn’t the median EA; it was in the tails. Are the tails of EA more honest? Hard to say.
We have now had a big crisis so it’s less costly to say “this might be like that big crisis”. Though notably this might also be too cheap—we could flinch away from doing ambitious things
Large orgs seem slightly more beholden to comms/legal to avoid saying or doing the wrong thing.
OpenPhil is hiring more internally
Non-changes:
Still very centralised. I’m pretty pro-elite, so I’m not sure this is a problem in and of itself, though I have come to think that elites in general are less competent than I previously thought (see the FTX and OpenAI crises).
Little discussion of why or how the affiliation with SBF happened despite many well connected EAs having a low opinion of him
Little discussion of what led us to ignore the base rate of scamminess in crypto and how we’ll avoid that in future
Little discussion of why or how the affiliation with SBF happened despite many well connected EAs having a low opinion of him
Little discussion of what led us to ignore the base rate of scamminess in crypto and how we’ll avoid that in future
For both of these comments, I want a more explicit sense of what the alternative was. Many well-connected EAs had a low opinion of Sam. Some had a high opinion. Should we have stopped the high-opinion ones from affiliating with him? By what means? Equally, suppose he finds skepticism from (say) Will et al, instead of a warm welcome. He probably still starts the FTX future fund, and probably still tries to make a bunch of people regranters. He probably still talks up EA in public. What would it have taken to prevent any of the resultant harms?
Likewise, what does not ignoring the base rate of scamminess in crypto actually look like? Refusing to take any money made through crypto? Should we be shunning e.g. Vitalik Buterin now, or any of the community donors who made money speculating?
For both of these comments, I want a more explicit sense of what the alternative was.
Not a complete answer, but I would have expected communication and advice for FTXFF grantees to have been different. From many well connected EAs having a low opinion of him, we can imagine that grantees might have been urged to properly set up corporations, not count their chickens before they hatched, properly document everything and assume a lower-trust environment more generally, etc. From not ignoring the base rate of scamminess in crypto, you’d expect to have seen stronger and more developed contingency planning (remembering that crypto firms can and do collapse in the wake of scams not of their own doing!), more decisions to build more organizational reserves rather than immediately ramping up spending, etc.
The measures you list would have prevented some financial harm to FTXFF grantees, but it seems to me that that is not the harm that people have been most concerned about. I think it’s fair for Ben to ask about what would have prevented the bigger harms.
Ben said “any of the resultant harms,” so I went with something I saw as having a fairly high probability. Also, I mostly limit this to harms caused by “the affiliation with SBF”: I think expecting EA to thwart schemes cooked up by people who happen to be EAs (without more) is about as realistic as expecting (e.g.) churches to thwart schemes cooked up by people who happen to be members (without more).
To be clear, I do not think the “best case scenario” story in the following three paragraphs would be likely. However, I think it is plausible, and is thus responsive to a view that SBF-related harms were largely inevitable.
In this scenario, leaders recognized after the 2018 Alameda situation that SBF was just too untrustworthy and possibly fraudulent (albeit against investors) to deal with, at least absent some safeguards (a competent CFO, no lawyers implicated in past shady poker-site scandals, first-rate and comprehensive auditors). Maybe SBF wasn’t too far gone at this point (he hadn’t even created FTX in mid-2018), and a costly signal from EA leaders (“we won’t take your money”) would have turned him, or at least some of his key lieutenants, away from the path he went down? Let’s assume not, though.
If SBF declined those safeguards, most orgs decline to take his money and certainly don’t put him on podcasts. (Remember that, at least as of 2018, it sounds like people thought Alameda was going nowhere—so the motivation to go against consensus and take SBF money is much weaker at first.) Word gets down to the rank-and-file that SBF is not aligned, likely depriving him of some of his FTX workforce. Major EA orgs take legible action to document that he is not in good standing with them, or adopt a public donor-acceptability policy that contains conditions they know he can’t/won’t meet. Major EA leaders do not work for or advise the FTXFF when/if it forms.
When FTX explodes, the comment from major EA orgs is that they were not fully convinced he was trustworthy and cut ties with him when that came to light. There’s no statutory inquiry into EVF, and no real media story here. SBF is retrospectively seen as an ~apostate who was largely rejected by the community when he showed his true colors, despite the big $$ he had to offer, and who continued to claim affiliation with EA for reputational cover. (Or maybe he would have gotten his feelings hurt and started the FTX Children’s Hospital Fund to launder his reputation? Not very likely.)
A more modest mitigation possibility focuses more on EVF, Will, and Nick. In this scenario, at least EVF doesn’t take SBF’s money. He isn’t mentioned on podcasts. Hopefully, Will and Nick don’t work with FTXFF, or if they do they clearly disaffiliate from EVF first. I’d characterize this scenario as limiting the affiliation with SBF by not having what is (rightly or wrongly) seen as EA’s flagship organization and its board members risk lending credibility to him. In this scenario, the media narrative is significantly milder—it’s much harder to write a juicy narrative about FTXFF funding various smaller organizations, and without the ability to use Will’s involvement with SBF as a unifying theme. Moreover, when FTX explodes in this scenario, EVF is not paralyzed in the same way it was in the actual scenario. It doesn’t have a CC investigation, ~$30MM clawback exposure, multiple recused board members, or other fires of its own to put out. It is able to effectively lead/coordinate the movement through a crisis in a way that it wasn’t (and arguably still isn’t) able to due to its own entanglement. That’s hardly avoiding all the harms involved in affiliation with SBF . . . but I’d argue it is a meaningful reduction.
The broader idea there is that it is particularly important to isolate certain parts of the EA ecosystem from the influence of low-trustworthiness donors, crypto influence, etc. This runs broader than the specific examples above. For instance, it was not good to have an organization with community-health responsibilities like EVF funded in significant part by a donor who was seen as low-trustworthiness, or one who was significantly more likely to be the subject of whistleblowing than the median donor.
It’s likely that no single answer is “the” sole answer. For instance, it’s likely that people believed they could assume that trusted insiders were significantly more ethical than the average person. The insider-trusting bias has bitten any number of organizations and movements (e.g., churches, the Boy Scouts). However, it seems clear from Will’s recent podcast that the downsides of being linked to crypto were appreciated at some level. It would take a lot for me to be convinced that all that $$ wasn’t a major factor.
The Scout Mindset deserved 1/10th of the marketing campaign of WWOTF. Galef is a great figurehead for rational thinking and it would have been worth it to try and make her a public figure.
I think much of the issue is that:
1. It took a while to ramp up to being able to do things such as the marketing campaign for WWOTF. It’s not trivial to find the people and buy-in necessary, and previous EA books haven’t had similar campaigns.
2. Even when you have that capacity, it’s typically much more limited than we’d want.
You are an EA, if you want to be. Reading this forum is enough. Giving a little of your salary effectively is enough. Trying to get an impactful job is enough. If you are trying even with a fraction of your resources to make the world better and chatting with other EAs about it, you are one too.
What surprises me about this work is that it does not seem to include the more aggressive (for lack of a better word) alternatives I have heard being thrown around, like “Suffering-free”, or “Clean”, or “cruelty-free”.
For what it’s worth, my first interpretation of “no-kill meat” is that you’re harvesting meat from animals in ways that don’t kill them. Like amputation of parts that grow back.
I am really not the person to do it, but I still think there needs to be some community therapy here. Like a truth and reconciliation committee. Working together requires trust and I’m not sure we have it.
Curious if you have examples of this being done well in communities you’ve been aware of? I might have asked you this before.
I’ve been part of an EA group where some emotionally honest conversations were had, and I think they were helpful but weren’t a big fix. I think a similar group later did a more explicit and formal version and they found it helpful.
I think the strategy fortnight worked really well. I suggest that another one is put in the calendar (for, say, 3 months’ time), and then rather than drip-feeding comments we sort of wait and burst it all out again.
It felt better to me, anyway, to be able to say “for these two weeks I will engage”.
I want to once again congratulate the forum team on this voting tool. I think by doing this, the EA forum is at the forefront of internal community discussions. No communities do this well and it’s surprising how powerful it is.
I hope Will MacAskill is doing well. I find it hard to predict how he’s doing as a person. While there have been lots of criticisms (and I’ve made some), I think it’s tremendously hard to be the Schelling person for a movement. There is a separate axis, however: I hope that in himself he’s doing well, and I imagine many feel that way. I hope he has an accurate picture here.
Since it looks like you’re looking for an opinion, here’s mine:
To start, while I deeply respect GiveWell’s work, in my personal opinion I still find it hard to believe that any GiveWell top charity is worth donating to if you’re planning to do the typical EA project of maximizing the value of your donations in a scope sensitive and impartial way.
…Additionally, I don’t think other x-risks matter nearly as much as AI risk work (though admittedly a lot of biorisk stuff is now focused on AI-bio intersections).
Instead, I think the main difficult judgement call in EA cause prioritization right now is “neglected animals” (eg invertebrates, wild animals) versus AI risk reduction.
AFAICT this also seems to be somewhat close to the overall view of the EA Forum, as you can see in some of the debate weeks (animals smashed humans) and the Donation Election (where neglected animal orgs were all at the top, followed by PauseAI).
This comparison is made especially difficult because OP funds a lot of AI but not any of the neglected animal stuff, which subjects the AI work to significantly more diminished marginal returns.
To be clear, AI orgs still do need money. I think there’s a vibe that all the AI organizations that can be funded by OpenPhil are fully funded and thus AI donations are not attractive to individual EA forum donors. This is not true. I agree that their highest priority parts are fully funded and thus the marginal cost-effectiveness of donations is reduced. But this marginal cost-effectiveness is not eliminated, and it still can be high. I think there are quite a few AI orgs that are still primarily limited by money and would do great things with more funding. Additionally it’s not healthy for these orgs to be so heavily reliant on OpenPhil support.
So my overall guess is if you think AI is only 10x or less important in the abstract than work on neglected animals, you should donate to the neglected animals due to this diminishing marginal returns issue.
I currently lean a bit towards AI being >10x neglected animals, and therefore I want to donate to AI stuff, but I really don’t think this is settled; it needs more research, and it’s very reasonable to believe the other way.
~
Ok so where to donate? I don’t have a good systematic take in either the animal space or the AI space unfortunately, but here’s a shot:
For starters, in the AI space, a big issue for individual donors is that unfortunately it’s very hard to properly evaluate AI organizations without a large stack of private information that is hard to come by. This private info has greatly changed my view of what organizations are good in the AI space. On the other hand you can basically evaluate animal orgs well enough with only public info, and the private info only improves the eval a little bit.
Moreover, in the neglected animal space, I do basically trust the EA Animal Welfare Fund to allocate money well and think it could be hard for an individual to outperform that. Shrimp Welfare Project also looks compelling.
I think the LTFF is worth donating to, but to be clear, I don’t think the LTFF actually does all-considered work on the topic: they seem to have an important segment of expertise that is neglected outside the LTFF, but they definitely don’t have the expertise to cover and evaluate everything. I still think the LTFF would be a worthy donation choice.
If I were making a recommendation, I would concur with recommending the three AI orgs on OpenPhil’s list: Horizon, ARI, and CLTR. They are all being recommended by individual OpenPhil staff for good reason.
There are several other orgs I think are worth considering as well and you may want to think about options that are only available to you as an individual, such as political donations. Or think about ways where OpenPhil may not be able to do as well in the AI space, like PauseAI or digital sentience work, both of which still look neglected.
~
A few caveats/exceptions to my above comment:
I’m very uncertain about whether AI really is >10x neglected animals and I cannot emphasize enough that reasonable and very well-informed people can disagree on this issue and I could definitely imagine changing my mind on this over the next year.
I’m not shilling for my own orgs in this comment to keep it less biased, but those are also options.
I don’t mean to be mean to GiveWell. Of course donating to GiveWell is very good and still better than 99.99% of charitable giving!
Another area I don’t consider but probably should is organizations like Giving What We Can that work somewhat outside these cause areas but may have sufficient multipliers that they are still very cost-effective. I think meta-work on top of global health and development (such as improving its effectiveness or getting more people to like it / do it better) can often lead to larger multipliers, since there’s magnitudes more underlying money and interest in that area in the first place.
I don’t appropriately focus on digital sentience, which OpenPhil is also not doing and could also use some help. I think this could be fairly neglected. Work that aims to get AI companies to commit towards not committing animal mistreatment is also an interesting and incredibly underexplored area that I don’t know much about.
There’s a sizable amount of meta-strategic disagreement / uncertainty within the AI space that I gloss over here (imo Michael Dickens does a good job of overviewing this even if I have a lot of disagreements with his conclusions).
I do think risk aversion is underrated as a reasonable donor attitude that can vary between donors and does make the case for focusing on neglected animals stronger. I don’t think there’s an accurate and objective answer about how risk averse you ought to be.
I agree with this comment. Thanks for this clear overview.
The only element where I might differ is whether AI really is >10x neglected animals.
My main issue is that while AI is a very important topic, it’s very hard to know whether AI organizations will have an overall positive or negative (or neutral) impact. First, it’s hard to know what will work and what won’t accidentally increase capabilities. More importantly, if we end up in a future aligned with human values but not animals or artificial sentience, this could still be a very bad world in which a large number of individuals are suffering (e.g., if factory farming continues indefinitely).
My tentative and not very solid view is that work at the intersection of AI x animals is promising (eg work that aims to get AI companies to commit towards not committing animal mistreatment), and attempts for a pause are interesting (since they give us more time to figure out stuff).
If you think that an aligned AGI will truly maximise global utility, you will have a more positive outlook.
But since I’m rather risk averse, I devote most of my resources to neglected animals.
I’m very uncertain about whether AI really is >10x neglected animals and I cannot emphasize enough that reasonable and very well-informed people can disagree on this issue and I could definitely imagine changing my mind on this over the next year. This is why I framed my comment the way I did hopefully making it clear that donating to neglected animal work is very much an answer I endorse.
I also agree it’s very hard to know whether AI organizations will have an overall positive or negative (or neutral) impact. I think there’s higher-level strategic issues that make the picture very difficult to ascertain even with a lot of relevant information (imo Michael Dickens does a good job of overviewing this even if I have a lot of disagreements). Also the private information asymmetry looms large here.
I also agree that “work that aims to get AI companies to commit towards not committing animal mistreatment” is an interesting and incredibly underexplored area. I think this is likely worth funding if you’re knowledgeable about the space (I’m not) and know of good opportunities (I currently don’t).
Regarding AI x animals donation opportunities, all of this is pretty new but I know a few. Hive launched an AI for Animals website, with an upcoming conference: https://www.aiforanimals.org/
I think it’s normal, and even good that the EA community doesn’t have a clear prioritization of where to donate. People have different values and different beliefs, and so prioritize donations to different projects.
It is hard to know exactly how high impact animal welfare funding opportunities interact with x-risk ones
What do you mean? I don’t understand how animal welfare campaigns interact with x-risks, except for reducing the risk of future pandemics, but I don’t think that’s what you had in mind (and even then, I don’t think those are the kinds of pandemics that x-risk minded people worry about)
I don’t know what the general consensus on the most impactful x-risk funding opportunities are
It seems clear to me that there is no general consensus, and some of the most vocal groups are actively fighting against each other.
I don’t really know what orgs do all-considered work on this topic. I guess the LTFF?
You can see Giving What We Can’s recommendations for global catastrophic risk reduction on this page[1] (e.g. there’s also Longview’s Emerging Challenges Fund). Many other orgs and foundations work on x-risk reduction, e.g. Open Philanthropy.
I am more confused/inattentive and this community is covering a larger set of possible choices so it’s harder to track what consensus is
I think that if there were consensus that a single project was obviously the best, we would all have funded it already, unless it was able to productively use very very high amounts of money (e.g., cash transfers)
I notice some people (including myself) reevaluating their relationship with EA.
This seems healthy.
When I was a Christian it was extremely costly for me to reduce my identification and resulted in a delayed and much more final break than perhaps I would have wished[1]. My general view is that people should update quickly, and so if I feel like moving away from EA, I do it when I feel that, rather than inevitably delaying and feeling ick.
Notably, reducing one’s identification with the EA community need not change one’s stance towards effective work/donations/earning to give. I doubt it will change mine. I just feel a little less close to the EA community than I once did, and that’s okay.
I don’t think I can give others good advice here, because we are all so different. But the advice I would want to hear is “be part of things you enjoy being part of, choose an amount of effort to give to effectiveness and try to be a bit more effective with that each month, treat yourself kindly because you too are a person worthy of love”
I think a slow move away from Christianity would have been healthier for me. Strangely I find it possible to imagine still being a Christian, had things gone differently, even while I wouldn’t switch now.
The vibe at EAG was chill, maybe a little downbeat, but fine. I can get myself riled up over the forum, but it’s not representative! Most EAs are just getting on with stuff.
(This isn’t to say that forum stuff isn’t important; it’s just as important as it actually is, rather than being what defines my mood.)
Here’s a screenshot (open in new tab to see it in slightly higher resolution). I’ve also made a spreadsheet with the individual voting results, which gives all the info that was on the banner just in a slightly more annoying format.
We are also planning to add a native way to look back at past events as they appeared on the site :), although this isn’t a super high priority atm.
We have thought about that. Probably the main reason we haven’t done it is the following, which I’ll quote from an internal Slack message of mine:
Currently if someone makes an anon account, they use an anonymous email address. There’s usually no way for us, or, by extension, someone who had full access to our database, to deanonymize them. However, if we were to add this feature, it would tie the anonymous comments to a primary account. Anyone who found a vulnerability in that part of the code, or got an RCE on us, would be able to post a dump that would fully deanonymize all of those accounts.
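To illustrate the concern, here is a minimal sketch with an assumed, made-up data model (not the Forum’s real schema): the risk comes from the link record itself existing in the database.

```python
# Hypothetical data model, for illustration only (not the Forum's actual schema).

# Today: an anonymous account stands alone, with nothing tying it to a primary account.
anon_account = {"id": "anon_123", "email": "throwaway@example.com"}

# With the proposed feature: a link row connects anonymous activity to a primary account.
anon_link = {"anon_id": "anon_123", "primary_id": "user_456"}

# Anyone who can read this table (via a code vulnerability, an RCE, or a leaked dump)
# can join it against the comments table and deanonymize every linked account.
```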
“Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on.”
It reads to me as arrogant, and epitomises the worst caricatures my friends make of EAs. Read it in a snarky voice (such as one might use if they struggled with the movement and were looking to do research): “Rather than just doing what feels right...”
I suggest it gets changed to one of the following:
“We use evidence and careful analysis to find the very best causes to work on.”
“It’s great when anyone does a kind action no matter how small or effective. We have found value in using evidence and careful analysis to find the very best causes to work on.”
I am genuinely sure whoever wrote it meant well, so thank you for your hard work.
I also thought this when I first read that sentence on the site, but I find it difficult (as I’m sure its original author does) to communicate its meaning in a subtler way. I like your proposed changes, but to me the contrast presented in that sentence is the most salient part of EA. To me, the thought is something like this:
“Doing good feels good, and for that reason, when we think about doing charity, we tend to use good feeling as a guide for judging how good our act is. That’s pretty normal, but have you considered that we can use evidence and analysis to make judgments about charity?”
The problem IMHO is that without the contrast, the sentiment doesn’t land. No one, in general, disagrees in principle with the use of evidence and careful analysis: it’s only in contrast with the way things are typically done that the EA argument is convincing.
I would choose your statement over the current one.
I think the sentiment lands pretty well even with a very toned-down statement. The movement is called “effective altruism”. I think ingroups are often worried that outgroups will not get their core differences, when generally that’s all outgroups know about them.
I don’t think anyone who visits that website will fail to see that effectiveness is a core feature. And I don’t think we need to be patronising (as EAs are caricatured as being in conversations I have) in order to make known something that everyone already knows.
Several journalists (including those we were happy to have write pieces about WWOTF) have contacted me but I think if I talk to them, even carefully, my EA friends will be upset with me. And to be honest that upsets me.
We are in the middle of a mess of our own making. We deserve scrutiny. Ugh, I feel dirty and ashamed and frustrated.
To be clear, I think it should be your own decision to talk to journalists, but I do also think that it’s just better for us to tell our own story on the EA Forum and write comments, and not give a bunch of journalists the ability to greatly distort the things we tell them in a call, with a platform and microphone that gives us no opportunity to object or correct things.
I have been almost universally appalled at the degree to which journalists straightforwardly lie in interviews, take quotes massively out of context, or make up random stuff related to what you said, and I do think it’s better that if you want to help the world understand what is going on, that you write up your own thoughts in your own context, instead of giving that job to someone else.
I think 90% of the answer to this is risk aversion from funders, especially LTFF and OpenPhil, see here. As such many things struggled for funding, see here.
We should acknowledge that doing good policy research often involves actually talking to and networking with policy people. It involves running think tanks and publishing policy reports, not just running academic institutions and publishing papers. You cannot do this kind of research well in a vacuum.
That fact, combined with funders who were (and maybe still are) somewhat against funding people (except for people they knew extremely well) to network with policy makers in any way, has led to (and maybe is still leading to) very limited policy research and development happening.
I am sure others could justify this risk-averse approach, and there are certainly benefits to being risk-averse. However, in my view this was a mistake (and is maybe an ongoing mistake). I think it was driven by the fact that funders were/are: A] not policy people, so did/do not understand the space and were hesitant to make grants; B] heavily US-centric, so did/do not understand the non-US policy space; and C] heavily capacity constrained, so did/do not have time to correct for A or B.
– –
(P.S. I would also note that I am very cautious about saying there is “a lack of concrete policy suggestions” or at least be clear what is meant by this. This phrase is used as one of the reasons for not funding policy engagement and saying we should spend a few more years just doing high level academic work before ever engaging with policy makers. I think this is just wrong. We have more than enough policy suggestions to get started and we will never get very very good policy design unless we get started and interact with the policy world.)
My current model is that actually very few people who went to DC and did “AI Policy work” chose a career that was well-suited to proposing policies that help with existential risk from AI. In general people tried to choose more of a path of “try to be helpful to the US government” and “become influential in the AI-adjacent parts of the US government”, but there are almost no people working in DC whose actual job it is to think about the intersection of AI policy and existential risk. Mostly just people whose job it is to “become influential in the US government so that later they can steer the AI existential risk conversation in a better way”.
I find this very sad and consider it one of our worst mistakes, though I am also not confident in that model, and am curious whether people have alternative models.
but there are almost no people working in DC whose actual job it is to think about the intersection of AI policy and existential risk.
That’s probably true because it’s not like jobs like that just happen to exist within government (unfortunately), and it’s hard to create your own role descriptions (especially with something so unusual) if you’re not already at the top.
That said, I think the strategy you describe EAs as having pursued can be impactful? For instance, now that AI risk has gone mainstream, some groups in government are starting to work on AI policy more directly, and if you’re already working on something kind of related and have a bunch of contacts and so on, you’re well-positioned to get into these groups and even get a leading role.
What’s challenging is that you need to make career decisions very autonomously and have a detailed understanding of AI risk and related levers to carve out your own valuable policy work at some point down the line (and not be complacent with “down the line never comes until it’s too late”). I could imagine that there are many EA-minded individuals who went into DC jobs or UK policy jobs with the intent to have an impact on AI later, but who are unlikely to do much with that because they’re not proactive enough and not “in the weeds” enough with thinking about “what needs to happen, concretely, to avert an AI catastrophe?”
Even so, I think I know several DC EAs who are exceptionally competent and super tuned in and who’ll likely do impactful work down the line, or are already about to do such things. (And I’m not even particularly connected to that sphere, DC/policy, so there are probably many more really cool EAs/EA-minded folks there that I’ve never talked to or read about.)
The slide Nathan is referring to. “We didn’t listen” feels a little strong; lots of people were working on policy detail or calling for it, it just seems ex post like it didn’t get sufficient attention. I agree directionally though, and Richard’s guesses at the causes (expecting fast take-off + business-as-usual politics) seem reasonable to me.
I talked to someone outside EA the other day who said that in a competitive tender they wouldn’t apply to EA funders, because they thought the process would likely go to someone with connections to OpenPhil.
Please post your jobs to Twitter and reply with @effective_jobs. It takes 5 minutes, and the jobs I’ve posted and then tweeted have got 1000s of impressions.
Or just DM me on twitter (@nathanpmyoung) and I’ll do it. I think it’s a really cheap way of getting EAs to look at your jobs. This applies to impactful roles in and outside EA.
Here is an example of some text:
-tweet 1
Founder’s Pledge Growth Director
@FoundersPledge are looking for someone to lead their efforts in growing the amount that tech entrepreneurs give to effective charities when they IPO.
I get why I and others give to GiveWell rather than catastrophic risk: sometimes it’s good to know your “impact account” is positive even if all the catastrophic risk work turns out to have been useless.
But why do people not give to animal welfare in this case? Seems higher impact?
And if it’s just that we prefer humans to animals that seems like something we should be clear to ourselves about.
Also I don’t know if I like my mental model of an “impact account”. Seems like my giving has maybe once again become about me rather than impact.
This is exactly why I mostly give to animal charities. I do think there’s higher uncertainty of impact with animal charities compared to global health charities so I still give a bit to AMF. So roughly 80% animal charities, 20% global health.
Thanks for bringing our convo here! As context for others, Nathan and I had a great discussion about this which was supposed to be recorded... but I managed to mess up and didn’t capture the incoming audio (i.e. everything Nathan said) 😢
Guess I’ll share a note I made about this (sounds AI written because it mostly was, generated from a separate rambly recording). A few lines are a little spicier than I’d ideally like but 🤷
Donations and Consistency in Effective Altruism
I believe that effective altruists should genuinely strive to practice effective altruism. By this, I mean that there are individuals who earnestly and seriously agree with the core arguments that animal welfare charities deserve significant financial support, both in relative and absolute terms. However, they do not always follow through on these convictions when it comes to donations.
Many, for example, will eagerly nod along with introductory presentations for university effective altruism groups, which often highlight the fact that a tiny fraction of all donations goes toward animal welfare causes, even within EA.
And, as far as I can tell, very few if any EAs affirmatively dispute that animal welfare as a cause is simply more important and neglected than global poverty, and similarly tractable. But their donations do not seem to reflect this, going to GiveWell-type charities like GiveDirectly or the Against Malaria Foundation instead of animal welfare organizations.
While supporting poverty alleviation efforts is commendable in its own right (after all, we want poor people to have more money and fewer people dying from preventable diseases), it seems incongruous given their professed beliefs.
Without delving too deeply into speculation or psychoanalysis regarding individual motivations behind these donation choices, one possibility is simply an emotional preference for contributing toward human-centric causes over those focused on animals’ well-being.
To be clear: I am not claiming any personal moral superiority here; my own charitable giving record is awfully small in relative terms. Nonetheless I encourage fellow EAs who share concerns about factory farming’s abhorrent nature and have resources available for philanthropy to seriously consider allocating their donations toward animal welfare causes.
Thanks for posting this. Branching out my giving strategy to include some animal-welfare organizations was already on my to-do list, but this motivated me to actually pull the trigger on that.
I think most of the animal welfare neglect comes from the fact that if people are deep enough into EA to accept all of its “weird” premises they will donate to AI safety instead. Animal welfare is really this weird midway spot between “doesn’t rest on controversial claims” and “maximal impact”.
Definitely part of the explanation, but my strong impression from interaction irl and on Twitter is that many (most?) AI-safety-pilled EAs donate to GiveWell and much fewer to anything animal related.
I think that, ~literally excepting Eliezer (who doesn’t think other animals are sentient), this isn’t what you’d expect from the weirdness model implied.
Assuming I’m not badly mistaken about others’ beliefs and the gestalt (sorry) of their donations, I just don’t think they’re trying to do the most good with their money. Tbc this isn’t some damning indictment—it’s how almost all self-identified EAs’ money is spent and I’m not at all talking about ‘normal person in rich country consumption.’
I continue to think that a community this large needs mediation functions to avoid lots of harm with each subsequent scandal.
People asked for more details. so I wrote the below.
Let’s look at some recent scandals and I’ll try and point out some different groups that existed.
FTX—longtermists and non-longtermists, those with greater risk tolerance and less
Bostrom—rationalists and progressives
Owen Cotton-Barratt—looser norms vs more robust, weird vs normie
Nonlinear—loyalty vs kindness, consent vs duty of care
In each case, the community disagrees on who we should be and what we should be. People write comments to signal that they are good and want good things and shouldn’t be attacked. Other people see these and feel scared that they aren’t what the community wants.
This is tiring and anxiety inducing for all parties. In all cases here there are well intentioned, hard working people who have given a lot to try and make the world better who are scared they cannot trust their community to support them if push comes to shove. There are people horrified at the behaviour of others, scared that this behaviour will repeat itself, with all the costs attached. I feel this way, and I don’t think I am alone.
I think we need the community equivalent of therapy and mediation. We have now got to the stage where national media articles get written about our scandals and people threaten litigation. I just don’t think that a community of 3000 apes can survive this without serious psychological costs which in turn affect work and our lives. We all don’t want to be chucked out of a community which is safety and food and community for us. We all don’t want that community to become a hellhole. I don’t, SBF doesn’t, the woman hurt by OCB doesn’t, Kat and Emerson and Chloe and Alice don’t.
That’s not to say that all behaviour is equal, but that I think the frame here is empathy, boundary setting and safety, not conflict, auto-immune responses and exile.
What do I suggest?
After each scandal we have spaces to talk about our feelings, then we discuss what we think the norms of the community should be. Initially there will be disagreement but in time as we listen to those we disagree with we may realise how we differ. Then we can try and reintegrate this understanding to avoid it happening again. That’s what trust is—the confidence that something won’t happen above tolerance.
A concrete example
After the Bostrom stuff we had rationalist and progressive EAs in disagreement. Some thought he’d responded well, others badly. I think there was room for a discussion, to hear how unsafe his behaviour had left people feeling “do people judge my competence based on the colour of my skin?” “will my friends be safe here?”. I don’t think these feelings can be dismissed as wokery gone mad. But I think the other group had worries too “Will I be judged for things I said years ago?” “Seemingly even an apology isn’t enough”. I find I can empathise with both groups.
And I suggest what we want is some norms around this. Norms about things we do and don’t do. The aim should be to reduce community stress through there being bright lines and costs for behaviour we deem bad. And ways for those who do unacceptable things to come back to the community. I think there could be mutually agreeable ones, but I think the process would be tough.
We’d have to wrestle with how Bostrom and Hanson’s productivity seems related to their ability to think weird or ugly thoughts. We’d have to think about if mailing lists 20 years ago were public or private. We’d have to think about what value we put on safety. And we’d have to be willing not to pick up the sword if it didn’t go our way.
But I think there are acceptable positions here. Where people acknowledge harmful patterns of behaviour, perhaps even voluntarily leave for a time. Where people talk about the harm and the benefit created by those they disagree with. Where others see that some value weirdness/creativity more/less than they do. Where we rejoice in what we have achieved and mourn over how we have hurt one another. Where we grow to be a kinder, more mature community.
Intermission
This stuff breaks my heart. Not because I am good, but because I have predictably hurt people and been hurt by people in the past. And I’d like the cycle to stop. In my own life, conflict has never been the way out of this. Either I should leave people I cannot work with, or share and listen to those I can. And it is so hard and I fail often, but it’s better than becoming jaded and cruel or self-hating and perfectionist. I am broken, I am enough, I can be better. EA is flawed, EA is good, EA can improve. The world is awful, the world is better that it used to be, the world can improve.
As it is
Currently, I think we aren’t doing this work, so every subsequent scandal adds another grievance to the pile. And I guess people are leaving the community. If we spend millions a year trying to recruit graduates, isn’t it worth spending the same to keep long-time members? I don’t know if there is a way to keep Kat and Emerson, Alice and Chloe, the concerned global health worker and the person who thinks SBF did nothing wrong, and me and you, but currently I don’t see us spending nearly the appropriate amount of mental effort or resources.
Oh and I’m really not angling to do this work. I have suggestions, sure, but I think the person should be widely trusted by the community as neutral and mature.
The community health team is also like the legal system in that it enforces sanctions, so I wonder if that reduces the chance that someone reaches out to them to mediate.
A previous partner and I did a sex and consent course together online. I think it’s helped me be kinder in relationships.
Useful in general.
More useful if you:
- have sex casually
- see harm in your relationships and want to grow
- are poly
As I’ve said elsewhere, I think a very small proportion of people in EA are responsible for most of the relationship harms. Some are bad actors, who need to be removed; some are malefactors, who either have lots of interactions or engage in high-risk behaviours and accidentally cause harm. I would guess I have more traits of the second category than almost all of you. So people like me should do the most work to change.
So most of you probably don’t need this, but if you are in some of the above groups, I’d recommend a course like this. Save yourself the heartache of upsetting people you care about.
Can we have some people doing AI Safety podcast/news interviews as well as Yud?
I am concerned that he’s going to end up being the figurehead here. I assume someone is thinking about this, and I’m pretty sure people are working on it, but I’m posting here to ensure that it is said.
We aren’t a community that says “I guess he deserves it”; we say “who is the best person for the job?”. Yudkowsky, while he is an expert, isn’t a median voice: his estimates of P(doom) are on the far tail of EA experts. So if I could pick one person I wouldn’t pick him, and frankly I wouldn’t pick just one person.
Some other voices I’d like to see on podcasts/ interviews:
Toby Ord
Paul Christiano
Ajeya Cotra
Amanda Askell
Will MacAskill
Joe Carlsmith*
Katja Grace*
Matthew Barnett*
Buck Shlegeris
Luke Muehlhauser
Again, I’m not saying no one has thought of this; I’m 80% sure they have. But I’d like to be 97% sure, so I’m flagging it.
I am a bit confused by your inclusion of Will MacAskill. Will has been on a lot of podcasts, while for Eliezer I only remember 2. But your text sounds a bit like you worry that Eliezer will be too much on podcasts and MacAskill too little (I don’t want to stop MacAskill from going on podcasts btw. I agree that having multiple people present different perspectives on AGI safety seems like a good thing).
I don’t think you should be optimizing to avoid extreme views, but in favor of those with the most robust models, who can also communicate them effectively to the desired audience. I agree that if we’re going to be trying anything resembling public outreach it’d be good to have multiple voices for a variety of reasons.
On the first half of the criteria I’d feel good about Paul, Buck, and Luke. On the second half I think Luke’s blog is a point of evidence in favor. I haven’t read Paul’s blog, and I don’t think that LessWrong comments are sufficiently representative for me to have a strong opinion on either Paul or Buck.
I notice I am pretty skeptical of much longtermist work and the idea that we can make progress on this stuff just by thinking about it.
I think future people matter, but I will be surprised if, after x-risk reduction work, we can find 10s of billions of dollars of work that isn’t busywork and shouldn’t be spent attempting to learn how to get eg nations out of poverty.
I have heard one anecdote of an EA saying that they would be less likely to hire someone on the basis of their religion, because it would imply they were less good at their job, i.e. less intelligent/epistemically rigorous. I don’t think they were involved in hiring, but I don’t think anyone should hold this view.
Here is why:
As soon as you are in a hiring situation, you have much more information than priors. Even if it were true that, say, people with ADHD[1] were less rational, the interview process should provide much more information than such a prior. If that’s not the case, get a better interview process; don’t start being prejudiced! (There’s a small worked example of this at the end of this comment.)
People don’t mind meritocracy, but they want a fair shake. If I heard that people had a prior that ADHD folks were less likely to be hard working, regardless of my actual performance in job tests, I would be less likely to want to be part of this community. You might lose my contributions. It seems likely to me that we come out ahead by ignoring small differences in groups so people don’t have to worry about this. People are very sensitive to this. Let’s agree not to defect. We judge on our best guess of your performance, not on appearances.
I would be unsurprised if this kind of thinking cut only one way. Is anyone suggesting they wouldn’t hire poly people because of the increased drama or men because of the increased likelihood of sexual scandal? No! We already think some information is irrelevant/inadmissible as a prior in hiring, because we are glad of people’s right to be different and to be themselves. To me, race and religion clearly fall in this space. I want people to feel they can be human and still have a chance of a job.
I wouldn’t be surprised if this cashed out to “I hire people like me”. In this example, was the individual really hiring on the basis of merit, or did they just find certain religious people hard to deal with? We are not a social club; we are trying to do the most good. We want the best, not the people who are like us.
This pattern matches to actual racism/sexism. Like “sometimes I don’t get hired because people think Xs are worse at jobs”. How is that not racism? Seems bad.
Counterpoints:
Sometimes gut does play a role. We think someone would fit better on our team. Some might argue that it’s fine to use this as a tiebreaker, or that it’s better to be honest that this is what’s going on.
Personally, I think the points outweigh the counterpoints.
Hiring processes should hire the person who seems most likely to do the best job, and candidates should be confident this is happening. But for predictive reasons, community-welfare reasons, and avoiding-obvious-pitfalls reasons, I think small priors around race, religion, sexuality, gender, and sexual practice should be discounted[2]. If you think the candidate is better or worse, it should show in the interview process. And yes, I get that gut plays a role, but I’d be really wary of gut feelings that feed clear biases. I think a community where we don’t do that comes out ahead and does more good.
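Here is the small worked example mentioned above: a minimal sketch with made-up numbers showing how even a non-trivial group-level prior gets swamped by a reasonably informative work test, via Bayes’ rule. The numbers and the `posterior` helper are purely illustrative assumptions.

```python
# Illustrative only: made-up numbers showing that interview/work-test evidence
# swamps a small group-level prior difference.

def posterior(prior_good: float, p_pass_given_good: float, p_pass_given_bad: float) -> float:
    """P(candidate is a good hire | they passed the work test), by Bayes' rule."""
    p_pass = prior_good * p_pass_given_good + (1 - prior_good) * p_pass_given_bad
    return prior_good * p_pass_given_good / p_pass

# Same strong test result, slightly different group-based priors:
print(round(posterior(0.50, 0.9, 0.1), 2))  # 0.9
print(round(posterior(0.45, 0.9, 0.1), 2))  # 0.88 -> the test, not the prior, does the work
```

In this toy setup, a 5-point gap in priors shrinks to about 2 points after one informative signal, and further interview rounds shrink it further.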
I would be unsurprised if this kind of thinking cut only one way. Is anyone suggesting they wouldn’t hire poly people because of the increased drama or men because of the increased likelihood of sexual scandal?
In the wake of the financial crisis it was not uncommon to see suggestions that banks etc. should hire more women to be traders and risk managers because they would be less temperamentally inclined towards excessive risk taking.
I think that we have more-or-less agreed as societies that there are some traits that it is okay to use to make choices about people (mainly: their actions/behaviors), and there are some traits that it is not okay to use (mainly: things that the person didn’t choose and isn’t responsible for). Race, religion, gender, and the like are widely accepted[1] as not socially acceptable traits to use when evaluating people’s ability to be a member of a team.[2] But there are other traits that we commonly treat as acceptable to use as the basis of treating people differently, such as what school someone went to, how many years of work experience they have, if they have a similar communication style as us, etc.
I think I might split this into two different issues.
One issue is: it isn’t very fair to give or withhold jobs (and other opportunities) based on things that people didn’t really have much choice in (such as where they were born, how wealthy their parents were, how good of an education they got in their youth, etc.)
A separate issue is: it is ineffective to make employment decisions (hiring, promotions, etc.) based on things that don’t predict on-the-job success.
Sometimes these things line up nicely (such as how it isn’t fair to base employment decisions on hair color, and it is also good business to not base employment decisions on hair color). But sometimes they don’t line up so nicely: I think there are situations where it makes sense to use “did this person go to a prestigious school” to make employment decisions because that will get you better on-the-job performance; but it also seems unfair because we are in a sense rewarding this person for having won the lottery.[3]
In a certain sense I suppose this is just a mini rant about how the world is unfair. Nonetheless, I do think that a lot of conversations about hiring and discrimination get these two different issues conflated.
Employment is full of laws, but even in situations where there isn’t any legal issue (such as inviting friends over for a movie party, or organizing a book club) I view it as somewhat repulsive to include/exclude people based on gender/race/religion/etc. Details matter a lot, and I can think of exceptions, but that is more or less my starting point.
I’ve heard the phrase “genetic lottery,” and I suspect genes to contribute a lot to academic/career success. But lots of other things outside a person’s control affect how well they perform: being born in a particular place, how good your high school teachers were, stability of the household, if your parents had much money, and all the other things that we can roughly describe as “fortune” or “luck” or “happenstance.”
I know lots of people with lots of dispositions experience friction with just declining their parents’ religions, but that doesn’t mean I “get it”; i.e., conflating religion with birth lotteries and immutability seems a little unhinged to me.
There may be a consensus that it’s low status to say out loud “we only hire Harvard alums”, or maybe it’s illegal (or whatever), but there’s not a lot of pressure to actually try reducing implicit selection effects that end up, in effect, quite similar to a hardline rule. And I think Harvard undergrad admissions have way more in common with lotteries than religion does!
I think the old sequencesy sort of “being bad at metaphysics (rejecting reductionism) is a predictor of unclear thinking” is fine! The better response to that is “come on, no one’s actually talking about literal belief in literal gods, they’re moreso saying that the social technologies are valuable or they’re uncomfortable just not stewarding their ancestors’ traditions” than like a DEI argument.
There is more to get into here but two main things:
I guess some EAs, including some who I think do really good work, do literally believe in literal gods
I don’t actually think this is that predictive. I know some theists who are great at thinking carefully and many atheists who aren’t. I reckon I could distinguish the two better in a discussion than by rejecting the former out of hand.
...they would be less likely to hire someone on the basis of their religion because it would imply they were less good at their job.
Some feedback on this post: this part was confusing. I assume that what this person said was something like “I think a religious person would probably be harder to work with because of X”, or “I think a religious person would be less likely to have trait Y”, rather than “religious people are worse at jobs”.
The specifics aren’t very important here, since the reasons not to discriminate against people for traits unrelated to their qualifications[1] are collectively overwhelming. But the lack of specifics made me think to myself: “is that actually what they said?”. It also made it hard to understand the context of your counterarguments, since there weren’t any arguments to counter.
Religion can sometimes be a relevant qualification, of course; if my childhood synagogue hired a Christian rabbi, I’d have some questions. But I assume that’s not what the anecdotal person was thinking about.
The person who was told this was me, and the person I was talking to straight up told me he’d be less likely to hire Christians because they’re less likely to be intelligent.
Please don’t assume that EAs don’t actually say outrageously offensive things—they really do sometimes!
Edit: A friend told me I should clarify this was a teenage edgelord—I don’t want people to assume this kind of thing gets said all the time!
And since posting this I’ve said this to several people and 1 was like “yeah no I would downrate religious people too”
I think a poll on this could be pretty uncomfortable reading. If you don’t, run it and see.
To put it another way, would EAs discriminate against people who believe in astrology? I imagine more than the base rate. Part of me agrees with that; part of me thinks it’s norm-harming to do. But I don’t think this one is “less than the population”.
“I think religious people are less likely to have trait Y” was one form I thought that comment might have taken, and it turns out “trait Y” was “intelligence”.
Now that I’ve heard this detail, it’s easier to understand what misguided ideas were going through the speaker’s mind. I’m less confused now.
“Religious people are bad at jobs” sounds to me like “chewing gum is dangerous” — my reaction is “What are you talking about? That sounds wrong, and also… huh?”
By comparison, “religious people are less intelligent” sounds to me like “chewing gum is poisonous” — it’s easier to parse that statement, and compare it to my experience of the world, because it’s more specific.
*****
As an aside: I spend a lot of time on Twitter. My former job was running the EA Forum. I would never assume that any group has zero members who say offensive things, including EA.
I think the strongest reason to not do anything that even remotely looks like employer discrimination based on religion is that it’s illegal, at least for the US, UK, and European Union countries, which likely jointly encompasses >90% of employers in EA.
(I wouldn’t be surprised if this is true for most other countries as well, these are just the ones I checked).
There’s also the fact that, as a society and subject to certain exceptions, we’ve decided that employers shouldn’t be using an employee’s religious beliefs or lack thereof as an assessment factor in hiring. I think that’s a good rule from a rule-utilitarian framework. And we can’t allow people to utilize their assumptions about theists, non-theists, or particular theists in hiring without the rule breaking down.
The exceptions generally revolve around personal/family autonomy or expressive association, which don’t seem to be in play in the situation you describe.
I think that I generally agree with what you are suggesting/proposing, but there are all kinds of tricky complications. The first thing that jumps to my mind is that sometimes hiring the person who seems most likely to do the best job ends up having a disparate impact, even if there was no disparate treatment. This is not a counterargument, of course, but more so a reminder that you can do everything really well and still end up with a very skewed workforce.
I generally agree with the meritocratic perspective. It seems a good way (maybe the best?) to avoid tit-for-tat cycles of “those holding views popular in some context abuse power → those who don’t like the fact that power was abused retaliate in other contexts → in those other contexts, holding those views results in being harmed by people in those other contexts who abuse power”.
Good point about the priors. Strong priors about these things seem linked to seeing groups as monoliths with little within-group variance in ability. Accounting for the size of variance seems under-appreciated in general. E.g., if you’ve attended multiple universities, you might notice that there’s a lot of overlap between people’s “impressiveness”, despite differences in official university rankings. People could try to be less confused by thinking in terms of mean/median, variance, and distributions of ability/traits more, rather than comparing groups by their point estimates.
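As a minimal sketch of that point (with entirely made-up numbers), two groups whose means differ slightly still overlap so much that a group-level point estimate says little about any individual:

```python
import numpy as np

# Made-up numbers: two groups with slightly different means, same spread.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=100, scale=15, size=10_000)
group_b = rng.normal(loc=103, scale=15, size=10_000)

# How often does a random member of the lower-mean group beat a random
# member of the higher-mean group? Close to a coin flip.
wins = rng.choice(group_a, 10_000) > rng.choice(group_b, 10_000)
print(wins.mean())  # ~0.44
```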
Some counter-considerations:
Religion and race seem quite different. Religion seems to come with a bunch of normative and descriptive beliefs that could affect job performance—especially in EA—and you can’t easily find out about those beliefs in a job interview. You could go from one religion to another, from no religion to some religion, or some religion to no religion. The (non)existence of that process might give you valuable information about how that person thinks about/reflects on things and whether you consider that to be good thinking/reflection.
For example, from an irreligious perspective, it might be considered evidence of poor thinking if a candidate thinks the world will end in ways consistent with those described in the Book of Revelation, or thinks that we're less likely to be in a simulation because a benevolent, omnipotent being wouldn't allow that to happen to us.
Anecdotally, on average, I find that people who have gone through the process of abandoning the religion they were raised with, especially at a young age, to be more truth-seeking and less influenced by popular, but not necessarily true, views.
Religion seems to cover too much. Some forms of it seem to offer immunity to act in certain ways, and the opportunity to cheaply attack others if they disagree with it. In other communities, religion might be used to justify poor material/physical treatment of some groups of people, e.g. women and gay people. While I don't think being accepting of those religions will change the EA community too much, it does say something to, and negatively affect, the wider world if there's sufficient buy-in/enough of an alliance/enough comfort with them.
But yeah, generally, sticking to the Schelling point of "don't discriminate by religion (or lack thereof)" seems good. Also, if someone is religious and in EA (i.e., in an environment that doesn't have too many people who think like them), it's probably good evidence that they really want to do good and are willing to cooperate with others to do so, despite being different in important ways. It seems a shame to lose them.
Oh, another thought (sorry for taking up so much space!). Sometimes something looks really icky, such as evaluating a candidate via religion, but is actually just standing in for a different trait. If we care about A, B is somewhat predictive of A, and A is really hard to measure, then maybe people sometimes use B as a rough proxy for A.
I think that this is sometimes used as the justification for sexism/racism/etc., where the old-school racist might say "I want a worker who is A, and B people are generally not A." If the relationship between A and B is non-existent or fairly weak, then we would call this person out for discriminating unfairly. But now I'm starting to think about what we should do if there really is a correlation between A and B (such as sex and physical strength). That is what tends to happen when a candidate is asked to do an assessment that seems to have nothing to do with the job, such as clicking on animations of colored balloons: it appears irrelevant, but it actually measures X, which is correlated with Y, which predicts on-the-job success.
I’d rather be evaluated as an individual than as a member of a group, and I suspect that in-group variation is greater than between-group variation, echoing what you wrote about the priors being weak.
As with many statements people make about people in EA, I think you’ve identified something that is true about humans in general.
I think it applies less to the average person in EA than to the average human. I think people in EA are more morally scrupulous and prone to feeling guilty/insufficiently moral than the average person, and I suspect you would agree with me given other things you’ve written. (But let me know if that’s wrong!)
I find statements of the type “sometimes we are X” to be largely uninformative when “X” is a part of human nature.
Compare “sometimes people in EA are materialistic and want to buy too many nice things for themselves; EA has a materialism problem” — I’m sure there are people in EA like this, and perhaps this condition could be a “problem” for them. But I don’t think people would learn very much about EA from the aforementioned statements, because they are also true of almost every group of people.
I sense that it's good to publicly name serial harassers who have been kicked out of the community, even if the accuser doesn't want them to be named. Other people's feelings matter too, and I sense many people would like to know who they are.
I think there is a difference between different outcomes, but if you’ve been banned from EA events then you are almost certainly someone I don’t want to invite to parties etc.
It does not. There are a small number of co-funding situations where money from other donors might flow through Open Philanthropy operated mechanisms, but it isn’t broadly possible to donate to Open Philanthropy itself (either for opex or regranting).
Unbalanced karma is good, actually. It means that the moderators have to do less. I like the takes of the top users more than those of the median user, and I want them to have more, but not total, influence.
Appeals to fairness don’t interest me—why should voting be fair?
A friend asked about effective places to give. He wanted to donate through his payroll in the UK. He was enthusiastic about it, but that process was not easy.
- It wasn't particularly clear whether GiveWell or the EA Development Fund was better, and each seemed to direct to the other in a way that at times felt sketchy.
- It wasn't clear if payroll giving was an option.
- He found it hard to find GiveWell's spreadsheet of effectiveness.
It feels like making donations easy should be a core concern of both GiveWell and EA Funds, and my experience made me a little embarrassed, to be honest.
Has anyone ever run a competition for EA related short stories?
Why would this be a good idea?
- Narratives resonate with people and have been used to convey ideas for 1000s of years
- It would be low cost and fun
- Using voting on this forum, there is the same risk of "bad posts" as for any other post
How could it work?
- Stories submitted under a tag on the EA forum
- Rated by upvotes
- Max 5000 words (I made this up, dispute it in the comments)
- If someone wants to give a reward, then there could be a prize for the highest rated
- If there is a lot of interest/quality, they could be collated and even published
- Since it would be measured by upvotes, it seems unlikely a destructive story would be highly rated (or only as likely as any other destructive post on the forum)
Upvote if you think it’s a good idea. If it gets more than 40 karma I’ll write one.
I intend to strong downvote any article about EA that someone posts on here that they themselves have no positive takes on.
If I post an article, I have some reason I liked it, even a single line. Being critical isn't enough on its own. If someone posts an article without a single quote they like, with the implication that it's a bad article, I am minded to strong downvote so that no one else has to waste their time on it.
What do you make of this post? I’ve been trying to understand the downvotes. I find it valuable in the same way that I would have found it valuable if a friend had sent me it in a DM without context, or if someone had quote tweeted it with a line like ‘Prominent YouTuber shares her take on FHI closing down’.
I find posts like this useful because it’s valuable to see what external critics are saying about EA. This helps me either a) learn from their critiques or b) rebut their critiques. Even if they are bad critiques and/or I don’t think it’s worth my time rebutting them, I think I should be aware of them because it’s valuable to understand how others perceive the movement I am connected to. I think this is the same for other Forum users. This being the case, according to the Forum’s guidance on voting, I think I should upvote them. As Lizka says here, a summary is appreciated but isn’t necessary. A requirement to include a summary or an explanation also imposes a (small) cost on the poster, thus reducing the probability they post. But I think you feel differently?
Looking forward to seeing how it plays out! LessWrong made the intentional decision not to do it, because I thought posts were too large and had too many claims for agreement/disagreement to have much natural grounding, but we'll see how it goes. I am glad to have two similar forums so we can see experiments like this play out.
My hope would be that it would allow people to decouple the quality of the post and whether they agree with it or not. Hopefully people could even feel better about upvoting posts they disagreed with (although based on comments that may be optimistic).
Perhaps combined with a possible tweak in what upvoting means (as mentioned by a few people): someone mentioned we could change "how much do you like this overall" to something that moves away from basing the reaction on emotion. I think someone suggested something like "Do you think this post adds value?" (That's just a quick hack at the alternative; I'm sure there are far better ones.)
I guess African, Indian, and Chinese voices are underrepresented in the AI governance discussion. And in the unlikely case we die, we all die, and I think it's weird that half the people who will die have no one loyal to them in the discussion.
We want AI that works for everyone, and it seems likely you want people who can represent the billions who currently don't have a loyal representative.
I’m actually more concerned about the underrepresentation of certain voices as it applies to potential adverse effects of AGI (or even near-AGI) on society that don’t involve all of us dying. In the everyone-dies scenario, I would at least be similarly situated to people from Africa, India, and China in terms of experiencing the exact same bad thing that happens. But there are potential non-fatal outcomes, like locking in current global power structures and values, that affect people from non-Western countries much differently (and more adversely) than they’d affect people like me.
Yeah, in a scenario with “nation-controlled” AGI, it’s hard to see people from the non-victor sides not ending up (at least) as second-class citizens—for a long time. The fear/lack of guarantee of not ending up like this makes cooperation on safety more difficult, and the fear also kind of makes sense? Great if governance people manage to find a way to alleviate that fear—if it’s even possible. Heck, even allies of the leading state might be worried—doesn’t feel too good to end up as a vassal state. (Added later (2023-06-02): It may be a question that comes up as AGI discussions become mainstream.)
Wouldn't rule out both Americans and Chinese outside of their respective allied territories being caught in the crossfire of a US-China AI race.
Political polarization on both sides in the US is also very scary.
This strikes me as another variation of "EA has a diversity problem." Good to keep in mind that it is not just about progressive notions of inclusivity, though. There may be VERY significant consequences for the people in vast swaths of the world if a tiny group of people makes decisions for all of humanity. But yeah, I also feel that it is a super weird aspect of the anarchic system (in the international relations sense of anarchy) that most of the people alive today have no one representing their interests.
It also seems to echo consistent critiques of development aid not including people in decision-making (along the lines of Ivan Illich’s To Hell with Good Intentions, or more general post-colonial narratives).
What do "have no one loyal to them" and "with a loyal representative" mean? Are you talking about the Indian government? Or are you talking about EAs taking part in discussions, such as yourself? (In which case, who are you loyal to?)
And I don't think I'm good here. I think I try to be loyal to them, but I don't know what the Chinese people want, and I think if I try to guess I'll get it wrong in some key areas.
I'm reminded of when GiveWell (I think?) asked recipients how they would trade money for children's lives, and they really fucking loved saving children's lives. If we are doing things for others' benefit, we should take their weightings into account.
I wish the forum had a better setting for "I wrote this post and maybe people will find it interesting, but I don't want it on the front page unless they do, because that feels pretentious".
GiveDirectly has a President (Rory Stewart) paid $600k, and is hiring a Managing Director. I originally thought they had several other similar roles (because I looked on the website), but I talked to them and seemingly that is not the case. Below is the tweet that tipped me off, but I think it is just mistaken.
One could still take issue with the $600k (though I don't really).
A more important question for me, though, is to ask: is it right, and is it a good idea? I think the answer to both of these is a resounding no, for a number of reasons.
- (For GiveDirectly). The premise of your entire organisation is that dollars do more good in the hands of the poor than the rich. For your organisation to then spend a huge amount of money on a CEO is arguably going against what the organisation stands for.
- Bad press for the organisation. After SBF and the Abbey etc. this shouldn’t take too much explaining
- Might reflect badly on the organisation when applying for grants
- (My personal gripe) What kind of person working to help the poorest people on earth could live with themselves earning so much, given what their organisation stands for? You have become part of the aid-industrial complex which makes inequality worse—the kind of thing GiveDirectly almost seemed to be railing against in the first place.
High NGO salaries make me angry though, so maybe this is a bit too ranty ;).
The expectation of low salaries is one of the biggest problems hobbling the nonprofit sector. It makes it incredibly difficult to hire people of the caliber you need to run a high-performance organization.
What kind of person working to help the poorest people on earth could live with themselves earning so much, given what their organisation stands for?
This is classic Copenhagen interpretation of ethics stuff. Someone making that kind of money as a nonprofit CEO could almost always make much more money in the private sector while receiving significantly less grief. You’re creating incentives that get us worse nonprofits and a worse world.
I'm interested in the evidence behind the idea that low salaries hobble the nonprofit sector. Is there research to support this outside of the for-profit market? I'm unconvinced that higher salaries (past a certain point) would lead to a better calibre of employee in the NGO field. I would have assumed that the attractiveness of running an effective and high-profile org like GiveDirectly might be enough to attract amazing candidates regardless of salary. It would be amazing to do A/B testing, or even an RCT, on this front, but I imagine it would be hard to convince organisations to get involved in that research. Personally I think there are enough great leaders out there (especially for an org like GiveDirectly) who would happily work on 100,000 a year. The salary difference between 100k and 600k might make barely any difference at all to the pool of candidates you attract—but of course this is conjecture.
On the moral side of things, there's a difference between taking a healthy salary of 100,000 dollars a year—enough to be in the top 0.5% of earners in the world—and taking $600,000. We're not looking for a masochist to run the best orgs, just someone who appreciates the moral weight of that degree of inequality within an organisation that purports to be supporting the world's poorest.
If earning 600,000 rather than 100,000 is a strong incentive for a person running a non-profit, I probably don't want them in charge. First, I think that this kind of salary might lead someone to be less efficient with spending, both at the American base and in distant country operations. NGOs need lean operations as they rely on year-to-year donations which are never secure—NGOs can't expect to continue high growth rates of funding year on year like good businesses. Also, leaders on high pay are probably likely to feel morally obligated to pay other admin staff more because of their own salary, rather than maximising the amount of money given directly to the poorest.
It may also affect the whole ethos of the organisation and the respect of other staff, especially in places like Kenya where staff will be getting paid far, far less. Imagine you are earning a decent local wage in Kenya, which is still 100x less than your boss in America earns. Motivating yourself to do your job well becomes difficult. I've seen this personally in organisations here in Uganda where Western bosses earn far higher salaries. Local staff see the injustice within their own system and then can't get on board with the vision of the organisation. This kind of salary inequality is likely to affect organisational morale.
At least in the US, Cabinet members, judges, senior career civil servants, and state governors tend to make on average half that. I have heard of some people who would be good federal judges, mainly at the district-court level, turning down nominations because they couldn’t stomach the 85-90% pay cut from being a big-firm partner. The quality of some of these senior political and judicial leaders varies . . . but I don’t think money is the real limiting factor in US leader quality. That is, I don’t get the sense that the US would generally have better leaders if the salaries at the top were doubled or tripled.
The non-salary “benefits” and costs of working at high levels in the government are different from the non-salary “benefits” and costs of working for a non-profit. But I think they differ in ways that some people would prefer the former over the latter (or vice versa).
In other words, a belief that charities should offer their senior leaders a significantly higher salary than senior leaders in world and regional governments potentially implies that almost every developed democracy in the world should be paying their senior leaders and civil servants significantly more than they do. Maybe they should?
I don't have a firm opinion on salaries for charitable senior officials, but I think Nick is right insofar as high salaries can cause donor disillusionment and loss of morale within the organization. So while I'm willing to start with a presumption that government-comparable salaries for mid-level+ staff are appropriate (because they have been tested by the crucible of the democratic process), it's reasonable to ask for evidence that significantly higher salaries improve organizational effectiveness for non-profits.
No engagement: I’ve heard of effective altruism, but do not engage with effective altruism content or ideas at all
Mild engagement: I’ve engaged with a few articles, videos, podcasts, discussions, events on effective altruism (e.g. reading Doing Good Better or spending ~5 hours on the website of 80,000 Hours)
Moderate engagement: I’ve engaged with multiple articles, videos, podcasts, discussions, or events on effective altruism (e.g. subscribing to the 80,000 Hours podcast or attending regular events at a local group). I sometimes consider the principles of effective altruism when I make decisions about my career or charitable donations.
Considerable engagement: I’ve engaged extensively with effective altruism content (e.g. attending an EA Global conference, applying for career coaching, or organizing an EA meetup). I often consider the principles of effective altruism when I make decisions about my career or charitable donations.
High engagement: I am heavily involved in the effective altruism community, perhaps helping to lead an EA group or working at an EA-aligned organization. I make heavy use of the principles of effective altruism when I make decisions about my career or charitable donations.
To me, "considerably engaged" EA people are doing a lot. Their median donation is $1000. They have "engaged extensively" and "often consider the principles of effective altruism". They seem "highly engaged" in EA to me.
I've met people who are giving quite a lot of money, who have perhaps tried applying to EA jobs and not succeeded. And yet they are not allowed to consider themselves "highly engaged". I guess this leads to them feeling disillusioned. It risks creating a privileged class split between those who can get jobs at EA orgs and those who can't. What about those who think they are doing an EA job but it's not at an EA-aligned organisation? It seems wrong to me that they can't consider themselves highly engaged.
I would prefer:
“Considerable engagement” → “high engagement”
“High engagement” → “maximum engagement”
And I would prefer the text read as follows:
High (previously considerable) engagement: I’ve engaged extensively with effective altruism content (e.g. attending an EA Global conference, applying for career coaching, or organizing an EA meetup). I often consider the principles of effective altruism when I make decisions about my career or charitable donations, but they are not the biggest factor to me.
Maximum (previously high) engagement: I am deeply involved in the effective altruism community. Perhaps I have chosen my career using the principles of effective altruism. I might earn to give, help to lead an EA group, or work at an EA-aligned organization. Maybe I tried for several years to gain such a career but have since moved to a plan B or Z. Regardless, I make my career or resource decisions on a primarily effective altruist basis.
It's a bit rough, but I think it allows people who are earning to give or deeply involved with the community to say they are maximally engaged, and those who are highly engaged to put a 4 without shame. Feel free to put your own drafts in the comments.
Currently, the idea that someone could be earning to give, donating $10,000s per year, and perhaps still not consider themselves highly engaged in EA seems like a flaw.
I think this is part of a more general problem where people say things like "I'm not totally EA" when they donate 1%+ of their income and are trying hard. Why create a club where so many are insecure about their membership?
I can’t speak for everyone, but if you donate even 1% of your income to charities which you think are effective, you’re EA in my book.
It is one of my deepest hopes, and one of my goals for my own work at CEA, that people who try hard and donate feel like they are certainly, absolutely a part of the movement. I think this is determined by lots of things, including:
The existence of good public conversations about donations, cause prioritization, etc., where anyone can contribute
The frequency of interesting news and stories about EA-related initiatives that make people feel happy about the progress their “team” is making
I hope that the EA Survey’s categories are a tiny speck compared to these.
Thanks for providing a detailed suggestion to go with this critique!
While I’m part of the team that puts together the EA Survey, I’m only answering for myself here.
I've met people who are giving quite a lot of money, who have perhaps tried applying to EA jobs and not succeeded. And yet they are not allowed to consider themselves "highly engaged". I guess this leads to them feeling disillusioned.
People can consider themselves anything they want! It’s okay! You’re allowed! I hope that a single question on the survey isn’t causing major changes to how people self-identify. If this is happening, it implies a side-effect the Survey wasn’t meant to have.
Have you met people who specifically cited the survey (or some other place the question has showed up — I think CEA might have used it before?) as a source of disillusionment?
I’m not sure I understand why people would so strongly prefer being in a “highly engaged” category vs. a “considerably engaged” category if those categories occupy the same relative position on a list. Especially since people don’t use that language to describe themselves, in my experience. But I could easily be missing something.
I want someone who earns-to-give (at any salary) to feel comfortable saying “EA is a big part of my life, and I’m closely involved in the community”. But I don’t think this should determine how the EA Survey splits up its categories on this question, and vice-versa.
*****
One change I’d happily make would be changing “EA-aligned organization” to “impact-focused career” or something like that. But I do think it’s reasonable for the survey to be able to analyze the small group of people whose professional lives are closely tied to the movement, and who spend thousands of hours per year on EA-related work rather than hundreds.
(Similarly, in a survey about the climate movement, it would seem reasonable to have one answer aimed at full-time paid employees and one answer aimed at extremely active volunteers/donors. Both of those groups are obviously critical to the movement, but their answers have different implications.)
Earning-to-give is a tricky category. I think it’s a matter of degree, like the difference between “involved volunteer/group member” and “full-time employee/group organizer”. Someone who spends ~50 hours/year trying to allocate $10,000 is doing something extraordinary with their life, and EA having a big community of people like this is excellent, but I’d still like to be able to separate “active members of Giving What We Can” from “the few dozen people who do something like full-time grantmaking or employ people to do this for them”.
*****
Put another way: Before I joined CEA, I was an active GWWC member, read a lot of EA-related articles, did some contract work for MIRI/CFAR, and went to my local EA meetups. I’d been rejected from multiple EA roles and decided to pursue another path (I didn’t think it was likely I’d get an EA job until months later).
I was pretty engaged at this point, but the nature of my engagement now that I work for CEA is qualitatively different. The opinions of Aaron!2018 should mean something different to community leaders than the opinions of Aaron!2021 — they aren’t necessarily “less important” (I think Aaron!2018 would have a better perspective on certain issues than I do now, blinded as I am by constant exposure to everything), but they are “different”.
*****
All that said, maybe the right answer is to do away with this question and create clusters of respondents who fit certain criteria, after the fact, rather than having people self-define. e.g. “if two of A, B, or C are true, choose category X”.
It's possible that this question is meant to measure something about non-monetary contribution size, not engagement. In which case, say that.
Call it "non-financial contribution" and put 4 as "I volunteer more than X hours" and 5 as "I work on a cause area directly or have taken a job at a lower-than-market salary".
I've said that people voting anonymously is good, and I still think so, but when people downvote me for appreciating little jokes that other people post on my shortform, I think we've become grumpy.
In my experience, this forum seems kinda hostile to attempts at humour (outside of april fools day). This might be a contributing factor to the relatively low population here!
Best sense of what's going on (my info's second-hand) is it would cost ~$600M to buy and distribute all of Serum Institute's supply (>120M doses × ($3.90/dose + ~$1/dose distribution cost)) and GAVI doesn't have any new money to do so. So they're possibly resistant to moving quickly, which may be slowing down the WHO prequalification process, which is a gating item for the vaccine being put in vials and purchased by GAVI (via UNICEF). The natural solution for funding is for Gates to lead an effort, but they are heavy supporters of the RTS,S malaria vaccine, so it's awkward for them to put major support into the new R21 vaccine, which can be produced in large quantity. Also, the person most associated with R21 is Adrian Hill, who is not well-liked in the malaria field. There will also be major logistical hurdles to getting it distributed in the countries, and there are a number of bureaucracies internal to each country that will all need to cooperate.
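As a rough sanity check on that headline figure, here is a minimal back-of-the-envelope sketch in Python, using only the numbers quoted above (the dose count is a lower bound, so the real total could be somewhat higher):

```python
doses = 120e6                  # lower bound on Serum Institute's R21 supply
price_per_dose = 3.90          # quoted purchase price per dose, USD
distribution_per_dose = 1.00   # rough distribution cost per dose, USD

total = doses * (price_per_dose + distribution_per_dose)
print(f"~${total / 1e6:.0f}M")  # ~$588M, consistent with the ~$600M figure above
```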
You have the facility to produce en masse, something that once given out will no longer make a profit, so there's no incentive to build a factory, but you're an EA through and through, so you build the thing you need and make the doses. Now you have doses in a warehouse somewhere.
You have to take the vaccine all over the admittedly large state, but with a good set of roads and railroads, this is an easily solvable problem, right?
You have a pile of vaccine, potentially connections with Texan hospitals who thankfully ALL speak English and you have the funding from your company to send people to distribute the vaccine.
There may or may not be a cold chain needed, so you might need refrigerated trucks, but this is a solvable problem, right? Cold-chain trucks can't be that much more expensive than regular trucks?
So you go out and you start directing the largest portion of vaccines to go to the large cities and health departments, just to reach your 29 million people that you’re trying to hit. You pay a good salary to your logisticians and drivers to get the vaccines where they need to go.
In a few days, you’re able to effectively get a large chunk of your doses to where they need to go, but now you run into the problem of last mile logistics, where you need to get a dose to a person.
That means that the public has to get the message that this is available for them, where they can find it, and how they can get it. God forbid there be a party running a psyop claiming that your vaccine causes malarial cancer or something, because that would be a problem.
You'll have your early adopters, sure, but after some time the people who will follow prudent public health measures will drop off and the lines will be empty.
You'll still have 14 million doses left—and have they been properly stored? This is, of course, accounting for the number of Texans who just won't get a vaccine or are perhaps too young.
So you appeal to the state government to pass a law that all 8th graders need to have this once-in-a-lifetime vaccine, and in a miracle, they make it a law. You move the needle a little bit. 7.5 million Texans are under 18, but those might be the easiest to get, as they're actively interacting with the government at least in the capacity of education.
And as you might guess, this isn’t about Texas. This is every country.
FWIW I reached out to someone involved in this at a high level a few months ago to see if there was a potential project here. They said the problem was “persuading WHO to accelerate a fairly logistically complex process”. It didn’t seem like there were many opportunities to turn money or time into impact so I didn’t pursue anything further.
For the new R21 vaccine, WHO is currently conducting prequalification of the production facilities. As far as I understand, African governments have to wait for prequalification to finish before they can apply for subsidized procurement and rollout through UNICEF and GAVI.
For both RTS,S and R21, there are some logistical difficulties due to the vaccines' 4-dose schedule (the first three doses are one month apart, which doesn't fit all too well into existing vaccination schedules), cold-chain requirements, and timing peak immunity with the seasonality of malaria.
Lastly, since cost-effective countermeasures already exist, it's unclear how to balance new vaccine efforts against existing measures.
My call: EA gets 3.9 out of 14 possible cult points.
The group is focused on a living leader to whom members seem to display excessively zealous, unquestioning commitment.
No
The group is preoccupied with bringing in new members.
Yes (+1)
The group is preoccupied with making money.
Partial (+0.8)
Questioning, doubt, and dissent are discouraged or even punished.
No
Mind-numbing techniques (such as meditation, chanting, speaking in tongues, denunciation sessions, debilitating work routines) are used to suppress doubts about the group and its leader(s).
No
The leadership dictates sometimes in great detail how members should think, act, and feel (for example: members must get permission from leaders to date, change jobs, get married; leaders may prescribe what types of clothes to wear, where to live, how to discipline children, and so forth).
No
The group is elitist, claiming a special, exalted status for itself, its leader(s), and members (for example: the leader is considered the Messiah or an avatar; the group and/or the leader has a special mission to save humanity).
Partial (+0.5)
The group has a polarized us- versus-them mentality, which causes conflict with the wider society.
Very weak (+0.1)
The group’s leader is not accountable to any authorities (as are, for example, military commanders and ministers, priests, monks, and rabbis of mainstream denominations).
No
The group teaches or implies that its supposedly exalted ends justify means that members would have considered unethical before joining the group (for example: collecting money for bogus charities).
Partial (+0.5)
The leadership induces guilt feelings in members in order to control them.
No
Members’ subservience to the group causes them to cut ties with family and friends, and to give up personal goals and activities that were of interest before joining the group.
No
Members are expected to devote inordinate amounts of time to the group.
Yes (+1)
Members are encouraged or required to live and/or socialize only with other group members.
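To make the tally explicit, here is a minimal sketch in Python summing the partial scores given above; items answered "No", and the final unanswered item, are assumed to contribute 0:

```python
# Partial scores assigned above; items answered "No" (and the unanswered
# final item) are assumed to contribute 0.
scores = {
    "preoccupied with bringing in new members": 1.0,
    "preoccupied with making money": 0.8,
    "elitist / special mission to save humanity": 0.5,
    "us-versus-them mentality": 0.1,
    "exalted ends justify questionable means": 0.5,
    "inordinate amounts of time expected": 1.0,
}

total = sum(scores.values())
print(f"{total:.1f} out of 14 possible cult points")  # 3.9 out of 14
```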
Questioning, doubt, and dissent are discouraged or even punished.
I think this is probably partial, given claims in this post, and positive-agreevote concerns here (though clearly all of the agree voters might be wrong).
I think you may have very high standards? By these standards, I don’t think there are any communities at all that would score 0 here.
~
I think this is nonzero. I think subsets of the community do display "excessively zealous" commitment to a leader, given the "What would SBF do" stickers. Outside views of LW (or at least older versions of it) would probably worry that this was an EY cult.
I was not aware of “What would SBF do” stickers. Hopefully those people feel really dumb now. I definitely know about EY hero worship but I was going to count that towards a separate rationalist/LW cult count instead of the EA cult count.
I think you may have very high standards? By these standards, I don’t think there are any communities at all that would score 0 here.
I think where we differ is that I'm not comparing whether EA is worse than other groups on this. If every group scores in the range of 0.5-1, I'll still score 0.5 as 0.5, and not scale 0.5 down to 0 and 0.75 down to 0.5. Maybe that's the wrong way to approach it, but I think the least culty organization can still have cult-like tendencies, instead of being 0 by definition.
Also, if it's true that someone working at GPI was facing these pressures from "senior scholars in the field", then that does seem like reason for others to worry. There has also been a lot of discussion on the forum about the types of critiques that seem acceptable and the ones that aren't, etc. Your colleague also seems to believe this is a concern, for example, so I'm currently inclined to think that 0.2 is pretty reasonable and I don't think I should update much based on your comment—but happy for more pushback!
The group is elitist, claiming a special, exalted status for itself, its leader(s), and members (for example: the leader is considered the Messiah or an avatar; the group and/or the leader has a special mission to save humanity).
has to get more than 0.2, right? Being elitist and on a special mission to save humanity is a concerningly good descriptor of at least a decent chunk of EA.
>> The group teaches or implies that its supposedly exalted ends justify means that members would have considered unethical before joining the group (for example: collecting money for bogus charities).
> Partial (+0.5)
This seems too high to me, I think 0.25 at most. We’re pretty strong on “the ends don’t justify the means”.
>>The leadership induces guilt feelings in members in order to control them.
I don’t think it makes sense to say that the group is “preoccupied with making money”. I expect that there’s been less focus on this in EA than in other groups, although not necessarily due to any virtue, but rather because of how lucky we have been in having access to funding.
Nuclear risk is in the news. I hope:
- if you are an expert on nuclear risk, you are shopping around for interviews and comment
- if you are an EA org that talks about nuclear risk, you are going to publish at least one article on how the current crisis relates to nuclear risk, or find an article that you like and share it
- if you are an EA-aligned journalist, you are looking to write an article on nuclear risk and concrete actions we can take to reduce it
[epistemic status—low, probably some elements are wrong]
tl;dr
- communities have a range of dispute resolution mechanisms, from voting to public conflict to some kind of civil war
- some of these are much better than others
- EA has disputes and resources, and it seems likely that there will be a high-profile conflict at some point
- what mechanisms could we put in place to handle that conflict constructively and in a positive-sum way?
When a community grows as powerful as EA is, there can be disagreements about resource allocation. In EA these are likely to be significant.
There are EAs who think that the most effective cause area is AI safety. There are EAs who think it’s global dev. These people do not agree, though there can be ways to coordinate between them.
The spat between GiveWell and GiveDirectly is the beginning of this. Once there are disagreements on the scale of $10 millions, some of that is gonna be sorted out over Twitter. People may badmouth each other and damage the reputation of EA as a whole.
The way around this is to make solving problems easier than creating them. As in a political coalition, people need to have more benefits being inside the movement than outside it.
The EA forum already does good work here, allowing everyone to upvote posts they like.
Here are some other power-sharing mechanisms:
- a fund where people can vote on cause areas, expected value, or moral weights, so that it moves based on the community's values as a whole
- a focus on "we disagree, but we respect", looking at how different parts of the community disagree but respect the effort of others
- a clear mechanism of bargains, where animal EAs donate to longtermist charities in exchange for longtermists going vegan, and vice versa
- some videos of key figures from different parts discussing their disagreements in a kind and human way
- "I would change if": a series of posts from people saying what would make them work on different cause areas. How cheap would chicken welfare have to be before Yudkowsky moved to work on it? How cheap would AI safety have to be before it became Singer's key talking point?
Call me a pessimist, but I can't see how a community managing $50Bn across deeply divided priorities will stay chummy without proper dispute resolution systems. And I suggest we should start building them now.
Beyond these, one could build a community around finding forecasts of public figures. Alternatively, I guess GPT-3 has a good shot of being able to turn verbal forecasts into data which could then be checked.
What's the impact?
I’m only gonna sketch my argument here. As above, if this gets 20 karma I’ll write a full post (but only upvote if it’s good, let’s not waste any of our time).
We seem to think forecasting improves the accuracy of commentators
If we could build a high-status award for forecasting, more commentators would hear about it and it would serve as a nudge for others to make their forecasts more visible
I am confident this would lead to better commentary (this seems arrogant, but honestly the people I know who forecast more are more epistemically humble—I think celebrities could really benefit from more humility about their predictions)
Better commentary leads to better outcomes. Effective altruism implicitly holds that many people have priority orderings that don't match reality: the world at large underrates the best charities, the chance of biorisk, etc. Journalism which was more accurate would be more accurate about these things too, which would be a massive win
Wouldn't the winners just be superforecasters?
Not currently. I don't think it's too hard to set pretty robust boundaries on who counts as a public figure. Most superforecasters are not well enough known (and sorry to the 5 EAs I can count in Metaculus's top 50). But Yglesias is well known enough. Scott Alexander I'm less sure about, but I think we could come up with some minimum number of hits, followers, etc. for someone to be eligible.
How much resource would this take?
Depends on a couple of things (I have pulled these numbers out of thin air—please criticise them):
Who is giving this award its prestige? If it's a lot of money, fine. If it's an existing org, then it's cheaper ($0-$50k)
How deeply are we looking? I think you could pay someone $50k to find, say, 100 public sets of forecasts, and maybe another $10k to make a nice website. If you want to scrape Twitter using GPT-3 or crowdsource it, that's maybe another $50-100k
Is there an award ceremony? If so, I imagine that costs as much as a wedding, so maybe $10k
That looks like $60k-$220k in total
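For clarity, here is a minimal sketch in Python of how those components combine into the quoted range; the (low, high) splits are my reading of the estimates above, nothing more rigorous:

```python
# (low, high) USD ranges for each component, taken from the estimates above
components = {
    "prestige (existing org vs. prize money)": (0, 50_000),
    "finding ~100 public sets of forecasts": (50_000, 50_000),
    "website": (10_000, 10_000),
    "optional Twitter scraping / crowdsourcing": (0, 100_000),
    "optional award ceremony": (0, 10_000),
}

low = sum(lo for lo, _ in components.values())
high = sum(hi for _, hi in components.values())
print(f"${low // 1000}k - ${high // 1000}k")  # $60k - $220k
```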
If this failed, why did it fail?
It got embroiled in controversy over who was included
It was attached to some existing EA org and reflected badly on them
It became a niche award that no one changed their behaviour based on
I want to say thanks to people involved in the EA endeavour. I know things can be tough at times, but you didn’t have to care about this stuff, but you do. Thank you, it means a lot to me. Let’s make the world better!
Joe Rogan (the largest podcaster in the world) giving repeated, concerned, but mediocre x-risk explanations suggests that people who have contacts with him should try to get someone on the show to talk about it.
E.g. listen from 2:40:00, though there were several bits like this during the show.
I've been musing about some critiques of EA, and one I like is "what's the biggest thing that we are missing?"
In general, I don’t think we are missing things (lol) but here are my top picks:
It seems possible that we reach out to sciency tech people because they are most similar to us. While this may genuinely be the cheapest way to get people now, there may be costs to the community in terms of diversity of thought (most sci/tech people are more similar to each other than to the general population).
I’m glad to see more outreach to people in developing nations
It seems obvious that science/tech people have the most to contribute to AI safety, but.. maybe not?
Also, science/tech people have a particular racial/gender makeup, and there is the hidden assumption that there isn't an effective way to reach a broader group. (Personally I hope that a load of resources in India, Nigeria, Brazil, etc. will go some way here, but I dunno, it still feels like a legitimate question.)
People are scared of what the future might look like if it is only shaped by the views of MacAskill/Bostrom/SBF. Yeah, in fact my (poor) model of MacAskill is scared of this too. But we can surface more that we wish we had a larger group making these decisions too.
We could build better ways for outsiders to feed into decisionmaking. I read a piece about the effectiveness of community vegan meals being underrated in EA. Now I'm not saying it should be funded, but I was surprised to read some of these conferences are 5000+ people (iirc). Maybe that genuinely is an oversight. But it's really hard for high-signal information to get to decisionmakers. That really is a problem we could work on. If it's hard for people who speak EA-ese, how much harder is it for those who speak different community languages, whose concepts seem frustrating to us?
It seems obvious that science/tech people have the most to contribute to AI safety, but.. maybe not?
More likely to me is a scenario of diminishing returns. I.e., tech people might be the most important to first order, but there are already a lot of brilliant tech people working on the problem, so one more won't make much of a difference. Whereas a few brilliant policy people could devise a regulatory scheme that penalises reckless AI deployment, etc., making more difference on the margin.
I would like to see posts give you more karma than comments (which would hit me hard). Seems like a highly upvoted post is waaaaay more valuable than 3 upvoted comments on that post, but it's pretty often the case that the latter gives more karma than the former.
I know you can figure them out, but I don’t see them presented separately on users pages. Am I missing something? Is it shown on the website somewhere?
For good or ill, while there are posters on twitter who talk about EA, there isn’t a “scene” (a space where people use loads of EA jargon and assume everyone is EA) or at least not that I’ve seen.
UK government will pay for organisations to hire 18-24 year olds who are currently unemployed, for 6 months. This includes minimum wage and national insurance.
I imagine many EA orgs are people constrained rather than funding constrained but it might be worth it.
tl;dr: EA books have a positive externality. The response should be to subsidise them.
If EA thinks that certain books (Doing Good Better, The Precipice) have greater benefits than they seem, it could subsidise them.
There could be an EA website which has amazon coupons for EA books so that you can get them more cheaply if buying for a friend, or advertise said coupon to your friends to encourage them to buy the book.
I think this could cost $50,000 to $300,000 or so depending on when this is done and how popular it is expected to be, but I expect it to be often worth it.
I like this idea and think it’s worth you taking further. My initial reactions are:
Getting more EA books into people's hands seems great and worth much more per book than the cost of a book.
I don't know how much of a bottleneck the price of a book is to buying them for friends/club members. I know EA Oxford has given away many books; I've also bought several for friends (and one famous person I contacted on Instagram as a long shot, who actually replied).
I’d therefore be interested in something which aimed to establish whether making books cheaper was a better or worse idea than just encouraging people to gift them.
John Behar/TLYCS probably have good thoughts on this.
Do you have any thoughts as to what the next step would be? It's not obvious to me what you'd do to research the impact of this.
Perhaps have a questionnaire asking people how many people they’d give books to at different prices. Do we know the likelihood of people reading a book they are given?
Being open minded and curious is different from holding that as part of my identity.
Perhaps I never reach it. But it seems to me that "we are open-minded people, so we probably behave open-mindedly" is false.
Or more specifically, I think it's good that EAs want to be open-minded, but I'm not sure that we are, purely because we listen graciously, run criticism contests, and talk about cruxes.
The problem is the problem. And being open-minded requires being open to changing one's mind in difficult situations where our views are already set. And I don't have a way that's guaranteed to get us over that line.
Someone told me they don't bet as a matter of principle, and that this means EAs/rationalists take their opinions less seriously as a result. Some thoughts:
I respect individual EAs' preferences. I regularly tell friends to do things they are excited about, to look after themselves, etc. If you don't want to do something but feel you ought to, maybe think about why, but I will support you not doing it. If you have a blanket ban on gambling, fair enough. You are allowed to not do things because you don't want to.
Gambling is addictive, if you have a problem with it, don’t do it
Betting is a useful tool. I just do take opinions a bit less seriously if people don't do the simple thing of putting their money where their mouths are. And so a blanket ban is a slight cost. Imagine if I said I had a blanket ban on double-cruxing, or on giving to animal welfare charities. It's a thing I am allowed to do, but it does just seem a bit worse.
To me, this seems like something else is actually going on. Perhaps it feels like “will you bet on it” is a way that certain people can twist my arm in a way that makes me feel uncomfortable? Perhaps the people who say this have been cruel to me in the past. I don’t know, but I sense there is something else going on. If you don’t bet as a blanket policy, could you tell me why?
I don't bet because I feel it's a slippery slope. I also strongly dislike how opinions and debates in EA are monetised, as this further strengthens the neoliberal vibe EA already has, so my drive to refrain from doing this in EA is stronger than outside it.
Edit: and I too have gotten dismissed by EAs for it in the past.
I don't bet because it's not a way to actually make money, given the frictional costs to set it up, including my own ignorance about the proper procedure and having to remember it and keep enough capital for it. Ironically, people who bet in this subculture are usually cargo-culting the idea of wealth-maximization with the aesthetics of betting, with the implicit assumption that the stakes of actual money are enough to lead to more correct beliefs, when following the incentives really means not betting at all. If convenient, universal prediction markets weren't regulated into nonexistence, then I would sing a different tune.
I guess I do think that "wrong beliefs should cost you" is a lot of the gains. I guess I also think it's important that bets can be at the scale of the disagreement, but I think that's a much more niche view.
There are a number of possible reasons that the individual might not want to talk about publicly:
A concern about gambling being potentially addictive for them;
Being relatively risk-averse in their personal capacity (and/or believing that their risk tolerance is better deployed for more meaningful things than random bets);
Being more financially constrained than their would-be counterparts; and
Awareness of, and discomfort with, the increased power the betting norm could give people with more money.
On the third point: the bet amount that would be seen as meaningful will vary based on the person’s individual circumstances. It is emotionally tough to say—no, I don’t have much money, $10 (or whatever) would be a meaningful bet for me even though it might take $100 (or whatever) to be meaningful to you.
On the fourth point: if you have more financial resources, you can feel freer with your bets while other people need to be more constrained. That gives you more access to bet-offers as a rhetorical tool to promote your positions than people with fewer resources. It’s understandable that people with fewer resources might see that as a financial bludgeon, even if not intended as such.
I have yet to see anyone in the EA/rat world make a bet for sums that matter, so I really don’t take these bets very seriously. They also aren’t a great way to uncover people’s true probabilities because if you are betting for money that matters you are obviously incentivized to try to negotiate what you think are the worst possible odds for the person on the other side that they might be dumb enough to accept.
If anything… I probably take people less seriously if they do bet (not saying that’s good or bad, but just being honest), especially if there’s a bookmaker/platform taking a cut.
I think if I knew that I could trade “we all obey some slightly restrictive set of romance norms” for “EA becomes 50% women in the next 5 years” then that’s a trade I would advise we take.
That’s a big if. But seems trivially like the right thing to do—women do useful work and we should want more of them involved.
To say the unpopular reverse statement: if I knew that such a set of norms wouldn't improve wellbeing, on some average across women in EA and EA as a whole, then I wouldn't take the trade.
Seems worth acknowledging there are right answers here, if only we knew the outcomes of our decisions.
In defence of Will MacAskill and Nick Beckstead staying on the board of EVF
While I've publicly said that on priors they should be removed unless we hear arguments otherwise, I was kind of expecting someone to make those arguments. If no one will, I will.
MacAskill
MacAskill is very clever, personally kind, and a superlative networker and communicator. Imo he oversold SBF, but I guess I'd do much worse in his place. It seems to me that we should want people who have made mistakes and learned from them. Many EA orgs would seemingly be glad to have someone like him on the board. If anything, the question is: if we don't want too many people duplicated across EA org boards (do we want this?), which board is it most valuable to have MacAskill on? I guess EVF?
Beckstead
Beckstead is, I sense, extremely clever (generally I find OpenPhil people to be powerhouses) and personally kind. I guess I think that he dropped the ball on running FTXFF well—it feels like had they hired more people to manage ops, they might have queried why money was going from strange accounts, but again I don't know the particulars (though I want to give the benefit of the doubt here). But again, it was a complicated project, and I guess he sensed that speed of ramp-up was the priority. In many worlds he'd have been right.
I guess perhaps the two of them have pretty similar blindspots (kind, intelligent, academic-ish EAs who scaled things really fast), so perhaps it is worth only having one on the board. Maybe it's worth having someone who can say "hmm, that seems too odd or shifty to be worth us doing". But this isn't as much of a knockdown argument.
Feels like there should be some kind of community discussion and research in the wake of FTX, especially if no leadership is gonna do it. But I don’t know how that discussion would have legitimacy. I’m okay at such things, but honestly tend to fuck them up somehow. Any ideas?
If I were king
- Use the ideas from all the various posts
- Have a big google doc where anyone can add research, with a comment thread for each idea so people can discuss
- Then hold another post where we have a final vote on what should happen
- Then EA orgs can at least see some kind of community consensus
I wrote a post on possible next steps but it got little engagement—unclear if it was a bad post or people just needed a break from the topic. On mobile, so not linking it—but it’s my only post besides shortform.
The problem as I see it is that the bulk of proposals are significantly underdeveloped, risking both applause light support and failure to update from those with skeptical priors. They are far too thin to expect leaders already dealing with the biggest legal, reputational, and fiscal crisis in EA history to do the early development work.
Thus, I wouldn’t credit a vote at this point as reflecting much more than a desire for a more detailed proposal. The problem is that it’s not reasonable to expect people to write more fleshed-out proposals for free without reason to believe the powers-that-be will adopt them.
I suggested paying people to write up a set of proposals and then voting on those. But that requires both funding and a way to winnow the proposals and select authors. I suggested modified quadratic funding as a theoretical ideal, but a jury of pro-reform posters as a more practical alternative. I thought that problem was manageable, but it is a problem. In particular, at the proposal-development stage, I didn’t want tactical voting by reform skeptics.
Strong +1 to paying people for writing concrete, actionable proposals with clear success criteria etc.—but I also think that DEI / reform is just really, really hard, and I expect relatively few people in the community to have 1) the expertise and 2) the knowledge of deeper community dynamics / awareness of the current stances on things.
Let's assume that the Time article is right about the amount of sexual harassment in EA. How big a problem is this relative to other problems? If we spend $10mn on EAGs (a guess), how much should we spend if we could halve sexual harassment in the community?
The whole sexual harassment issue isn’t something that can be easily fixed with money I think. It’s more a project of changing norms and what’s acceptable within the EA community.
The issue is it seems like many folks at the top of orgs, especially in SF, have deeply divergent views from the normal day-to-day folks joining/hearing about EA. This is going to be a huge problem moving forward from a public relations standpoint IMO.
Money can’t fix everything, but it can help some stuff, like hiring professionals outside of EA and supporting survivors who fear retaliation if they choose to speak out.
I’ll sort of publicly flag that I sort of break the karma system. Like the way I like to post comments is little and often and this is just overpowered in getting karma.
eg I recently overtook Julia Wise, and I’ve been on the forum for years less than she has.
I don’t really know how to solve this—maybe someone should just nuke my karma one time? But yeah, it’s true.
Note that I don’t do this deliberately—it’s just how I like to post and I think it’s honestly better to split up ideas into separate comments. But boy is it good at getting karma. And soooo much easier than writing posts.
Having EA Forum karma tells you two things about a person:
They had the potential to have had a high impact in EA-relevant ways
They chose not to.
I wouldn’t worry too much about the karma system. If you’re worried about having undue power in the discourse, one thing I’ve internalized is to use the strong upvote/downvote buttons very sparingly (e.g. I only strong-upvoted one post in 2022 and I think I never strong-downvoted any post, other than obvious spam).
I don’t think you need to start with zero karma again. The karma system is not supposed to mean very much. It is skewed towards certain things rather than being a true representation of your skill or trustworthiness as a user on this forum. It is more or less an XP bar for social situations and an indicator that someone posts good content here.
Aaron Gertler, someone held in high regard, retired from the forum, which got a lot of attention and sympathy. Many people were interested in the post, and it’s an easy topic to participate in, so many scrolled down to the comments to write something nice and thank him for his work.
JP Addison did so too. He works for CEA as a developer for the forum. His comment got more karma than any post he has made so far.
Karma is used in many places with different concepts behind it. The sum of it gives you no clear information. What I would think in your case: you are an active member of the forum, participate positively with only one post with negative karma. You participated in the FTX crisis discussion, which was an opportunity to gain or lose significant amounts of karma, but you survived it, probably with a good score.
Internet points can make you feel fantastic; they are a system to motivate social interaction and to encourage following community norms (in positive and negative ways).
Your modesty suits you well, but there is no need for it. Stand tall. There will always be those with few points but really good content, and those who far overshoot the gems through sheer activity.
Does EA have a clearly denoted place for exit interviews? Like if someone who was previously very involved was leaving, is there a place they could say why?
When answering questions, I recommend people put each separate point as a separate answer. The karma ranking system is useful to see what people like/don’t like and having a whole load of answers together muddies the water.
1) Why is EA global space constrained? Why not just have a larger venue?
I assume there is a good reason for this which I don’t know.
2) It’s hard to invite friends to EA global. Is this deliberate?
I have a close friend who finds EA quite compelling. I figured I’d invite them to EA global. They were dissuaded by the fact they had to apply and that it would cost $400.
I know that’s not the actual price, but they didn’t know that. I reckon they might have turned up for a couple of talks. Now they probably won’t apply.
Is there no way that this event could be more welcoming or is that not the point?
Re 1) Is there a strong reason to believe that EA Global is constrained by physical space? My impression is that they try to optimize pretty hard to have a good crowd and for there to be a high density of high-quality connections to be formed there.
Re 2) I don’t think EA Global is the best way for newcomers to EA to learn about EA.
EDIT: To be clear, neither 1) nor 2) are necessarily endorsements of the choice to structure EA Global in this way, just an explanation of what I think CEA is optimizing for.
EDIT 2 2021/10/11: This explanation may be wrong, see Amy Labenz’s comment here.
Personal anecdote possibly relevant for 2): EA Global 2016 was my first EA event. Before going, I had lukewarm-ish feelings towards EA, due mostly to a combination of negative misconceptions and positive true-conceptions; I decided to go anyway somewhat on a whim, since it was right next to my hometown, and I noticed that Robin Hanson and Ed Boyden were speaking there (and I liked their academic work). The event was a huge positive update for me towards the movement, and I quickly became involved – and now I do direct EA work.
I’m not sure that a different introduction would have led to a similar outcome. The conversations and talks at EAG are just (as a general rule) much better than at local events, and reading books or online material also doesn’t strike me as naturally leading to being part of a community in the same way.
It’s possible my situation doesn’t generalize to others (perhaps I’m unusual in some way, or perhaps 2021 is different from 2016 in a crucial way such that the “EAG-first” strategy used to make sense but doesn’t anymore), and there may be other costs to having more newcomers at EAG (eg diluting the population of people more familiar with EA concepts), but I also think it’s possible my situation does generalize and that we’d be better off nudging more newcomers to come to EAG.
1) We’d like to have a larger capacity at EA Global, and we’ve been trying to increase the number of people who can attend. Unfortunately, this year it’s been particularly difficult; we had to roll over our contract with the venue from 2020 and we are unable to use the full capacity of the venue to reduce the risk from COVID. We’re really excited that we just managed to add 300 spots (increasing capacity to 800 people), and we’re hoping to have more capacity in 2022.
There will also be an opportunity for people around the world to participate in the event online. Virtual attendees will be able to enjoy live streamed content as well as networking opportunities with other virtual attendees. More details will be published on the EA Global website the week of October 11.
2) We try to have different events that are welcoming to people who are at different points in their EA engagement. For someone earlier in their exploration of EA, the EAGx conferences are going to be a better fit. From the EA Global website:
Effective altruism conferences are a good fit for anyone who is putting EA principles into action through their donations, volunteering, or career plans. All community members, new or experienced, are welcome to apply.
EA Global: London will be selecting for highly-engaged members of the community.
EAGxPrague (3-5 December) will be more suitable for those who have less experience with effective altruism.
We’ll have lots more EAGx events in 2022, including Boston, Oxford, Singapore, and Australia, as well as EA Globals in San Francisco and London as usual. We may add additional events to this plan. The dates for those events and any additional events will go up on eaglobal.org when they’re confirmed.
In the meantime, if your friend is interested in seeing some talks, they can check out hundreds of past EA Global talks on the CEA YouTube channel.
It’s a site which gets you to guess what other political groups (Republicans and Democrats) think about issues.
Why is it good:
1) It gets people thinking and predicting. They are asked a clear question about other groups and have to answer it.
2) It updates views in a non-patronising way—it turns out Dems and Repubs are much less polarised than most people think (the stat they give is that people predict 50% of Repubs hold extreme views, when actually it’s 30%). But rather than yelling this, or writing an annoying listicle, it gets people’s consent and teaches them something.
3) It builds consensus. If we are actually closer to those we disagree with than we think, perhaps we could work with them.
4) It gives quick feedback. People learn best when given feedback close to the action. In this case, people are rapidly rewarded for thoughts like “probably most of group X are more similar to me than I first thought”.
Imagine:
What percentage of neocons want institutional reform? What % of libertarians want an end to factory farming? What % of socialists want an increase in foreign direct aid?
Conclusion
If you want to change people’s minds, don’t tell them stuff, get them to guess trustworthy values as a cutesy game.
I might start doing some policy BOTEC (back of the envelope calculation) posts, ie where I suggest an idea and try to figure out how valuable it is. I think I could do this faster with a group to bounce ideas off.
If you’d like to be added to a message chat (on whatsapp probably) to share policy BOTECs then reply here or DM me.
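To illustrate the kind of thing I mean, here is a minimal sketch of a policy BOTEC in code. Every number and the policy itself are placeholders invented for the example, not real estimates—the point is the structure (inputs, a few multiplications, a cost-effectiveness number at the end).

```python
# A minimal policy BOTEC sketch. Every number here is a placeholder guess,
# not a real estimate -- the point is the structure, not the conclusion.

def botec_lead_pipe_replacement():
    households_affected = 200_000       # hypothetical: households with lead pipes
    cost_per_household = 5_000          # USD, hypothetical replacement cost
    dalys_averted_per_household = 0.05  # hypothetical health benefit per household

    total_cost = households_affected * cost_per_household
    total_dalys = households_affected * dalys_averted_per_household
    cost_per_daly = total_cost / total_dalys

    return total_cost, total_dalys, cost_per_daly

if __name__ == "__main__":
    cost, dalys, cost_per_daly = botec_lead_pipe_replacement()
    print(f"Total cost: ${cost:,.0f}")
    print(f"DALYs averted: {dalys:,.0f}")
    print(f"Cost per DALY averted: ${cost_per_daly:,.0f}")
```

Posts in this format would make it easy for commenters to challenge any single input number rather than the whole conclusion.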
Is EA as a bait and switch a compelling argument for it being bad?
I don’t really think so
There are a wide variety of baits and switches, from what I’d call misleading to some pretty normal activities—is it a bait and switch when churches don’t discuss their most controversial beliefs at a “bring your friends” service? What about wearing nice clothes to a first date? [1]
EA is a big movement composed of different groups[2]. Many describe it differently.
EA is way more transparent than any comparable movement. If it is a bait and switch then it does so much more to make clear where the money goes eg (https://openbook.fyi/).
On the other hand:
I do sometimes see people describing EA too favourably or pushing an inaccurate line.
I think that transparency comes with a feature of allowing anyone to come and say “what’s going on there” and that can be very beneficial at avoiding error but also bad criticism can be too cheap.
Overall I don’t find this line that compelling. And the parts that are compelling seem largely from the past, when EA was smaller (and when perhaps it mattered less). Now that EA is big, it’s pretty clear that it cares about many different things.
I think there might be something meaningfully different between wearing nice clothes to a first date (or a job interview) and intentionally not mentioning more controversial/divisive topics to newcomers. There is a difference between putting your best foot forward (dressing nicely, grooming, explaining introductory EA principles articulately with a ‘pitch’ you have practised) and intentionally avoiding/occluding information.
For a date, I wouldn’t feel deceived/tricked if someone dressed nice. But I would feel deceived if the person intentionally withheld or hid information that they knew I would care about. (it is almost a joke that some people lie about age, weight, height, employment, and similar traits in dating).
I have to admit that I was a bit turned off (what word is appropriate for a very weak form of disgusted?) when I learned that there has long been an intentional effort in EA to funnel people from global development to long-termism within EA.
If anything, EA now has a strong public (admittedly critical) reputation for longtermist beliefs. I wouldn’t be surprised if some people have joined in order to pursue AI alignment and got confused when they found out more than half of the donations go to GHD & animal welfare.
It is worth noting when systems introduce benefits in a few obvious ways but many small harms. An example is blocking housing. It benefits the neighbours a lot—they don’t have to have construction nearby—and the people who are harmed are just random marginal people who could have afforded a home but just can’t.
But these harms are real and should be tallied.
Much recent discussion in EA has suggested common sense risk reduction strategies which would stop clear bad behavior. Often we all agree on the clear bad behaviour.
But the risk reduction strategies would also often set norms against a range of greyer behaviour that the suggestors don’t engage in or that doesn’t seem valuable to them. If you don’t live with your coworkers, then suggesting that be normed against seems fine—it would stop people ending up in weird living situations. But I know people who have loved living with coworkers. That’s a diffuse harm.
Mainly I think this involves acknowledging people are a lot weirder than you think. People want things I don’t expect them to want, they consent in business, housing and relationships to things I’d never expect them to. People are wild. And I think it’s worth there being bright lines against some kinds of behaviour that is bad or nearly always bad—I’d suggest dating your reports is ~ very unwise—but a lot is about human preferences and to understand that we need to elicit both wholesome and illicit preferences or consider harms that are diffuse.
Note that I’m not saying which way the balance of harms falls, but that both types should be counted.
I suggest there is waaaay too much to be on top of in EA and no one knows who is checking what. So some stuff goes unchecked. If there were a narrower set of “core things we study”, then it seems more likely that those things would have been gone over by someone in detail, and hence there would be fewer errors in core facts.
One of the downsides of EA being so decentralized, I guess. I’m imagining an alternative-history EA in which it was all AI alignment, or all tropical disease prevention, and in those worlds the narrowing of “core things we study” would possibly result in more eyeballs on each thing.
I think the wiki should be about summarising and synthesising articles on this forum.
- There are lots of great articles which will rarely be reread
- Many could do with more links to each other and to other key pieces
- Many could be better edited, combined etc
- The wiki could take all content and aim to turn it into a minimal viable form of itself
I think that the forum wiki should focus on taking chunks of article text and editing it, rather than pointing people to articles. So take all of the articles on global dev, squish them together or shorten them.
So there would be a page on “research debt” which would contain this article and also any more text that seemed relevant, but maybe without the introduction. Then a preface on how it links to other EA topics, a link to the original article and links to ways it interacts with other EA topics. It might turn out that that page had 3 or 4 articles squished into one or was broken into 3 or 4 pages. But like Wikipedia you could then link to “research debt” and someone could easily read it.
[Epistemic Status: low, I think this is probably wrong, but I would like to debug it publicly]
If I have a criticism of EA along Institutional Decision Making lines, it is this:
For a movement that wants to change how decisions get made, we should make those changes in our own organisations first.
Examples of good progress:
- prizes—EA orgs have offered prizes for innovation
- voting systems—it’s good that the forum is run on upvotes, and I think EA often uses the right tool for the job in terms of voting
Things I would like to see more of:
- an organisation listening to prediction markets/polls. If we believe nations should listen to forecasting, can we make clearer which markets our orgs are looking at and listening to?
- an organisation run by prediction markets. The above, but taking it further
- removing silos in EA. If you have the confidence to email random people it’s relatively easy to get stuff done, but can we lower the friction to allow good ideas to spread further?
- etc
It’s fine if we think these things will never work, but it seems weird to me that we think improvements would work elsewhere but that we don’t want them in our orgs. That’s like being NIMBY about our own suggested improvements.
Counterarguments:
- These aren’t solutions people are actually arguing for. Yeah, this is an okay point. But I think the seeds of them exist.
- Prediction markets work in big orgs, not small ones. Maybe, but isn’t it worth running one small inefficient organisation to try and learn the failure modes before we suggest this for nation states?
A set of EA jobs Twitter bots which each retweet a specific set of hashtags, eg #AISafety #EAJob, #AnimalSuffering #EAJob, etc etc. Please don’t get hung up on these; we’d actually need to brainstorm the right hashtags.
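A minimal sketch of what one of these bots might look like, assuming Tweepy with credentials that have write access; the hashtag query is a placeholder, and whether current API access tiers allow this cheaply is a separate question.

```python
# Sketch of a single EA-jobs retweet bot using Tweepy's v2 Client.
# Credentials and the hashtag query are placeholders, not real values.
import tweepy

client = tweepy.Client(
    bearer_token="...",
    consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)

QUERY = "#AISafety #EAJob -is:retweet"  # placeholder hashtag combination

def retweet_matching(max_results: int = 10) -> None:
    """Retweet recent tweets matching the hashtag query."""
    response = client.search_recent_tweets(query=QUERY, max_results=max_results)
    for tweet in response.data or []:
        client.retweet(tweet.id)

if __name__ == "__main__":
    retweet_matching()
```

Each cause-area bot would just be this script with a different query, run on a schedule.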
Does anyone know people working on reforming the academic publishing process?
Coronavirus has caused journalists to look for scientific sources. There are no journal articles because of the lag time. So they have gone to preprint servers like bioRxiv (pronounced bio-archive). These servers are not peer reviewed so some articles are of low quality. So people have gone to twitter asking for experts to review the papers.
This is effectively a new academic publishing paradigm. If there were support for good papers (somehow) you would have the key elements of a new, perhaps better system.
With Coronavirus providing a lot of impetus for change, those working in this area could find this an important time to increase visibility of their work.
The shifts in forum voting patterns across the EU and US seem worthy of investigation.
I’m not saying there is some conspiracy, it seems pretty obvious that EU and US EAs have different views and that appears in voting patterns but it seems like we could have more self knowledge here.
Agreed, and I think @Peter Wildeford has pointed that out in recent threads—it’s very unlikely to be a ‘conspiracy’ and much more likely that opinions and geographical locations are highly correlated. I can remember some recent comments of mine that swung from slightly upvoted to highly downvoted and back to slightly upvoted.
This might be something that the Forum team is better placed to answer, but if anyone can think of a way to tease this out using data from the public API, let me know and I can try to investigate it.
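For what it’s worth, here is one rough approach, assuming you can somehow export vote events with UTC timestamps (via the GraphQL API or repeated snapshots); the column names below are assumptions about whatever export you manage to build, not the forum’s actual data model.

```python
# Sketch: do net votes on a comment differ between EU-waking and US-waking hours?
# Assumes a table of vote events with UTC timestamps; the column names
# ("comment_id", "timestamp_utc", "vote_power") are placeholders.
import pandas as pd

def split_by_waking_hours(votes: pd.DataFrame) -> pd.DataFrame:
    votes = votes.copy()
    votes["hour_utc"] = pd.to_datetime(votes["timestamp_utc"]).dt.hour
    # Crude buckets: 07:00-15:59 UTC is mostly Europe awake before the US is
    # fully online; 16:00-23:59 UTC is mostly US waking hours.
    votes["bucket"] = pd.cut(
        votes["hour_utc"],
        bins=[0, 7, 16, 24],
        labels=["other", "eu_waking", "us_waking"],
        right=False,
    )
    return (
        votes.groupby(["comment_id", "bucket"], observed=True)["vote_power"]
        .sum()
        .unstack()
    )

# Usage with a hypothetical export:
# votes = pd.read_csv("vote_events.csv")
# print(split_by_waking_hours(votes))
```

Comments whose net score flips sign between the two buckets would be the interesting cases to look at.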
Yeah it’s true, I was mostly just responding to the empirical question of how to identify/measure that split on the Forum itself.
As to dealing with the split and what it represents, my best guess is that there is a Bay-concentrated/influenced group of users who have geographically concentrated views, which much of the rest of EA disagree with/to varying extents find their beliefs/behaviour rude or repugnant or wrong.[1] The longer term question is if that group and the rest of EA[2] can cohere together under one banner or not.
I don’t know the answer there, but I’d very much prefer it to be discussion and mutual understanding rather than acrimony and mutual downvoting. But I admit I have been acrimonious and downvoted others on the Forum, so not sure those on the other side to me[3] would think I’m a good choice to start that dialogue.
Perhaps the feeling is mutual? I don’t know; certainly I think many members of this culture (not just in EA/Rationalist circles but beyond, in the Bay) find ‘normie’ culture morally wrong and intolerable.
There have been a few comments about this. And I’m surprised the forum team hasn’t weighed in yet with data or comments. Are there actually voting trends which differ across timezones? If so, how do those patterns work? Should we do anything about it?
I’ve also found myself reactively downvoting recently, which I didn’t like in myself but which might have looked fine from the other side. That isn’t good at all, so I’m guilty here too.
Without reading too much into it, there’s a similar amount of negativity about the state of EA as there is a lack of confidence in its future. That suggests to me that there’s a lot of people who think EA should be reformed to survive (rather than ‘it’ll dwindle and that’s fine’ or ‘I’m unhappy with it but it’ll be okay’)?
It has an emotional impact on me to note that FTX claims are now trading at 50%. This means that in expectation, people are gonna get about half of what their assets were worth, had they held them until this time.
I don’t really understand whether it should change the way we understand the situation, but I think a lot of people’s life savings were wrapped up here and half is a lot better than nothing.
I am not confident about the reasons why, but I think it’s because Anthropic and the cryptocurrency Solana are now trading a lot higher. My last memory (bad, do not trust) is that FTX has about 11bn in debt against 4bn in assets. I think Anthropic and the SOL they hold have both gone up by about a billion since then.
I dunno folks, but I hope people get their money back—and I know that includes some of you.
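A rough sanity check of the above, using the numbers in this post (which, as flagged, may be badly remembered):

```python
# Rough sanity check using the (explicitly unreliable) numbers in the post.
claims_price = 0.50        # claims trading at ~50 cents on the dollar
total_claims = 11e9        # ~$11bn of debt (shaky recollection)
old_assets   = 4e9         # ~$4bn of assets at that earlier point
asset_growth = 2e9         # Anthropic and SOL up ~$1bn each (guess)

implied_recovery = claims_price * total_claims
print(f"Recovery implied by claim prices: ${implied_recovery / 1e9:.1f}bn")
print(f"Assets if my memory is right:     ${(old_assets + asset_growth) / 1e9:.1f}bn")
```

The two figures land in the same ballpark (~$5.5bn vs ~$6bn), which is at least consistent with the story that asset appreciation is what moved claim prices.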
Lots of discussion, a reasonable amount of new information, but what should our final update be:
Have HLI acted fine or badly?
Is there a pattern of misquoting and bad scholarship?
Have global health orgs in general moved towards Self-reported WellBeing (SWB) as a way to measure interventions?
Has HLI generally done good/cost effective work?
I think the forum comments model is very poor at this. After all, if there were widespread agreement (as I think there could be), then that would be a load off all our minds. We could have a discussion once and then not need to have it again.
As it is, I’m sure many people have taken away different things from this, and I guess we’ll probably discuss it again the next time the Happier Lives Institute or StrongMinds posts to the forum, and I guess there has been some more bad blood created in the meantime.
Consensus is good and we don’t even try to reach it after big discussions.
If you’re commenting on a post, it helps to start off with points of agreement and genuine compliments about things you liked. Try to be honest and non-patronizing: a comment where the only good thing you say is “your English is very good” will not be taken well, nor will a statement that “we both agree that murder is bad”. And don’t overthink it: a simple “great post” (if honest) is never unappreciated.
Another point is that the forum tends to have a problem with “nitpicking”, where the core points of a post are ignored in favor of pointing out minor, unimportant errors. Try to engage with the core points of an argument, or if you are pointing out a small error, preface it with “this is a minor nitpick”, and put it at the end of your comment.
So a criticism would look like:
“Very interesting post! I think X is a great point that more people should be talking about. However, I strongly disagree with core point Y, for [reasons]. Also, a minor nitpick: statement Z is wrong because [reasons]”
I think the above is way less likely to feel like an “attack”, even though the strong disagreements and critiques are still in there.
I agree that it’s worth saying something about sexual behaviour. Here are my broad thoughts:
I am sad about women having bad experiences, I think about it a lot
I want to be accurate in communication
I think it’s easy to reduce harms a lot without reducing benefits
Firstly, I’m sad about the current situation. It seems like too many women in EA have bad experiences. There is a discussion to be had about what happens in other communities, and about tradeoffs. But first: it’s really sad.
More than this, it seems worth dwelling on what it *feels* like. I guess for many it’s fine. But for some it can be exhausting or sad or uncomfortable. Women in EA complain to me about their treatment as women a lot, men much less. Seems notable.
But I don’t know what norms should be. I don’t know what’s best for EA women, for EA in general, for the world in general. In short, I don’t know how to optimise norms.
But harms seem easier to understand. It does seem to me there are some low cost, high benefit improvements. Particularly in people who have patterns of upsetting women.
Personally, I have really upset 2 or 3 women in EA around romance. I’ve said or done things that have left them sad for months. And I don’t think this is okay.
To them, I am sorry.
How do they feel? Well I sense, really sad. We’re not talking Time magazine stuff here, but I think they felt belittled, disrespected, judged and, briefly, unsafe. I don’t want anyone to feel like this, let alone because of me.
And compared to their suffering, and my sadness at it, it just seems pretty cheap to change my behaviour. To go on dates with a smaller group of people in EA, to create patterns to avoid situations I handle poorly, to spend time imagining women’s lives.
So I’m not gonna give a blanket pronouncement or say we are the worst. But personally, I am pretty flawed and I would prefer to change rather than hurt other people. And if you see that pattern in your life then I suggest taking real, actual steps.
I’d suggest you ask yourself: “Are there any women who, as a result of my actions in the last 2 years, are seething or deeply upset?”
For most people the answer is no. Like seriously, the answer can be “no, you’re fine”. But if it’s yes, women are people right? Do you really believe that there aren’t some improvements possible here?
Some suggestions for those who answered yes:
Talk to a trusted friend. How do they think you do here?
Imagine how much you would do to avoid the last woman being upset. Spend at least that much time avoiding the next woman being upset
I dislike the tribal nature of this discussion—on some level it feels culture-war-y. So again, I don’t think this applies to everyone, but it does to me
But I really would recommend going to quality sex and relationship courses. I went to one run by a tantra group and I think it just made me a lot kinder and helped me reduce risks
Talk to women you’ve dated. How did they feel?
If you struggle with empathy for women, perhaps start with empathy for me. Trust me, you don’t want to feel like this. It’s horrible to have people who are upset as a result of my actions.
Most of all, I would recommend building empathy. I wish I had sat down and just written how the women I fancied felt, even for 5 minutes. And talked it over with a friend.
Take an interest in the mental lives of people you care about.
So I guess the thing I could say is: “If you continue patterns of romantic behaviour that frequently upset women, and that you could easily make less risky, then I’ll be really upset with you and sad”—because if I were to continue, I’d be so angry at myself.
Romance is not without risk—I don’t think this is a purely harm reducing question (though I could move to that opinion). But I think it’s possible to just reduce risks a lot while maintaining benefit. And if I have the option to do that and I choose not to, that’s basically my definition of bad.
Daniel’s Heavy Tail Hypothesis (HTH) vs. this recent comment from Brian saying that he thinks that classic piece on ‘Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness’ is still essentially valid.
Seems like Brian is arguing that there are at most 3-4 OOM differences between interventions whereas Daniel seems to imply there could be 8-10 OOM differences?
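To put the orders of magnitude (OOM) into multipliers (my arithmetic, not either author’s framing):

$$
3\text{–}4 \text{ OOM} = 10^{3}\text{–}10^{4} = 1{,}000\times \text{ to } 10{,}000\times,
\qquad
8\text{–}10 \text{ OOM} = 10^{8}\text{–}10^{10} = 100\text{ million}\times \text{ to } 10\text{ billion}\times.
$$

If I’m reading them right, the 10,000x threshold in the draft market question below sits at the very top of Brian’s range and well below Daniel’s.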
Here is my first draft: basically there will be a play-money prediction market predicting what the community will vote on a central question (here “are the top 1% more than 10,000x as effective as the median”), then we have a discussion, we vote, and then the market resolves.
It is unclear to me that if we chose cause areas again, we would choose global development.
The lack of a focus on global development would make me sad
This issue should probably be investigated and mediated to avoid a huge community breakdown—it is naïve to think that we can just swan through this without careful and kind discussion
With better wiki features and a way to come to consensus on numbers I reckon this forum can write a career guide good enough to challenge 80k. They do great work, but we are many.
There were too few parties on the last night of EA Global in London, which led to overcrowding, stressed party hosts, and a waste of a load of people’s time.
I suggest that in future there should be at least n/200 parties, where n is the number of people attending the conference (eg at least four parties for an 800-person conference).
I don’t think CEA should legislate parties, but I would like to surface in people’s minds that if there are fewer than n/200 parties, then you should call up your friend with most amenable housemates and tell them to organise!
Has Rethink Priorities ever thought of doing a survey of non-EAs? Perhaps paying for a poll? I’d be interested in questions like “What do you think of Effective Altruism? What do you think of Effective Altruists?”
Only asking questions of those who are currently here is survivorship bias. Likewise we could try and find people who left and ask why.
I have some things I do like and then some clarifications.
I like that we are trying new mechanisms. If we were going to try and be a community that lasts we need to build ways of working that don’t have the failure modes that others have had in the past. I’m not particularly optimistic about this specific donation election, but I like that we are doing it. For this reason I’ve donated a little and voted.
I don’t think this specific donation election mechanism adds a lot. Money already gets allocated on a kind of voting system—you choose how you spend it. Gathering everyone’s votes and then reallocating means some people have decided they’d rather spend towards the median, though that data was available anyway. That said, I did spend quite a lot of time thinking about how I was gonna give (it’s strange to me that I find voting wrong worse than giving my own money wrong)
That said, perhaps it will codify discussions of impact, which I think are good. I’d like more quantification/comparison. Are there some nice graphs somewhere of where Giving What We Can gifts go?
I don’t think the election offers much better decisions. If I want someone I trust to help me decide where to give, I can already do that.
I don’t particularly like the “I donated” “I voted” tags, but I never like that kind of thing.
On balance I thought it was good and want more stuff like this.
I hold that there could be a well-maintained wiki article on top EA orgs, and then people could have anonymously added many Nonlinear stories a while ago. I would happily have added comments about their move-fast-and-break-things approach and maybe had a better way to raise it with them.
There would have been edit wars and an earlier investigation.
How much would you pay to have brought this forward six months or a year? And likewise for whatever other startling revelations there are. In which case, I suggest a functional wiki is worth 5%–10% of that amount, per case.
My question is “Who would want to run an EA org or project in that kind of environment?”. Presumably, you’d be down, but my bet is that the vast majority of people wouldn’t.
It was pointed out to me that I probably vote a bit wrong on posts.
I generally just up and downvote how I feel, but occasionally if I think a post is very overrated or underrated I will strong upvote or downvote even though I feel less strong than that.
But this is I think the wrong behaviour and a defection. Since if we all did that then we’d all be manipulating the post to where we think it ought to be and we’d lose the information held in the median of where all our votes leave it.
Withholding the current score of a post until after a vote is cast (with the casting being committal) should be enough to prevent strategic behavior. But it comes with many downsides. (I think feed ordering / recsys could work with private information, so the scores may in principle be inferrable from patterns in your feed, but you probably won’t actually do that. The worse problem is commitment: I do like to edit my votes quite a bit after initial impressions.)
I imagine there’s a more subtle instrument; withholding the current score until committal votes have been cast seems almost like a limit case.
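A minimal sketch of the commit-before-reveal idea, to make the flow concrete (names and structure are mine, not a proposal for the actual forum codebase):

```python
# Sketch: a post's score is hidden until you cast a committal vote on it.
# All names here are illustrative, not the forum's actual data model.
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    _score: int = 0
    _voters: dict = field(default_factory=dict)  # user_id -> vote (+1 or -1)

    def visible_score(self, user_id: str):
        # Score stays hidden (None) until this user has committed a vote.
        return self._score if user_id in self._voters else None

    def cast_vote(self, user_id: str, vote: int) -> int:
        if user_id in self._voters:
            raise ValueError("vote already committed and cannot be edited")
        self._voters[user_id] = vote
        self._score += vote
        return self._score  # revealed only after committing

post = Post("example-post")
assert post.visible_score("alice") is None   # hidden before voting
print(post.cast_vote("alice", +1))           # 1 -- revealed after committing
print(post.visible_score("alice"))           # 1
```

The commitment constraint is exactly the part flagged as costly above: once `cast_vote` succeeds there is no edit path, which rules out updating your vote after reading the thread more carefully.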
This isn’t in response to your specific case (correcting for overrated or underrated posts), but in response to:
Since if we all did that then we’d all be manipulating the post to where we think it ought to be and we’d lose the information held in the median of where all our votes leave it.
I think it’s okay to “defect” to correct the results of others’ apparent defection or to keep important information from being hidden. I’ve used upvotes correctively when I think people are too harsh with downvotes or when the downvotes will make important information/discussion much less visible. To elaborate, I’ve sometimes done this for cases like these:
When a comment or post is at low or negative karma due to downvotes, despite being made in good faith (especially if it makes plausible, relevant and useful claims), and without being uncivil or breaking other norms, even if it expresses an unpopular view (e.g. opinion or ethical view) or makes some significant errors in reasoning. I don’t think we should disincentivize or censor such comments, and I think that’s what disagreement voting and explanations should be used for. I find when people use downvotes like this without explanation to be especially unfair. This also includes when downvotes crush well-intentioned and civil but poorly executed newbie posts/comments, which I think is unkind and unwelcoming. (I’ve used upvotes correctively like this even before we had disagree voting.)
For posts with low or negative karma due to downvotes, if they contain (imo) important information, possibly even if poorly framed, with bad arguments in them or made in apparent bad faith, if there’s substantial valuable discussion on the issue or it isn’t being discussed visibly somewhere else on the EA Forum. Low karma risks effectively hiding (making much less visible) that information and the surrounding discussion through the ranking algorithm. This is usually for community controversies and criticism.
I very rarely downvote at all, but maybe I’d refrain from downvoting something I would otherwise downvote because its karma is already low or negative.
Right—in my view, net-negative karma conveys a particular message (something like “this post would be better off not existing”) that is meaningfully stronger than the median voter’s standard for downvoting. It can therefore easily exist in circumstances where the median voter would not have endorsed that conclusion.
FWIW, I don’t think this is against the explicit EA Forum norms around voting, and using upvotes and strong upvotes this way seems in line with some of their “suggestions” in the table from that section. In particular, they suggest it’s appropriate to strong upvote if
You think many more people might benefit from seeing it.
You want to signal that this sort of behavior adds a lot of value.
These could be more or less true depending on the karma of the post or comment and how visible you think it is.
I don’t think using downvotes against overrated posts or comments falls under the suggestions, though, but doing it only for upvotes and not downvotes could bias the karma.
Any EA leadership have my permission to put scandal on the back burner until we have a strategy on Bing, by the way. Feels like a big escalation to have an ML model reading its own past messages and running a search engine.
EA internal issues matter but only if we are alive.
Reasons I would disagree: (1) Bing is not going to make us ‘not alive’ on a coming-year time scale. It’s (in my view) a useful and large-scale manifestation of problems with LLMs that can certainly be used to push ideas and memes around safety etc, but it’s not a direct global threat. (2) The people best placed to deal with EA ‘scandal’ issues are unlikely to perfectly overlap with the people best placed to deal with the opportunities/challenges Bing poses. (3) I think it’s bad practice for a community to justify backburnering pressing community issues with an external issue, unless the case for the external issue is strong; it’s a norm that can easily become self-serving.
I think the community health team should make decisions on the balance of harms rather than beyond reasonable doubt. If it seems likely someone did something bad they can be punished a bit until we don’t think they’ll do it again. But we have to actually take all the harms into account.
“beyond reasonable doubt” is a very high standard of proof, which is reasonable when the effect of a false conviction is being unjustly locked in a prison. It comes at a cost: a lot of guilty people go free and do more damage.
There’s no reason to use that same standard for a situation where the punishments are things like losing a job or being kicked out of a social community. A high standard of proof should still be used, but it doesn’t need to be at the “beyond reasonable doubt” level. I would hate to be falsely kicked out of an EA group, but at the end of the day I can just do something else.
I agree that the magnitude of the proposed deprivation is highly relevant to the burden of proof. The social benefit from taking the action on a true positive, and the individual harm from acting on a false positive also weigh in the balance.
In my view, the appropriate burden of proof also takes into account the extent of other process provided. A heightened burden of proof is one procedure for reducing the risk of erroneous deprivations, but it is not the only or even the most important one.
In most cases, I would say that the thinner the other process, the higher the BOP needs to be. For example, discipline by the bar, medical board, etc is usually on a more-likely-than-not standard... but you get a lot of process, like an independent adjudicator, subpoena power, and judicial review. So we accept 51 percent with other procedural protections in play. (And as a practical matter, the bar generally wouldn’t prosecute a case it thought was at 51 percent anyway, due to resource constraints.) With significantly fewer protections, I’d argue that a higher BOP would be required—both as a legal matter (these are government agencies) and a practical one. Although not beyond a reasonable doubt.
Of course, more process has costs both financial and on those involved. But it’s a possible way to deal with some situations where the current evidence seems too strong to do nothing and too uncertain to take significant action.
I listened to this episode today Nathan, I thought it was really good, and you came across well. I think EAs should consider doing more podcasts, including those not created/hosted by EA people or groups. They’re an accessible medium with the potential for a lot of outreach (the 80k podcast is a big reason why I got directly involved with the community).
I know you didn’t want to speak for EA as a whole, but I think it was a good example of EA talking to the leftist community in good faith,[1] which is (imo) one of our biggest sources of criticism at the moment. I’d recommend others check out the rest of Rabbithole’s series on EA—it’s a good piece of data on what the American Left thinks of EA at the moment.
Summary:
+1 to Nathan for going on this podcast
+1 for people to check out the other EA-related Rabbithole episodes
Any time that you read a wiki page that is sparse or has mistakes, consider adding what you were trying to find. I reckon in a few months we could make the wiki really good to use.
I sense that Conquest’s law is true → that organisations that are not specifically right-wing move to the left.
I’m not concerned about moving to the left tbh but I am concerned with moving away from truth, so it feels like it would be good to constantly pull back towards saying true things.
I think the forum should have a retweet function, but for the equivalent of GitHub forks. So you can make changes to someone’s post and offer them the ability to incorporate them. If they don’t, you can just remake the article with the changes and an acknowledgement that you did.
I don’t think people would actually do that very often, because they’d get no karma most of the time, but it would give a karma and attribution trail for:
- summaries
- significant corrections/reframings
- and the author could still accept the edits later
My very quick improving institutional decision-making (IIDM) thoughts
Epistemic status: Weak 55% confidence. I may delete. Feel free to call me out or DM me etc etc.
I am saying these so that someone has said them. I would like them to be better phrased but then I’d probably never share them. Please feel free to criticise them though I might modify them a lot and I’m sorry if they are blunt:
I don’t understand what concrete learnings there are from IIDM, except forecasting (which I am biased on). The EIP produced a report which said that the institutions you’d expect to matter do matter. That was cheap falsification so I guess worth it. Beyond that, I don’t know. And I was quite involved for a while and didn’t pick these up by osmosis. I assume that many people know even less than I do.
Is forecasting IIDM? Yes. But people know what forecasting is, so it’s easier to use those words. Are humans primates? Yes, but one of those words is easier to understand.
Does IIDM exist in the wild? Yes?? I know lots of EA-aligned people who work in institutions and try to improve them. That seems like IIDM to me.
What ideas would I brainstorm, low confidence:
Connect EA networks across institutions. EAs in different institutions probably know things. Do they pass those around?
Try and improve EA knowledge transfer. How can someone get a high-signal feed of information via email, WhatsApp, or a podcast app? If we had this, it would be easier to share with institutional colleagues
What has worked in EA orgs? I’m surprised we think we can improve institutions when we haven’t solved those problems internally
How does an org make forecasting really easy and low friction?
How can EA institutions share detailed knowledge in real time across institutions?
Haha, I don’t know what IIDM is but I do know what forecasting is. If I had lots of money, one of the things I’d do is create a forecasting news organization. They wouldn’t talk about what happened; they’d talk about what’s going to happen. The knowledge transfer is important. People are too spread apart to use one platform, but if there were a list of people readily available to share information on certain topics, and their contact info, that would be valuable.
This forum is not user-friendly. Took a bit to arrive.
I am not! I applied and didn’t get it, I think the movement is bigger than available tickets in a convention. I’m on a few EA discords if you’d like to chat.
I have strong “social security number” associations with the acronym SSN.
Setting those aside, I feel “scale” and “solvability” are simpler and perhaps less jargon-y words than “impact” and “tractability” (which is probably good), but I hear people use “impact” much more frequently than “scale” in conversation, and it feels broader in definition, so I lean towards “ITN” over “SSN”.
I am gonna do a set of polls and get a load of karma for it (70% >750). I’m currently ~20th overall on the forum despite writing few posts of note. I think polls I write create a lot of value and I like the way it incentivises me to think about questions the community wants to answer.
I am pretty happy with the current karma payment, but I’m not sure everyone will be, so I thought I’d surface it. I’ve considered saying that polls should deliver half the karma, but that feels kind of messy and I do think polls are currently underrated on the forum.
Each EA org should pay a $10 bounty for the best twitter thread talking about any episode. If you could generate 100 quality twitter threads on 80,000 Hours episodes for $1,000, that would be really cheap. People would quote tweet and discuss, and it would make the whole set of knowledge much more legible.
Cool idea, I’ll have a think about doing this for Hear This Idea. I expect writing the threads ourselves could take less time than setting up a bounty, finding the threads, paying out etc. But a norm of trying to summarise (e.g. 80K) episodes in 10 or so tweets sounds hugely valuable. Maybe they could all use a similar hashtag to find them — something like #EAPodcastRecap or #EAPodcastSummary
I edited the Wikipedia page on Doing Good Better to try and make it more reflective of the book and Will’s current views. Let me know how you think I did.
A friend in Canada wants to give 10k to a UK global health charity but wants it to be tax neutral. Does anyone giving to a big charity want to swap (so he gives to your charity in Canada and gets the tax back) and you give to this global health one?
Maybe RC Forward can help with this? They will forward donations to selected overseas charities, but not all EA organizations are on their list.
If that doesn’t work, it might be possible to find a match in a country other than the UK. For example Americans can give to Unlimit Health via GWWC, even though Unlimit Health isn’t registered in the US.
I was reading this article about nuclear winter a couple of days ago and I struggled. It’s a good article, but there isn’t an easy slot in my worldview for it. The main thrust was something like “maybe nuclear winter is worse than other people think”. But I don’t really know how bad other people think it is.
Compare this to community articles, I know how the community functions and I have opinions on things. Each article fits neatly into my brain.
If my worldview were a globe, the EA community section would be very well mapped out. So when I hear that, you know, Adelaide is near Sydney or something, I know where those places are, and I can make some sort of judgment on the comment. But my views on nuclear winter are like learning that the mountains near Drachmore are taller than people think. Where is Drachmore? Which mountains? How tall do people think they are?
My suggestion here is better wikis, but mainly I think the problem is an interesting one. I think often the community section is well supported because we all have some prior structure. I think it’s hard to comment on air purity, AI minutiae or nuclear winter because I don’t have that prior space.
I wouldn’t recommend people tweet about the nonlinear stuff a lot.
There is an appropriate level of publicity for things and right now I think the forum is the right level for this. Seems like there is room for people to walk back and apologise. Posting more widely and I’m not sure there will be.
If you think that appropriate actions haven’t been taken in say a couple months then I get tweeting a bit more.
I think the substance of your take may be right, but there is something that doesn’t sit well with me about an EA suggesting to other EAs (essentially) “I don’t think EAs should talk about this publicly to non-EAs.” (I take it that is the main difference between discussing this on the Forum vs. Twitter—like, “let’s try to have EA address this internally at least for now.”) Maybe it’s because I don’t fully understand your justification—”there is room for people to walk back and apologize”—but the vibe here feels a bit to me like “as EAs, we need to control the narrative around this (‘there is an appropriate level of publicity,’)” and that always feels a bit antithetical to people reasoning about these issues and reaching their own conclusions.
I think I would’ve reacted differently if you had said: “I don’t plan to talk about this publicly for a while because of x, y, and z” without being prescriptive about how others should communicate about this stuff.
I think in general people don’t really understand how virality works in community dynamics. Like there are actions that when taken cannot be reversed.
I don’t say “never share this” but I think sharing publicly early will just make it much harder to have a vulnerable discussion.
I don’t mind EAs talking about this with non-EAs but I think twitter is sometimes like a feeding frenzy, particularly around EA stuff. And no, I don’t want that.
Notably, more agree with me than disagree (though some big upvotes on agreement obscure this—I’m generally not wild about big agree-votes).
As I’ve written elsewhere, I think there is a spectrum from private to public. Some things should be more public than they are and other things more private. Currently I am arguing this one is about right. With FTX, I thought it turned out many issues were kept too private.
I think a mature understanding of sharing things is required for navigating vulnerable situations (and I imagine you agree—many disliked the sharing of victims’ names around the TIME article because, in their opinion, that was too public for that information).
I appreciate that you said it didn’t sit well with you. It doesn’t really sit well with me either. I welcome someone writing it better
Yeah, again, I think you might well be right on the substance. I haven’t tweeted about this and don’t plan to (in part because I think virality can often lead to repercussions for the affected parties that are disproportionate to the behavior—or at least, this is something a tweeter has no control over). I just think EA has kind of a yucky history when it comes to being prescriptive about where/when/how EAs talk about issues facing the EA community. I think this is a bad tendency—for instance, I think it has, ironically, contributed to the perception that EA is “culty” and also led to certain problematic behaviors getting pushed under the rug—and so I think we should strongly err on the side of not being prescriptive about how EAs talk about issues facing the community. Again, I think it’s totally fine to explain why you yourself are choosing to talk or not talk about something publicly.
I guess I plan for the future, not the past. But I agree that my stance is generally more public than most EAs. I talk to journalists about stuff, for instance, and I think more people should.
I imagine that it has cost, and does cost, 80k to push for AI safety stuff even when it was weird, though now it seems mainstream.
Like, I think an interesting metric is when people say something which shifts some kind of group vibe. And sure, catastrophic risk folks are into it, but many EAs aren’t and would have liked a more holistic approach (I guess).
I am frustrated and hurt when I take flack for criticism.
It seems to me that people think I’m just stirring shit by asking polls or criticising people in power.
Maybe I am a bit. I can’t deny I take some pleasure in it.
But there are a reasonable amount of personal costs too. There is a reason why 1–5 others I’ve talked to have said they don’t want to criticise: they are concerned about their careers.
I more or less entirely criticise on the forum. Believe me, if I wanted to actually stir shit, I could do it a lot more effectively than shortform comments.
I’m relatively pro casual sex as a person, but I will say that EA isn’t about being a sex-positive community—it’s about effectively doing good. And if one gets in the way of the other, I know what I’m choosing (doing good).
I think there is a positive-sum compromise possible, but it seems worth acknowledging how I will trade off if it comes to it.
I want to be able to comment on an article and have it show as a little speech bubble to the side of the text. I’ve opted into experimental features but I still can’t.
I think you just normally quote a section of the article, clicking “Block quote”
Some people use hypothes.is, which in theory gives the same functionality on any web page, but we’re very few and only people who have installed it can see the comments or add new ones.
Some thoughts:
- Utilitarianism but being cautious around the weird/unilateral stuff is still good
- We shouldn’t be surprised that we didn’t figure out SBF was fraudulent quicker than billions of dollars of crypto money… and Michael Lewis
- Scandal prediction markets are the solution here and one day they will be normal. But not today. Don’t boo me, I’m right
- Everyone wants whistleblowing, no one wants the correctly incentivised decentralised form of whistleblowing
- Gotta say, I feel for the many random individual people who knew or interacted closely with SBF but weren’t at FTX who are gonna get caught up in this
- We were fundamentally unserious about avoiding reputational risk from crypto. I hope we are more serious about not dying from AI
- I like you all a lot
- I don’t mind taking the money of some retired non-EA oil baron, but I think not returning FTX’s money perhaps incentivises future pro-crime EAs. I would like a credible signal
- The community does not need democratised funding (though I’d happily test it at a small scale), but we aren’t getting enough whistleblowing, so we should work on that
- We deserve to be scrutinised and mocked; we messed up. We should own that
- X-risk is still extremely compelling
- I am uncertain how impactful my work is
- Our critics are usually very low signal but have a few key things of value to say. It is hard to listen closely enough to find those things without wasting loads of time, but missing them is bad too
- People knew SBF was a bully who broke promises. That that information didn’t flow to where it was needed, or was ignored, was a problem
- I think we shouldn’t say we want criticism, because we don’t. We didn’t want it about FTX and we don’t in any other places. We want very specific criticism. Everyone does, because the world is big and we have limited time. So how do we get the criticism that’s most useful to us?
- The community should seek to make the best funding decisions it can over time. I think that’s with orgs doing it and prediction markets to remove bad apples, but you can think what you want. But democratisation isn’t a goal in and of itself—good sustainable decision-making is. Perhaps there should be a jury of randomly chosen community members, perhaps we should have elections. I don’t know, but I do feel we haven’t been taking governance seriously enough
I remain confused about “utilitarianism, but use good judgement”. IMO, it’s amongst the more transparent motte-and-baileys I’ve seen. Here are two tweets from Eliezer that I see are regularly re-shared:
The rules say we must use consequentialism, but good people are deontologists, and virtue ethics is what actually works.
Go three-quarters of the way from deontology to utilitarianism and then stop. You are now in the right place. Stay there at least until you have become a god.
This describes Aristotelian Virtue Ethics—finding the golden mean between excess and deficiency. So are people here actually virtue ethicists who sometimes use math as a means of justification and explanation? Or do they continue to take utilitarianism to some of its weirder places, privately and publicly, but strategically seek shelter under other moral frameworks when criticized?
I’m finding it harder to take people who put “consequentialist” and “utilitarian” in their profiles and about-mes seriously. If people abandon their stated moral framework on big, important, and consequential questions, then either they’re deluding themselves about what their moral framework actually is, or they really will act out the weird conclusions—but are being manipulative and strategic by saying “trust us, we have checks and balances”.
And what happens when that double-checking comes back negative? And how much weight do you choose to give it? The answer seems to be rooted in matters of judgement and subjectivity. And if you’re doing it often enough, especially on questions of consequence, then that moral framework is better described as virtue ethics.
Out of curiosity, how would you say your process differs from a virtue ethicist trying to find the golden mean between excess and deficiency?
I notice that sometimes I want to post on something that’s on both the EA Forum and LessWrong. And ideally, clicking “see LessWrong comments” would just show them on the current forum page, and if I responded, it would calculate EA Forum karma for the forum and LessWrong karma for LessWrong.
When someone says of your organisation “I want you to do X” do not say “You are wrong to want X”
This rudely discourages them from giving you feedback in future. Instead, there are a number of options:
If you want their feedback “Why do you want X?” “How does a lack of X affect you?”
If you don’t want their feedback “Sorry, we’re not taking feedback on that right now” or “Doing X isn’t a priority for us”
If you think they fundamentally misunderstand something “Can I ask you a question relating to X?”
None of these options tell them they are wrong.
I do a lot of user testing. Sometimes a user tells me something I disagree with. But they are the user. They know what they want. If I disagree, it’s either because they aren’t actually a user I want to support, they misunderstand how hard something is, or they don’t know how to solve their own problems.
None of these are solved by telling them they are wrong.
Often I see people responding to feedback with correction. I often do it myself. I think it has the wrong incentives. Rather than trying to tell someone they are wrong, now I try to either react with curiosity or to explain that I’m not taking feedback right now. That’s about me rather than them.
I sense new stuff on the forum is probably overrated. Surely we should assume that most of the most valuable things for most people to read have already been written?
The difference between the criticism contest and OpenPhil’s cause prioritisation contest is pretty interesting. 60% I’m gonna think OpenPhil’s created more value in terms of changes in 10 years’ time.
Causes which are much more pressing under longtermism than other belief systems
Longtermist causes are:
Those which are a high priority for marginal resources, whether they are under other belief systems or not.
The fact that biorisk and AI risk are high priority without longtermism doesn’t make them not “longtermist causes”, just as it doesn’t make them not “causes that affect people alive today”.
An open question for me (for EA Israel? For EA?) is whether we can talk about economic-politics publicly in our group.
For example, can we discuss openly that “regulating prices is bad”. This is considered an open political debate in Israel, politicians keep wanting to regulate prices (and sometimes they do, and then all the obvious things happen)
I mean I’d like to chat about that, and maybe happy to on this shortform? But I wouldn’t write a post on it. I guess it doesn’t seem that neglected to me.
In Israel, it is controversial to suggest not regulating prices, or to suggest lowering import taxes, or similar things. I could say a lot about this, but my points are:
I remember I was really jealous of the U.S when Biden suggested some very expensive program (UBI? Some free-medical-care reform?), but he SHOWED where the money is supposed to come from, there was a chart!
I’ve decided I’m going to just edit the wiki to be like the wiki I want.
Currently the wiki feels meticulously referenced but lacking in detail. I’d much prefer it to have more synthesised content which is occasionally just someone’s opinion. If you dislike this approach, let me know.
I do think that many of the entries are rather superficial, because so far we’ve been prioritizing breadth over depth. You are welcome to try to make some of these entries more substantive. I can’t tell, in the abstract, if I agree with your approach to resolving the tradeoff between having more content and having a greater fraction of content reflect just someone’s opinion. Maybe you can try editing a few articles and see if it attracts any feedback, via comments or karma?
Why do you think the summary got more upvotes? I’m not upset, I like a summary too, but in my mind a question that anyone can submit answers to or upvote current answers on is much more useful. So I am confused. Can anyone suggest why?
Anyone can comment on a post and upvote comments so I don’t see why a question would be better in that regard.
Also the post contained a lot of information on potential megaprojects which is not only quite interesting and educational but also prompts discussion.
Can you think of any examples of other movements which have this? I have not heard of such for e.g. the environmentalist or libertarian movements. Large companies might have whistleblowing policies, but I’ve not heard of any which make use of an independent organization for complaint processing.
I’m sorry to hear this (and grateful that you’re reporting them). We have systems for flagging when a user’s DM pattern is suspicious, but it’s imperfect (I’m not sure if it’s too permissive right now).
In case it’s useful for you to have a better picture of what’s going on, I think you get more of the DM spam because you’re very high up in the user list.
“I don’t think drinking is bad, but we have a low-alcohol culture so the fact you host parties with alcohol is bad”
Often the easiest mark of bad behaviour is that it breaks a norm we’ve agreed on. Is it harmful in a specific case to shoplift? Depends on what was happening to the things you stole. But it seems easier just to appeal to our general norm that shoplifting is bad. On average it is harmful, and so even if it wasn’t in this specific case, being willing to shoplift is a bad sign. Even if you’re stealing meds to give to your gran, it may be good to have a general norm against this behaviour.
But if the norm is bad, that weakens norms in general. Lots of people in the UK speed in their cars. But this teaches many people, twice a day, that the laws aren’t actually laws. It encourages them to think that many government rules are stupid and needless as opposed to wise and reasonable.
But how broadly should this norm apply? 99% of cases, 95%? I don’t know.
But it’s clear to me that if a norm only applies in 50% of cases it’s a bad norm. It’s gonna leave everyone trusting the values of the community less, because half the time it will punish or reward people incorrectly.
That’s right, you should be able to mention users with @ and posts with #. However, it does seem like they’re both currently broken, likely because we recently updated our search software. Thanks for flagging this! We’ll look into it.
I strongly dislike the “further reading” sections of the forum wiki/forum tags.
They imply that the right way to know more about things is to read a load of articles. It seems clear to me that instead we should synthesise these points and then link them where relevant. Then if you wanted more context you could read the links.
The ‘Further reading’ sections are a time-cheap way of helping readers learn more about a topic, given our limited capacity to write extended entries on those topics.
1) Clubhouse is a new social media platform, but you need an invite to join
2) It allows chat in rooms, and networking
3) It seems some people could deliver value sooner by having a Clubhouse invite
4) People who are on Clubhouse have invites to give
5) If you think an invite would be valuable, or heck, you’d just like one, comment below, and then anyone with invites to give can see EAs who want them
6) I have some invites to give away
It is reasonable that 5-20% of the community are scared that their harmless sexual behaviour will become unacceptable and that they will be seen as bad/unsafe if they support it.
It’s fair that they are upset and see this as something that might hurt them and fear the outcome.
There are two main models I have for many of these discussions:
Rationalist EAs—like truth-seeking, think a set of discourse norms should be obeyed at all times
Progressive EAs—think that some discussions require much more energy from some than others and need to be handled differently/more carefully. Want an environment where they feel safe
I think it’s easy to see these groups as against one another, but I think that’s not true. There are positive sum improvements.
Women being sad matters. And yes there are tradeoffs here, but it’s really sad that the women in the time article and all the other women who have been sad are sad.
If we could have a community where everyone says “EA does romantic relationships a lot better than the outside world” that would be worth spending $10 − 100mn on purely in community building terms, let alone in just welfare of individual EAs.
We spend millions each year on EAGs + 80k. Imagine if everyone just was like “Yeah EA is just a great safe fun place”
It is pretty reasonable for 5-20% of the community to have a boundary about not being caught up in conversations about sex in houses they need to stay in in foreign countries. Or similarly bad conversations.
It’s reasonable they want to be sure this is taken really seriously, because they don’t want it to happen to them or their friends.
It’s complicated that this might lead to unintended consequences, but their desire seems very comprehensible.
It was very likely bad that Owen Cotton-Barratt upset a couple of women and then didn’t drastically change his behaviour, such that there were other instances.
That’s not to say other things weren’t bad. But this feels like something we can agree on.
The forum should hire mediators whose job it is to try and surface consensus and allow discussion to flow better. Many discussions involve a lot of different positions at once.
I think with SBF we farmed out our consciences. Like people who say “there need to be atrocities in war so that people can live in peace”, we thought “SBF can do the dodgy coin-trading stuff so that we can help, but let’s not think about it”. I don’t think we could have known about the fraud, but I do think there were plenty of warning signs we ignored because “SBF is the man in the arena”. No, either we should have been cogent and open about what he was doing, or we should have said we didn’t like it and begun pulling away reputationally.
I suggest we should want to quickly update to how we will feel later, i.e. for the FTX crisis we wanted to make peace as quickly as possible with the fact that FTX seemed much less valuable and that SBF had maybe done a large fraud.
(I use this example because I think it’s the least controversial)
I think accurate views of the world are the main thing I want. This has a grief and emotion component, but the guiding light is “is this how the world really is”.
If I have a criticism of the EA community in regard to this, it’s that it’s not clear to me that we penalise ourselves for holding views we later regard as wrong, or look at what led us there. I haven’t seen much discussion of bad early positions on FTX, and I’m not sure the community even agrees internally on Bostrom, the Time article or Nonlinear. But:
I would like us to find agreement on these
I would like thought on what led us to have early incorrect community mental states
I think this is very costly, so I mainly think about how I could make the process cheaper, but that’s something I think about.
I have also recently been thinking a lot about “how should we want to deal with a scandal”, but mostly in terms of how much time is being devoted to each of these scandals by a community which really advocates for using our minimal resources to do the most good. It makes me really disappointed.
<<i’m not sure the community even agrees internally on Bostrom, the Time article or Nonlinear>>
Forming a consensus additionally seems against the values of the EA community, particularly on quite complicated topics where there is a lot of information asymmetry and people often update (as they should) based on new evidence, which is again an important part of the EA community to me at least. So I think I disagree, and think it’s unrealistic for a community as large as EA “to find agreement on these”, and I’m not sure how this would help.
But I fully agree it would be great if we had a better process or strategy for how to deal with scandals
“mostly in terms of how much time is being devoted to each of these scandals by a community who really advocates for using our minimal resources to do the most good. It makes me really disappointed.”
I think we talk about scandals too much without making progress, but I’m not sure we spend too much time on them. Often it’s about trust. And communities need trust. If you are a person of colour without a thick skin, an outspoken autist, someone who runs an unconventional org, or a normal person new to the job market, how these events are handled affects how much you can trust the community if these events happen to you.
It seems to me that most people want confidence that bad things won’t happen to them. If they don’t have that, they will probably leave. And that has its own, large, costs.
Yes, sorry, I think we are actually saying the same thing here; I meant your former statement, not the latter. I’m not saying we shouldn’t investigate things, but the 300-plus comments on the 3-4 Nonlinear posts don’t seem an optimal use of time and could probably be dealt with more efficiently, plus the thousands of people who have probably read the posts and comments is a lot of time! Maybe these things shouldn’t be handled in forum posts but in a different format.
I fully agree that these things have to be dealt with better. My main concern about your point is the consensus idea, which I think is unrealistic in a community that tries to avoid groupthink, and on topics (FTX aside) where there doesn’t seem to be a clear right or wrong.
This also seems right to me. I feel like there is a lot of unproductive conflict in the comments on these kinds of posts, which could be somehow prevented and would also be more productive if the conflict instead occurred between a smaller number of representative EA forum members, or something like that.
A very random idea in that direction that won’t work for many reasons is some kind of “EA Forum jury”, where you get randomly chosen to be one of the users in the comment section of a contentious post, and then you fight it out until you reach some kind of consensus, or at least the discussion dies down.
I do think the most standard way people have handled this in various contexts is to have panels, or courts, or boards or some kind of other system where some small subset of chosen representatives have the job of deciding on some tricky subject matter. I do kind of wish there was some kind of court system in EA that could do this.
One challenge with a “drama jury” is that the people who are most motivated to be heavy participants aren’t necessarily the people you want driving the discussion. But I guess that’s equally true in open posts. The solution in classical Athens was to pay jurors a wage; IIRC, many jurors were semiretired old men who had a lot of bandwidth.
Potentially, you’d have a few nonvoting neutrals in the mix to help facilitate discussion. It’s easier to be in a facilitating frame of mind when you are not simultaneously being asked to vote on a verdict.
What would happen if people kept posting it outside of the “courtroom”?
Not sure whether this is what you were implying, but I wasn’t thinking of private courts. My current guess is that it is important for courts to be at least observable, so that people can build trust in them (observable in the sense of how modern courts are observable, i.e. anyone can show up to the courtroom, but you might not be allowed to record it).
I think John meant that non-participants might keep commenting on the situation while the trial was in progress, and then after the trial. That might weaken some of the gains from having a trial in the first place (e.g., the hope that people will accept the verdict and move on to more productive things).
You could “sequester” the jury by making them promise not to read the non-courtroom threads until the jury had delivered a verdict. You could also have a norm that disputants would not comment in other threads while trial was ongoing. Not having the disputants in the non-courtroom thread would probably slow its velocity down considerably. You could even hide the courtroom thread from non-participants until the trial was over. That’s not a complete answer, but would probably help some.
The bottleneck feels more social than technological.
Also, I feel like someone else needs to do investigations for it to make sense for me to build the courtroom, since it does seem bad for one person to do both.
If you have anonymous feedback I’m happy to hear it. In fact I welcome it.
I will note, however, that I’m not made of stone and don’t promise to be perfect. But I always appreciate more information.
Some behaviours I’ve changed recently:
I am more cautious about posting polls around sensitive topics where there is no way to express that the poll is misframed
I generally try to match the amount of text of the person I’m talking to, and resist an urge to keep adding additional replies
In formal settings I might have previously touched people on the upper arm or shoulders in conversation; a couple of people said they didn’t like that, so I do it less and ask before I do
If you have issues (or compliments), even ones you are sure I am aware of, I would appreciate hearing them. We are probably more alien than you imagine.
I do not upvote articles on here merely because they are about EA.
Personally I want to read articles that update me in a certain direction. Merely an article that’s gonna make me sad or be like “shrug accurate” is not an article I’m gonna upvote on here.
I quite strongly dislike “drama” around things, rather than just trying to figure them out. Much of the HLI “drama” seems to be reading various comments and sharing that there is disagreement rather than attempts to turn uncertainty into clarity.
My response to this is “what are we doing”? Why aren’t there more attempts to figure out what we should actually believe as a group here? I really don’t understand why there is much discussion but so little (to my mind) attempt at synthesis.
I don’t see a clear path forward to consensus here. The best I can see, which I have tried to nudge in my last two long posts on the main thread, is “where do we go from here given the range of opinions held?”
As I see it, the top allegation that has been levied is intentional research misconduct,[1] with lesser included allegations of reckless research misconduct, grossly negligent research (mis)conduct, and negligent research conduct. A less legal-metaphory way to put it is: the biggest questions are whether HLI had something on the scale in favor of SM, if so was it a finger or a fist on the scale, and if so did HLI know (or should it have known) that the body part was on the scale.
It’s unsurprising that most people don’t want to openly deliberate about misconduct allegations, especially not in front of the accusers and the accused. There’s a reason juries deliberate in secret in an attempt to reach consensus.
I think that hesitation to publicly deliberate is particularly pronounced for those who fall in the middle part of the continuum,[2] which unfortunately contributes to the “pretty serious misconduct” and “this is way overblown” positions being overrepresented in comments compared to where I think they truly fall among the Forum community. Moreover, most of us lack the technical background and experience to lead a deliberation process.
What procedures would you suggest to move toward consensus?[3]
If someone thinks HLI is guilty of deceptive conduct (or conduct that is so reckless to be hard to distinguish from intentional deception), they are likely going to feel less discomfort raking HLI over the coals (“because they deserve it” and because maintaining epistemic defense against that kind of conduct is particularly important). If someone thinks this whole thing is a nothingburger, saying so wouldn’t seem emotionally difficult.
Properly used, anonymous polling can reveal a consensus that exists (as long as there’s no ballot stuffing) . . . but isn’t nearly as useful in developing a consensus. If you attempt to iterate the questions, you’re likely to find that more and more of the voting pool will be partisans on one side of the dispute or the other, so subsequent rounds will reflect community consensus less and less.
It seems plausible to me that those involved in Nonlinear have received more social sanction than those involved in FTX, even though the latter was obviously more harmful to this community and the world.
What does “involved in” mean? The most potentially plausible version of this compares people peripherally involved in FTX (under a broad definition) to the main players in Nonlinear.
Have your EA conflicts on… THE FORUM!
In general, I think it’s much better to first attempt to have a community conflict internally before I have it externally. This doesn’t really apply to criminal behaviour or sexual abuse. I am centrally talking about disagreements, eg the Bostrom stuff, fallout around the FTX stuff, Nonlinear stuff, now this manifest stuff.
Why do I think this?
If I want to credibly signal I will listen and obey norms, it seems better to start with a small discourse escalation rather than a large one. Starting a community discussion on twitter is like jumping straight to a shooting war.
Many external locations (eg twitter, the press) have very skewed norms/incentives to the forum and so many parties can feel like they are the victim. I find when multiple parties feel they are weaker and victimised that is likely to cause escalation.
Many spaces have less affordance for editing comments, seeing who agrees with who, having a respected mutual party say “woah hold up there”
It is hard to say “I will abide by the community sentiment” if I have already started the discussion elsewhere in order to shame people. And if I don’t intend to abide by the community sentiment, why am I trying to manage a community conflict in the first place. I might as well just jump straight to shaming.
It is hard to say “I am open to changing my mind” if I have set up the conflict in a way that leads to shaming if the other person doesn’t change theirs. It’s like holding a gun to someone’s head and saying that this is just a friendly discussion.
I desire reconciliation. I have hurt people in this community and been hurt by them. In both case to the point of tears and sleepless night. But still I would prefer reconciliation and growth over a escalating conflict
Conflict is often negative sum, so lets try and have it be the least negative sum as possible.
Probably a good chunk of it is church norms, centred around 1 Corinthians 6[2]. I don’t really endorse this, but I think it’s good to be clear why I think thinks.
Personal examples:
Last year I didn’t like that Hanania was a main speaker at manifest (iirc) so I went to their discord and said so. I then made some votes. The median user agreed with me and so Hanania didn’t speak. I doubt you heard about this, because I did it on the manifold discord. I hardly tweeted about it or anything. This and the fact I said I wouldn’t created a safe space to have the discussion and I largely got what I wanted.
You might think this is a comment is directed at a specific person, but I bet you are wrong. I dislike this behaviour when it is done by at least 3 different parties that I can think of.
If any of you has a dispute with another, do you dare to take it before the ungodly for judgment instead of before the Lord’s people? 2 Or do you not know that the Lord’s people will judge the world? And if you are to judge the world, are you not competent to judge trivial cases? 3 Do you not know that we will judge angels? How much more the things of this life! 4 Therefore, if you have disputes about such matters, do you ask for a ruling from those whose way of life is scorned in the church? 5 I say this to shame you. Is it possible that there is nobody among you wise enough to judge a dispute between believers? 6 But instead, one brother takes another to court—and this in front of unbelievers!
7 The very fact that you have lawsuits among you means you have been completely defeated already. Why not rather be wronged? Why not rather be cheated?
This is also an argument for the forum’s existence generally, if many of the arguments would otherwise be had on Twitter.
For sure. When it comes to any internet-based discussion, to promote quality discourse, slowish long form >>>> rapid short form.
I agree with the caveat that certain kinds of more reasonable discussion can’t happen on the forum because the forum is where people are fighting.
For instance, because of the controversy I’ve been thinking a lot recently about antiracism: what would effective antiracism look like; what lessons can we take from civil rights and what do we have to contribute (cool ideas on how to leapfrog past or fix education gaps? discourse norms that can facilitate hard but productive discussions about racism? advocating for literal reparations?). I have deleted a shortform I was writing on this because I think ppl would not engage with it positively, and I suspect I am missing the point somehow. I suspect people actually just want to fight, and the point is to be angry.
On the meta level, I have been pretty frustrated (with both sides, though not equally) with the manner in which some people are arguing, the types of arguments they use, and the motivations they seem to have. I think in some ways it is better to complain about that off the forum. It’s worse for feedback, but that’s also a good thing because the cycle of righteous rage does not continue on the forum. And you get different perspectives.
(I wonder if a crux here is that you have a lot of twitter followers and I don’t. If you tweet, you are speaking to an audience; if I tweet, I am speaking to weird internet friends)
So I sort of agree, though depending on the topic I think it could quickly get a lot of eyes on it. I would prefer to discuss most things that are controversial/personal, not on twitter.
I feel like I want 80k to do more cause prioritisation if they are gonna direct so many people. Seems like 5 years ago they had their whole ranking thing which was easy to check. Now I am less confident in the quality of work that is directing lots of people in a certain direction.
Idk, many of the people they are directing would just do something kinda random which an 80k rec easily beats. I’d guess the number of people for whom 80k makes their plans worse in an absolute sense is kind of low and those people are likely to course correct.
Otoh, I do think people/orgs in general should consider doing more strategy/cause prio research, and if 80k were like “we want to triple the size of our research team to work out the ideal marginal talent allocation across longtermist interventions” that seems extremely exciting to me. But I don’t think 80k are currently being irresponsible (not that you explicitly said that, for some reason I got a bit of that vibe from your post).
80k could be much better than nothing and yet still missing out on a lot of potential impact, so I think your first paragraph doesn’t refute the point.
I agree with this, and have another tangential issue, which might be part of why cause prioritisation seems unclear: their website seems confusing and overloaded to me.
Compare Giving What We Can’s page, which has good branding and simple language. IMO 80,000 Hours’ page has too much text and too much going on on the front page. Bring both websites up on your phone and judge for yourself.
These are the front page of EA for many people, so they are pretty important. These websites aren’t really for most of us; they are for fresh people, so they need to be punchy, straightforward and attractive. After clicking through a couple of pages, things can get heavier.
My understanding is that 80k have done a bunch of A/B testing which suggested their current design outcompetes ~most others (presumably in terms of click-throughs / amount of time users spend on key pages).
You might not like it, but this is what peak performance looks like.
Love this response, peak performance ha.
I hope I’m wrong and this is the deal, that would be an excellent approach. Would be interesting to see what the other designs they tested were, but obviously I won’t.
I know of at least 1 NDA from an EA org silencing someone from discussing bad behaviour that happened at that org. Should EA orgs be in the practice of making people sign such NDAs?
I suggest no.
I think I want a Chesterton’s TAP for all questions like this that says “how normal are these and why” whenever we think about a governance plan.
What’s a “Chesterton’s TAP”?
Not a generally used phrase, just my attempting to point to “a TAP for asking Chesterton’s fence-style questions”
What’s a TAP? I’m still not really sure what you’re saying.
“Trigger action pattern”, a technique for adopting habits proposed by CFAR <https://www.lesswrong.com/posts/wJutA2czyFg6HbYoW/what-are-trigger-action-plans-taps>.
Thanks!
“Chesterton’s TAP” is the most rationalist buzzword thing I’ve ever heard LOL, but I am putting together that what Chana said is that she’d like there to be some way for people to automatically notice (the trigger action pattern) when they might be adopting an abnormal/atypical governance plan and then reconsider whether the “normal” governance plan may be that way for a good reason even if we don’t immediately know what that reason is (the Chesterton’s fence)?
Oh, sorry! TAPs are a CFAR / psychology technique. https://www.lesswrong.com/posts/wJutA2czyFg6HbYoW/what-are-trigger-action-plans-taps
I am unsure what you mean? As in, because other orgs do this it’s probably normal?
I have no idea, but would like to! With things like “organizational structure” and “nonprofit governance”, I really want to understand the reference class (even if everyone in the reference class does stupid bad things and we want to do something different).
Strongly agree that moving forward we should steer away from such organizational structures; much better that something bad is aired publicly before it has a chance to become malignant
Some things I don’t think I’ve seen around FTX, whose absence is probably due to the investigation, but which still seem worth noting. Please correct me if these things have been said.
I haven’t seen anyone at the FTXFF acknowledge fault for negligence in not noticing that a defunct phone company (North Dimension) was paying out their grants.
This isn’t hugely judgemental from me, I think I’d have made this mistake too, but I would like it acknowledged at some point
Since writing this it’s been pointed out that there were grants paid from FTX and Alameda accounts also. Ooof.
I haven’t seen anyone at CEA acknowledge that they ran an investigation in 2019-2020 on someone who would turn out to be one of the largest fraudsters in the world and failed to turn up anything despite seemingly a number of flags.
I remain confused
As I’ve written elsewhere I haven’t seen engagement on this point, which I find relatively credible, from one of the Time articles:
My comment on the above “While other things may have been bigger errors, this once seems most sort of “out of character” or “bad normsy”. And I know Naia well enough that this moves me a lot, even though it seems so out of character for [will] (maybe 30% that this is a broadly accurate account). This causes me consternation, I don’t understand and I think if this happened it was really bad and behaviour like it should not happen from any powerful EAs (or any EAs frankly).”
Extremely likely that the lawyers have urged relevant people to remain quiet on the first two points and probably the third as well.
Yeah seems right, but uh still seems worth saying.
Did you mean for the second paragraph of the quoted section to be in the quote section?
I can’t remember but you’re right that it’s unclear.
I haven’t read too much into this and am probably missing something.
Why do you think FTXFF grants were being paid via North Dimension? The brief googling I did only mentioned North Dimension in the context of FTX customers sending funds to FTX (specifically this SEC complaint). I could easily have missed something.
Grants were being made to grantees out of North Dimension’s account—at least one grant recipient confirmed receiving one on the Forum (would have to search for that). The trustee’s second interim report shows that FTXFF grants were being paid out of similar accounts that received customer funds.
It’s unclear to me whether FTX Philanthropy (the actual 501c3) ever had any meaningful assets to its name, or whether (m)any of the grants even flowed through accounts that it had ownership of.
Seems pretty bad, no?
Certainly very concerning. Two possible mitigations though:
Any finding of negligence would only apply to those with duties or oversight responsibilities relating to operations. It’s not every employee or volunteer’s responsibility to be a compliance detective for the entire organization.
It’s plausible that people made some due diligence efforts that were unsuccessful because they were fed false information and/or relied on corrupt experts (like “Attorney-1” in the second interim trustee report). E.g., if they were told by Legal that this had been signed off on and that it was necessary for tax reasons, it’s hard to criticize a non-lawyer too much for accepting that. Or more simply, they could have been told that all grants were made out of various internal accounts containing only corporate monies (again, with some tax-related justification that donating non-US profits through a US charity would be disadvantageous).
Ah, thank you!
I searched for that comment. I think this is probably the one you’re referencing.
I know of at least 1 other case.
Feels like we’ve had about 3 months since the FTX collapse with no kind of leadership comment. Uh that feels bad. I mean I’m all for “give cold takes” but how long are we talking.
Do you think this is not due to “sound legal advice”?
I am pretty sure there is no strong legal reason for people to not talk at this point. Not like totally confident but I do feel like I’ve talked to some people with legal expertise and they thought it would probably be fine to talk, in addition to my already bullish model.
People voting without explaining is good.
I often see people thinking that this is bragading or something, when actually most people just don’t want to write a response; they either like or dislike something.
If it were up to me I might suggest an anonymous “I don’t know” button and an anonymous “this is poorly framed” button.
When I used to run a lot of Facebook polls, it was overwhelmingly men who wrote answers, but if there were options to vote, the gender split was much more even. My hypothesis was that a certain kind of argumentative (usually male) person tended to enjoy writing long responses more. And so blocking lower-effort/less antagonistic/more anonymous responses meant I heard more from this kind of person.
I don’t know if that is true on the forum, but I would guess that the higher effort it is to respond the more selective the responses become in some direction. I guess I’d ask if you think that the people spending the most effort are likely to be the most informed. In my experience, they aren’t.
More broadly, I think it would be good if the forum optionally took some information about users (location, income, gender, cause area, etc.) and, on answers with more than say 10 votes, displayed some kind of breakdown. I imagine it would sometimes be interesting to find out how exactly agreement and disagreement cut on different issues. Edit: More broadly, I think it would be good if the forum tried to find clusters and patterns in votes, perhaps allowing users to self-nominate categories and then showing how categories split once there were enough votes. I’m a little wary of the forum deciding what categories are important and embedding that, but I’d like to see if an opinion was mainly liked by longtermists, women, etc.
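As a very rough sketch of what such a breakdown could mean in practice (none of this is an actual forum feature; the export format, column names and threshold below are made up for illustration):

```python
import pandas as pd

# Hypothetical export: one row per vote, with an optional self-reported attribute.
# Column names and values are invented for illustration.
votes = pd.DataFrame({
    "comment_id": [101] * 6 + [102] * 12,
    "vote": [1, -1, 1, 1, 1, -1] + [1] * 8 + [-1] * 4,   # 1 = agree, -1 = disagree
    "cause_area": (["longtermism", "global health", "longtermism"] * 2
                   + ["animal welfare", "longtermism", "global health"] * 4),
})

MIN_VOTES = 10  # only show a breakdown past the "more than say 10 votes" threshold

for comment_id, group in votes.groupby("comment_id"):
    if len(group) <= MIN_VOTES:
        continue  # too few votes; skip to avoid de-anonymising small groups
    # Per-category vote count and average agreement for this comment
    breakdown = group.groupby("cause_area")["vote"].agg(["count", "mean"])
    print(f"Comment {comment_id}:\n{breakdown}\n")
```

The same idea would extend to self-nominated categories: group by whatever attribute users opt into, and only display the split once the vote count is large enough to preserve anonymity.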
Also I think it’s good to be able to anonymously express unpopular views. For most of human history it’s been unpopular to express support for LGBT+, the rights of women, animals. But if anonymous systems had existed we might have seen more support for such views. Likewise, pushing back against powerful people is easier if you can do it anonymously.
It seems like we could use the new reactions for some of this. At the moment they’re all positive but there could be some negative ones. And we’d want to be able to put the reactions on top level posts (which seems good anyway).
I think that it is generally fine to vote without explanations, but it would be nice to know why people are disagreeing or disliking something. Two scenarios come to mind:
If I write a comment that doesn’t make any claim/argument/proposal and it gets downvotes, I’m unclear what those downvotes mean.
If I make a post with a claim/argument/proposal and it gets downvoted without any comments, it isn’t clear what aspect of the post people have a problem with.
I remember writing in a comment several months ago about how I think that theft from an individual isn’t justified even if many people benefit from it, and multiple people disagreed without continuing the conversation. So I don’t know why they disagreed, or what part of the argument they thought was wrong. Maybe I made a simple mistake, but nobody was willing to point it out.
I also think that you raise good points regarding demographics and the willingness of different groups of people to voice their perspectives.
I agree it would be nice to know, but in every case someone has decided they do want to vote but don’t want to comment. Sometimes I try and cajole an answer, but ultimately I’m glad they gave me any information at all.
What is bragading?
Think he was referring to “brigading”, referred to in this thread
Generally, it is voting more out of allegiance or affinity to a particular person, rather than an assessment of the quality of the post/comment.
If anyone who disagrees with me on the manifest stuff considers themselves inside the EA movement, I’d like to have some discussions with a focus on consensus-building, i.e. we chat in DMs and then both report some statements we agreed on and some we specifically disagreed on.
Edited:
@Joseph Lemien asked for positions I hold:
The EA forum should not seek to have opinions on non-EA events. I don’t mean individual EAs shouldn’t have opinions; I mean that as a group we shouldn’t seek to judge individual events. I don’t think we’re very good at it.
I don’t like Hanania’s behaviour either and am a little wary of systems where norm breaking behaviour gives extra power, such as being endlessly edgy. But I will take those complaints to the manifold community internally.
EAGs are welcome to invite or disinvite whoever CEA likes. Maybe one day I’ll complain. But do I want EAGs to invite a load of manifest’s edgiest speakers? Not particularly.
It is fine for there to be spaces with discussion that I find ugly. If people want to go to these events, that’s up to them.
I dislike having unresolved conflicts which ossify into an inability to talk about things. Someone once told me that the couples who stay together are either great at settling disputes or almost never fight. We fight a bit and we aren’t great at settling it. I guess I’d like us to fight less (say we aren’t interested in conflicty posts) or to get better at making up (come to consensus afterwards, grow and change)
Only 1-6% of attendees at manifest had issues along eugenicsy lines in the feedback forms. I don’t think this is worth a huge change.
I would imagine it’s worth $10mns to avoid EA becoming a space full of people who fearmonger based on the races, genders or sexualities of others. I don’t think that’s very likely.
To me, current systems for taxing discussion of eugenics seem fine. There is the odd post that gets downvoted. If it were good and convincing it would be upvoted; so far it hasn’t been. Seems fine. I am not scared of bad arguments [1]
Black people are probably not avoiding Manifest because of these speakers, since that theory doesn’t seem to hold up for tech, rationalism[2], EA or several other communities.
I don’t know what people want when they point at “distancing EA from rationalism”
Manifest was fun for me, and it and several other events I went to in the bay felt like I let out a breath that I never knew I was holding. I am pretty careful what I say about you all sometimes and it’s tiring. I guess that’s true for some of you too. It was nice (and surprisingly un-edgy for me) to be in a space where I didn’t have to worry about offending people a lot. I enjoy having spaces where I feel safer.
There is a tradeoff between feeling safe and expression. I would have more time for some proposals if people acknowledged the costs they are putting on others. Even small costs, even costs I would willingly pay are still costs and to have that be unmentionable feels gaslighty.
There are some incentives in this community to be upset about things and to be blunt in response. Both of these things seem bad. I’d prefer incentives towards working together to figure out how the world is and implement the most effective morally agreeable changes per unit resource. This requires some truthseeking, but probably not the maximal amount, and some kindness, but probably not the maximal amount.
Unless there was some kind of flooding of the forum to boost posts repeatedly.
LessWrong doesn’t have any significant discussion of eugenics either. As I (weakly) understand it they kicked many posters off who wanted to talk about such things.
Nathan, could you summarize/clarify for us readers what your views are? (or link to whatever comment or document has those views?) I suspect that I agree with you on a majority of aspects and disagree on a minority, but I’m not clear on what your views are.
I’d be interested to see some sort of informal and exploratory ‘working group’ on inclusion-type stuff within EA, and have a small group conversation once a month or so, but I’m not sure if there are many (any?) people other than me that would be interested in having discussions and trying to figure out some actions/solutions/improvements.[1]
^ We had something like this for talent pipelines and hiring (it was High Impact Talent Ecosystem, and it was somehow connected to or organized by SuccessIf, but I’m not clear on exactly what the relationship was), but after a few months the organizer stopped and I’m not clear on why. In fact, I’m vaguely considering picking up the baton and starting some kind of a monthly discussion group about talent pipelines, coaching/developing talent, etc.
Oooh that’s interesting. I’d be interested to hear what the conclusions are.
One limitation here: you have a view about Manifest. Your interlocutor would have a different view. But how do we know if those views are actually representative of major groupings?
My hunch is that, if equipped with a mind probe, we would find at least two major axes with several meaningfully different viewpoints on each axis. Overall, I’d predict that I would find at least four sizable clusters, probably five to seven.
So I ran a poll with 100-ish respondents, and if you want to run the k-means analysis you can find those clusters yourself.
The anonymous data is downloadable here.
https://viewpoints.xyz/polls/ea-and-manifest/results
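For anyone who wants to try, here is a minimal sketch of the kind of k-means pass I mean. I haven’t checked the actual export format, so the filename and coding below are assumptions (one row per respondent, one numeric column per statement, agree/disagree/skip coded as 1/-1/blank):

```python
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical filename and layout; adjust to whatever the real export looks like.
responses = pd.read_csv("ea-and-manifest-results.csv")
X = responses.select_dtypes("number").fillna(0).to_numpy()  # treat skips as 0

# Try a few cluster counts and eyeball which grouping looks meaningful.
for k in range(2, 6):
    model = KMeans(n_clusters=k, n_init=10, random_state=0)
    labels = model.fit_predict(X)
    sizes = pd.Series(labels).value_counts().sort_index()
    print(f"k={k}: cluster sizes {sizes.tolist()}")
```

The interesting part is then looking at which statements most separate the clusters, which you can read off from the cluster centroids.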
Beyond that, yes you are likely right, but I don’t know how to have that discussion better. I tried using polls and upvoted quotes as a springboard in this post (Truth-seeking vs Influence-seeking—a narrower discussion) but people didn’t really bite there.
Suggestions welcome.
It is kind of exhausting to keep trying to find ways to get better samples of the discourse, without a sense that people will eventually go “oh yeah this convinces me”. If I were more confident I would have more energy for it.
I don’t think those were most of the questions I was looking for, though. This isn’t a criticism: running the poll early risks missing important cruxes and fault lines that haven’t been found yet; running it late means that much of the discussion has already happened.
There are also tradeoffs between viewpoints.xyz being accessible (=better sampling) and the data being rich enough. Limitation to short answer stems with a binary response (plus an ambiguous “skip”) lends itself to identifying two major “camps” more easily than clusters within those camps. In general, expanding to five-point Likert scales would help, as would some sort of branching.
For example, I’d want to know—conditional on “Manifest did wrong here” / “the platforming was inappropriate”—what factors were more or less important to the respondent’s judgment. On a 1-5 scale, how important do you find [your view that the organizers did not distance themselves from the problematic viewpoints / the fit between the problematic viewpoints and a conference for the forecasting community / an absence of evidence that special guests with far-left or at least mainstream viewpoints on the topic were solicited / whatever]. And: how much would the following facts or considerations, if true, change your response to a hypothetical situation like the Manifest conference? Again, you can’t get how much on a binary response.
Maybe all that points to polling being more of a post-dialogue event, and accepting that we would choose discussants based on past history & early reactions. For example, I would have moderately high confidence that user X would represent a stance close to a particular pole on most issues, while I would represent a stance that codes as “~ moderately progressive by EA Forum standards.”
Often it feels like I can never please people on this forum. I think the poll is significantly better than no poll.
Yeah, I agree with that! I don’t find it inconsistent with the idea that the reasonable trade-offs you made between various characteristics in the data-collection process make the data you got not a good match for the purposes I would like data for. They are good data for people interested in the answers to certain other questions. No one can build a (practical) poll for all possible use cases, just as no one can build a (reasonably priced) car that is both very energy-efficient and has major towing/hauling chops.
As useful as viewpoints.xyz is, I will mention that for maybe 50% or 60% of the questions, my reaction was “it depends.” I suppose you can’t really get around that unless the person creating the questions spends much more time carefully crafting them (which sort of defeats the purpose of a quick-and-dirty poll), or unless you do interviews (which are of course much more costly). I do think there is value in the quick-and-dirty MVP version, but its usefulness has a pretty noticeable upper bound.
Sam Harris takes Giving What We Can pledge for himself and for his meditation company “Waking Up”
Harris references MacAskill and Ord as having been central to his thinking and talks about effective altruism and existential risk. He publicly pledges 10% of his own income and 10% of the profit from Waking Up. He will also create a series of lessons on his meditation and education app around altruism and effectiveness.
Harris has 1.4M twitter followers and is a famed humanist and New Atheist. The Waking Up app has over 500k downloads on Android, so I guess over 1 million overall.
https://dynamic.wakingup.com/course/D8D148
I like letting personal thoughts be up or downvoted, so I’ve put them in the comments.
Harris is a marmite figure—in my experience people love him or hate him.
It is good that he has done this.
Newswise, it seems to me it is more likely to impact the behavior of his listeners, who are likely to be well-disposed to him. This is a significant but currently low-profile announcement, as the courses on his app will be too.
I don’t think I’d go spreading this around more generally; many don’t like Harris, and for those who don’t like him, it could be easy to see EA as more of the same (callous, superior progressivism).
In the low probability (5%?) event that EA gains traction in that space of the web (generally called the Intellectual Dark Web; don’t blame me, I don’t make the rules), I would urge caution for EA speakers who might be pulled into polarising discussion which would leave some groups feeling EA ideas are “not for them”.
My guess is people who like Sam Harris are disproportionately likely to be potentially interested in EA.
This seems quite likely given EA Survey data where, amongst people who indicated they first heard of EA from a Podcast and indicated which podcast, Sam Harris’ strongly dominated all other podcasts.
More speculatively, we might try to compare these numbers to people hearing about EA from other categories. For example, by any measure, the number of people in the EA Survey who first heard about EA from Sam Harris’ podcast specifically is several times the number who heard about EA from Vox’s Future Perfect. As a lower bound, 4x more people specifically mentioned Sam Harris in their comment than selected Future Perfect, but this is probably dramatically undercounting Harris, since not everyone who selected Podcast wrote a comment that could be identified with a specific podcast. Unfortunately, I don’t know the relative audience size of Future Perfect posts vs Sam Harris’ EA podcasts specifically, but that could be used to give a rough sense of how well the different audiences respond.
Notably, Harris has interviewed several figures associated with EA; Ferriss only did MacAskill, while Harris has had MacAskill, Ord, Yudkowsky, and perhaps others.
This is true, although for whatever reason the responses to the podcast question seemed very heavily dominated by references to MacAskill.
This is the graph from our original post, showing every commonly mentioned category, not just the host (categories are not mutually exclusive). I’m not sure what explains why MacAskill really heavily dominated the Podcast category, while Singer heavily dominated the TED Talk category.
The address (in the link) is humbling and shows someone making a positive change for good reasons. He is clear and coherent.
Good on him.
An alternate stance on moderation (from @Habryka.)
This is from this comment responding to this post about there being too many bans on LessWrong. Note how LessWrong is less moderated than here in that (I guess) it responds to individual posts less often, but more moderated in that (I guess) it rate-limits people more, without giving reasons.
I found it thought provoking. I’d recommend reading it.
This is a pretty opposite approach to the EA forum which favours bans.
I sense this is quite different to the EA forum too. I can’t imagine a mod here saying “I don’t pay much attention to whether the user in question is ‘genuinely trying’”. I find this honesty pretty stark. Feels like a thing moderators aren’t allowed to say: “We don’t like the quality of your comments and we don’t think you can improve”.
Again this is very blunt but I’m not sure it’s wrong.
It feels cringe to read that basically if I don’t get the Sequences, LessWrong might rate-limit me. But it is good to be open about it. I don’t think the EA forum’s core philosophy is as easily expressed.
If you remove ones for site-integrity reasons (spamming DMs, ban evasion, vote manipulation), bans are fairly uncommon. In contrast, it sounds like LW does do some bans of early-stage users (cf. the disclaimer on this list), which could be cutting off users with a high risk of problematic behavior before it fully blossoms. Reading further, it seems like the stuff that triggers a rate limit at LW usually triggers no action, private counseling, or downvoting here.
As for more general moderation philosophy, I think the EA Forum has an unusual relationship to the broader EA community that makes the moderation approach outlined above a significantly worse fit for the Forum than for LW. As a practical matter, the Forum is the ~semi-official forum for the effective altruism movement. Organizations post official announcements here as a primary means of publishing them, but rarely on (say) the effectivealtruism subreddit. Posting certain content here is seen as a way of whistleblowing to the broader community as a whole. Major decisionmakers are known to read and even participate in the Forum.
In contrast (although I am not an LW user or a member of the broader rationality community), it seems to me that the LW forum doesn’t have this particular relationship to a real-world community. One could say that the LW forum is the official online instantiation of the LessWrong community (which is not limited to being an online community, but that’s a major part of it). In that case, we have something somewhat like the (made-up) Roman Catholic Forum (RCF) that is moderated by designees of the Pope. Since the Pope is the authoritative source on what makes something legitimately Roman Catholic, it’s appropriate for his designees to employ a heavier hand in deciding what posts and posters are in or out of bounds at the RCF. But CEA/EVF have—rightfully—mostly disowned any idea that they (or any other specific entity) decide what is or isn’t a valid or correct way to practice effective altruism.
One could also say that the LW forum is an online instantiation of the broader rationality community. That would be somewhat akin to John and Jane’s (made up) Baptist Forum (JJBF) that is moderated by John and Jane. One of the core tenets of Baptist polity is that there are no centralized, authoritative arbiters of faith and practice. So JJBF is just one of many places that Baptists and their critics can go to discuss Baptist topics. It’s appropriate for John and Jane to to employ a heavier hand in deciding what posts and posters are in or out of bounds at the JJBF because there are plenty of other, similar places for them to go. JJBF isn’t anything special. But as noted above, that isn’t really true of the EA Forum because of its ~semi-official status in a real-world social movement.
It’s ironic that—in my mind—either a broader or narrower conception of what LW is would justify tighter content-based moderation practices, while those are harder to justify in the in-between place that the EA Forum occupies. I think the mods here do a good job handling this awkward place for the most part by enforcing viewpoint-neutral rules like civility and letting the community manage most things through the semi-democratic karma method (although I would be somewhat more willing to remove certain content than they are).
This also roughly matches my impression. I do think I would prefer the EA community to either go towards more centralized governance or less centralized governance in the relevant way, but I agree that given how things are, the EA Forum team has less leeway with moderation than the LW team.
Wait, it seems like a higher proportion of EA forum moderations are bans, but LW does more moderation overall and more of it is rate limits? Is that not right?
My guess is LW both bans and rate-limits more.
Apart from choosing who can attend their conferences (which are the de facto place that many community members meet), writing the intro to EA, managing the effective altruism website and offering criticism of specific members’ behaviour.
It seems like they are the de facto people who decide what is or isn’t a valid way to practice effective altruism. If anything more so than the LessWrong team (or maybe rationalists are just inherently unmanageable).
I agree on the ironic point though. I think you might assume that the EA forum would moderate more than LW, but that doesn’t seem to be the case.
I want to throw in a bit of my philosophy here.
Status note: This comment is written by me and reflects my views. I ran it past the other moderators, but they might have major disagreements with it.
I agree with a lot of Jason’s view here. The EA community is indeed much bigger than the EA Forum, and the Forum would serve its role as an online locus much less well if we used moderation action to police the epistemic practices of its participants.
I don’t actually think this is that bad. I think it is a strength of the EA community that it is large enough and has sufficiently many worldviews that any central discussion space is going to be a bit of a mishmash of epistemologies.[1]
Some corresponding ways this viewpoint causes me to be reluctant to apply Habryka’s philosophy:[2]
Something like a judicial process is much more important to me. We try much harder than my read of LessWrong to apply rules consistently. We have the Forum Norms doc and our public history of cases forms something much closer to a legal code + case law than LW has. Obviously we’re far away from what would meet a judicial standard, but I view much of my work through that lens. Also notable is that all nontrivial moderation decisions get one or two moderators to second the proposal.
Related both to the epistemic diversity, and the above, I am much more reluctant to rely on my personal judgement about whether someone is a positive contributor to the discussion. I still do have those opinions, but am much more likely to use my power as a regular user to karma-vote on the content.
Some points of agreement:
Agreed. We are much more likely to make judgement calls in cases of new users. And much less likely to invest time in explaining the decision. We are still much less likely to ban new users than LessWrong. (Which, to be clear, I don’t think would have been tenable on LessWrong when they instituted their current policies, which was after the launch of GPT-4 and a giant influx of low quality content.)
Most of the work I do as a moderator is reading reports and recommending no official action. I have the internal experience of mostly fighting others to keep the Forum an open platform. Obviously that is a compatible experience with overmoderating the Forum into an echo chamber, but I will at least bring this up as a strong point of philosophical agreement.
Final points:
I do think we could potentially give more "near-ban" rate limits, such as the 1 comment/3 days. The main benefit of this I see is allowing the user to write content disagreeing with their ban.
Controversial point! Maybe if everyone adopted my own epistemic practices the community would be better off. It would certainly gain in the ability to communicate smoothly with itself, and would probably spend less effort pulling in opposite directions as a result, but I think the size constraints and/or deference to authority that would be required would not be worth it.
Note that Habryka has been a huge influence on me. These disagreements are what remains after his large influence on me.
I think the banned individual should almost always get at least one final statement to disagree with the ban after its pronouncement. Even the Romulans allowed (will allow?) that. Absent unusual circumstances, I think they—and not the mods—should get the last word, so I would also allow a single reply if the mods responded to the final statement.
More generally, I'd be interested in ~"civility probation," under which a problematic poster could be placed for ~three months as an option they could choose as an alternative to a 2-4 week outright ban. Under civility probation, any "probation officer" (a trusted non-mod user) would be empowered to remove content too close to the civility line and optionally temp-ban the user for a cooling-off period of 48 hours. The theory of impact comes from the criminology literature, which tells us that speed and certainty of sanction are more effective than severity. If the mods later determined after full deliberation that the removed comment actually violated the rules in a way that crossed the action threshold, then they could activate the withheld 2-4 week ban for the first offense and/or impose a new suspension for the new one.
We are seeing more of this in the criminal system—swift but moderate “intermediate sanctions” for things like failing a drug test, as opposed to doing little about probation violations until things reach a certain threshold and then going to the judge to revoke probation and send the offender away for at least several months. As far as due process, the theory is that the offender received their due process (consideration by a judge, right to presumption of innocence overcome only by proof beyond a reasonable doubt) in the proceedings that led to the imposition of probation in the first place.
“will allow?”
very good.
Yeah seems fair.
How are we going to deal emotionally with the first big newspaper attack against EA?
EA is pretty powerful in terms of impact and funding.
It seems only a matter of time before there is a really nasty article written about the community or a key figure.
Last year the NYT wrote a hit piece on Scott Alexander and while it was cool that he defended himself, I think he and the rationalist community overreacted and looked bad.
I would like us to avoid this.
If someone writes a hit piece about the community, Givewell, Will MacAskill etc, how are we going to avoid a kneejerk reaction that makes everything worse?
I suggest if and when this happens:
individuals largely don’t respond publicly unless they are very confident they can do so in a way that leads to deescalation.
articles exist to get clicks. It's worth someone (not necessarily me or you) responding to an article in the NYT, but if, say, a niche commentator goes after someone, fewer people will hear about it if we let it go.
let the comms professionals deal with it. All EA orgs and big players have comms professionals. They can defend themselves.
if we must respond (we often needn’t) we should adopt a stance of grace, curiosity and humility. Why do they think these things are true? What would convince us?
Personally I hate being attacked and am liable to feel defensive and respond badly. I assume you are no different. I’d like to think about this so that if and when it happens we can avoid embarrassing ourselves and the things we care about.
Yeah, I think the community response to the NYT piece was counterproductive, and I’ve also been dismayed at how much people in the community feel the need to respond to smaller hit pieces, effectively signal boosting them, instead of just ignoring them. I generally think people shouldn’t engage with public attacks unless they have training in comms (and even then, sometimes the best response is just ignoring).
We’ve had multiple big newspaper attacks now. How’d we do compared to your expectations?
I think we did better externally than I expected, but I didn't really write enough here about the internal side to judge.
Suggestion.
Debate weeks every other week and we vote on what the topic is.
I think if the forum had a defined topic (especially one defined in advance), I would be more motivated to read a number of posts on that topic.
One of the benefits of the culture war posts is that we are all thinking about the same thing. If we did that deliberately, perhaps with dialogues from experts, we would get the same benefit on a useful topic.
Every other week feels exhausting, at least if the voting went in a certain direction.
I would pitch for every 2 months, but I like the sentiment of doing it a bit more often.
A crux for me at the moment is whether we can shape debate weeks in a way which leads to deep rather than shallow engagement. If we were to run debate weeks more often, I’d (currently) want to see them causing people to change their mind, have useful conversations, etc… It’s something I’ll be looking closely at when we do a post-mortem on this debate week experiment.
Also, every other week seems prima facie a bit burdensome for un-interested users.
Additionally, I want top-down content to only be a part of the Forum. I wouldn’t want to over-shepherd discussion and end up with less wide-ranging and good quality posts.
Happy to explore other ways to integrate polls etc if people like them and they lead to good discussions though.
Hi Nathan! I like suggestions and would like to see more suggestions. But I don’t know what the theory of change is for the forum, so I find it hard to look at your suggestion and see if it maps onto the theory of change.
Re this: “One of the benefits of the culture war posts is that we are all thinking about the same thing.”
I’d be surprised if 5% of EAs spent more than 5 minutes thinking about this topic and 20% of forum readers spent more than 5 minutes thinking about it. I’d be surprised if there were more than 100 unique commenters on posts related to that topic. Why does this matter? Well, prioritising a minority of subject-matter interested people over the remaining majority could be a good way to shrink your audience.
Why is shrinking the audience bad? If this forum focused more on EA topics and some people left, I am not sure that would be bad. I guess it would be slightly good in expectation.
And to be clear, I mean if we focused on questions like "are AIs deserving of moral value?" or "what % of money should be spent on animal welfare?"
I agree that there’s a lot of advantage of occasionally bringing a critical mass of attention to certain topics where this moves the community’s understanding forward vs. just hoping we end up naturally having the most important conversations.
Weird idea: What if some forum members were chosen as “jurors”, and their job is to read everything written during the debate week, possibly ask questions, and try to come to a conclusion?
I’m not that interested in AI welfare myself, but I might become interested if such “jurors” who recorded their opinion before and after made a big update in favor of paying attention to it.
To keep the jury relatively neutral, I would offer people the chance to sign up to “be a juror during the first week of August”, before the topic for the first week of August is actually known.
The front page agree disagree thing is soo coool. Great work forum team.
Thanks Nathan! People seem to like it so we might use it again in the future. If you or anyone else has feedback that might improve the next iteration of it, please let us know! You can comment here or just dm.
I think it’s neat!
But I think there’s work to do on the display of the aggregate.
I imagine there should probably be a table somewhere at least (a list of each person and what they say).
Above that, the aggregate could be shown as a distribution.
There must be some way to just not have the icons overlap with each other like this. Like, use a second dimension, just to list them. Maybe use a wheat plot? I think strip plots and swarm plots could also be options.
I’m excited that we exceeded our goals enough to have the issue :)
I would personally go for a beeswarm plot.
But even just adding some random y and some transparency seems to improve things
document.querySelectorAll('.ForumEventPoll-userVote').forEach(e => e.style.top = `${Math.random()*100-50}px`);
document.querySelectorAll('.ForumEventPoll-userVote').forEach(e => e.style.opacity = '0.7');
Really appreciate all the feedback and suggestions! This is definitely more votes than we expected. 😅
I implemented a hover-over based on @Agnes Stenlund’s designs in this PR, though our deployment is currently blocked (by something unrelated), so I’m not sure how long it will take to make it to the live site.
I may not have time to make further changes to the poll results UI this week, but please keep the comments coming—if we decide to run another debate or poll event, then we will iterate on the UI and take your feedback into account.
Looks great!
I tried to make it into a beeswarm, and while IMHO it does look nice it also needs a bunch more vertical space (and/or smaller circles)
Also adding a little force works too, eg here. There are pretty easy libraries for this.
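For what it's worth, here's a minimal sketch of the force-based approach using d3-force (assuming d3 is loaded on the page; `votes`, the node radius, and the tick count are made-up placeholders, not the Forum's actual code):

// Lay out one dot per vote: keep x pinned to the vote position, let y spread out to avoid overlap.
const nodes = votes.map(v => ({ x: v.x, y: 0 }));
const sim = d3.forceSimulation(nodes)
  .force("x", d3.forceX(d => d.x).strength(1))   // pull each dot toward its true vote position
  .force("y", d3.forceY(0).strength(0.05))       // weak pull back toward the centre line
  .force("collide", d3.forceCollide(6))          // keep ~12px dots from overlapping
  .stop();                                        // tick manually instead of animating
for (let i = 0; i < 120; i++) sim.tick();         // then read node.x / node.y to place the avatars

The collide force is what spreads ties vertically, so horizontal position stays honest while the overlap goes away.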
The orange line above the circles makes it look like there’s a similar number of people at the extreme left and the extreme right, which doesn’t seem to be the case
I don’t think it would help much for this question, but I could imagine using this feature for future questions in which the ability to answer anonymously would be important. (One might limit this to users with a certain amount of karma to prevent brigading.)
I'll note some confusion of mine that might have been shared by others. I initially thought the choice was a binary "agree" or "disagree", and that the way to choose was to drag to one side or the other. I see now that dragging all the way signifies maximal agreement/disagreement, though users like me may have done so in error. Something that indicates this more clearly might be helpful to others.
Thanks Brad, I didn’t foresee that! (Agree react Brad’s comment if you experienced the same thing).
Would it have helped if we had marked increments along the slider? Like the below but prettier? (our designer is on holiday)
Yeah, if there were markers like “neutral”, “slightly agree”, “moderately agree”, “strongly agree”, etc. that might make it clearer.
After the user's choice registers, there could be a visual display that states something like "you've indicated that you strongly agree with statement X. Re-drag if this does not reflect your view or if something changes your mind, and check out where the rest of the community falls on this question by clicking here."
Another idea could be to ask, "How many EA resources should go to this, per year, for the next 10 years?"
Options could be things like:
"$0", "$100k", "$1M", "$100M", etc.
Also, maybe there could be a second question for, “How sure are you about this?”
Interesting. Certainty could also be a Y-axis, but I think that trades off against simplicity for a banner.
I’d love to hear more from the disagree reactors. They should feel very free to dm.
I’m excited to experiment more with interactive features in the future, so critiques are especially useful now!
I am not confident that another FTX level crisis is less likely to happen, other than that we might all say “oh this feels a bit like FTX”.
Changes:
Board swaps. Yeah maybe good, though many of the people who left were very experienced. And it’s not clear whether there are due diligence people (which seems to be what was missing).
Orgs being spun out of EV and EV being shuttered. I mean, maybe good, though it feels like it's swung too far. Many mature orgs should run on their own, but small orgs do have many replicable features.
More talking about honesty. Not really sure this was the problem. The issue wasn't the median EA; it was in the tails. Are the tails of EA more honest? Hard to say.
We have now had a big crisis so it’s less costly to say “this might be like that big crisis”. Though notably this might also be too cheap—we could flinch away from doing ambitious things
Large orgs seem slightly more beholden to comms/legal to avoid saying or doing the wrong thing.
OpenPhil is hiring more internally
Non-changes:
Still very centralised. I'm pretty pro-elite, so I'm not sure this is a problem in and of itself, though I have come to think that elites in general are less competent than I thought before (see the FTX and OpenAI crises).
Little discussion of why or how the affiliation with SBF happened despite many well connected EAs having a low opinion of him
Little discussion of what led us to ignore the base rate of scamminess in crypto and how we’ll avoid that in future
For both of these comments, I want a more explicit sense of what the alternative was. Many well-connected EAs had a low opinion of Sam. Some had a high opinion. Should we have stopped the high-opinion ones from affiliating with him? By what means? Equally, suppose he finds skepticism from (say) Will et al, instead of a warm welcome. He probably still starts the FTX future fund, and probably still tries to make a bunch of people regranters. He probably still talks up EA in public. What would it have taken to prevent any of the resultant harms?
Likewise, what does not ignoring the base rate of scamminess in crypto actually look like? Refusing to take any money made through crypto? Should we be shunning e.g. Vitalik Buterin now, or any of the community donors who made money speculating?
Not a complete answer, but I would have expected communication and advice for FTXFF grantees to have been different. From many well connected EAs having a low opinion of him, we can imagine that grantees might have been urged to properly set up corporations, not count their chickens before they hatched, properly document everything and assume a lower-trust environment more generally, etc. From not ignoring the base rate of scamminess in crypto, you’d expect to have seen stronger and more developed contingency planning (remembering that crypto firms can and do collapse in the wake of scams not of their own doing!), more decisions to build more organizational reserves rather than immediately ramping up spending, etc.
The measures you list would have prevented some financial harm to FTXFF grantees, but it seems to me that that is not the harm that people have been most concerned about. I think it’s fair for Ben to ask about what would have prevented the bigger harms.
Ben said "any of the resultant harms," so I went with something I saw as fairly high probability. Also, I mostly limit this to harms caused by "the affiliation with SBF"—I think expecting EA to thwart schemes cooked up by people who happen to be EAs (without more) is about as realistic as expecting (e.g.) churches to thwart schemes cooked up by people who happen to be members (without more).
To be clear, I do not think the “best case scenario” story in the following three paragraphs would be likely. However, I think it is plausible, and is thus responsive to a view that SBF-related harms were largely inevitable.
In this scenario, leaders recognized after the 2018 Alameda situation that SBF was just too untrustworthy and possibly fraudulent (albeit against investors) to deal with—at least absent some safeguards (a competent CFO, no lawyers who were implicated in past shady poker-site scandals, first-rate and comprehensive auditors). Maybe SBF wasn’t too far gone at this point—he hadn’t even created FTX in mid-2018 -- and a costly signal from EA leaders (we won’t take your money) would have turned him—or at least some of his key lieutenants—away from the path he went down? Let’s assume not, though.
If SBF declined those safeguards, most orgs decline to take his money and certainly don’t put him on podcasts. (Remember that, at least as of 2018, it sounds like people thought Alameda was going nowhere—so the motivation to go against consensus and take SBF money is much weaker at first.) Word gets down to the rank-and-file that SBF is not aligned, likely depriving him of some of his FTX workforce. Major EA orgs take legible action to document that he is not in good standing with them, or adopt a public donor-acceptability policy that contains conditions they know he can’t/won’t meet. Major EA leaders do not work for or advise the FTXFF when/if it forms.
When FTX explodes, the comment from major EA orgs is that they were not fully convinced he was trustworthy and cut off ties from him when that came to light. There’s no statutory inquiry into EVF, and no real media story here. SBF is retrospectively seen as an ~apostate who was largely rejected by the community when he showed his true colors, despite the big $$ he had to offer, who continued to claim affiliation with EA for reputational cover. (Or maybe he would have gotten his feelings hurt and started the FTX Children’s Hospital Fund to launder his reputation? Not very likely.)
A more modest mitigation possibility focuses more on EVF, Will, and Nick. In this scenario, at least EVF doesn’t take SBF’s money. He isn’t mentioned on podcasts. Hopefully, Will and Nick don’t work with FTXFF, or if they do they clearly disaffiliate from EVF first. I’d characterize this scenario as limiting the affiliation with SBF by not having what is (rightly or wrongly) seen as EA’s flagship organization and its board members risk lending credibility to him. In this scenario, the media narrative is significantly milder—it’s much harder to write a juicy narrative about FTXFF funding various smaller organizations, and without the ability to use Will’s involvement with SBF as a unifying theme. Moreover, when FTX explodes in this scenario, EVF is not paralyzed in the same way it was in the actual scenario. It doesn’t have a CC investigation, ~$30MM clawback exposure, multiple recused board members, or other fires of its own to put out. It is able to effectively lead/coordinate the movement through a crisis in a way that it wasn’t (and arguably still isn’t) able to due to its own entanglement. That’s hardly avoiding all the harms involved in affiliation with SBF . . . but I’d argue it is a meaningful reduction.
The broader idea there is that it is particularly important to isolate certain parts of the EA ecosystem from the influence of low-trustworthiness donors, crypto influence, etc. This runs broader than the specific examples above. For instance, it was not good to have an organization with community-health responsibilities like EVF funded in significant part by a donor who was seen as low-trustworthiness, or one who was significantly more likely to be the subject of whistleblowing than the median donor.
Is there any reason to doubt the obvious answer—it was/is an easy way for highly-skilled quant types in their 20s and early 30s to make $$ very fast?
seems like this is a pretty damning conclusion that we haven’t actually come to terms with if it is the actual answer
It's likely that no single answer is "the" sole answer. For instance, it's likely that people believed they could assume that trusted insiders were significantly more ethical than the average person. The insider-trusting bias has bitten any number of organizations and movements (e.g., churches, the Boy Scouts). However, it seems clear from Will's recent podcast that the downsides of being linked to crypto were appreciated at some level. It would take a lot for me to be convinced that all that $$ wasn't a major factor.
The Scout Mindset deserved 1/10th of the marketing campaign of WWOTF. Galef is a great figurehead for rational thinking and it would have been worth it to try and make her a public figure.
I think much of the issue is that:
1. It took a while to ramp up to being able to do things such as the marketing campaign for WWOTF. It's not trivial to find the people and buy-in necessary. Previous EA books haven't had similar campaigns.
2. Even when you have that capacity, it’s typically much more limited than we’d want.
I imagine EAs will get better at this over time.
Dear reader,
You are an EA, if you want to be. Reading this forum is enough. Giving a little of your salary effectively is enough. Trying to get an impactful job is enough. If you are trying even with a fraction of your resources to make the world better and chatting with other EAs about it, you are one too.
Post I spent 4 hours writing on a topic I care deeply about: 30 karma
Post I spent 40 minutes writing on a topic that the community vibes with: 120 karma
I guess this is fine—it's just people being interested—but it can feel weird at times.
This is not fine
I dunno. I thought I’d surface.
Yeah, this is an unfortunate gradient, you have to decide not to follow it :-/
But there is more long-term glory in it.
Lab grown meat → no-kill meat
This tweet recommends changing the words we use to discuss lab-grown meat. Seems right.
There has been a lot of discussion of this, some studies were done on different names, and GFI among others seem to have landed on “cultivated meat”.
What surprises me about this work is that it does not seem to include the more aggressive (for lack of a better word) alternatives I have heard being thrown around, like “Suffering-free”, or “Clean”, or “cruelty-free”.
could you link to a few of the discussions & studies?
https://en.wikipedia.org/wiki/Cultured_meat#Nomenclature
For what it’s worth, my first interpretation of “no-kill meat” is that you’re harvesting meat from animals in ways that don’t kill them. Like amputation of parts that grow back.
I love this wording!
i’d be curious to see the results of e.g. focus groups on this — i’m just now realizing how awful of a name “lab grown meat” is, re: the connotations.
The OpenAI stuff has hit me pretty hard. If that’s you also, look after yourself.
I don’t really know what accurate thought looks like here.
Yeah, same
I hope you’re doing ok Nathan. Happy to chat in DM’s if you like ❤️
It will settle down soon enough. Not much will change, as with most breaking news stories. But I am wondering whether I should switch to Claude.
I am really not the person to do it, but I still think there needs to be some community therapy here. Like a truth and reconciliation committee. Working together requires trust and I’m not sure we have it.
Poll: https://viewpoints.xyz/polls/ftx-impact-on-ea
Results: https://viewpoints.xyz/polls/ftx-impact-on-ea/results
Curious if you have examples of this being done well in communities you’ve been aware of? I might have asked you this before.
I’ve been part of an EA group where some emotionally honest conversations were had, and I think they were helpful but weren’t a big fix. I think a similar group later did a more explicit and formal version and they found it helpful.
I’ve never seen this done well. I guess I’d read about the truth and reconciliation committees in South Africa and Ireland.
I think the strategy fortnight worked really well. I suggest that another one is put in the calendar (for, say, 3 months' time) and then, rather than drip-feeding commentary, we sort of wait and then burst it out again.
It felt better to me, anyway, to be like "for these two weeks I will engage".
I also thought it was pretty decent, and it caused me to get a post out that had been sitting in my drafts for quite a while.
I want to once again congratulate the forum team on this voting tool. I think by doing this, the EA Forum is at the forefront of internal community discussions. Few communities do this well, and it's surprising how powerful it is.
I hope Will MacAskill is doing well. I find it hard to predict how he's doing as a person. While there have been lots of criticisms (and I've made some), I think it's tremendously hard to be the Schelling person for a movement. That is a separate axis, however, and I hope in himself he's doing well, and I imagine many feel that way. I hope he has an accurate picture here.
I note that in some sense I have lost trust that the EA community gives me a clear prioritisation of where to donate.
Some clearer statements:
I still think GiveWell does great work
I still generally respect the funding decisions of Open Philanthropy
I still think this forum has a higher standard than most places
It is hard to know exactly how high-impact animal welfare funding opportunities interact with x-risk ones
I don’t know what the general consensus on the most impactful x-risk funding opportunities are
I don’t really know what orgs do all-considered work on this topic. I guess the LTFF?
I am more confused/inattentive and this community is covering a larger set of possible choices so it’s harder to track what consensus is
Since it looks like you’re looking for an opinion, here’s mine:
To start, while I deeply respect GiveWell’s work, in my personal opinion I still find it hard to believe that any GiveWell top charity is worth donating to if you’re planning to do the typical EA project of maximizing the value of your donations in a scope sensitive and impartial way. …Additionally, I don’t think other x-risks matter nearly as much as AI risk work (though admittedly a lot of biorisk stuff is now focused on AI-bio intersections).
Instead, I think the main difficult judgement call in EA cause prioritization right now is “neglected animals” (eg invertebrates, wild animals) versus AI risk reduction.
AFAICT this also seems to be somewhat close to the overall view of the EA Forum as well as you can see in some of the debate weeks (animals smashed humans) and the Donation Election (where neglected animal orgs were all in the top, followed by PauseAI).
This comparison is made especially difficult because OP funds a lot of AI but not any of the neglected animal stuff, which subjects the AI work to significantly more diminished marginal returns.
To be clear, AI orgs still do need money. I think there’s a vibe that all the AI organizations that can be funded by OpenPhil are fully funded and thus AI donations are not attractive to individual EA forum donors. This is not true. I agree that their highest priority parts are fully funded and thus the marginal cost-effectiveness of donations is reduced. But this marginal cost-effectiveness is not eliminated, and it still can be high. I think there are quite a few AI orgs that are still primarily limited by money and would do great things with more funding. Additionally it’s not healthy for these orgs to be so heavily reliant on OpenPhil support.
So my overall guess is if you think AI is only 10x or less important in the abstract than work on neglected animals, you should donate to the neglected animals due to this diminishing marginal returns issue.
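To illustrate that trade-off with made-up numbers (purely an illustration, not anyone's published estimate): say a marginal dollar to a neglected-animal org buys 1 unit of good, and AI work is 10x as important in the abstract, so an unfunded AI opportunity would buy 10 units. If OpenPhil's funding means the remaining AI opportunities sit roughly 10x further down the diminishing-returns curve, the marginal AI dollar buys about 10 / 10 = 1 unit and the two options roughly tie; anything less than a ~10x importance gap and the animal donation comes out ahead.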
I currently lean a bit towards AI is >10x neglected animals and therefore I want to donate to AI stuff, but I really don’t think this is settled, it needs more research, and it’s very reasonable to believe the other way.
~
Ok so where to donate? I don’t have a good systematic take in either the animal space or the AI space unfortunately, but here’s a shot:
For starters, in the AI space, a big issue for individual donors is that unfortunately it’s very hard to properly evaluate AI organizations without a large stack of private information that is hard to come by. This private info has greatly changed my view of what organizations are good in the AI space. On the other hand you can basically evaluate animal orgs well enough with only public info, and the private info only improves the eval a little bit.
Moreover, in the neglected animal space, I do basically trust the EA Animal Welfare Fund to allocate money well and think it could be hard for an individual to outperform that. Shrimp Welfare Project also looks compelling.
I think the LTFF is worth donating to but to be clear I don’t think the LTFF actually does all-considered work on the topic—they seem to have an important segment of expertise that seems neglected outside the LTFF, but they definitely don’t have the expertise to cover and evaluate everything. I do think the LTFF would be a worthy donation choice.
If I were making a recommendation, I would concur with recommending the three AI orgs in OpenPhil's list: Horizon, ARI, and CLTR—they are all being recommended by individual OpenPhil staff for good reason.
There are several other orgs I think are worth considering as well and you may want to think about options that are only available to you as an individual, such as political donations. Or think about ways where OpenPhil may not be able to do as well in the AI space, like PauseAI or digital sentience work, both of which still look neglected.
~
A few caveats/exceptions to my above comment:
I’m very uncertain about whether AI really is >10x neglected animals and I cannot emphasize enough that reasonable and very well-informed people can disagree on this issue and I could definitely imagine changing my mind on this over the next year.
I’m not shilling for my own orgs in this comment to keep it less biased, but those are also options.
I don’t mean to be mean to GiveWell. Of course donating to GiveWell is very good and still better than 99.99% of charitable giving!
Another area I don’t consider but probably should is organizations like Giving What We Can that work somewhat outside these cause areas but may have sufficient multipliers that it still is very cost-effective. I think meta-work on top of global health and development work (such as improving its effectiveness or getting more people to like it / do it better) can often lead to larger multipliers since there’s magnitudes more underlying money in that area + interest in the first place.
I don't appropriately focus on digital sentience, which OpenPhil is also not doing and could also use some help. I think this could be fairly neglected. Work that aims to get AI companies to commit to not mistreating animals is also an interesting and incredibly underexplored area that I don't know much about.
There’s a sizable amount of meta-strategic disagreement / uncertainty within the AI space that I gloss over here (imo Michael Dickens does a good job of overviewing this even if I have a lot of disagreements with his conclusions).
I do think risk aversion is underrated as a reasonable donor attitude that can vary between donors and does make the case for focusing on neglected animals stronger. I don’t think there’s an accurate and objective answer about how risk averse you ought to be.
I agree with this comment. Thanks for this clear overview.
The only element where I might differ is whether AI really is >10x neglected animals.
My main issue is that while AI is a very important topic, it’s very hard to know whether AI organizations will have an overall positive or negative (or neutral) impact.
First, it’s hard to know what will work and what won’t accidentally increase capabilities. More importantly, if we end up in a future aligned with human values but not animals or artificial sentience, this could still be a very bad world in which a large number of individuals are suffering (e.g., if factory farming continues indefinitely).
My tentative and not very solid view is that work at the intersection of AI x animals is promising (eg work that aims to get AI companies to commit to not mistreating animals), and attempts for a pause are interesting (since they give us more time to figure stuff out).
If you think that an aligned AGI will truly maximise global utility, you will have a more positive outlook.
But since I’m rather risk averse, I devote most of my resources to neglected animals.
I’m very uncertain about whether AI really is >10x neglected animals and I cannot emphasize enough that reasonable and very well-informed people can disagree on this issue and I could definitely imagine changing my mind on this over the next year. This is why I framed my comment the way I did hopefully making it clear that donating to neglected animal work is very much an answer I endorse.
I also agree it’s very hard to know whether AI organizations will have an overall positive or negative (or neutral) impact. I think there’s higher-level strategic issues that make the picture very difficult to ascertain even with a lot of relevant information (imo Michael Dickens does a good job of overviewing this even if I have a lot of disagreements). Also the private information asymmetry looms large here.
I also agree that "work that aims to get AI companies to commit to not mistreating animals" is an interesting and incredibly underexplored area. I think this is likely worth funding if you're knowledgeable about the space (I'm not) and know of good opportunities (I currently don't).
I do think risk aversion is underrated as a reasonable donor attitude and does make the case for focusing on neglected animals stronger.
Makes sense! I understand the position.
Regarding AI x animals donation opportunities, all of this is pretty new but I know a few. Hive launched an AI for Animals website, with an upcoming conference: https://www.aiforanimals.org/
I also know about Electric Sheep, which has made a fellowship on the topic: https://electricsheep.teachable.com/
I think it’s normal, and even good that the EA community doesn’t have a clear prioritization of where to donate. People have different values and different beliefs, and so prioritize donations to different projects.
What do you mean? I don’t understand how animal welfare campaigns interact with x-risks, except for reducing the risk of future pandemics, but I don’t think that’s what you had in mind (and even then, I don’t think those are the kinds of pandemics that x-risk minded people worry about)
It seems clear to me that there is no general consensus, and some of the most vocal groups are actively fighting against each other.
You can see Giving What We Can recommendations for global catrastrophic risk reduction on this page[1] (i.e. there’s also Longview’s Emerging Challenges Fund). Many other orgs and foundations work on x-risk reduction, e.g. Open Philanthropy.
I think that if there were consensus that a single project was obviously the best, we would all have funded it already, unless it was able to productively use very very high amounts of money (e.g., cash transfers)
Disclaimer: I work at GWWC
do you feel confident about your moral philosophy?
I notice some people (including myself) reevaluating their relationship with EA.
This seems healthy.
When I was a Christian it was extremely costly for me to reduce my identification and resulted in a delayed and much more final break than perhaps I would have wished[1]. My general view is that people should update quickly, and so if I feel like moving away from EA, I do it when I feel that, rather than inevitably delaying and feeling ick.
Notably, reducing one's identification with the EA community need not change one's stance towards effective work/donations/earning to give. I doubt it will change mine. I just feel a little less close to the EA community than I once did, and that's okay.
I don’t think I can give others good advice here, because we are all so different. But the advice I would want to hear is “be part of things you enjoy being part of, choose an amount of effort to give to effectiveness and try to be a bit more effective with that each month, treat yourself kindly because you too are a person worthy of love”
I think a slow move away from Christianity would have been healthier for me. Strangely I find it possible to imagine still being a Christian, had things gone differently, even while I wouldn’t switch now.
The vibe at EAG was chill, maybe a little downbeat, but fine. I can get myself riled up over the forum, but it’s not representative! Most EAs are just getting on with stuff.
(This isn't to say that forum stuff isn't important; it's just as important as it is, rather than being what should define my mood)
@Toby Tremlett🔹 @Will Howard🔹
Where can i see the debate week diagram if I want to look back at it?
Here’s a screenshot (open in new tab to see it in slightly higher resolution). I’ve also made a spreadsheet with the individual voting results, which gives all the info that was on the banner just in a slightly more annoying format.
We are also planning to add a native way to look back at past events as they appeared on the site :), although this isn't a super high priority atm.
Nice one—even the tab to bring up the posts isn’t super easy to access (or I’m just a bit of a tech fail lol.)
It surprises me a bit (and I’m even impressed in a way) that so many EAs are all in on one side there.
Feels like there should be a “comment anonymously” feature. Would save everyone having to manage all these logins.
We have thought about that. Probably the main reason we haven't done it is the following, on which I'll quote myself from an internal Slack message:
Touche
I strongly dislike the following sentence on effectivealtruism.org:
“Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on.”
It reads to me as arrogant, and epitomises the worst caricatures my friends make of EAs. Read it in a snarky voice (such as one might use if they struggled with the movement and were looking to do research): "Rather than just doing what feels right..."
I suggest it gets changed to one of the following:
“We use evidence and careful analysis to find the very best causes to work on.”
“It’s great when anyone does a kind action no matter how small or effective. We have found value in using evidence and careful analysis to find the very best causes to work on.”
I am genuinely sure whoever wrote it meant well, so thank you for your hard work.
Are the two bullet points two alternative suggestions? If so, I prefer the first one.
I also thought this when I first read that sentence on the site, but I find it difficult (as I’m sure its original author does) to communicate its meaning in a subtler way. I like your proposed changes, but to me the contrast presented in that sentence is the most salient part of EA. To me, the thought is something like this:
“Doing good feels good, and for that reason, when we think about doing charity, we tend to use good feeling as a guide for judging how good our act is. That’s pretty normal, but have you considered that we can use evidence and analysis to make judgments about charity?”
The problem IMHO is that without the contrast, the sentiment doesn’t land. No one, in general, disagrees in principle with the use of evidence and careful analysis: it’s only in contrast with the way things are typically done that the EA argument is convincing.
I would choose your statement over the current one.
I think the sentiment lands pretty well even with a very toned down statement. The movement is called “effective altruism”. I think often in groups are worried that outgroups will not get their core differences when generally that’s all outgroups know about them.
I don't think anyone who visits that website will fail to realise that effectiveness is a core feature. And I don't think we need to be patronising (as EAs are caricatured as being in conversations I have) in order to make known something that everyone already knows.
Several journalists (including those we were happy to have write pieces about WWOTF) have contacted me but I think if I talk to them, even carefully, my EA friends will be upset with me. And to be honest that upsets me.
We are in the middle of a mess of our own making. We deserve scrutiny. Ugh, I feel dirty and ashamed and frustrated.
To be clear, I think it should be your own decision to talk to journalists, but I do also just think that it’s just better for us to tell our own story on the EA Forum and write comments, and not give a bunch of journalists the ability to greatly distort the things we tell them in a call, with a platform and microphone that gives us no opportunity to object or correct things.
I have been almost universally appalled at the degree to which journalists straightforwardly lie in interviews, take quotes massively out of context, or make up random stuff related to what you said, and I do think it’s better that if you want to help the world understand what is going on, that you write up your own thoughts in your own context, instead of giving that job to someone else.
<3
Richard Ngo just gave a talk at EAG Berlin about errors in AI governance, one being a lack of concrete policy suggestions.
Matt Yglesias said this a year ago. He was even the main speaker at EAG DC https://www.slowboring.com/p/at-last-an-ai-existential-risk-policy?utm_source=%2Fsearch%2Fai&utm_medium=reader2
Seems worth asking why we didn’t listen to top policy writers when they warned that we didn’t have good proposals.
What do you think of Thomas Larson’s bill? It seems pretty concrete to me, do you just think it is not good?
I am going on what Ngo said. So I guess, what does he think of it?
This sounds like the sort of question you should email Richard to ask before you make blanket accusations.
Ehhh, not really. I think it’s not a crazy view to hold and I wrote it on a shortform.
I think 90% of the answer to this is risk aversion from funders, especially LTFF and OpenPhil, see here. As such many things struggled for funding, see here.
We should acknowledge that doing good policy research often involves actually talking to and networking with policy people. It involves running think tanks and publishing policy reports, not just running academic institutions and publishing papers. You cannot do this kind of research well in a vacuum.
That fact, combined with funders who were (and maybe still are) somewhat against funding people (except for people they knew extremely well) to network with policy makers in any way, has led to (and maybe is still leading to) very limited policy research and development happening.
I am sure others could justify this risk-averse approach, and there are totally benefits to being risk averse. However, in my view this was a mistake (and is maybe an ongoing mistake). I think it was driven by the fact that funders were/are: A] not policy people, so they do/did not understand the space and were/are hesitant to make grants; B] heavily US-centric, so they do/did not understand the non-US policy space; and C] heavily capacity constrained, so they do/did not have time to correct for A or B.
– –
(P.S. I would also note that I am very cautious about saying there is “a lack of concrete policy suggestions” or at least be clear what is meant by this. This phrase is used as one of the reasons for not funding policy engagement and saying we should spend a few more years just doing high level academic work before ever engaging with policy makers. I think this is just wrong. We have more than enough policy suggestions to get started and we will never get very very good policy design unless we get started and interact with the policy world.)
My current model is that actually very few people who went to DC and did “AI Policy work” chose a career that was well-suited to proposing policies that help with existential risk from AI. In-general people tried to choose more of a path of “try to be helpful to the US government” and “become influential in the AI-adjacent parts of the US government”, but there are almost no people working in DC whose actual job it is to think about the intersection of AI policy and existential risk. Mostly just people whose job it is to “become influential in the US government so that later they can steer the AI existential risk conversation in a better way”.
I find this very sad and consider it one of our worst mistakes, though I am also not confident in that model, and am curious whether people have alternative models.
That’s probably true because it’s not like jobs like that just happen to exist within government (unfortunately), and it’s hard to create your own role descriptions (especially with something so unusual) if you’re not already at the top.
That said, I think the strategy you describe EAs to have been doing can be impactful? For instance, now that AI risk has gone mainstream, some groups in government are starting to work on AI policy more directly, and if you’re already working on something kind of related and have a bunch of contacts and so on, you’re well-positioned to get into these groups and even get a leading role.
What's challenging is that you need to make career decisions very autonomously and have a detailed understanding of AI risk and related levers to carve out your own valuable policy work at some point down the line (and not be complacent with "down the line never comes until it's too late"). I could imagine that there are many EA-minded individuals who went into DC jobs or UK policy jobs with the intent to have an impact on AI later, but they're unlikely to do much with that because they're not proactive enough and not "in the weeds" enough with thinking about "what needs to happen, concretely, to avert an AI catastrophe?"
Even so, I think I know several DC EAs who are exceptionally competent and super tuned in and who’ll likely do impactful work down the line, or are already about to do such things. (And I’m not even particularly connected to that sphere, DC/policy, so there are probably many more really cool EAs/EA-minded folks there that I’ve never talked to or read about.)
The slide Nathan is referring to. “We didn’t listen” feels a little strong; lots of people were working on policy detail or calling for it, it just seems ex post like it didn’t get sufficient attention. I agree directionally though, and Richard’s guesses at the causes (expecting fast take-off + business-as-usual politics) seem reasonable to me.
Also, *EAGxBerlin.
I talked to someone outside EA the other day who said that in a competitive tender they wouldn't apply to EA funders, because they thought the process would likely go to someone with connections to OpenPhil.
Seems bad.
EAs please post your job posting to twitter
Please post your jobs to Twitter and reply with @effective_jobs. It takes 5 minutes, and the jobs I've posted and then tweeted have got 1000s of impressions.
Or just DM me on twitter (@nathanpmyoung) and I’ll do it. I think it’s a really cheap way of getting EAs to look at your jobs. This applies to impactful roles in and outside EA.
Here is an example of some text:
-tweet 1
Founder’s Pledge Growth Director
@FoundersPledge are looking for someone to lead their efforts in growing the amount that tech entrepreneurs give to effective charities when they IPO.
Salary: $135k - $150k
Location: San Francisco
https://founders-pledge.jobs.personio.de/job/378212
-tweet 2, in reply
@effective_jobs
-end
I suggest it should be automated but that’s for a different post.
Confusion
I get why I and others give to GiveWell rather than catastrophic risk—sometimes it's good to know your "impact account" is positive even if all the catastrophic risk work was useless.
But why do people not give to animal welfare in this case? Seems higher impact?
And if it’s just that we prefer humans to animals that seems like something we should be clear to ourselves about.
Also I don’t know if I like my mental model of an “impact account”. Seems like my giving has maybe once again become about me rather than impact.
ht @Aaron Bergman for surfacing this
This is exactly why I mostly give to animal charities. I do think there’s higher uncertainty of impact with animal charities compared to global health charities so I still give a bit to AMF. So roughly 80% animal charities, 20% global health.
Thanks for bringing our convo here! As context for others, Nathan and I had a great discussion about this which was supposed to be recorded... but I managed to mess up and didn't capture the incoming audio (i.e. everything Nathan said) 😢
Guess I’ll share a note I made about this (sounds AI written because it mostly was, generated from a separate rambly recording). A few lines are a little spicier than I’d ideally like but 🤷
Thanks for posting this. I had branching out my giving strategy to include some animal-welfare organizations on the to-do list, but this motivated me to actually pull the trigger on that.
I think most of the animal welfare neglect comes from the fact that if people are deep enough into EA to accept all of its “weird” premises they will donate to AI safety instead. Animal welfare is really this weird midway spot between “doesn’t rest on controversial claims” and “maximal impact”.
Definitely part of the explanation, but my strong impression from interaction irl and on Twitter is that many (most?) AI-safety-pilled EAs donate to GiveWell and much fewer to anything animal related.
I think ~literally except for Eliezer (who doesn’t think other animals are sentient), this isn’t what you’d expect from the weirdness model implied.
Assuming I’m not badly mistaken about others’ beliefs and the gestalt (sorry) of their donations, I just don’t think they’re trying to do the most good with their money. Tbc this isn’t some damning indictment—it’s how almost all self-identified EAs’ money is spent and I’m not at all talking about ‘normal person in rich country consumption.’
If you type "#" followed by the title of a post and press enter, it will link that post.
Example:
Examples of Successful Selective Disclosure in the Life Sciences
This is wild
OMG
I continue to think that a community this large needs mediation functions to avoid lots of harm with each subsequent scandal.
People asked for more details. so I wrote the below.
Let’s look at some recent scandals and I’ll try and point out some different groups that existed.
FTX—longtermists and non-longtermists, those with greater risk tolerance and less
Bostrom—rationalists and progressives
Owen Cotton-Barratt—looser norms vs more robust, weird vs normie
Nonlinear—loyalty vs kindness, consent vs duty of care
In each case, the community disagrees on who we should be and what we should be. People write comments to signal that they are good and want good things and shouldn’t be attacked. Other people see these and feel scared that they aren’t what the community wants.
This is tiring and anxiety inducing for all parties. In all cases here there are well intentioned, hard working people who have given a lot to try and make the world better who are scared they cannot trust their community to support them if push comes to shove. There are people horrified at the behaviour of others, scared that this behaviour will repeat itself, with all the costs attached. I feel this way, and I don’t think I am alone.
I think we need the community equivalent of therapy and mediation. We have now got to the stage where national media articles get written about our scandals and people threaten litigation. I just don’t think that a community of 3000 apes can survive this without serious psychological costs which in turn affect work and our lives. We all don’t want to be chucked out of a community which is safety and food and community for us. We all don’t want that community to become a hellhole. I don’t, SBF doesn’t, the woman hurt by OCB doesn’t, Kat and Emerson and Chloe and Alice don’t.
That’s not to say that all behaviour is equal, but that I think the frame here is empathy, boundary setting and safety, not conflict, auto-immune responses and exile.
What do I suggest?
After each scandal we have spaces to talk about our feelings, then we discuss what we think the norms of the community should be. Initially there will be disagreement but in time as we listen to those we disagree with we may realise how we differ. Then we can try and reintegrate this understanding to avoid it happening again. That’s what trust is—the confidence that something won’t happen above tolerance.
A concrete example
After the Bostrom stuff we had rationalist and progressive EAs in disagreement. Some thought he’d responded well, others badly. I think there was room for a discussion, to hear how unsafe his behaviour had left people feeling “do people judge my competence based on the colour of my skin?” “will my friends be safe here?”. I don’t think these feelings can be dismissed as wokery gone mad. But I think the other group had worries too “Will I be judged for things I said years ago?” “Seemingly even an apology isn’t enough”. I find I can empathise with both groups.
And I suggest what we want is some norms around this. Norms about things we do and don’t do. The aim should be to reduce community stress through there being bright lines and costs for behaviour we deem bad. And ways for those who do unacceptable things to come back to the community. I think there could be mutually agreeable ones, but I think the process would be tough.
We’d have to wrestle with how Bostrom and Hanson’s productivity seems related to their ability to think weird or ugly thoughts. We’d have to think about if mailing lists 20 years ago were public or private. We’d have to think about what value we put on safety. And we’d have to be willing not to pick up the sword if it didn’t go our way.
But I think there are acceptable positions here. Where people acknowledge harmful patterns of behaviour, perhaps even voluntarily leave for a time. Where people talk about the harm and the benefit created by those they disagree with. Where others see that some value weirdness/creativity more/less than they do. Where we rejoice in what we have achieved and mourn over how we have hurt one another. Where we grow to be a kinder, more mature community.
Intermission
This stuff breaks my heart. Not because I am good, but because I have predictably hurt people and been hurt by people in the past. And I’d like the cycle to stop. In my own life, conflict has never been the way out of this. Either I should leave people I cannot work with, or share and listen to those I can. And it is so hard and I fail often, but it’s better than becoming jaded and cruel or self-hating and perfectionist. I am broken, I am enough, I can be better. EA is flawed, EA is good, EA can improve. The world is awful, the world is better that it used to be, the world can improve.
As it is
Currently, I think we aren't doing this work, so every subsequent scandal adds another grievance to the pile. And I guess people are leaving the community. If we spend millions a year trying to get graduates, isn't it worth spending the same to keep long-time members? I don't know if there is a way to keep Kat and Emerson, Alice and Chloe, the concerned global health worker and the person who thinks SBF did nothing wrong, and me and you, but currently I don't see us spending nearly the appropriate amount of mental effort or resources.
Oh and I’m really not angling to do this work. I have suggestions, sure, but I think the person should be widely trusted by the community as neutral and mature.
I’d bid for you to explain more what you mean here—but it’s your quick take!
I’m very keen for more details as well.
The CEA community health team does serve as a mediation function sometimes, I think. Maybe that’s not enough, but it seems worth mentioning.
Community health is also like the legal system in that they enforce sanctions so I wonder if that reduces the chance that someone reaches out to them to mediate.
I think this is the wrong frame tbh
How so?
I think I want them to be a mediation and boundary setting org, not just legal system
A previous partner and I did a sex and consent course together online. I think it's helped me be kinder in relationships.
Useful in general.
More useful if you:
- have sex casually
- see harm in your relationships and want to grow
- are poly
As I've said elsewhere, I think a very small proportion of people in EA are responsible for most of the relationship harms. Some are bad actors, who need to be removed; some are malefactors who have either lots of interactions or engage in high-risk behaviours and accidentally cause harm. I would guess I have more traits of the second category than almost all of you. So people like me should do the most work to change.
So most of you probably don’t need this, but if you are in some of the above groups, I’d recommend a course like this. Save yourself the heartache of upsetting people you care about.
Happy to DM.
https://dandelion.events/e/pd0zr?fbclid=IwAR0cIXFowU7R4dHZ4ptfpqsnnhdnLIJOfM_DjmS_5HR-rgQTnUzBdtQEnjE
Can we have some people doing AI Safety podcast/news interviews as well as Yud?
I am concerned that he's going to end up being the figurehead here. I assume someone is already thinking about this, and people are probably working on it, but I'm posting here to make sure it gets said.
We aren't a community that says "I guess he deserves it"; we say "who is the best person for the job?" Yudkowsky, while he is an expert, isn't a median voice: his estimates of P(doom) are on the far tail of EA experts. So if I could pick one person I wouldn't pick him, and frankly I wouldn't pick just one person.
Some other voices I’d like to see on podcasts/ interviews:
Toby Ord
Paul Christiano
Ajeya Cotra
Amanda Askell
Will MacAskill
Joe Carlsmith*
Katja Grace*
Matthew Barnett*
Buck Shlegeris
Luke Muehlhauser
Again, I'm not saying no one has thought of this (80% they have). But I'd like to be 97% sure, so I'm flagging it.
*I am personally fond of this person so am biased
I am a bit confused by your inclusion of Will MacAskill. Will has been on a lot of podcasts, while for Eliezer I only remember 2. But your text sounds a bit like you worry that Eliezer will be too much on podcasts and MacAskill too little (I don’t want to stop MacAskill from going on podcasts btw. I agree that having multiple people present different perspectives on AGI safety seems like a good thing).
I think in the current discourse I'd like to see more of Will, who is a balanced and clear communicator.
I don’t think you should be optimizing to avoid extreme views, but in favor of those with the most robust models, who can also communicate them effectively to the desired audience. I agree that if we’re going to be trying anything resembling public outreach it’d be good to have multiple voices for a variety of reasons.
On the first half of the criteria I’d feel good about Paul, Buck, and Luke. On the second half I think Luke’s blog is a point of evidence in favor. I haven’t read Paul’s blog, and I don’t think that LessWrong comments are sufficiently representative for me to have a strong opinion on either Paul or Buck.
I notice I am pretty skeptical of much longtermist work and the idea that we can make progress on this stuff just by thinking about it.
I think future people matter, but I will be surprised if, after x-risk reduction work, we can find 10s of billions of dollars of work that isn’t busywork and shouldn’t be spent attempting to learn how to get eg nations out of poverty.
I have heard one anecdote of an EA saying that they would be less likely to hire someone on the basis of their religion, because it would imply they were ~~less good at their job~~ less intelligent/epistemically rigorous. I don't think they were involved in hiring, but I don't think anyone should hold this view. Here is why:
As soon as you are in a hiring situation, you have much more information than priors. Even if it were true that, say, people with ADHD[1] were less rational, the interview process should provide much more information than such a prior. If that's not the case, get a better interview process; don't start being prejudiced!
People don’t mind meritocracy, but they want a fair shake. If I heard that people had a prior that ADHD folks were less likely to be hard working, regardless of my actual performance in job tests, I would be less likely to want to be part of this community. You might lose my contributions. It seems likely to me that we come out ahead by ignoring small differences in groups so people don’t have to worry about this. People are very sensitive to this. Let’s agree not to defect. We judge on our best guess of your performance, not on appearances.
I would be unsurprised if this kind of thinking cut only one way. Is anyone suggesting they wouldn’t hire poly people because of the increased drama or men because of the increased likelihood of sexual scandal? No! We already think some information is irrelevant/inadmissible as a prior in hiring. Because we are glad of people’s right to be different or themselves. To me, race and religion clearly fall in this space. I want people to feel they can be human and still have a chance of a job.
I wouldn't be surprised if this cashed out to "I hire people like me". In this example, was the individual really hiring on the basis of merit, or did they just find certain religious people hard to deal with? We are not a social club; we are trying to do the most good. We want the best, not the people who are like us.
This pattern matches to actual racism/sexism. Like “sometimes I don’t get hired because people think Xs are worse at jobs”. How is that not racism? Seems bad.
Counterpoints:
Sometimes gut does play a role. We think someone would fit better on our team. Some might argue that it's fine to use this as a tiebreaker, or that it's better to be honest that this is what's going on.
Personally I think the points outweigh the counterpoints.
Hiring processes should hire the person who seems most likely to do the best job, and candidates should be confident this is happening. But for predictive reasons, community welfare reasons, and avoiding-obvious-pitfalls reasons, I think small priors around race, religion, sexuality, gender, and sexual practice should be discounted[2]. If you think the candidate is better or worse, it should show in the interview process. And yes, I get that gut plays a role, but I'd be really wary of gut feelings that feed clear biases. I think a community where we don't do that comes out ahead and does more good.
I have a diagnosis so feel comfortable using this example.
And I think large priors are incorrect
In the wake of the financial crisis it was not uncommon to see suggestions that banks etc. should hire more women to be traders and risk managers because they would be less temperamentally inclined towards excessive risk taking.
I have not heard such calls in EA, which was my point.
But neat example
These thoughts are VERY rough and hand wavy.
I think that we have more-or-less agreed as societies that there are some traits that it is okay to use to make choices about people (mainly: their actions/behaviors), and there are some traits that it is not okay to use (mainly: things that the person didn't choose and isn't responsible for). Race, religion, gender, and the like are widely accepted[1] as not socially acceptable traits to use when evaluating people's ability to be a member of a team.[2] But there are other traits that we commonly treat as acceptable to use as the basis of treating people differently, such as what school someone went to, how many years of work experience they have, if they have a similar communication style as us, etc.
I think I might split this into two different issues.
One issue is: it isn’t very fair to give or withhold jobs (and other opportunities) based on things that people didn’t really have much choice in (such as where they were born, how wealthy their parents were, how good of an education they got in their youth, etc.)
A separate issue is: it is ineffective to make employment decisions (hiring, promotions, etc.) based on things that don't predict on-the-job success.
Sometimes these things line up nicely (such as how it isn’t fair to base employment decisions on hair color, and it is also good business to not base employment decisions on hair color). But sometimes they don’t line up so nicely: I think there are situations where it makes sense to use “did this person go to a prestigious school” to make employment decisions because that will get you better on-the-job performance; but it also seems unfair because we are in a sense rewarding this person for having won the lottery.[3]
In a certain sense I suppose this is just a mini rant about how the world is unfair. Nonetheless, I do think that a lot of conversations about hiring and discrimination conflate the two different issues.
People’s perspectives vary, of course, but among my own social groups and peers “discrimination based on race/sex/etc. = bad” is widely accepted.
Employment is full of laws, but even in situations where there isn’t any legal issue (such as inviting friends over for a movie party, or organizing a book club) I view it as somewhat repulsive to include/exclude people based on gender/race/religion/etc. Details matter a lot, and I can think of exceptions, but that is more or less my starting point.
I’ve heard the phrase “genetic lottery,” and I suspect genes to contribute a lot to academic/career success. But lots of other things outside a person’s control affect how well they perform: being born in a particular place, how good your high school teachers were, stability of the household, if your parents had much money, and all the other things that we can roughly describe as “fortune” or “luck” or “happenstance.”
I know lots of people with lots of dispositions experience friction with just declining their parents' religions, but that doesn't mean I "get it"; i.e., conflating religion with birth lotteries and immutability seems a little unhinged to me.
There may be a consensus that it's low status (or maybe illegal, or whatever) to say out loud "we only hire Harvard alums", but there's not a lot of pressure to actually try reducing implicit selection effects that end up in effect quite similar to a hardline rule. And I think Harvard undergrad admissions have way more in common with lotteries than religion does!
I think the old sequencesy sort of “being bad at metaphysics (rejecting reductionism) is a predictor of unclear thinking” is fine! The better response to that is “come on, no one’s actually talking about literal belief in literal gods, they’re moreso saying that the social technologies are valuable or they’re uncomfortable just not stewarding their ancestors’ traditions” than like a DEI argument.
There is more to get into here but two main things:
I guess some EAs, and some who I think do really good work do literally believe in literal gods
I don't actually think this is that predictive. I know some theists who are great at thinking carefully and many atheists who aren't. I reckon I could distinguish the two in a discussion better than by rejecting the former out of hand.
Some feedback on this post: this part was confusing. I assume that what this person said was something like “I think a religious person would probably be harder to work with because of X”, or “I think a religious person would be less likely to have trait Y”, rather than “religious people are worse at jobs”.
The specifics aren’t very important here, since the reasons not to discriminate against people for traits unrelated to their qualifications[1] are collectively overwhelming. But the lack of specifics made me think to myself: “is that actually what they said?”. It also made it hard to understand the context of your counterarguments, since there weren’t any arguments to counter.
Religion can sometimes be a relevant qualification, of course; if my childhood synagogue hired a Christian rabbi, I’d have some questions. But I assume that’s not what the anecdotal person was thinking about.
The person who was told this was me, and the person I was talking to straight up told me he’d be less likely to hire Christians because they’re less likely to be intelligent
Please don’t assume that EAs don’t actually say outrageously offensive things—they really do sometimes!
Edit: A friend told me I should clarify this was a teenage edgelord—I don’t want people to assume this kind of thing gets said all the time!
And since posting this I’ve said this to several people and 1 was like “yeah no I would downrate religious people too”
I think a poll on this could be pretty uncomfortable reading. If you don’t, run it and see.
Put it another way: would EAs discriminate against people who believe in astrology? I imagine more than the base rate. Part of me agrees with that, part of me thinks it's norm-harming to do. But I don't think this one is "less than the population".
That’s exactly what I mean!
“I think religious people are less likely to have trait Y” was one form I thought that comment might have taken, and it turns out “trait Y” was “intelligence”.
Now that I’ve heard this detail, it’s easier to understand what misguided ideas were going through the speaker’s mind. I’m less confused now.
“Religious people are bad at jobs” sounds to me like “chewing gum is dangerous” — my reaction is “What are you talking about? That sounds wrong, and also… huh?”
By comparison, “religious people are less intelligent” sounds to me like “chewing gum is poisonous” — it’s easier to parse that statement, and compare it to my experience of the world, because it’s more specific.
*****
As an aside: I spend a lot of time on Twitter. My former job was running the EA Forum. I would never assume that any group has zero members who say offensive things, including EA.
I think the strongest reason to not do anything that even remotely looks like employer discrimination based on religion is that it’s illegal, at least for the US, UK, and European Union countries, which likely jointly encompasses >90% of employers in EA.
(I wouldn’t be surprised if this is true for most other countries as well, these are just the ones I checked).
There’s also the fact that, as a society and subject to certain exceptions, we’ve decided that employers shouldn’t be using an employee’s religious beliefs or lack thereof as an assessment factor in hiring. I think that’s a good rule from a rule-utilitarian framework. And we can’t allow people to utilize their assumptions about theists, non-theists, or particular theists in hiring without the rule breaking down.
The exceptions generally revolve around personal/family autonomy or expressive association, which don’t seem to be in play in the situation you describe.
I think that I generally agree with what you are suggesting/proposing, but there are all kinds of tricky complications. The first thing that jumps to my mind is that sometimes hiring the person who seems most likely to do the best job ends up having a disparate impact, even if there was no disparate treatment. This is not a counterargument, of course, but more so a reminder that you can do everything really well and still end up with a very skewed workforce.
I generally agree with the meritocratic perspective. It seems a good way (maybe the best?) to avoid tit-for-tat cycles of “those holding views popular in some context abuse power → those who don’t like the fact that power was abused retaliate in other contexts → in those other contexts, holding those views results in being harmed by people in those other contexts who abuse power”.
Good point about the priors. Strong priors about these things seem linked to seeing groups as monoliths with little within-group variance in ability. Accounting for the size of variance seems under-appreciated in general. E.g., if you’ve attended multiple universities, you might notice that there’s a lot of overlap between people’s “impressiveness”, despite differences in official university rankings. People could try to be less confused by thinking in terms of mean/median, variance, and distributions of ability/traits more, rather than comparing groups by their point estimates.
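To make the overlap point concrete, here's a toy simulation; the group means, spread, and sample sizes below are all invented for illustration, not estimates of any real group:

```python
import random

random.seed(0)

# Made-up numbers: two groups with a small difference in mean "ability"
# relative to the within-group spread.
group_a = [random.gauss(100, 15) for _ in range(100_000)]
group_b = [random.gauss(103, 15) for _ in range(100_000)]

# How often does a random member of the lower-mean group outperform
# a random member of the higher-mean group?
wins = sum(a > b for a, b in zip(group_a, group_b))
print(f"A random Group A member beats a random Group B member {wins / len(group_a):.0%} of the time")
# With these invented parameters it comes out around 44%, i.e. close to a coin flip,
# so a work test or interview will swamp a prior this weak.
```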
Some counter-considerations:
Religion and race seem quite different. Religion seems to come with a bunch of normative and descriptive beliefs that could affect job performance—especially in EA—and you can’t easily find out about those beliefs in a job interview. You could go from one religion to another, from no religion to some religion, or some religion to no religion. The (non)existence of that process might give you valuable information about how that person thinks about/reflects on things and whether you consider that to be good thinking/reflection.
For example, from an irreligious perspective, it might be considered evidence of poor thinking if a candidate thinks the world will end in ways consistent with those described in the Book of Revelation, or thinks that we're less likely to be in a simulation because a benevolent, omnipotent being wouldn't allow that to happen to us.
Anecdotally, on average, I find that people who have gone through the process of abandoning the religion they were raised with, especially at a young age, to be more truth-seeking and less influenced by popular, but not necessarily true, views.
Religion seems to cover too much. Some forms of it seem to offer immunity to act in certain ways, and the opportunity to cheaply attack others if they disagree with it. In other communities, religion might be used to justify poor material/physical treatment of some groups of people, e.g. women and gay people. While I don't think being accepting of those religions will change the EA community too much, it does say something to/negatively affect the wider world if there's sufficient buy-in/enough of an alliance/enough comfort with them.
But yeah, generally, sticking to the Schelling point of “don’t discriminate by religion (or lack-thereof)” seems good. Also, if someone is religious and in EA (i.e., being in an environment that doesn’t have too many people who think like them), it’s probably good evidence that they really want to do good and are willing to cooperate with others to do so, despite being different in important ways. It seems a shame to lose them.
Oh, another thought. (sorry for taking up so much space!) Sometimes something looks really icky, such as evaluating a candidate via religion, but is actually just standing in for a different trait. We care about A, and B is somewhat predictive of A, and A is really hard to measure, then maybe people sometimes use B as a rough proxy for A.
I think that this is sometimes used as the justification for sexism/racism/etc, where the old-school racist might say “I want a worker who is A, and B people are generally not A.” If the relationship between A and B is non-existent or fairly weak, then we would call this person out for discriminating unfairly. But now I’m starting to think of what we should do if there really is a correlation between A and B (such as sex and physical strength). That is what tends to happen if a candidate is asked to do an assessment that seems to have nothing to do with the job, such as clicking on animations of colored balloons: it appears to have nothing to do with the job, but it actually measures X, which is correlated with Y, which predicts on-the-job success.
I’d rather be evaluated as an individual than as a member of a group, and I suspect that in-group variation is greater than between-group variation, echoing what you wrote about the priors being weak.
You don’t need to apologise for taking up space! It’s a short form, write what you like.
I think EAs have a bit of an entitlement problem.
Sometimes we think that since we are good we can ignore the rules. Seems bad.
As with many statements people make about people in EA, I think you’ve identified something that is true about humans in general.
I think it applies less to the average person in EA than to the average human. I think people in EA are more morally scrupulous and prone to feeling guilty/insufficiently moral than the average person, and I suspect you would agree with me given other things you’ve written. (But let me know if that’s wrong!)
I find statements of the type “sometimes we are X” to be largely uninformative when “X” is a part of human nature.
Compare “sometimes people in EA are materialistic and want to buy too many nice things for themselves; EA has a materialism problem” — I’m sure there are people in EA like this, and perhaps this condition could be a “problem” for them. But I don’t think people would learn very much about EA from the aforementioned statements, because they are also true of almost every group of people.
I sense that it's good to publicly name serial harassers who have been kicked out of the community, even if the accuser doesn't want them to be. Other people's feelings matter too, and I sense many people would like to know who they are.
I think there is a difference between different outcomes, but if you’ve been banned from EA events then you are almost certainly someone I don’t want to invite to parties etc.
I would appreciate being able to vote on forum articles with both agree/disagree and upvote/downvote.
There are lots of posts I think are false but interesting, or true but boring.
Relative Value Widget
It gives you sets of donations and you have to choose which you prefer. If you want you can add more at the bottom.
https://allourideas.org/manifund-relative-value
This is neat, kudos!
I imagine it might be feasible to later add probability distributions, though that might unnecessarily slow people down.
Also, some analysis would likely be able to generate a relative value function, after which you could do the resulting visualizations and similar.
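I don't know what allourideas.org actually does under the hood, but as a minimal sketch (all the options and votes below are made up), a Bradley-Terry-style fit over the pairwise choices could turn them into relative values:

```python
import math

# Made-up pairwise votes: each tuple means the first option was preferred.
choices = [
    ("$10k to Charity A", "$10k to Charity B"),
    ("$10k to Charity A", "$5k to Charity C"),
    ("$5k to Charity C", "$10k to Charity B"),
    ("$10k to Charity B", "$10k to Charity A"),
]

options = sorted({opt for pair in choices for opt in pair})
log_value = {opt: 0.0 for opt in options}  # log relative values, all start equal

# Gradient ascent on the Bradley-Terry log-likelihood,
# where P(i beats j) = exp(v_i) / (exp(v_i) + exp(v_j)).
lr = 0.1
for _ in range(500):
    for winner, loser in choices:
        p_win = 1.0 / (1.0 + math.exp(log_value[loser] - log_value[winner]))
        log_value[winner] += lr * (1.0 - p_win)
        log_value[loser] -= lr * (1.0 - p_win)

# Report each option relative to the lowest-ranked one
baseline = min(log_value.values())
for opt, v in sorted(log_value.items(), key=lambda kv: -kv[1]):
    print(f"{opt}: {math.exp(v - baseline):.2f}x the lowest-ranked option")
```

The widget presumably does something along these lines already; the point is just that the raw pairwise data is enough to recover a ranking with relative magnitudes.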
Note I didn't build the app, I just added the choices. Do you think getting the full relative values is worth it?
Why do people give to EA funds and not just OpenPhil?
does OpenPhil accept donations? I would have guessed not
It does not. There are a small number of co-funding situations where money from other donors might flow through Open Philanthropy operated mechanisms, but it isn’t broadly possible to donate to Open Philanthropy itself (either for opex or regranting).
Lol well no wonder then. Thanks both.
Unbalanced karma is good, actually. It means that the moderators have to do less. I like the takes of the top users more than the median user's, and I want them to have more, but not total, influence.
Appeals to fairness don’t interest me—why should voting be fair?
I have more time for transparency.
A friend asked about effective places to give. He wanted to donate through his payroll in the UK. He was enthusiastic about it, but that process was not easy.
It wasn’t particularly clear whether GiveWell or EA Development Fund was better and each seemed to direct to the other in a way that felt at times sketchy.
It wasn’t clear if payroll giving was an option
He found it hard to find GiveWell’s spreadsheet of effectiveness
Feels like making donations easy should be a core concern of both GiveWell and EA Funds, and my experience made me a little embarrassed, to be honest.
EA short story competition?
Has anyone ever run a competition for EA related short stories?
Why would this be a good idea?
* Narratives resonate with people and have been used to convey ideas for 1000s of years
* It would be low cost and fun
* Using voting on this forum there is the same risk of “bad posts” as for any other post
How could it work?
* Stories submitted under a tag on the EA forum.
* Rated by upvotes
* Max 5000 words (I made this up, dispute it in the comments)
* If someone wants to give a reward, then there could be a prize for the highest rated
* If there is a lot of interest/quality they could be collated and even published
* Since it would be measured by upvotes it seems unlikely a destructive story would be highly rated (or as likely as any other destructive post on the forum)
Upvote if you think it’s a good idea. If it gets more than 40 karma I’ll write one.
I intend to strong downvote any article about EA that someone posts on here that they themselves have no positive takes on.
If I post an article, I have some reason I liked it, even a single line. Being critical isn't enough on its own. If someone posts an article without a single quote they like, with the implication it's a bad article, I am minded to strong downvote so that no one else has to waste their time on it.
What do you make of this post? I’ve been trying to understand the downvotes. I find it valuable in the same way that I would have found it valuable if a friend had sent me it in a DM without context, or if someone had quote tweeted it with a line like ‘Prominent YouTuber shares her take on FHI closing down’.
I find posts like this useful because it’s valuable to see what external critics are saying about EA. This helps me either a) learn from their critiques or b) rebut their critiques. Even if they are bad critiques and/or I don’t think it’s worth my time rebutting them, I think I should be aware of them because it’s valuable to understand how others perceive the movement I am connected to. I think this is the same for other Forum users. This being the case, according to the Forum’s guidance on voting, I think I should upvote them. As Lizka says here, a summary is appreciated but isn’t necessary. A requirement to include a summary or an explanation also imposes a (small) cost on the poster, thus reducing the probability they post. But I think you feel differently?
Being able to agree and disagreevote on posts feels like it might be great. Props to the forum team.
Looking forward to how it plays out! LessWrong made the intentional decision not to do it, because I thought posts are too large and have too many claims, so agreement/disagreement didn't really have much natural grounding any more; but we'll see how it goes. I am glad to have two similar forums so we can see experiments like this play out.
My hope would be that it would allow people to decouple the quality of the post and whether they agree with it or not. Hopefully people could even feel better about upvoting posts they disagreed with (although based on comments that may be optimistic).
Perhaps combined with a possible tweak in what upvoting means (as mentioned by a few people): someone mentioned we could change "how much do you like this overall" to something that moves away from basing the reaction on an emotion. I think someone suggested something like "Do you think this post adds value?" (That's just a real hack at the alternative; I'm sure there are far better ones.)
I think another option is to have reactions on a paragraph level. That would be interesting.
I guess African, Indian and Chinese voices are underrepresented in the AI governance discussion. And in the unlikely case we die, we all die, and I think it's weird that half the people who will die have no one loyal to them in the discussion.
We want AI that works for everyone, and it seems likely you want people who can represent the billions who don't currently have a loyal representative.
I’m actually more concerned about the underrepresentation of certain voices as it applies to potential adverse effects of AGI (or even near-AGI) on society that don’t involve all of us dying. In the everyone-dies scenario, I would at least be similarly situated to people from Africa, India, and China in terms of experiencing the exact same bad thing that happens. But there are potential non-fatal outcomes, like locking in current global power structures and values, that affect people from non-Western countries much differently (and more adversely) than they’d affect people like me.
Yeah, in a scenario with “nation-controlled” AGI, it’s hard to see people from the non-victor sides not ending up (at least) as second-class citizens—for a long time. The fear/lack of guarantee of not ending up like this makes cooperation on safety more difficult, and the fear also kind of makes sense? Great if governance people manage to find a way to alleviate that fear—if it’s even possible. Heck, even allies of the leading state might be worried—doesn’t feel too good to end up as a vassal state. (Added later (2023-06-02): It may be a question that comes up as AGI discussions become mainstream.)
Wouldn't rule out both Americans and Chinese outside of their respective allied territories being caught in the crossfire of a US-China AI race.
Political polarization on both sides in the US is also very scary.
Sorry, yes. I think that ideally we don’t all die. And in those situations voices loyal to representative groups seem even more important.
This strikes me as another variation of "EA has a diversity problem." Good to keep in mind that it is not just about progressive notions of inclusivity, though. There may be VERY significant consequences for the people in vast swaths of the world if a tiny group of people make decisions for all of humanity. But yeah, I also feel that it is a super weird aspect of the anarchic system (in the international relations sense of anarchy) that most of the people alive today have no one representing their interests.
It also seems to echo consistent critiques of development aid not including people in decision-making (along the lines of Ivan Illich’s To Hell with Good Intentions, or more general post-colonial narratives).
What do you mean by "have no one loyal to them" and "a loyal representative"? Are you talking about the Indian government? Or are you talking about EAs taking part in discussions, such as yourself? (In which case, who are you loyal to?)
I think that’s part of the problem.
Who is loyal to the Chinese people?
And I don't think I'm good here. I think I try to be loyal to them, but I don't know what the Chinese people want, and I think if I try to guess I'll get it wrong in some key areas.
I'm reminded of when GiveWell (I think?) asked recipients how they would trade money for children's lives, and they really fucking loved saving children's lives. If we are doing things for others' benefit we should take their weightings into account.
I notice we are great at discussing stuff but not great at coming to conclusions.
I wish the forum had a better setting for "I wrote this post and maybe people will find it interesting, but I don't want it on the front page unless they do, because that feels pretentious".
GiveDirectly has a President (Rory Stewart) paid $600k, and is hiring a Managing Director. I originally thought they had several other similar roles (because I looked on the website), but I talked to them and seemingly that is not the case. Below is the tweet that tipped me off, but I think it is just mistaken.
One could still take issue with the $600k (though I don't really).
https://twitter.com/carolinefiennes/status/1600067781226950656?s=20&t=wlF4gg_MsdIKX59Qqdvm1w
Seems in line with CEO pay for US nonprofits with >100M in budget, at least when I spot check random charities near the end of this list.
I feel confused about the president/CEO distinction however.
Is it Normal? Uncertain
A more important question for me, though, is to ask: Is it right? And is it a good idea? I think the answer to both of these is a resounding no, for a number of reasons.
- (For GiveDirectly). The premise of your entire organisation is that dollars do more good in the hands of the poor than the rich. For your organisation to then spend a huge amount of money on a CEO is arguably going against what the organisation stands for.
- Bad press for the organisation. After SBF and the Abbey etc. this shouldn’t take too much explaining
- Might reflect badly on the organisation when applying for grants
- (My personal gripe) What kind of person working to help the poorest people on earth could live with themselves earning so much, given what their organisation stands for? You have become part of the aid-industrial complex which makes inequality worse—the kind of thing GiveDirectly almost seemed to be railing against in the first place.
High NGO salaries make me angry though, so maybe this is a bit too ranty ;).
The expectation of low salaries is one of the biggest problems hobbling the nonprofit sector. It makes it incredibly difficult to hire people of the caliber you need to run a high-performance organization.
This is classic Copenhagen interpretation of ethics stuff. Someone making that kind of money as a nonprofit CEO could almost always make much more money in the private sector while receiving significantly less grief. You’re creating incentives that get us worse nonprofits and a worse world.
Thanks Will
I'm interested in the evidence behind the idea that low salaries hobble the nonprofit sector. Is there research to support this outside of the for-profit market? I'm unconvinced that higher salaries (past a certain point) would lead to a better calibre of employee in the NGO field. I would have assumed that the attractiveness of running an effective and high-profile org like GiveDirectly might be enough to attract amazing candidates regardless of salary. It would be amazing to do A/B testing, or even an RCT, on this front, but I imagine it would be hard to convince organisations to get involved in that research. Personally I think there are enough great leaders out there (especially for an org like GiveDirectly) who would happily work on 100,000 a year. The salary difference between 100k and 600k might make barely any difference at all in the pool of candidates you attract—but of course this is conjecture.
On the moral side of things, there's a difference between taking a healthy salary of 100,000 dollars a year (enough to be in the top 0.5% of earners in the world) and taking $600,000. We're not looking for a masochist to run the best orgs, just someone who appreciates the moral weight of that degree of inequality within an organisation that purports to be supporting the world's poorest.
If earning 600,000 rather than 100,000 is a strong incentive for a person running a non-profit, I probably don't want them in charge. First, I think that this kind of salary might lead someone to be less efficient with spending, both at the American base and in distant country operations. NGOs need lean operations, as they rely on year-to-year donations which are never secure—NGOs can't expect to continue high growth rates of funding year on year like good businesses. Also, leaders on high pay are probably likely to feel morally obligated to pay other admin staff more because of their own salary, rather than maximising the amount of money given directly to the poorest.
It may also affect the whole ethos of the organisation and respect from other staff, especially in places like Kenya where staff will be getting paid far, far less. Imagine you are earning a decent local wage in Kenya which is still 100x less than your boss's in America: motivating yourself to do your job well becomes difficult. I've seen this personally in organisations here in Uganda where Western bosses earn far higher salaries. Local staff see the injustice within their own system and then can't get on board with the vision of the organisation. This kind of salary inequality is likely to affect organisational morale.
I’ve always thought the salaries of chief executives of various countries may provide an external vantage point on the reasonableness of charity-executive salaries. They tend to top out at 400K USD: https://en.wikipedia.org/wiki/List_of_salaries_of_heads_of_state_and_government.
At least in the US, Cabinet members, judges, senior career civil servants, and state governors tend to make on average half that. I have heard of some people who would be good federal judges, mainly at the district-court level, turning down nominations because they couldn’t stomach the 85-90% pay cut from being a big-firm partner. The quality of some of these senior political and judicial leaders varies . . . but I don’t think money is the real limiting factor in US leader quality. That is, I don’t get the sense that the US would generally have better leaders if the salaries at the top were doubled or tripled.
The non-salary “benefits” and costs of working at high levels in the government are different from the non-salary “benefits” and costs of working for a non-profit. But I think they differ in ways that some people would prefer the former over the latter (or vice versa).
In other words, a belief that charities should offer their senior leaders a significantly higher salary than senior leaders in world and regional governments potentially implies that almost every developed democracy in the world should be paying their senior leaders and civil servants significantly more than they do. Maybe they should?
I don't have a firm opinion on salaries for charitable senior officials, but I think Nick is right insofar as high salaries can cause donor disillusionment and loss of morale within the organization. So while I'm willing to start with a presumption that government-comparable salaries for mid-level+ staff are appropriate (because they have been tested by the crucible of the democratic process), it's reasonable to ask for evidence that significantly higher salaries improve organizational effectiveness for non-profits.
I talked to someone there and they pointed out that Stewart hasn’t taken his salary yet, so it’s not clear that he will take all of it.
Thanks Nathan. That's a nice potential gesture (and potentially a retrospective PR move). But it doesn't answer all my criticisms ;).
I dislike the framing of “considerable” and “high engagement” on the EA survey.
To me, "considerably engaged" EA people (as the survey describes them) are doing a lot. Their median donation is $1000. They have "engaged extensively" and "often consider the principles of effective altruism". To me, they seem "highly engaged" in EA.
I've met people who are giving quite a lot of money, who have perhaps tried applying to EA jobs and not succeeded. And yet they are not allowed to consider themselves "highly engaged". I guess this leads to them feeling disillusioned. It risks creating a privileged class of those who can get jobs at EA orgs and those who can't. What about those who think they are doing an EA job, but it's not at an EA-aligned organisation? It seems wrong to me that they can't consider themselves highly engaged.
I would prefer:
“Considerable engagement” → “high engagement”
“High engagement” → “maximum engagement”
And I would prefer the text read as follows:
High (previously considerable) engagement: I’ve engaged extensively with effective altruism content (e.g. attending an EA Global conference, applying for career coaching, or organizing an EA meetup). I often consider the principles of effective altruism when I make decisions about my career or charitable donations, but they are not the biggest factor to me.
Maximum (previously high) engagement: I am deeply involved in the effective altruism community. Perhaps I have chosen my career using the principles of effective altruism. I might be earning to give, helping to lead an EA group, or working at an EA-aligned organization. Maybe I tried for several years to gain such a career but have since moved to a plan B or Z. Regardless, I make my career or resource decisions on a primarily effective altruist basis.
It's a bit rough, but I think it allows people who are earning to give or deeply involved with the community to say they are maximally engaged, and those who are highly engaged to put a 4 without shame. Feel free to put your own drafts in the comments.
Currently, the idea that someone could be earning to give, donating $10,000s per year and perhaps still not consider themself highly engaged in EA seems like a flaw.
I think this is part of a more general problem where people say things like "I'm not totally EA" when they donate 1%+ of their income and are trying hard. Why create a club where so many are insecure about their membership?
I can’t speak for everyone, but if you donate even 1% of your income to charities which you think are effective, you’re EA in my book.
It is one of my deepest hopes, and one of my goals for my own work at CEA, that people who try hard and donate feel like they are certainly, absolutely a part of the movement. I think this is determined by lots of things, including:
The existence of good public conversations about donations, cause prioritization, etc., where anyone can contribute
The frequency of interesting news and stories about EA-related initiatives that make people feel happy about the progress their “team” is making
I hope that the EA Survey’s categories are a tiny speck compared to these.
Thanks for providing a detailed suggestion to go with this critique!
While I’m part of the team that puts together the EA Survey, I’m only answering for myself here.
People can consider themselves anything they want! It’s okay! You’re allowed! I hope that a single question on the survey isn’t causing major changes to how people self-identify. If this is happening, it implies a side-effect the Survey wasn’t meant to have.
Have you met people who specifically cited the survey (or some other place the question has showed up — I think CEA might have used it before?) as a source of disillusionment?
I’m not sure I understand why people would so strongly prefer being in a “highly engaged” category vs. a “considerably engaged” category if those categories occupy the same relative position on a list. Especially since people don’t use that language to describe themselves, in my experience. But I could easily be missing something.
I want someone who earns-to-give (at any salary) to feel comfortable saying “EA is a big part of my life, and I’m closely involved in the community”. But I don’t think this should determine how the EA Survey splits up its categories on this question, and vice-versa.
*****
One change I’d happily make would be changing “EA-aligned organization” to “impact-focused career” or something like that. But I do think it’s reasonable for the survey to be able to analyze the small group of people whose professional lives are closely tied to the movement, and who spend thousands of hours per year on EA-related work rather than hundreds.
(Similarly, in a survey about the climate movement, it would seem reasonable to have one answer aimed at full-time paid employees and one answer aimed at extremely active volunteers/donors. Both of those groups are obviously critical to the movement, but their answers have different implications.)
Earning-to-give is a tricky category. I think it’s a matter of degree, like the difference between “involved volunteer/group member” and “full-time employee/group organizer”. Someone who spends ~50 hours/year trying to allocate $10,000 is doing something extraordinary with their life, and EA having a big community of people like this is excellent, but I’d still like to be able to separate “active members of Giving What We Can” from “the few dozen people who do something like full-time grantmaking or employ people to do this for them”.
*****
Put another way: Before I joined CEA, I was an active GWWC member, read a lot of EA-related articles, did some contract work for MIRI/CFAR, and went to my local EA meetups. I’d been rejected from multiple EA roles and decided to pursue another path (I didn’t think it was likely I’d get an EA job until months later).
I was pretty engaged at this point, but the nature of my engagement now that I work for CEA is qualitatively different. The opinions of Aaron!2018 should mean something different to community leaders than the opinions of Aaron!2021 — they aren’t necessarily “less important” (I think Aaron!2018 would have a better perspective on certain issues than I do now, blinded as I am by constant exposure to everything), but they are “different”.
*****
All that said, maybe the right answer is to do away with this question and create clusters of respondents who fit certain criteria, after the fact, rather than having people self-define. e.g. “if two of A, B, or C are true, choose category X”.
It's possible that this question is meant to measure something about non-monetary contribution size, not engagement. In which case, say that.
Call it "non-financial contribution", and put 4 as "I volunteer more than X hours" and 5 as "I work on a cause area directly or have taken a below-market-rate job".
I've said that people voting anonymously is good, and I still think so, but when I have people downvoting me for appreciating little jokes that other people post on my shortform, I think we've become grumpy.
Completely agree, I would love humour to be more appreciated on the forum. Rarely does a joke slip through appreciated/unpunished.
In my experience, this forum seems kinda hostile to attempts at humour (outside of april fools day). This might be a contributing factor to the relatively low population here!
I get that, though it feels like shortforms should be a bit looser.
haha whenever I try humour / sarcasm I get shot directly into the sun.
Does anyone understand the bottlenecks to a rapid malaria vaccine rollout? Feels underrated.
Best sense of what's going on (my info's second-hand) is it would cost ~$600M to buy and distribute all of Serum Institute's supply (>120M doses × $3.90/dose + ~$1/dose distribution cost), and GAVI doesn't have any new money to do so. So they're possibly resistant to moving quickly, which may be slowing down the WHO prequalification process, which is a gating item for the vaccine being put in vials and purchased by GAVI (via UNICEF). The natural solution for funding is for Gates to lead an effort to do so, but they are heavy supporters of the RTS,S malaria vaccine, so it's awkward for them to put major support into the new R21 vaccine, which can be produced in large quantity. Also, the person most associated with R21 is Adrian Hill, who is not well-liked in the malaria field. There will also be major logistical hurdles to getting it distributed in the countries, and there are a number of bureaucracies internal to each of the countries that will all need to cooperate.
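Rough sanity check on that ~$600M figure, rounding the reported numbers:

$$120\text{M doses} \times (\$3.90 + \$1.00)/\text{dose} \approx \$588\text{M} \approx \$600\text{M}$$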
Here's an op-ed from my colleague Zach: https://foreignpolicy.com/2023/12/08/new-malaria-vaccine-africa-world-health-organization-child-mortality/
Here's one from Peter Singer: https://www.project-syndicate.org/commentary/new-low-cost-malaria-vaccine-could-save-millions-by-peter-singer-2023-12
Here’s an FAQ with more info—https://docs.google.com/document/d/1mgeU-efHzs83lQNR3ma3qvFd1xqvZJQSp-ut1VEPAUY/edit
Pretend that you’re a Texan vaccine distributor.
You have the facilities to produce it en masse. Once given out it will no longer make a profit, so there's no incentive to build a factory, but you're an EA through and through, so you build the thing you need and make the doses. Now you have doses in a warehouse somewhere.
You have to take the vaccine all over the admittedly large state, but with a good set of roads and railroads, this is an easily solvable problem, right?
You have a pile of vaccine, potentially connections with Texan hospitals who thankfully ALL speak English and you have the funding from your company to send people to distribute the vaccine.
There may or may not be a cold chain needed, so you might need refrigerated trucks, but this is a solvable problem, right? Cold-chain trucks can't be that much more expensive than regular trucks?
So you go out and you start directing the largest portion of vaccines to go to the large cities and health departments, just to reach your 29 million people that you’re trying to hit. You pay a good salary to your logisticians and drivers to get the vaccines where they need to go.
In a few days, you’re able to effectively get a large chunk of your doses to where they need to go, but now you run into the problem of last mile logistics, where you need to get a dose to a person.
That means that the public has to get the message that this is available for them, where they can find it, and how they can get it. God forbid there be a party trying to PSYOP that your vaccine causes malarial cancer or something, because that would be a problem.
You'll have your early adopters, sure, but after some time the people who will follow prudent public health measures will drop off and the lines will be empty.
You'll still have 14 million doses left (and have they been properly stored?). This is of course accounting for the number of Texans who just won't get a vaccine or are perhaps too young.
So you appeal to the state government to pass a law that all 8th graders need to have this once-in-a-lifetime vaccine, and in a miracle, they make it a law. You move the needle a little bit. 7.5 million Texans are under 18, but those might be the easiest to get, as they're actively interacting with the government at least in the capacity of education.
And as you might guess, this isn’t about Texas. This is every country.
FWIW I reached out to someone involved in this at a high level a few months ago to see if there was a potential project here. They said the problem was “persuading WHO to accelerate a fairly logistically complex process”. It didn’t seem like there were many opportunities to turn money or time into impact so I didn’t pursue anything further.
There’s a few I know of:
For the new R21 vaccine, WHO is currently conducting prequalification of the production facilities. As far as I understand, African governments have to wait for prequalification to finish before they can apply for subsidized procurement and rollout through UNICEF and GAVI.
For both RTS,S and R21, there are some logistical difficulties due to the vaccines' 4-dose schedule (the first three doses are 1 month apart, which doesn't fit all too well into existing vaccination schedules), cold-chain requirements, and timing peak immunity with the seasonality of malaria.
Lastly, since there already exist cost-effective countermeasures, it's unclear how to balance new vaccine efforts against existing measures.
Seems worth considering that
A) EA has a number of characteristics of a "High Demand Group" (cult). This is a red flag and you should wrestle with it yourself.
B) Many of the “Sort of”s are peer pressure. You don’t have to do these things. And if you don’t want to, don’t!
In what sense is it “sort of” true that members need to get permission from leaders to date, change jobs, or marry?
I think there is starting to be social pressure on who to date. And there has been social pressure for which jobs to take for a while.
I think that one’s a reach, tbh.
(I also think the one about using guilt to control is a stretch.)
My call: EA gets 3.9 out of 14 possible cult points.
1. No
2. Yes (+1)
3. Partial (+0.8)
4. No
5. No
6. No
7. Partial (+0.5)
8. Very weak (+0.1)
9. No
10. Partial (+0.5)
11. No
12. No
13. Yes (+1)
14. No
I think you may have very high standards? By these standards, I don’t think there are any communities at all that would score 0 here.
~
I was not aware of “What would SBF do” stickers. Hopefully those people feel really dumb now. I definitely know about EY hero worship but I was going to count that towards a separate rationalist/LW cult count instead of the EA cult count.
I think where we differ is that I’m not making a comparison of whether EA is worse than this compared to other groups, if every group scores in the range of 0.5-1 I’ll still score 0.5 as 0.5, and not scale 0.5 down to 0 and 0.75 down to 0.5. Maybe that’s the wrong way to approach it but I think the least culty organization can still have cult-like tendencies, instead of being 0 by definition.
Also, if it's true that someone working at GPI was facing these pressures from "senior scholars in the field", then that does seem like reason for others to worry. There has also been a lot of discussion on the forum about the types of critiques that seem acceptable and the ones that don't, etc. Your colleague also seems to believe this is a concern, for example, so I'm currently inclined to think that 0.2 is pretty reasonable and I don't think I should update much based on your comment, but happy for more pushback!
I think the criterion about the group being elitist, claiming a special exalted status, and being on a special mission to save humanity has to get more than 0.2, right? Being elitist and on a special mission to save humanity is a concerningly good descriptor of at least a decent chunk of EA.
Ok updated to 0.5. I think “the leader is considered the Messiah or an avatar” being false is fairly important.
>> The group teaches or implies that its supposedly exalted ends justify means that members would have considered unethical before joining the group (for example: collecting money for bogus charities).
> Partial (+0.5)
This seems too high to me, I think 0.25 at most. We’re pretty strong on “the ends don’t justify the means”.
>>The leadership induces guilt feelings in members in order to control them.
> No
This on the other hand deserves at least 0.25...
I don’t think it makes sense to say that the group is “preoccupied with making money”. I expect that there’s been less focus on this in EA than in other groups, although not necessarily due to any virtue, but rather because of how lucky we have been in having access to funding.
Nuclear risk is in the news. I hope:
- if you are an expert on nuclear risk, you are shopping around for interviews and comment
- if you are an EA org that talks about nuclear risk, you are going to publish at least one article on how the current crisis relates to nuclear risk or find an article that you like and share it
- if you are an EA aligned journalist, you are looking to write an article on nuclear risk and concrete actions we can take to reduce it
Factional infighting
[epistemic status—low, probably some elements are wrong]
tl;dr
- communities have a range of dispute resolution mechanisms, ranging from voting to public conflict to some kind of civil war
- some of these are much better than others
- EA has disputes and resources and it seems likely that there will be a high profile conflict at some point
- What mechanisms could we put in place to handle that conflict constructively and in a positive sum way?
When a community grows as powerful as EA is, there can be disagreements about resource allocation. In EA these are likely to be significant.
There are EAs who think that the most effective cause area is AI safety. There are EAs who think it’s global dev. These people do not agree, though there can be ways to coordinate between them.
The spat between GiveWell and GiveDirectly is the beginning of this. Once there are disagreements on the scale of tens of millions of dollars, some of that is gonna be sorted out over Twitter. People may badmouth each other and damage the reputation of EA as a whole.
The way around this is to make solving problems easier than creating them. As in a political coalition, people need to have more benefits from being inside the movement than outside it.
The EA forum already does good work here, allowing everyone to upvote posts they like.
Here are some other power sharing mechanisms:
- a fund where people can vote on cause areas, expected value, or moral weights, so that it moves based on the community's values as a whole
- a focus on “we disagree, but we respect” looking at how different parts of the community disagree but respect the effort of others
- a clear mechanism of bargains, where animal EAs donate to longtermist charities in exchange for longtermists going vegan, and vice versa
- some videos from key figures from different parts discussing their disagreements in a kind and human way
- "I would change if": a series of posts from people saying what would make them work on different cause areas. How cheap would chicken welfare have to be before Yudkowsky moved to work on it? How cheap would AI safety have to be before it became Singer's key talking point?
Call me a pessimist, but I can't see how a community managing $50Bn across deeply divided priorities will stay chummy without proper dispute resolution systems. And I suggest we should start building them now.
By and large I think this aspect is going surprisingly well, largely because people have adopted a “disagree but respect” ethos.
I’m a bit unsure of such a fund—I guess that would pit different cause areas against each other more directly, which could be a conflict framing.
Regarding the mechanism of bargains, it’s a bit unclear to me what problem that solves.
EA infrastructure idea: Best Public Forecaster Award
Gather all public forecasting track records
Present them in an easily navigable form
Award prizes, e.g. one for the best Brier score among forecasts resolving in the last year (rough scoring sketch below)
If this gets more than 20 karma, I’ll write a full post on it. This is rough.
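For concreteness, the scoring I have in mind is roughly this (a minimal sketch; the pundits, probabilities, and outcomes below are made up):

```python
# Hypothetical scoring sketch: given a pundit's probabilistic forecasts and how
# the questions resolved, compute a Brier score (lower is better).
# In practice you'd restrict to forecasts that resolved in the past year.
forecasts = {
    "Pundit X": [(0.7, 1), (0.2, 0), (0.9, 1)],   # (stated probability, outcome)
    "Pundit Y": [(0.5, 1), (0.5, 0), (0.5, 1)],
}

def brier(pairs):
    # Mean squared distance between stated probability and what happened
    return sum((p - outcome) ** 2 for p, outcome in pairs) / len(pairs)

# Rank from best (lowest) to worst (highest) Brier score
for name, pairs in sorted(forecasts.items(), key=lambda kv: brier(kv[1])):
    print(f"{name}: Brier score {brier(pairs):.3f} over {len(pairs)} resolved forecasts")
```

Lower is better; always answering 50% scores 0.25, so anything well below that is a meaningful signal.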
Questions that come to mind
Where would we find these forecasts
To begin with I would look at those with public records:
Scott Alexander
Bryan Caplan
Matthew Yglesias
Many such cases
Beyond these, one could build a community around finding forecasts of public figures. Alternatively, I guess GPT-3 has a good shot of being able to turn verbal forecasts into data which could then be checked.
What’s the impact
I’m only gonna sketch my argument here. As above, if this gets 20 karma I’ll write a full post (but only upvote if it’s good, let’s not waste any of our time).
We seem to think forecasting improves the accuracy of commentators
If we could build a high-status award for forecasting, more commentators would hear about it and it would serve as a nudge for others to make their forecasts more visible
I am confident this would lead to better commentary (this seems arrogant, but honestly the people I know who forecast more are more epistemically humble—I think celebrities could really benefit from more humility about their predictions)
Better commentary leads to better outcomes. Effective Altruism implicitly holds that many people have priority orderings that don’t match reality. The world at large underrates the best charities, the chance of biorisk, etc. More accurate journalism would be more accurate about these things too, which would be a massive win
Wouldn’t the winners just be superforecasters
Not currently. I don’t think it’s too hard to make pretty robust boundaries on what a public figure is. Most superforecasters are not well enough known (and sorry to the 5 EAs I can count in metaculus’ top 50). But Yglesias is well known enough. Scott Alexander, I’m less sure but I think we could come up with some minimum amount of hits, followers, etc for someone to be eligible.
How much resource would this take
Depends on a couple of things (I have pulled these numbers out of thin air, so please criticise them):
Who is giving this award its prestige? If it’s a lot of money, fine. If it’s an existing org, then it’s cheaper ($0–$50k)
How deeply are we looking? I think you could pay someone $50k to find, say, 100 public sets of forecasts and maybe another $10k to make a nice website. If you want to scrape twitter using GPT-3 or crowdsource, that’s maybe another $50–100k
Is there an award ceremony? If so I imagine that costs as much as a wedding, so maybe $10k
That looks like $60k–$220k
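A quick sanity check on that total, using the component guesses above (the scraping and ceremony items drop to zero at the low end). These are the same thin-air numbers, just summed.

```python
# Sanity check on the budget range above (figures in $k, from the
# guesses in the post, which are explicitly pulled from thin air).
components = {
    "prestige / partner org": (0, 50),
    "finding ~100 public forecast sets": (50, 50),
    "website": (10, 10),
    "scraping/crowdsourcing (optional)": (0, 100),
    "award ceremony (optional)": (0, 10),
}
low = sum(lo for lo, _ in components.values())
high = sum(hi for _, hi in components.values())
print(f"${low}k - ${high}k")  # $60k - $220k
```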
If this failed, why did it fail?
It got embroiled in controversy over who was included
It was attached to some existing EA org and reflected badly on them
It became a niche award that no one changed their behaviour based on
I want to say thanks to people involved in the EA endeavour. I know things can be tough at times, and you didn’t have to care about this stuff, but you do. Thank you, it means a lot to me. Let’s make the world better!
Joe Rogan (the largest podcaster in the world) repeatedly giving concerned but mediocre x-risk explanations suggests that people who have contacts with him should try and get someone on the show to talk about it.
eg listen from 2:40:00, though there were several bits like this during the show.
I’ve been musing about some critiques of EA and one I like is “what’s the biggest thing that we are missing?”
In general, I don’t think we are missing things (lol) but here are my top picks:
It seems possible that we reach out to sciency tech people because they are most similar to us. While this may genuinely be the cheapest way to get people now, there may be costs to the community in terms of diversity of thought (most sci/tech people are more similar to each other than the general population is)
I’m glad to see more outreach to people in developing nations
It seems obvious that science/tech people have the most to contribute to AI safety, but.. maybe not?
Also science/tech people have a particular racial/gender makeup and there is the hidden assumption that there isn’t an effective way to reach a broader group. (Personally I hope that a load of resources in India, Nigeria, Brazil, etc will go some way here, but I dunno, it still feels like a legitimate question)
People are scared of what the future might look like if it is only in the view of MacAskill/Bostrom/SBF. Yeah, in fact my (poor) model of MacAskill is scared of this too. But we can surface more that we wish we had a larger group making these decisions too.
We could build better ways for outsiders to feed into decisionmaking. I read a piece about the effectiveness of community vegan meals being underrated in EA. Now I’m not saying it should be funded, but I was surprised to read some of these conferences are 5000+ people (iirc). Maybe that genuinely is an oversight. But it’s really hard for high signal information to get to decisionmakers. That really is a problem we could work on. If it’s hard for people who speak EA-ese, how much harder is it for those who speak different community languages, whose concepts seem frustrating to us.
More likely to me is a scenario of diminishing returns. Ie, tech people might be the most important to first order, but there are already a lot of brilliant tech people working on the problem, so one more won’t make much of a difference. Whereas a few brilliant policy people could devise a regulatory scheme that penalises reckless AI deployment, etc, making more difference on the margin.
+1 for policy people
I would like to see posts give you more karma than comments (which would hit me hard). Seems like a highly upvoted post is waaaaay more valuable than 3 upvoted comments on that post, but it’s pretty often the latter that gives more karma than the former.
Sometimes comments are better, but I think I agree they shouldn’t be worth exactly the same.
People might also have a lower bar for upvoting comments.
There you go, 3 mana. Easy peasy.
simple first step would be showing both separately like Reddit
You can see them separately, but it’s how they combine that matters.
I know you can figure them out, but I don’t see them presented separately on users pages. Am I missing something? Is it shown on the website somewhere?
They aren’t currently shown separately anywhere. I added it to the ForumMagnum feature-ideas repo but not sure whether we’ll wind up doing it.
They are shown separately here: https://eaforum.issarice.com/userlist?sort=karma
Is there a link to vote to show interest?
There is no EA “scene” on twitter.
For good or ill, while there are posters on twitter who talk about EA, there isn’t a “scene” (a space where people use loads of EA jargon and assume everyone is EA) or at least not that I’ve seen.
This surprised me.
UK government will pay for organisations to hire 18-24 year olds who are currently unemployed, for 6 months. This includes minimum wage and national insurance.
I imagine many EA orgs are people constrained rather than funding constrained but it might be worth it.
And here is a data science org which will train them as well https://twitter.com/John_Sandall/status/1315702046440534017
Note: applications have to be for 30 jobs, but you can apply over a number of organisations or alongside a local authority etc.
https://www.gov.uk/government/collections/kickstart-scheme
Is there a way to sort shortform posts?
EA Book discount codes.
tl;dr EA books have a positive externality. The response should be to subsidise them
If EA thinks that certain books (doing good better, the precipice) have greater benefits than they seem, they could subsidise them.
There could be an EA website which has amazon coupons for EA books so that you can get them more cheaply if buying for a friend, or advertise said coupon to your friends to encourage them to buy the book.
From 5 mins of research, the current best way would be for a group to buy EA books and sell them at the list price but provide coupons as here—https://www.passionintopaychecks.com/how-to-create-single-use-amazon-coupons-promo-codes/
Alternatively, you could just sell them at the coupon price.
I think people have been taking up the model of open sourcing books (well, making them free). This has been done for [The Life You can Save](https://en.wikipedia.org/wiki/The_Life_You_Can_Save) and [Moral Uncertainty](https://www.williammacaskill.com/info-moral-uncertainty).
I think this could cost $50,000 to $300,000 or so depending on when this is done and how popular it is expected to be, but I expect it to be often worth it.
Seems that the Ebook/audiobook is free. Is that correct?
I imagine being able to give a free physical copy would have more impact.
Yes, it’s free.
I like this idea and think it’s worth you taking further. My initial reactions are:
Getting more EA books into peoples hands seems great and worth much more per book than the cost of a book.
I don’t know how much of a bottleneck the price of a book is to buying them for friends/club members. I know EA Oxford has given away many books, and I’ve also bought several for friends (and one famous person I contacted on instagram as a long shot who actually replied).
I’d therefore be interested in something which aimed to establish whether making books cheaper was a better or worse idea than just encouraging people to gift them.
John Behar/TLYCS probably have good thoughts on this.
Do you have any thoughts as to what the next step would be? It’s not obvious to me what you’d do to research the impact of this.
Perhaps have a questionnaire asking people how many people they’d give books to at different prices. Do we know the likelihood of people reading a book they are given?
Being open minded and curious is different from holding that as part of my identity.
Perhaps I never reach it. But it seems to me that “we are open minded people so we probably behave open mindedly” is false.
Or more specifically, I think that it’s good that EAs want to be open minded, but I’m not sure that we are purely because we listen graciously, run criticism contests, talk about cruxes.
The problem is the problem. And being open minded requires being open to changing one’s mind in difficult or set situations. And I don’t have a way that’s guaranteed to get us over that line.
Someone told me they don’t bet as a matter of principle. And that this means EA/Rats take their opinions less seriously as a result. Some thoughts
I respect individual EAs preferences. I regularly tell friends to do things they are excited about, to look after themselves, etc etc. If you don’t want to do something but feel you ought to, maybe think about why, but I will support you not doing it. If you have a blanket ban on gambling, fair enough. You are allowed to not do things because you don’t want to
Gambling is addictive, if you have a problem with it, don’t do it
Betting is a useful tool. I just do take opinions a bit less seriously if people don’t do the simple thing to put their money where their mouths are. And so a blanket ban is a slight cost. Imagine if I said I had a blanket ban on double cruxxing, or giving to animal welfare charities. It’s a thing I am allowed to do, but it does just seem a bit worse
To me, this seems like something else is actually going on. Perhaps it feels like “will you bet on it” is a way that certain people can twist my arm in a way that makes me feel uncomfortable? Perhaps the people who say this have been cruel to me in the past. I don’t know, but I sense there is something else going on. If you don’t bet as a blanket policy, could you tell me why?
I don’t bet because I feel it’s a slippery slope. I also strongly dislike how opinions and debates in EA are monetised, as this strengthens even more the neoliberal vibe EA already has, so my drive for refraining to do this in EA is stronger than outside.
Edit: and I too have gotten dismissed by EAs for it in the past.
I don’t want you to do something you don’t want to.
A slippery slope to what?
To gambling on anything else and taking an actual financial risk.
Yeah, I guess if you think there is a risk of gambling addiction, don’t do it.
But I don’t know that that’s a risk for many.
Also I think many of us take a financial risk by being involved in EA. We are making big financial choices.
There’s a difference between using money to help others and using it for betting?
Yes obviously, but not in the sense that you are investing resources.
Is there a difference between the financial risk of a bet and of a standard investment? Not really, no.
I don’t bet because it’s not a way to actually make money given the frictional costs to set it up, including my own ignorance about the proper procedure and having to remember it and keep enough capital for it. Ironically, people who bet in this subculture are usually cargo-culting the idea of wealth-maximization with the aesthetics of betting, with the implicit assumption that the stakes of actual money are enough to lead to more correct beliefs, when following the incentives really means not betting at all. If convenient, universal prediction markets weren’t regulated into nonexistence then I would sing a different tune.
I guess I do think “wrong beliefs should cost you” is a lot of the gains. I also think that bets being able to be at the scale of the disagreement is important, but I think that’s a much more niche view.
There are a number of possible reasons that the individual might not want to talk about publicly:
A concern about gambling being potentially addictive for them;
Being relatively risk-averse in their personal capacity (and/or believing that their risk tolerance is better deployed for more meaningful things than random bets);
Being more financially constrained than their would-be counterparts; and
Awareness of, and discomfort with, the increased power the betting norm could give people with more money.
On the third point: the bet amount that would be seen as meaningful will vary based on the person’s individual circumstances. It is emotionally tough to say—no, I don’t have much money, $10 (or whatever) would be a meaningful bet for me even though it might take $100 (or whatever) to be meaningful to you.
On the fourth point: if you have more financial resources, you can feel freer with your bets while other people need to be more constrained. That gives you more access to bet-offers as a rhetorical tool to promote your positions than people with fewer resources. It’s understandable that people with fewer resources might see that as a financial bludgeon, even if not intended as such.
I think the first one is good, the others not so much.
I think there is something else going on here.
I have yet to see anyone in the EA/rat world make a bet for sums that matter, so I really don’t take these bets very seriously. They also aren’t a great way to uncover people’s true probabilities because if you are betting for money that matters you are obviously incentivized to try to negotiate what you think are the worst possible odds for the person on the other side that they might be dumb enough to accept.
Kind of fair. I’m pretty sure I’ve seen bets in the $1000s though.
If anything… I probably take people less seriously if they do bet (not saying that’s good or bad, but just being honest), especially if there’s a bookmaker/platform taking a cut.
I think this is more about 1-1 bets.
I guess it depends if they win or lose on average. I still think knowing I barely win is useful self knowledge.
I think if I knew that I could trade “we all obey some slightly restrictive set of romance norms” for “EA becomes 50% women in the next 5 years” then that’s a trade I would advise we take.
That’s a big if. But seems trivially like the right thing to do—women do useful work and we should want more of them involved.
To say the unpopular reverse statement, if I knew that such a set of norms wouldn’t improve wellbeing in some average of women in EA and EA as a whole then I wouldn’t take the trade.
Seems worth acknowledging there are right answers here, if only we knew the outcomes of our decisions.
In defence of Will MacAskill and Nick Beckstead staying on the board of EVF
While I’ve publicly said that on priors they should be removed unless we hear arguments otherwise, I was kind of expecting someone to make those arguments. If no one will, I will.
MacAskill
MacAskill is very clever, personally kind, and a superlative networker and communicator. Imo he oversold SBF, but I guess I’d do much worse in his place. It seems to me that we should want people who have made mistakes and learned from them. Seems many EA orgs would be glad to have someone like him on the board. If anything, the question is: if we don’t want too many people duplicated across EA orgs (do we want this?), which board is it most valuable to have MacAskill on? I guess EVF?
Beckstead
Beckstead is, I sense, extremely clever (generally I find OpenPhil people to be powerhouses) and personally kind. I guess I think that he dropped the ball on running FTXFF well—feels like had they hired more people to manage ops they might have queried why money was going from strange accounts, but again I don’t know the particulars (though I want to give the benefit of the doubt here). But again, it was a complicated project and I guess he sensed that speed of ramp up was the priority. In many worlds he’d have been right.
I guess perhaps the two of them seem to have pretty similar blindspots (kind, intelligent, academic-ish EAs who scaled things really fast), so perhaps it is worth only having one on the board. Maybe it’s worth having someone who can say “hmm, that seems too odd or shifty to be worth us doing”. But this isn’t as much of a knockdown argument.
Feels like there should be some kind of community discussion and research in the wake of FTX, especially if no leadership is gonna do it. But I don’t know how that discussion would have legitimacy. I’m okay at such things, but honestly tend to fuck them up somehow. Any ideas?
If I were king
Use the ideas from all the various posts
Have a big google doc where anyone can add research and also put a comment for each idea and allow people to discuss
Then hold another post where we have a final vote on what should happen
then EA orgs can at least see some kind of community consensus
And we can see what each other think
I wrote a post on possible next steps but it got little engagement—unclear if it was a bad post or people just needed a break from the topic. On mobile, so not linking it—but it’s my only post besides shortform.
The problem as I see it is that the bulk of proposals are significantly underdeveloped, risking both applause light support and failure to update from those with skeptical priors. They are far too thin to expect leaders already dealing with the biggest legal, reputational, and fiscal crisis in EA history to do the early development work.
Thus, I wouldn’t credit a vote at this point as reflecting much more than a desire for a more detailed proposal. The problem is that it’s not reasonable to expect people to write more fleshed-out proposals for free without reason to believe the powers-that-be will adopt them.
I suggested paying people to write up a set of proposals and then voting on those. But that requires both funding and a way to winnow the proposals and select authors. I suggested modified quadratic funding as a theoretical ideal, but a jury of pro-reform posters as a more practical alternative. I thought that problem was manageable, but it is a problem. In particular, at the proposal-development stage, I didn’t want tactical voting by reform skeptics.
Strong +1 to paying people for writing concrete, actionable proposals with clear success criteria etc.—but I also think that DEI / reform is just really, really hard, and I expect relatively few people in the community to have 1) the expertise 2) the knowledge of deeper community dynamics / being able to know the current stances on things.
(meta point: really appreciate your bio Jason!)
I really liked Nate’s post and hope there can be more like it in the future.
Let’s assume that the Time article is right about the amount of sexual harassment in EA. How big a problem is this relative to other problems? If we spend $10mn on EAGs (a guess), how much should we spend if we could halve sexual harassment in the community?
The whole sexual harassment issue isn’t something that can be easily fixed with money I think. It’s more a project of changing norms and what’s acceptable within the EA community.
The issue is it seems like many folks at the top of orgs, especially in SF, have deeply divergent views from the normal day-to-day folks joining/hearing about EA. This is going to be a huge problem moving forward from a public relations standpoint IMO.
Money can’t fix everything, but it can help some stuff, like hiring professionals outside of EA and supporting survivors who fear retaliation if they choose to speak out.
I’ll sort of publicly flag that I sort of break the karma system. Like the way I like to post comments is little and often and this is just overpowered in getting karma.
eg I recently overtook Julia Wise, and I’ve been on the forum for years less than she has.
I don’t really know how to solve this—maybe someone should just 1 time nuke my karma? But yeah it’s true.
Note that I don’t do this deliberately—it’s just how I like to post and I think it’s honestly better to split up ideas into separate comments. But boy is it good at getting karma. And soooo much easier than writing posts.
https://eaforum.issarice.com/userlist?sort=karma
To modify a joke I quite liked:
I wouldn’t worry too much about the karma system. If you’re worried about having undue power in the discourse, one thing I’ve internalized is to use the strong upvote/downvote buttons very sparingly (e.g. I only strong-upvoted one post in 2022 and I think I never strong-downvoted any post, other than obvious spam).
Hey Nathan,
thank you for the ranking list. :)
I don’t think you need to start with zero karma again. The karma system is not supposed to mean very much. It is skewed towards certain aspects rather than being a true representation of your skill or trustworthiness as a user on this forum. It is more or less an xp bar for social situations and an indicator that someone posts good content here.
Let’s look at an example:
Aaron Gertler, someone held in high regard, retired from the forum, which got a lot of attention and sympathy. Many people were interested in the post, and it’s an easy topic to participate in. So many were scrolling down to the comments to write something nice and thank him for his work.
JP Addison did so too. He works for CEA and as a developer for the forum. His comment got more Karma than any post he made so far.
Karma is used in many places with different concepts behind it. The sum of it gives you no clear information. What I would think in your case: you are an active member of the forum, participate positively with only one post with negative karma. You participated in the FTX crisis discussion, which was an opportunity to gain or lose significant amounts of karma, but you survived it, probably with a good score.
Internet points can make you feel fantastic; they are a system to motivate social interaction and to follow the community norms (in positive and negative ways).
Your modesty suits you well, but there is no need for it. Stand tall. There will always be those with few points but really good content, and those who far overshoot the gems through sheer activity.
Does EA have a clearly denoted place for exit interviews? Like if someone who was previously very involved was leaving, is there a place they could say why?
The amount of content on the forum is pretty overwhelming at the moment and I wonder if there is a better way to sort it.
Question answers
When answering questions, I recommend people put each separate point as a separate answer. The karma ranking system is useful to see what people like/don’t like and having a whole load of answers together muddies the water.
EA global
1) Why is EA global space constrained? Why not just have a larger venue?
I assume there is a good reason for this which I don’t know.
2) It’s hard to invite friends to EA global. Is this deliberate?
I have a close friend who finds EA quite compelling. I figured I’d invite them to EA global. They were dissuaded by the fact they had to apply and that it would cost $400.
I know that’s not the actual price, but they didn’t know that. I reckon they might have turned up for a couple of talks. Now they probably won’t apply.
Is there no way that this event could be more welcoming or is that not the point?
Re 1) Is there a strong reason to believe that EA Global is constrained by physical space? My impression is that they try to optimize pretty hard to have a good crowd and for there to be a high density of high-quality connections to be formed there.
Re 2) I don’t think EA Global is the best way for newcomers to EA to learn about EA.
EDIT: To be clear, neither 1) nor 2) are necessarily endorsements of the choice to structure EA Global in this way, just an explanation of what I think CEA is optimizing for.
EDIT 2 2021/10/11: This explanation may be wrong, see Amy Labenz’s comment here.
Personal anecdote possibly relevant for 2): EA Global 2016 was my first EA event. Before going, I had lukewarm-ish feelings towards EA, due mostly to a combination of negative misconceptions and positive true-conceptions; I decided to go anyway somewhat on a whim, since it was right next to my hometown, and I noticed that Robin Hanson and Ed Boyden were speaking there (and I liked their academic work). The event was a huge positive update for me towards the movement, and I quickly became involved – and now I do direct EA work.
I’m not sure that a different introduction would have led to a similar outcome. The conversations and talks at EAG are just (as a general rule) much better than at local events, and reading books or online material also doesn’t strike me as naturally leading to being part of a community in the same way.
It’s possible my situation doesn’t generalize to others (perhaps I’m unusual in some way, or perhaps 2021 is different from 2016 in a crucial way such that the “EAG-first” strategy used to make sense but doesn’t anymore), and there may be other costs with having more newcomers at EAG (eg diluting the population of people more familiar with EA concepts), but I also think it’s possible my situation does generalize and that we’d be better off nudging more newcomers to come to EAG.
Hi Nathan,
Thank you for bringing this up!
1) We’d like to have a larger capacity at EA Global, and we’ve been trying to increase the number of people who can attend. Unfortunately, this year it’s been particularly difficult; we had to roll over our contract with the venue from 2020 and we are unable to use the full capacity of the venue to reduce the risk from COVID. We’re really excited that we just managed to add 300 spots (increasing capacity to 800 people), and we’re hoping to have more capacity in 2022.
There will also be an opportunity for people around the world to participate in the event online. Virtual attendees will be able to enjoy live streamed content as well as networking opportunities with other virtual attendees. More details will be published on the EA Global website the week of October 11.
2) We try to have different events that are welcoming to people who are at different points in their EA engagement. For someone earlier in their exploration of EA, the EAGx conferences are going to be a better fit. From the EA Global website:
Effective altruism conferences are a good fit for anyone who is putting EA principles into action through their donations, volunteering, or career plans. All community members, new or experienced, are welcome to apply.
EA Global: London will be selecting for highly-engaged members of the community.
EAGxPrague (3-5 December) will be more suitable for those who have less experience with effective altruism.
We’ll have lots more EAGx events in 2022, including Boston, Oxford, Singapore, and Australia, as well as EA Globals in San Francisco and London as usual. We may add additional events to this plan. The dates for those events and any additional events will go up on eaglobal.org when they’re confirmed.
In the meantime, if your friend is interested in seeing some talks, they can check out hundreds of past EA Global talks on the CEA YouTube channel.
Thanks for taking the time to answer. That all makes sense.
This perception gap site would be a good format for learning and could be used in altruism. It reframes correcting biases as a fun prediction game.
https://perceptiongap.us/
It’s a site which gets you to guess what other political groups (republicans and democrats) think about issues.
Why is it good:
1) It gets people thinking and predicting. They are asked a clear question about other groups and have to answer it.
2) It updates views in a non-patronising way—it turns out dems and repubs are much less polarised than most people think (the stat they give is that people predict 50% of repubs hold extreme views, when actually it’s 30%). But rather than yelling this, or writing an annoying listicle, it gets people’s consent and teaches them something.
3) It builds consensus. If we are actually closer to those we disagree with than we think, perhaps we could work with them.
4) It gives quick feedback. People learn best when given feedback close to the action. In this case, people are rapidly rewarded for thoughts like “probably most of X group are more similar to me than I first thought”.
Imagine:
What percentage of neocons want institutional reform?
What % of libertarians want an end to factory farming?
What % of socialists want an increase in foreign direct aid?
Conclusion
If you want to change people’s minds, don’t tell them stuff; get them to guess trustworthy values in a cutesy game.
I might start doing some policy BOTEC (back of the envelope calculation) posts, ie where I suggest an idea and try and figure out how valuable it is. I think I could do this faster with a group to bounce ideas off.
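To show the shape of the thing, here is a minimal BOTEC sketch. Every number is a made-up placeholder, not an estimate of any real policy.

```python
# Minimal policy BOTEC sketch. All numbers are placeholders chosen to
# show the shape of the calculation, not real estimates.
lobbying_cost = 2_000_000          # $ spent pushing the policy
chance_policy_passes = 0.05        # probability the push succeeds
people_affected = 10_000_000       # people covered if it passes
benefit_per_person = 3             # $-equivalent benefit per person per year
years_of_effect = 5

expected_benefit = (chance_policy_passes * people_affected
                    * benefit_per_person * years_of_effect)
print(f"Expected benefit: ${expected_benefit:,.0f}")
print(f"Benefit-cost ratio: {expected_benefit / lobbying_cost:.1f}x")
```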
If you’d like to be added to a message chat (on whatsapp probably) to share policy BOTECs then reply here or DM me.
Is EA as a bait and switch a compelling argument for it being bad?
I don’t really think so
There are a wide variety of baits and switches, from what I’d call misleading to some pretty normal activities—is it a bait and switch when churches don’t discuss their most controversial beliefs at a “bring your friends” service? What about wearing nice clothes to a first date? [1]
EA is a big movement composed of different groups[2]. Many describe it differently.
EA has done so much global health stuff I am not sure it can be described as a bait and switch. eg https://docs.google.com/spreadsheets/d/1ip7nXs7l-8sahT6ehvk2pBrlQ6Umy5IMPYStO3taaoc/edit#gid=9418963
EA is way more transparent than any comparable movement. If it is a bait and switch then it does so much more to make clear where the money goes eg (https://openbook.fyi/).
On the other hand:
I do sometimes see people describing EA too favourably or pushing an inaccurate line.
I think that transparency comes with a feature of allowing anyone to come and say “what’s going on there” and that can be very beneficial at avoiding error but also bad criticism can be too cheap.
Overall I don’t find this line that compelling. And the parts that are compelling seem largely from the past, when EA was smaller (and when perhaps it mattered less). Now that EA is big, it’s pretty clear that it cares about many different things.
Seems fine.
@Richard Y Chappell created the analogy.
@Sean_o_h argues that here.
I think that there might be something meaningfully different between wearing nice clothes to a first date (or a job interview), as opposed to intentionally not mentioning more controversial/divisive topics to newcomers. I think there is a difference between putting your best foot forward (dressing nice, grooming, explaining introductory EA principles articulately with a ‘pitch’ you have practised) and intentionally avoiding/occluding information.
For a date, I wouldn’t feel deceived/tricked if someone dressed nice. But I would feel deceived if the person intentionally withheld or hid information that they knew I would care about. (it is almost a joke that some people lie about age, weight, height, employment, and similar traits in dating).
I have to admit that I was a bit turned off (what word is appropriate for a very weak form of disgusted?) when I learned that there has long been an intentional effort in EA to funnel people from global development to long-termism within EA.
If anything, EA now has a strong public (admittedly critical) reputation for longtermist beliefs. I wouldn’t be surprised if some people have joined in order to pursue AI alignment and got confused when they found out more than half of the donations go to GHD & animal welfare.
re: fn 1, maybe my tweet?
Yes, I thought it was you but I couldn’t find it. Good analogy.
Clear benefits, diffuse harms
It is worth noting when systems introduce benefits in a few obvious ways but many small harms. An example is blocking housing. It benefits the neighbours a lot—they don’t have to have construction nearby—and the people who are harmed are just random marginal people who could have afforded a home but just can’t.
But these harms are real and should be tallied.
Much recent discussion in EA has suggested common sense risk reduction strategies which would stop clear bad behavior. Often we all agree on the clear bad behaviour.
But the risk reduction strategies would also often set norms against a range of greyer behaviour that the suggestors don’t engage in or that doesn’t seem valuable to them. If you don’t live with your coworkers, then suggesting it be normed against seems fine—it would make it hard for people to end up in weird living situations. But I know people who have loved living with coworkers. That’s a diffuse harm.
Mainly I think this involves acknowledging people are a lot weirder than you think. People want things I don’t expect them to want, they consent in business, housing and relationships to things I’d never expect them to. People are wild. And I think it’s worth there being bright lines against some kinds of behaviour that is bad or nearly always bad—I’d suggest dating your reports is ~ very unwise—but a lot is about human preferences and to understand that we need to elicit both wholesome and illicit preferences or consider harms that are diffuse.
Note that I’m not saying which way the balance of harms falls, but that both types should be counted.
I suggest there is waaaay too much to be on top of in EA and no one knows who is checking what. So some stuff goes unchecked. If there were a narrower set of “core things we study” then it seems more likely that those things would have been gone over by someone in detail, and hence fewer errors in core facts.
One of the downsides of EA being so decentralized, I guess. I’m imagining an alternative-history EA in which it was all AI alignment or all tropical disease prevention, and in those worlds the narrowing of “core things we study” would possibly result in more eyeballs on each thing.
I think we could still be better in this universe, though I have no idea how.
It is frustrating that I cannot reply to comments from the notification menu. Seems like a natural thing to be able to do.
I think the EA forum wiki should allow longer and more informative articles. I think that it would get 5x the traffic. So I’ve created a market to bet on.
I think the wiki should be about summarising and synthesising articles on this forum.
- There are lots of great articles which will be rarely reread
- Many could do with more links to each other and to other key pieces
- Many could be better edited, combined etc
- The wiki could take all content and aim to turn it into a minimal viable form of itself
Sounds interesting. Can you flesh out a bit more what this should look like, in your view?
I think that the forum wiki should focus on taking chunks of article text and editing it, rather than pointing people to articles. So take all of the articles on global dev, squish them together or shorten them.
So there would be a page on “research debt” which would contain this article and also any more text that seemed relevant, but maybe without the introduction. Then a preface on how it links to other EA topics, a link to the original article and links to ways it interacts with other EA topics. It might turn out that that page had 3 or 4 articles squished into one or was broken into 3 or 4 pages. But like Wikipedia you could then link to “research debt” and someone could easily read it.
Thanks, makes sense. I’d be interested in, e.g. Pablo’s view.
If only we had tagging.
EA criticism
[Epistemic Status: low, I think this is probably wrong, but I would like to debug it publicly]
If I have a criticism of EA along Institutional Decision Making lines, it is this:
For a movement that wants to change how decisions get made, we should make those changes in our own organisations first.
Examples of good progress:
- prizes—EA orgs have offered prizes for innovation
- voting systems—it’s good that the forum is run on upvotes and that often I think EA uses the right tool for the job in terms of voting
Things I would like to see more of:
- an organisation listening to prediction markets/polls. If we believe nations should listen to forecasting, can we make clearer which markets our orgs are looking at and listening to?
- an organisation run by prediction markets. The above but taking it further
- removing siloes in EA. If you have confidence to email random people it’s relatively easy to get stuff done, but can we lower the friction to allow good ideas to spread further?
- etc
It’s fine if we think these things will never work, but it seems weird to me that we think improvements would work elsewhere but that we don’t want them in our orgs. That’s like being NIMBY about our own suggested improvements.
Counterarguments
- these aren’t solutions people are actually arguing for. Yeah this is an okay point. But I think the seeds of them exist.
- prediction markets work in big orgs not small ones. Maybe, but isn’t it worth running one small inefficient organisation to try and learn the failure modes before we suggest this for nation states?
EA twitter bots
A set of EA jobs twitter bots which each retweet a specific set of hashtags eg #AISafety #EAJob, #AnimalSuffering #EAJob, etc etc. Please don’t get hung up on these, we’d actually need to brainstorm the right hashtags.
You follow the bots and hear about the jobs.
Rather than using Facebook as a way to collect EA jobs we should use an airtable form
1) Individuals finding jobs could put all the details in, saving time for whoever would have to do this process at 80k.
2) Airtable can post directly to facebook, so everyone would still see it https://community.airtable.com/t/posting-to-social-media-automatically/20987
3) Some people would find it quicker. Personally, I’d prefer an airtable form to inputting it to facebook manually every time.
Ideally we should find websites which often publish useful jobs and then scrape them regularly.
It would be good to easily be able to export jobs from the EA job board.
I suggest at some stage having up and downvoting of jobs would be useful.
Does anyone know people working on reforming the academic publishing process?
Coronavirus has caused journalists to look for scientific sources. There are no journal articles because of the lag time. So they have gone to preprint servers like bioRxiv (pronounced bio-archive). These servers are not peer reviewed so some articles are of low quality. So people have gone to twitter asking for experts to review the papers.
https://twitter.com/ryneches/status/1223439143503482880?s=19
This is effectively a new academic publishing paradigm. If there were support for good papers (somehow) you would have the key elements of a new, perhaps better system.
Some thoughts here too: http://physicsbuzz.physicscentral.com/2012/08/risks-and-rewards-of-arxiv-reporting.html?m=1
With Coronavirus providing a lot of impetus for change, those working in this area could find this an important time to increase visibility of their work.
HaukeHillebrandt has recommended supporting Prof Chris Chambers to do this: https://lets-fund.org/better-science/
The shifts in forum voting patterns across the EU and US seem worthy of investigation.
I’m not saying there is some conspiracy, it seems pretty obvious that EU and US EAs have different views and that appears in voting patterns but it seems like we could have more self knowledge here.
Agreed, and I think @Peter Wildeford has pointed that out in recent threads—it’s very unlikely to be a ‘conspiracy’ and much more likely that opinions and geographical locations are highly correlated. I can remember some recent comments of mine that swung from slightly upvoted to highly downvoted and back to slightly upvoted.
This might be something that the Forum team is better placed to answer, but if anyone can think of a way to try to tease this out using data on the public API let me know and I can try and investigate it
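One rough approach, sketched under stated assumptions: individual votes aren’t public, so the best an outsider can likely do is sample a post’s karma at intervals (via the API or manually) and bucket the changes by UTC hour to see whether score swings cluster in particular timezones. The CSV file and its two-column format here are hypothetical.

```python
# Look for timezone-linked voting patterns without any private vote data:
# sample a post's karma periodically, then bucket score changes by UTC hour.
# "samples.csv" (timestamp_iso, karma per row, no header) is hypothetical.
import csv
from collections import defaultdict
from datetime import datetime

def hourly_deltas(path):
    rows = []
    with open(path) as f:
        for ts, karma in csv.reader(f):
            rows.append((datetime.fromisoformat(ts), int(karma)))
    rows.sort()
    buckets = defaultdict(int)
    for (t0, k0), (t1, k1) in zip(rows, rows[1:]):
        buckets[t1.hour] += k1 - k0  # attribute the change to the later sample's hour
    return dict(sorted(buckets.items()))

print(hourly_deltas("samples.csv"))
```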
But it’s just sort of ‘not-fun’ to know that if one posts one’s post at the wrong time it’s gonna go underwater and maybe never come back.
Not sure what to do but it feels like there is a positive sum solution.
Yeah it’s true, I was mostly just responding of the empirical question of how to identify/measure that split on the Forum itself.
As to dealing with the split and what it represents, my best guess is that there is a Bay-concentrated/influenced group of users who have geographically concentrated views, which much of the rest of EA disagree with/to varying extents find their beliefs/behaviour rude or repugnant or wrong.[1] The longer term question is if that group and the rest of EA[2] can cohere together under one banner or not.
I don’t know the answer there, but I’d very much prefer it to be discussion and mutual understanding rather than acrimony and mutual downvoting. But I admit I have been acrimonious and downvoted others on the Forum, so not sure those on the other side to me[3] would think I’m a good choice to start that dialogue.
Perhaps the feeling is mutual? I don’t know; certainly I think many members of this culture (not just in EA/Rationalist circles but beyond in the Bay) find ‘normie’ culture morally wrong and intolerable
Big simplification I know
For the record, as per bio, I am a ‘rest of the world/non-Bay’ EA
There have been a few comments about this. And I’m surprised the forum team hasn’t weighed in yet with data or comments. Are there actually voting trends which differ across timezones? If so, how do those patterns work? Should we do anything about it?
I’ve also found myself reactively downvoting recently, which I didn’t like but which might have been the same dynamic just on the other side. That isn’t good at all, so I’m guilty here too.
I made a quick (and relatively uncontroversial) poll on how people are feeling about EA. I’ll share the results if we get 10+ respondents.
Without reading too much into it, there’s a similar amount of negativity about the state of EA as there is a lack of confidence in its future. That suggests to me that there’s a lot of people who think EA should be reformed to survive (rather than ‘it’ll dwindle and that’s fine’ or ‘I’m unhappy with it but it’ll be okay’)?
Currently 27-ish[1] people have responded:
Full results: https://viewpoints.xyz/polls/ea-sense-check/results
Statements people agree with:
Statements where there is significant conflict:
Statements where people aren’t sure or dislike the statement:
The applet makes it harder to track numbers than the full site.
It has an emotional impact on me to note that FTX claims are now trading at 50%. This means that in expectation, people are gonna get about half of what their assets were worth, had they held them until this time.
I don’t really understand whether it should change the way we understand the situation, but I think a lot of people’s life savings were wrapped up here and half is a lot better than nothing.
src: https://www.bloomberg.com/news/articles/2023-10-25/ftx-claims-rise-after-potential-bidders-for-shuttered-exchange-emerge
I am not confident on the reasons why this is, but I think it’s because Anthropic and the cryptocurrency Solana are now trading a lot higher. My last memory (bad, do not trust) is that FTX has about $11bn in debt against $4bn in assets. I think the Anthropic stake and the SOL they hold have both gone up by about a billion since then.
I dunno folks, but I hope people get their money back—and I know that includes some of you.
We are good at discussion but bad at finding the new thing to update to.
Look at the recent Happier Lives Institute discussion; https://forum.effectivealtruism.org/posts/g4QWGj3JFLiKRyxZe/the-happier-lives-institute-is-funding-constrained-and-needs
Lots of discussion, a reasonable amount of new information, but what should our final update be:
Have HLI acted fine or badly?
Is there a pattern of misquoting and bad scholarship?
Have global health orgs in general moved towards Self-reported WellBeing (SWB) as a way to measure interventions?
Has HLI generally done good/cost effective work?
I think that the forum comments model is very poor at this. After all, if there were widespread agreement (as I think there could be) then I think that would be a load off all our minds. We could have a discussion once and then not need to have it again.
As it is, I’m sure many people have taken away different things from this, and we’ll probably discuss it again the next time the Happier Lives Institute or StrongMinds posts to the forum, and I guess there has been some more bad blood created in the meantime.
Consensus is good and we don’t even try to reach it after big discussions.
It is just really hard to write comments that challenge without seeming to attack people. Anyone got any tips?
If you’re commenting on a post, it helps to start off with points of agreement and genuine compliments about things you liked. Try to be honest and non-patronizing: a comment where the only good thing you say is “your english is very good” will not be taken well, or a statement that “we both agree that murder is bad”. And don’t overthink it, a simple “great post” (if honest) is never unappreciated.
Another point is that the forum tends to have a problem with “nitpicking”, where the core points of a post are ignored in favor of pointing out minor, unimportant errors. Try to engage with the core points of an argument, or if you are pointing out a small error, preface it with “this is a minor nitpick”, and put it at the end of your comment.
So a criticism would look like:
“Very interesting post! I think X is a great point that more people should be talking about. However, I strongly disagree with core point Y, for [reasons]. Also, a minor nitpick: statement Z is wrong because [reasons]”
I think the above is way less likely to feel like an “attack”, even though the strong disagreements and critiques are still in there.
Some thoughts on: https://twitter.com/FreshMangoLassi/status/1628825657261146121?s=20
I agree that it’s worth saying something about sexual behaviour. Here are my broad thoughts:
I am sad about women having bad experiences, I think about it a lot
I want to be accurate in communication
I think it’s easy to reduce harms a lot without reducing benefits
Firstly, I’m sad about the current situation. Seems like too many women in EA have bad experiences. There is a discussion about what happens in other communities or tradeoffs. But first it’s really sad.
More than this, it seems worth dwelling on what it *feels* like. I guess for many it’s fine. But for some it can be exhausting or sad or uncomfortable. Women in EA complain to me about their treatment as women a lot, men much less. Seems notable.
But I don’t know what norms should be. I don’t know what’s best for EA women, for EA in general, for the world in general. In short, I don’t know how to optimise norms.
But harms seem easier to understand. It does seem to me there are some low cost, high benefit improvements. Particularly in people who have patterns of upsetting women.
Personally, I have really upset 2 or 3 women in EA around romance. I’ve said or done things that have left them sad for months. And I don’t think this is okay.
To them, I am sorry.
How do they feel? Well I sense, really sad. We’re not talking Time magazine stuff here, but I think they felt belittled, disrespected, judged and, briefly, unsafe. I don’t want anyone to feel like this, let alone because of me.
And compared to their suffering, and my sadness at it, it just seems pretty cheap to change my behaviour. To go on dates with a smaller group of people in EA, to create patterns to avoid situations I handle poorly, to spend time imagining women’s lives.
So I’m not gonna give a blanket pronouncement or say we are the worst. But personally, I am pretty flawed and I would prefer to change rather than hurt other people. And if you see that pattern in your life then I suggest taking real, actual steps.
I’d suggest you ask yourself. “Are there any women who, as a result of my actions in the last 2 years are seething or deeply upset.”
For most people the answer is no. Like seriously, the answer can be “no, you’re fine”. But if it’s yes, women are people right? Do you really believe that there aren’t some improvements possible here?
Some suggestions to yesses:
Talk to a trusted friend. How do they think you do here?
Imagine how much you would do to avoid the last woman being upset. Spend at least that much time avoiding the next woman being upset
I dislike the tribal nature of this discussion, that on some level it feels culture war-ey. So again, I don’t think this for everyone, but it is for me
But I really would recommend going to quality sex and relationship courses. I went to one run by a tantra group and I think it just made me a lot kinder and helped me reduce risks
Talk to women you’ve dated. How did they feel?
If you struggle with empathy with women, perhaps start with empathy for me. Trust me, you don’t want to feel like this. It’s horrible to have people who are upset as a result of my actions.
Most of all, I would recommend building empathy. I wish I had sat down and just written how the women I fancied felt, even for 5 minutes. And talked it over with a friend.
Take an interest in the mental lives of people you care about.
So I guess, the thing I could say was “If you continue patterns of romantic behaviour that frequently upset women that you could easily make less risky then I’ll be really upset with you and sad” as, if I were to continue I’d be so angry at myself.
Romance is not without risk—I don’t think this is a purely harm reducing question (though I could move to that opinion). But I think it’s possible to just reduce risks a lot while maintaining benefit. And if I have the option to do that and I choose not to, that’s basically my definition of bad.
What is a big open factual non community question in EA. I have a cool discussion tool I want to try out.
Daniel’s Heavy Tail Hypothesis (HTH) vs. this recent comment from Brian saying that he thinks that classic piece on ‘Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness’ is still essentially valid.
Seems like Brian is arguing that there are at most 3-4 OOM differences between interventions whereas Daniel seems to imply there could be 8-10 OOM differences?
Similarly here: Valuing research works by eliciting comparisons from EA researchers—EA Forum (effectivealtruism.org)
And Ben Todd just tweeted about this as well.
Here is my first draft: basically there will be a play-money prediction market predicting what the community will vote on a central question (here “are the top 1% more than 10,000x as effective as the median”), then we have a discussion, we vote, and then it resolves.
https://docs.google.com/document/d/14WpLjsS6idm8Ma-izKFOwkzy-B2F6RDpZ0xlc8aHlXg/edit
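To get a feel for the central question and the 3–4 vs 8–10 OOM disagreement above, here is a toy simulation: under a lognormal spread of cost-effectiveness, the gap between the top 1% and the median is driven by sigma. The sigma values are illustrative, not estimates of the real distribution.

```python
# Toy illustration of the OOM disagreement: under a lognormal spread of
# cost-effectiveness, the gap between the top 1% and the median depends
# on sigma. The sigma values are illustrative, not real estimates.
import math
import random

def top1pct_over_median(sigma, n=100_000, seed=0):
    random.seed(seed)
    xs = sorted(random.lognormvariate(0, sigma) for _ in range(n))
    median = xs[n // 2]
    top_1pct_mean = sum(xs[-n // 100:]) / (n // 100)
    return top_1pct_mean / median

for sigma in (1.0, 2.0, 3.0):
    ratio = top1pct_over_median(sigma)
    print(f"sigma={sigma}: top 1% is ~10^{math.log10(ratio):.1f}x the median")
```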
Should we want OpenAI to turn off Bing for a bit? We should, right? Should we create memes to that effect?
It is unclear to me that if we chose cause areas again, we would choose global development
The lack of a focus on global development would make me sad
This issue should probably be investigated and mediated to avoid a huge community breakdown—it is naïve to think that we can just swan through this without careful and kind discussion
If I find this forum exhausting to post on some times I can only imagine how many people bounce off entirely.
The forum has a wiki (like wikipedia)
The “Criticism of EA Community” wiki post is here.
I think it would be better as a summary of criticisms rather than links to documents containing criticisms.
This is a departure from the current wiki style, so after talking to moderators we agreed to draft externally.
Collaborative Draft:
https://docs.google.com/document/d/1RetcAA7D94y6v3qxoKi_Ven-xF98FjirokvI-g8cKI4/edit#
Upvote this post if you think the “Criticism of EA Community” post will be better as a collaboratively-written summary.
Downvote if you like the current style.
Comments appreciated.
With better wiki features and a way to come to consensus on numbers I reckon this forum can write a career guide good enough to challenge 80k. They do great work, but we are many.
There were too few parties on the last night of EA global in london which led to overcrowding, stressed party hosts and wasting a load of people’s time.
I suggest in future that there should be at least n/200 parties where n is the number of people attending the conference.
I don’t think CEA should legislate parties, but I would like to surface in people’s minds that if there are fewer than n/200 parties, then you should call up your friend with most amenable housemates and tell them to organise!
Has Rethink Priorities ever thought of doing a survey of non-EAs? Perhaps paying for a poll? I’d be interested in questions like “What do you think of Effective Altruism? What do you think of Effective Altruists?”
Only asking questions of those who are currently here is survivorship bias. Likewise we could try and find people who left and ask why.
We are definitely planning on doing this kind of research, likely sometime in 2021.
I am not particularly excited to discuss Nonlinear in any way, but I note I’d prefer to discuss it on LessWrong rather than here.
Why is this?
I dunno
Feels like it’s gonna be awful discourse
The thing I actually want to do is go over things point by point but I feel here it’s gonna get all fraught and statusy
Why I like the donation election
I have some things I do like and then some clarifications.
I like that we are trying new mechanisms. If we are going to try and be a community that lasts, we need to build ways of working that don’t have the failure modes that others have had in the past. I’m not particularly optimistic about this specific donation election, but I like that we are doing it. For this reason I’ve donated a little and voted.
I don’t think this specific donation election mechanism adds a lot. Money already gets allocated on a kind of voting system—you choose how you spend it. Gathering everyone’s votes and then reallocating means some people have decided they’d rather spend towards the median, though that data was available anyway. That said, I did spend quite a lot of time thinking about how I was gonna give (it’s strange to me that I find voting wrong worse than giving my own money wrong)
That said, perhaps it will codify discussions of impact, which I think are good. I’d like more quantification/ comparison. Are there some nice graphs somewhere of where Giving What We Can gifts go to?
I don’t think that election offers much better decisions. If I want someone I trust to help me decide where to give, I can already do that.
I don’t particularly like the “I donated” “I voted” tags, but I never like that kind of thing.
On balance I thought it was good and want more stuff like this.
This could have been a wiki
I hold that there could have been a well maintained wiki article on top EA orgs, and people could anonymously have added many Nonlinear stories a while ago. I would happily have added comments about their move-fast-and-break-things approach and maybe had a better way to raise it with them.
There would have been edit wars and an earlier investigation.
How much would you pay to have brought this forward 6 months or a year? And likewise for whatever other startling revelations there are. In which case, I suggest a functional wiki is worth 5%–10% of that amount, per case.
My question is “Who would want to run an EA org or project in that kind of environment?”. Presumably, you’d be down, but my bet is that the vast majority of people wouldn’t.
Given that people are suggesting a lengthy set of org norms, I’m not sure that avoiding taxing orgs is their top concern.
While I support your right to disagreevote anonymously, I also challenge someone to articulate the disagreement.
It was pointed out to me that I probably vote a bit wrong on posts.
I generally just up and downvote how I feel, but occasionally if I think a post is very overrated or underrated I will strong upvote or downvote even though I feel less strongly than that.
But this is I think the wrong behaviour and a defection. Since if we all did that then we’d all be manipulating the post to where we think it ought to be and we’d lose the information held in the median of where all our votes leave it.
Sorry.
Withholding the current score of a post until after a vote is cast (with the vote being committal) should be enough to prevent strategic behaviour. But it comes with downsides. I think feed ordering / the recommender system could still work with private information, so the scores may in principle be inferrable from patterns in your feed, though in practice you probably won’t do that inference. The worse problem is commitment: I like to edit my votes quite a bit after my initial impressions.
I imagine there’s a more subtle instrument; withholding the current score until committal votes have been cast seems almost like a limiting case.
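To make the mechanism concrete, here is a minimal sketch of a committal “vote before you see the score” flow, assuming simple up/down votes. The class and method names (BlindVotingPost, cast_vote, score) are hypothetical illustrations, not anything the Forum actually exposes:

```python
# Hypothetical sketch (not the Forum's actual API): a post whose score is only
# revealed to a user after they cast a vote, and whose votes cannot be edited.

class BlindVotingPost:
    def __init__(self, post_id: str):
        self.post_id = post_id
        self.votes: dict[str, int] = {}  # user_id -> vote in {-1, 0, +1}

    def cast_vote(self, user_id: str, vote: int) -> int:
        """Record a committal vote, then (and only then) reveal the current score."""
        if user_id in self.votes:
            raise ValueError("vote already cast; votes are committal, no edits")
        if vote not in (-1, 0, 1):
            raise ValueError("vote must be -1, 0 or +1")
        self.votes[user_id] = vote
        return self.score()  # the score is revealed only after the vote is locked in

    def score(self) -> int:
        return sum(self.votes.values())


post = BlindVotingPost("example-post")
print(post.cast_vote("alice", 1))   # Alice votes blind, then sees the score: 1
print(post.cast_vote("bob", -1))    # Bob votes blind, then sees the score: 0
```

The commitment cost mentioned above shows up as the error on a repeat vote: to allow later edits you would have to relax exactly the property that blocks strategic voting.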
This isn’t in response to your specific case (correcting for overrated or underrated posts), but in response to
I think it’s okay to “defect” to correct the results of others’ apparent defection or to keep important information from being hidden. I’ve used upvotes correctively when I think people are too harsh with downvotes or when the downvotes will make important information/discussion much less visible. To elaborate, I’ve sometimes done this for cases like these:
When a comment or post is at low or negative karma due to downvotes, despite being made in good faith (especially if it makes plausible, relevant and useful claims), and without being uncivil or breaking other norms, even if it expresses an unpopular view (e.g. an opinion or ethical view) or makes some significant errors in reasoning. I don’t think we should disincentivize or censor such comments, and I think that’s what disagreement voting and explanations should be used for. I find it especially unfair when people use downvotes like this without explanation. This also includes when downvotes crush well-intentioned and civil but poorly executed newbie posts/comments, which I think is unkind and unwelcoming. (I’ve used upvotes correctively like this even before we had disagree voting.)
For posts with low or negative karma due to downvotes, if they contain (imo) important information, possibly even if poorly framed, with bad arguments in them, or made in apparent bad faith, and if there’s substantial valuable discussion on the issue or it isn’t being discussed visibly somewhere else on the EA Forum. Low karma risks effectively hiding (making much less visible) that information and the surrounding discussion through the ranking algorithm. This is usually for community controversies and criticism.
I very rarely downvote at all, but maybe I’d refrain from downvoting something I would otherwise downvote because its karma is already low or negative.
Right—in my view, net-negative karma conveys a particular message (something like “this post would be better off not existing”) that is meaningfully stronger than the median voter’s standard for downvoting. It can therefore easily exist in circumstances where the median voter would not have endorsed that conclusion.
FWIW, I don’t think this is against the explicit EA Forum norms around voting, and using upvotes and strong upvotes this way seems in line with some of their “suggestions” in the table from that section. In particular, they suggest it’s appropriate to strong upvote if
These could be more or less true depending on the karma of the post or comment and how visible you think it is.
I don’t think using downvotes against overrated posts or comments falls under the suggestions, though, but doing it only for upvotes and not downvotes could bias the karma.
Confidence 60%
Any EA leadership have my permission to put scandal on the back burner until we have a strategy on Bing, by the way. It feels like a big escalation to have an ML system reading its own past messages and running a search engine.
EA internal issues matter, but only if we are alive.
Reasons I would disagree:
(1) Bing is not going to make us ‘not alive’ on a coming-year time scale. It’s (in my view) a useful and large-scale manifestation of problems with LLMs that can certainly be used to push ideas and memes around safety etc, but it’s not a direct global threat.
(2) The people best-placed to deal with EA ‘scandal’ issues are unlikely to perfectly overlap with the people best-placed to deal with the opportunities/challenges Bing poses.
(3) I think it’s bad practice for a community to justify backburnering pressing community issues with an external issue, unless the case for the external issue is strong; it’s a norm that can easily become self-serving.
Strongly upvoted
I think the community health team should make decisions on the balance of harms rather than beyond reasonable doubt. If it seems likely someone did something bad they can be punished a bit until we don’t think they’ll do it again. But we have to actually take all the harms into account.
“beyond reasonable doubt” is a very high standard of proof, which is reasonable when the effect of a false conviction is being unjustly locked in a prison. It comes at a cost: a lot of guilty people go free and do more damage.
There’s no reason to use that same standard for a situation where the punishments are things like losing a job or being kicked out of a social community. A high standard of proof should still be used, but it doesn’t need to be at the “beyond reasonable doubt” level. I would hate to be falsely kicked out of an EA group, but at the end of the day I can just do something else.
I agree that the magnitude of the proposed deprivation is highly relevant to the burden of proof. The social benefit from taking the action on a true positive, and the individual harm from acting on a false positive also weigh in the balance.
In my view, the appropriate burden of proof also takes into account the extent of other process provided. A heightened burden of proof is one procedure for reducing the risk of erroneous deprivations, but it is not the only or even the most important one.
In most cases, I would say that the thinner the other process, the higher the BOP needs to be. For example, discipline by the bar, medical board, etc. is usually decided on a more-likely-than-not standard . . . but you get a lot of process, like an independent adjudicator, subpoena power, and judicial review. So we accept 51 percent with other procedural protections in play. (And as a practical matter, the bar generally wouldn’t prosecute a case it thought was at 51 percent anyway, due to resource constraints.) With significantly fewer protections, I’d argue that a higher BOP would be required—both as a legal matter (these are government agencies) and a practical one. Although not beyond a reasonable doubt.
Of course, more process has costs both financial and on those involved. But it’s a possible way to deal with some situations where the current evidence seems too strong to do nothing and too uncertain to take significant action.
Should I tweet this? I’m very on the margin. Agree/disagree-vote (which doesn’t change karma).
I did a podcast where we talked about EA, would be great to hear your criticisms of it. https://pca.st/i0rovrat
Should I do more podcasts?
I listened to this episode today Nathan, I thought it was really good, and you came across well. I think EAs should consider doing more podcasts, including those not created/hosted by EA people or groups. They’re an accessible medium with the potential for a lot of outreach (the 80k podcast is a big reason why I got directly involved with the community).
I know you didn’t want to speak for EA as a whole, but I think it was a good example of EA talking to the leftist community in good faith,[1] which is (imo) one of our biggest sources of criticism at the moment. I’d recommend others check out the rest of Rabbithole’s series on EA—it’s a good piece of data on what the American Left thinks of EA at the moment.
Summary:
+1 to Nathan for going on this podcast
+1 for people to check out the other EA-related Rabbithole episodes
A similar podcast for those interested would be Habiba’s appearance on Garrison’s podcast The Most Interesting People I Know
Any time that you read a wiki page that is sparse or has mistakes, consider adding what you were trying to find. I reckon in a few months we could make the wiki really good to use.
I sense that Conquest’s law is true → that organisations that are not specifically right wing move to the left.
I’m not concerned about moving to the left tbh but I am concerned with moving away from truth, so it feels like it would be good to constantly pull back towards saying true things.
I think the forum should have a retweet function, but for the equivalent of GitHub forks. So you can make changes to someone’s post and offer them the ability to incorporate them. If they don’t, you can just remake the article with the changes and an acknowledgement that you did.
I don’t think people would actually do that very often, because they’d get no karma most of the time, but it would give a karma and attribution trail (rough data-model sketch after the list below) for:
- summaries
- significant corrections/reframings
- and the author could still accept the edits later
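Very roughly, the shape I have in mind is something like the sketch below. All the names (Post, fork, accept, pending_edits) are invented for illustration, not a proposal for the Forum’s actual data model:

```python
# Hypothetical sketch: a forked post keeps an attribution trail back to the
# original, and the original author can accept the changes later.
from dataclasses import dataclass, field
from typing import Optional

@dataclass(eq=False)  # compare posts by identity, not field-by-field
class Post:
    author: str
    body: str
    forked_from: Optional["Post"] = None               # attribution trail
    pending_edits: list["Post"] = field(default_factory=list)

    def fork(self, new_author: str, new_body: str) -> "Post":
        """Create a derived post that credits the original and offer it back."""
        derived = Post(author=new_author, body=new_body, forked_from=self)
        self.pending_edits.append(derived)
        return derived

    def accept(self, derived: "Post") -> None:
        """Original author incorporates the forked changes."""
        if derived in self.pending_edits:
            self.body = derived.body
            self.pending_edits.remove(derived)


original = Post(author="alice", body="Long post ...")
summary = original.fork(new_author="bob", new_body="Summary: ...")
print(summary.forked_from.author)   # "alice" -- credit is preserved
original.accept(summary)            # alice can still accept the edit later
```

Nothing here is load-bearing; it is just the minimal structure that gives summaries and reframings an attribution trail while leaving the original author free to merge them later.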
My very quick improving institutional decision-making (IIDM) thoughts
Epistemic status: Weak 55% confidence. I may delete. Feel free to call me out or DM me etc etc.
I am saying these so that someone has said them. I would like them to be better phrased but then I’d probably never share them. Please feel free to criticise them though I might modify them a lot and I’m sorry if they are blunt:
I don’t understand what concrete learnings there are from IIDM, except forecasting (which I am biased on). The EIP produced a report which said that the institutions you’d expect to matter do matter. That was cheap falsification, so I guess worth it. Beyond that, I don’t know. And I was quite involved for a while and didn’t pick these up by osmosis. I assume that many people know even less than I do.
Is forecasting IIDM? Yes. But people know what forecasting is, so it’s easier to use those words. Are humans primates? Yes, but one of those words is easier to understand.
Does IIDM exist in the wild? Yes?? I know lots of EA-aligned people who work in institutions and try to improve them. That seems like IIDM to me.
What ideas would I brainstorm, low confidence:
Connect EA networks across institutions. EAs in different institutions probably know things. Do they pass those around?
Try and improve EA knowledge transfer. How can someone get a high-signal feed of information via email, WhatsApp, or a podcast app? If we had this, it would be easier to share with institutional colleagues
What has worked in EA orgs? I’m surprised we think we can improve institutions when we haven’t solved those problems internally
How does an org make forecasting really easy and low friction?
How can EA institutions share detailed knowledge in real time across institutions?
How do EAs avoid duplicating work?
Haha, I don’t know what IIDM is but I do know what forecasting is. If I had lots of money, one of the things I’d do is create a forecasting news organization. They don’t talk about what happened, they talk about what’s going to happen. The knowledge transfer is important. People are too spread apart to use one platform, but if there was a list of people who were readily available to share information on certain topics, and their contact info, that would be valuable.
Benjamin, I think you and I are gonna be friends. You at EAG SF?
This forum is not user-friendly. Took a bit to arrive.
I am not! I applied and didn’t get in; I think the movement is bigger than the number of tickets available at a convention. I’m on a few EA discords if you’d like to chat.
Do we prefer
impact tractability neglectedness
scale solvability neglectedness
ITN
SSN
I have strong “social security number” associations with the acronym SSN.
Setting those aside, I feel “scale” and “solvability” are simpler and perhaps less jargon-y words than “impact” and “tractability” (which is probably good), but I hear people use “impact” much more frequently than “scale” in conversation, and it feels broader in definition, so I lean towards “ITN” over “SSN”.
In my head, “impact” seems to mix together scale + neglectedness + tractability, unless I’m missing something.
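For reference, the usual 80,000 Hours-style decomposition (my rough rendering, not a quote) does treat impact per extra resource as the product of the three factors, with the intermediate units cancelling:

$$
\underbrace{\frac{\text{good done}}{\text{extra resources}}}_{\text{impact}}
= \underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{scale}}
\times \underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\%\text{ increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}
$$

Read that way, “impact” is the whole product rather than one factor, which may be why it sounds broader in conversation.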
I actually prefer “scale, tractability, neglectedness” but nobody uses that lol
ITN.
I am gonna do a set of polls and get a load of karma for it (70% >750). I’m currently ~20th overall on the forum despite writing few posts of note. I think polls I write create a lot of value and I like the way it incentivises me to think about questions the community wants to answer.
I am pretty happy with the current karma payout, but I’m not sure everyone will be, so I thought I’d surface it. I’ve considered suggesting that polls deliver half the karma, but that feels kind of messy, and I do think polls are currently underrated on the forum.
Any ideas?
https://eaforum.issarice.com/userlist?sort=karma
What? Polls?
Do you mean “Questions”?
EA podcasts and videos
Each EA org should pay a $10 bounty to the best twitter thread talking about any episode. If you could generate 100 quality twitter threads on 80,000 Hours episodes for $1,000, that would be really cheap. People would quote tweet and discuss, and it would make the whole body of knowledge much more legible.
Cool idea, I’ll have a think about doing this for Hear This Idea. I expect writing the threads ourselves could take less time than setting up a bounty, finding the threads, paying out etc. But a norm of trying to summarise (e.g. 80K) episodes in 10 or so tweets sounds hugely valuable. Maybe they could all use a similar hashtag to find them — something like #EAPodcastRecap or #EAPodcastSummary
I recommend a thread of them. I rarely see people using hashtags currently.
And I probably agree you could/should write them yourselves but:
- other people might find different things interesting than you do
Thanks! Sounds right on both fronts.
I edited the Doing Good Better section of Will MacAskill’s Wikipedia page to try and make it more reflective of the book and Will’s current views. Let me know how you think I did.
https://en.wikipedia.org/w/index.php?title=William_MacAskill&editintro=Template%3ABLP_editintro#Doing_Good_Better
Plant-based meat. Fun video from a youtuber which makes a strong case. Very sharable. https://youtu.be/-k-V3ESHcfA
A friend in Canada wants to give 10k to a UK global health charity but wants it to be tax neutral. Does anyone giving to a big charity want to swap (so he gives to your charity in Canada and gets the tax back) while you give to this global health one?
Maybe RC Forward can help with this? They will forward donations to selected overseas charities, but not all EA organizations are on their list.
If that doesn’t work, it might be possible to find a match in a country other than the UK. For example Americans can give to Unlimit Health via GWWC, even though Unlimit Health isn’t registered in the US.
Top Forecaster Podcast
I talked to Peter Wildeford, who is a top forecaster and Head of Rethink Priorities, about the US 2024 General Election.
We try to pin down specific probabilities.
Youtube: https://www.youtube.com/watch?v=M7jJxPfOdAo
Spotify: https://open.spotify.com/episode/4xJw9af9SMSmX5N2UZTpRD?si=Dh9RqPwqSDuHj7VpEx_nwg&nd=1
Pocketcasts: https://pca.st/ytt7guj0
Space in my brain.
I was reading this article about nuclear winter a couple of days ago and I struggled. It’s a good article, but there isn’t an easy slot in my worldview for it. The main thrust was something like “maybe nuclear winter is worse than other people think”. But I don’t really know how bad other people think it is.
Compare this to community articles, I know how the community functions and I have opinions on things. Each article fits neatly into my brain.
If I had a globe of my worldview, the EA community section would be very well mapped out. So when I hear that, you know, Adelaide is near Sydney or something, I know where those places are, and I can make some sort of judgment on the comment. But my views on nuclear winter are like if I learned that the mountains near Drachmore are taller than people think. Where is Drachmore? Which mountains? How tall do people think they are?
My suggestion here is better wikis, but mainly I think the problem is an interesting one. I think often the community section is well supported because we all have some prior structure. I think it’s hard to comment on air purity, AI minutiae or nuclear winter because I don’t have that prior space.
That seems notable.
For those that disagree, what’s your experience?
Again, in general feel free to disagree anonymously.
I wouldn’t recommend people tweet about the nonlinear stuff a lot.
There is an appropriate level of publicity for things and right now I think the forum is the right level for this. Seems like there is room for people to walk back and apologise. Posting more widely and I’m not sure there will be.
If you think that appropriate actions haven’t been taken in say a couple months then I get tweeting a bit more.
I think the substance of your take may be right, but there is something that doesn’t sit well with me about an EA suggesting to other EAs (essentially) “I don’t think EAs should talk about this publicly to non-EAs.” (I take it that is the main difference between discussing this on the Forum vs. Twitter—like, “let’s try to have EA address this internally at least for now.”) Maybe it’s because I don’t fully understand your justification—”there is room for people to walk back and apologize”—but the vibe here feels a bit to me like “as EAs, we need to control the narrative around this (‘there is an appropriate level of publicity,’)” and that always feels a bit antithetical to people reasoning about these issues and reaching their own conclusions.
I think I would’ve reacted differently if you had said: “I don’t plan to talk about this publicly for a while because of x, y, and z” without being prescriptive about how others should communicate about this stuff.
Yeah i get that.
I think in general people don’t really understand how virality works in community dynamics. Like there are actions that when taken cannot be reversed.
I don’t say “never share this” but I think sharing publicly early will just make it much harder to have a vulnerable discussion.
I don’t mind EAs talking about this with non-EAs but I think twitter is sometimes like a feeding frenzy, particularly around EA stuff. And no, I don’t want that.
Notably, more agree with me than disagree (though some big upvotes on agreement obscure this—I am generally not wild about big agree-votes).
As I’ve written elsewhere, I think there is a spectrum from private to public. Some things should be more public than they are and other things more private. Currently I am arguing this is about right. I think it turned out that many issues with FTX were kept too private.
I think that a mature understanding of sharing things is required for navigating vulnerable situations (and I imagine you agree—many disliked the sharing of victims’ names around the TIME article because, in their opinion, that was too public for that information).
I appreciate that you said it didn’t sit well with you. It doesn’t really sit well with me either. I welcome someone writing it better
Yeah, again, I think you might well be right on the substance. I haven’t tweeted about this and don’t plan to (in part because I think virality can often lead to repercussions for the affected parties that are disproportionate to the behavior—or at least, this is something a tweeter has no control over). I just think EA has kind of a yucky history when it comes to being prescriptive about where/when/how EAs talk about issues facing the EA community. I think this is a bad tendency—for instance, I think it has, ironically, contributed to the perception that EA is “culty” and also led to certain problematic behaviors getting pushed under the rug—and so I think we should strongly err on the side of not being prescriptive about how EAs talk about issues facing the community. Again, I think it’s totally fine to explain why you yourself are choosing to talk or not talk about something publicly.
I guess I plan for the future, not the past. But I agree that my stance is generally more public than most EAs. I talk to journalists about stuff, for instance, and I think more people should.
I imagine we might agree in many cases.
I am so impressed at the speed with which Sage builds forecasting tools.
Props @Adam Binks and co.
Fatebook: the fastest way to make and track predictions looks great.
I still don’t really like the idea of CEA being democratically elected but I like it more than I once did.
89 people responded to my strategy poll so far.
Here are the areas of biggest uncertainty.
Seems we could try and understand these better.
Poll link: https://viewpoints.xyz/polls/ea-strategy-1
Analytics like: https://viewpoints.xyz/polls/ea-strategy-1/analytics
A: “Agree”, D: “Disagree”, S: “Skip”, ?: “It’s complicated”.
Is viewpoints.xyz on GitHub?
I imagine that it has cost, and does cost, 80k to push for AI safety stuff even when it was weird, and now it seems mainstream.
Like, I think an interesting metric is when people say something which shifts some kind of group vibe. And sure, catastrophic risk folks are into it, but many EAs aren’t and would have liked a more holistic approach (I guess).
So it seems a notable tradeoff.
I would quite like Will MacAskill back right about now. I think he was generally a great voice in the discourse.
I am frustrated and hurt when I take flack for criticism.
It seems to me that people think I’m just stirring shit by asking polls or criticising people in power.
Maybe I am a bit. I can’t deny I take some pleasure in it.
But there are a reasonable number of personal costs too. There is a reason why 1-5 others I’ve talked to have said they don’t want to criticise, because they are concerned about their careers.
I more or less entirely criticise on the forum. Believe me, if I wanted to actually stir shit, I could do it a lot more effectively than shortform comments.
I’m relatively pro casual sex as a person, but I will say that EA isn’t about being a sex-positive community—it’s about effectively doing good. And if one gets in the way of the other, I know what I’m choosing (doing good).
I think there is a positive-sum compromise possible, but it seems worth acknowledging how I will trade off if it comes to it.
In general I want to empower experts who rarely take risks to take more (eg the forum is better if the team make changes a lot)
How come some people have access to inline comments and others don’t?
What do you mean by inline comments?
when you can comment on an article and it shows as a little speech bubble to the side of the text. I’ve opted into experimental features but I still can’t.
You can enable it on a per-post basis, by clicking on the … below the title
But how does one write them? feels like something should appear when I highlight text.
I think you just normally quote a section of the article, clicking “Block quote”
Some people use hypothes.is, which in theory gives the same functionality on any web page, but we’re very few, and only people who have installed it can see the comments or add new ones
Do you have any idea why my shortform doesn’t have disagree and agreevotes?
Because it’s from before disagree and agreevotes were a thing, not sure if there’s a way to make a new one, I would file a feature request https://forum.effectivealtruism.org/posts/NhSBgYq55BFs7t2cA/ea-forum-feature-suggestion-thread
As in all old shortforms don’t have them?
Yes, I think that’s currently the case (speaking as a user)
wild.
Why do some shortforms have agree voting and others don’t?
Depends on when the shortform was created.
As in they’ve recently removed it? If not, that doesn’t seem true.
Some thoughts
- Utilitarianism, but being cautious around the weird/unilateral stuff, is still good
- We shouldn’t be surprised that we didn’t figure out SBF was fraudulent quicker than billions of dollars of crypto money… and Michael Lewis
- Scandal prediction markets are the solution here and one day they will be normal. But not today. Don’t boo me, I’m right
- Everyone wants whistleblowing, no one wants the correctly incentivised decentralised form of whistleblowing.
- Gotta say, I feel for many random individual people who knew or interacted closely with SBF but weren’t at FTX who are gonna get caught up in that
- We were fundamentally unserious about avoiding reputational risk from crypto. I hope we are more serious about not dying from AI
- I like you all a lot
- I don’t mind taking the money of some retired non-EA oil baron, but I think not returning FTX’s money perhaps incentivises future pro-crime EAs. I would like a credible signal
- The community does not need democratised funding (though I’d happily test it at a small scale) though we aren’t getting enough whistleblowing so we should work on that
- We deserve to be scrutinised and mocked, we messed up. We should own that
- X-risk is still extremely compelling
- I am uncertain how impactful my work is
- Our critics are usually very low signal but have a few key things of value to say. It is hard to listen to find those things without wasting loads of time, but missing them is bad too
- People knew SBF was a bully who broke promises. That that information didn’t flow to where it needed to go, or was ignored, was a problem.
- I think we shouldn’t say we want criticism, because we don’t. We didn’t want it about FTX and we don’t in any other places. We want very specific criticism. Everyone does, because the world is big and we have limited time. So how do we get the criticism that’s most useful to us?
- The community should seek to make the best funding decisions it can over time. I think that’s with orgs doing it and prediction markets to remove bad apples, but you can think what you want. But democratisation isn’t a goal in and of itself; good, sustainable decision-making is. Perhaps there should be a jury of randomly chosen community members, perhaps we should have elections. I don’t know, but I do feel we haven’t been taking governance seriously enough
I remain confused about “utilitarianism, but use good judgement”. IMO, it’s amongst the more transparent motte-and-baileys I’ve seen. Here are two tweets from Eliezer that I see are regularly re-shared:
This describes Aristotelian Virtue Ethics—finding the golden mean between excess and deficiency. So are people here actually virtue ethicists who sometimes use math as a means of justification and explanation? Or do they continue to take utilitarianism to some of its weirder places, privately and publicly, but strategically seek shelter under other moral frameworks when criticized?
I’m finding it harder to take people who put “consequentialist” and “utilitarian” in their profiles and about-mes seriously. If people abandon their stated moral framework on big, important and consequential questions, then either they’re deluding themselves about what their moral framework actually is, or they really will act out the weird conclusions—but are being manipulative and strategic by saying “trust us, we have checks and balances”.
I don’t think you have to abandon it, but you can look twice or ask trusted friends etc etc.
That doesn’t mean you can’t do the thing you intended to do.
And what happens when that double-checking comes back negative? And how much weight do you choose to give it? The answer seems to be rooted in matters of judgement and subjectivity. And if you’re doing it often enough, especially on questions of consequence, then that moral framework is better described as virtue ethics.
Out of curiosity, how would you say your process differs from a virtue ethicist trying to find the golden mean between excess and deficiency?
I notice that sometimes I want to post on something that’s on both the EA Forum and LessWrong. Ideally, clicking “see LessWrong comments” would just show them on the current forum page, and if I responded, it would calculate EA Forum karma for the Forum and LessWrong karma for LessWrong.
Probably not worth building, but still.
Someone being recommended to learn about EA by listening to 10 hours of podcasts in the wild
Maximise useful feedback, minimise rudeness
When someone says of your organisation “I want you to do X” do not say “You are wrong to want X”
This rudely discourages them from giving you feedback in future. Instead, there are a number of options:
If you want their feedback “Why do you want X?” “How does a lack of X affect you?”
If you don’t want their feedback “Sorry, we’re not taking feedback on that right now” or “Doing X isn’t a priority for us”
If you think they fundamentally misunderstand something “Can I ask you a question relating to X?”
None of these options tell them they are wrong.
I do a lot of user testing. Sometimes a user tells me something I disagree with. But they are the user. They know what they want. If I disagree, it’s either because they aren’t actually a user I want to support, they misunderstand how hard something is, or they don’t know how to solve their own problems.
None of these are solved by telling them they are wrong.
Often I see people responding to feedback with correction. I often do it myself. I think it has the wrong incentives. Rather than trying to tell someone they are wrong, now I try to either react with curiosity or to explain that I’m not taking feedback right now. That’s about me rather than them.
Setting aside my own vote, this post got negative karma. Why?
I understand that sometimes I post controversial stuff, but this one is just straightforwardly valuable.
https://forum.effectivealtruism.org/posts/GshpbrBaCQjxmAKJG/cause-prioritsation-contest-who-bettors-think-will-win
I sense new stuff on the forum is probably overrated. Surely we should assume that most of the most valuable things for most people to read have already been written?
Have you seen the new features google docs has added recently?
Tick boxes
Project trackers
New types of tables
Feels like they are gunning for Notion.
The difference between the criticism contest and Open Phil’s cause prioritisation contest is pretty interesting. 60% that I’ll think Open Phil’s created more value, in terms of changes, in 10 years’ time.
1 minute video summaries of my EA Criticism contest articles:
Summaries are underrated—https://www.loom.com/share/4781668372694c83a4e9feffe249469b—full text
Improving Karma—https://www.loom.com/share/6d0decef2bd14efc9b22e14d43693002 - full text
Common misconception I see:
Longtermist causes are not:
Causes which are much more pressing under longtermism than under other belief systems
Longtermist causes are:
Those which are a high priority for marginal resources, whether they are under other belief systems or not.
The fact that biorisk and AI risk are high priority without longtermism doesn’t make them not “longtermist causes”, just as it doesn’t make them not “causes that affect people alive today”.
How much value is there in combining two EA slacks which discuss the same topic?
Probably $1,000s right?
Or maybe we should assume it will be a natural process that one will subsume the other?
Effective altruism and politics
Here is an app that lets you vote on other people’s comments (I’d like to see it installed in the forum so there is a lower barrier to entry)
You can add thoughts and try and make arguments that get broad agreement.
What are the different parties of opinion on EA and politics?
https://pol.is/283be3mcmj
An open question for me (for EA Israel? For EA?) is whether we can talk about economic-politics publicly in our group.
For example, can we discuss openly that “regulating prices is bad”. This is considered an open political debate in Israel, politicians keep wanting to regulate prices (and sometimes they do, and then all the obvious things happen)
I mean I’d like to chat about that, and maybe happy to on this shortform? But I wouldn’t write a post on it. I guess it doesn’t seem that neglected to me.
In Israel, it is controversial to suggest not regulating prices, or to suggest lowering import taxes, or similar things. I could say a lot about this, but my points are:
In Israel:
It is neglected
It means EA would be involved in local politics
I remember I was really jealous of the U.S. when Biden suggested some very expensive program (UBI? Some free-medical-care reform?), but he SHOWED where the money was supposed to come from; there was a chart!
EA Wiki
I’ve decided I’m going to just edit the wiki to be like the wiki I want.
Currently the wiki feels meticulously referenced but lacking in detail. I’d much prefer it to have more synthesised content which is occasionally just someone’s opinion. If you dislike this approach, let me know.
I do think that many of the entries are rather superficial, because so far we’ve been prioritizing breadth over depth. You are welcome to try to make some of these entries more substantive. I can’t tell, in the abstract, if I agree with your approach to resolving the tradeoff between having more content and having a greater fraction of content reflect just someone’s opinion. Maybe you can try editing a few articles and see if it attracts any feedback, via comments or karma?
Why do posts get more upvotes than questions with the same info?
I wrote this question: https://forum.effectivealtruism.org/posts/ckcoSe3CS2n3BW3aT/what-ea-projects-could-grow-to-become-megaprojects
Some others wrote this post summarising it:
https://forum.effectivealtruism.org/posts/faezoENQwSTyw9iop/ea-megaprojects-continued
Why do you think the summary got more upvotes? I’m not upset, I like a summary too, but in my mind a question that anyone can submit answers to, or upvote current answers on, is much more useful. So I am confused. Can anyone suggest why?
Anyone can comment on a post and upvote comments so I don’t see why a question would be better in that regard.
Also the post contained a lot of information on potential megaprojects which is not only quite interesting and educational but also prompts discussion.
At what size of the EA movement should there be an independent EA whistleblowing organisation, which investigates allegations of corruption?
Can you think of any examples of other movements which have this? I have not heard of such for e.g. the environmentalist or libertarian movements. Large companies might have whistleblowing policies, but I’ve not heard of any which make use of an independent organization for complaint processing.
The UK police does.
It seems to me if you wanted to avoid a huge scandal you’d want to empower and incentivise an organisation to find small ones.
I’ve been getting more spam mail on the forum recently.
I realise you can report users, which I think is quicker than figuring out who to mail and then copying the name over.
I’m sorry to hear this (and grateful that you’re reporting them). We have systems for flagging when a user’s DM pattern is suspicious, but it’s imperfect (I’m not sure if it’s too permissive right now).
In case it’s useful for you to have a better picture of what’s going on, I think you get more of the DM spam because you’re very high up in the user list.
I don’t really mind. It’s not hard for me to just report the user (which is what you’d like, right?)
This is like 1 minute a week, so not a big deal for me. Thanks again for your and the team’s work.
I really like saved posts.
I think if I save them and then read mainly from my saved feed, that’s a better, less addictive, more informative experience.
Norms are useful so let’s have useful norms.
“I don’t think drinking is bad, but we have a low-alcohol culture so the fact you host parties with alcohol is bad”
Often the easiest mark of bad behaviour is that it breaks a norm we’ve agreed on. Is it harmful in a specific case to shoplift? It depends on what was happening to the things you stole. But it seems easier just to appeal to our general norm that shoplifting is bad. On average it is harmful, so even if it wasn’t in this specific case, being willing to shoplift is a bad sign. Even if you’re stealing meds to give to your gran, it may be good to have a general norm against this behaviour.
But if the norm is bad that weakens norms in general. Lots of people in the UK speed in their cars. But this teaches many people that twice a day, the laws aren’t actually laws. It encourages them that many government rules are stupid and needless as opposed to wise and reasonable
But how broadly should this norm apply? 99% of cases, 95%? I don’t know.
But it’s clear to me that if a norm only applies in 50% of cases it’s a bad norm. It’s gonna leave everyone trusting the values of the community less, because half the time it will punish or reward people incorrectly.
Sorry, how do I tag users or posts? I’ve forgotten and can’t find a shortcuts section on the forum
It used to be done by just typing the @ symbol followed by the person’s name, but that doesn’t seem to work anymore.
That’s right, you should be able to mention users with @ and posts with #. However, it does seem like they’re both currently broken, likely because we recently updated our search software. Thanks for flagging this! We’ll look into it.
The fix for this is now live—thanks!
I strongly dislike the “further reading” sections of the forum wiki/forum tags.
They imply that the right way to learn more about things is to read a load of articles. It seems clear to me that instead we should synthesise these points and then link them where relevant. Then if you wanted more context you could read the links.
The ‘Further reading’ sections are a time-cheap way of helping readers learn more about a topic, given our limited capacity to write extended entries on those topics.
Clubhouse Invite Thread
1) Clubhouse is a new social media platform, but you need an invite to join
2) It allows chat in rooms, and networking
3) Seems some people could deliver sooner value by having a clubhouse invite
4) People who are on clubhouse have invites to give
5) If you think an invite would be valuable or heck you’d just like one, comment below and then if anyone has invites to give they can see EAs who want them.
6) I have some invites to give away.
Fun UK innovative policy competition:
https://heywoodfoundation.com/contest/
Mailing list for the new UK Conservative Party group on China.
Will probably be worth signing up to if that’s your area of interest.
https://chinaresearchgroup.substack.com/p/coming-soon
Please comment any other places people could find mailing lists or good content for EA related areas.
Some attempts at consensus thoughts on sexual behaviour:
I’ll split them up into subcomments
It is reasonable that 5–20% of the community are scared that their harmless sexual behaviour will become unacceptable and that they will be seen as bad/unsafe if they support it.
It’s fair that they are upset and see this as something that might hurt them and fear the outcome.
There are two main models I have for many of these discussions:
Rationalist EAs—like truth-seeking, think a set of discourse norms should be obeyed at all times
Progressive EAs—think that some discussions require much more energy from some than others and need to be handled differently/more carefully. Want an environment where they feel safe
I think it’s easy to see these groups as against one another, but I think that’s not true. There are positive sum improvements.
Women being sad matters. And yes there are tradeoffs here, but it’s really sad that the women in the time article and all the other women who have been sad are sad.
I guess CEA doesn’t want to push specific norms here because the more they engage the more they will get blamed when things go wrong.
There should be a process on the forum for contentious discussions where there are 3 types of post.
An emotions post, where people talk about how they feel and try and say uncontentious things we all agree with
A few days later, a discourse post, where we try and have all the discussion
Two weeks later, a consensus post where we try to come up with some widely agreed conclusions.
If we could have a community where everyone says “EA does romantic relationships a lot better than the outside world”, that would be worth spending $10–100mn on purely in community-building terms, let alone in the welfare of individual EAs.
We spend millions each year on EAGs + 80k. Imagine if everyone was just like “Yeah, EA is just a great, safe, fun place”.
It is pretty reasonable for 5–20% of the community to have a boundary about not being caught up in conversations about sex in houses they need to stay in in foreign countries. Or similarly bad conversations.
It’s reasonable they want to be sure this is taken really seriously, because they don’t want it to happen to them or their friends.
It’s complicated that this might lead to unintended consequences, but their desire seems very comprehensible.
It was very likely bad that Owen Cotton-Barratt upset a couple of women and then didn’t drastically change his behaviour, such that there were other instances.
That’s not to say other things weren’t bad. But this feels like something we can agree on.
The forum should hire mediators whose job it is to try and surface consensus and allow discussion to flow better. Many discussions contain a lot of different positions at once.
Does this seem like an acceptable addition to the AI safety EA forum wiki page?
(There is nothing after the question for me, maybe you tried to upload an image but submitted the comment before it fully uploaded?)
I think with SBF we farmed out our consciences. Like people who say “there need to be atrocities in war so that others can live in peace”, we thought “SBF can do the dodgy coin-trading stuff so that we can help, but let’s not think about it”. I don’t think we could have known about the fraud, but I do think there were plenty of warning signs we ignored because “SBF is the man in the arena”. No: either we should have been cogent and open about what he was doing, or we should have said we didn’t like it and begun pulling away reputationally.
How should we want to deal with a scandal?
I suggest we should want to quickly update to how we will feel later, i.e. for the FTX crisis we wanted to make peace as quickly as possible with the facts that FTX seemed much less valuable and that SBF had maybe committed a large fraud.
(I use this example because i think it’s the least controversial)
I think accurate views of the world are the main thing I want. This has a grief and emotion component, but the guiding light is “is this how the world really is?”
If I have a criticism of the EA community in this regard, it’s not clear to me that we penalise ourselves for holding views we later regard as wrong, or look at what led us there. I haven’t seen much discussion of bad early positions on FTX, and I’m not sure the community even agrees internally on Bostrom, the Time article or Nonlinear. But:
I would like us to find agreement on these
I would like thought on what led us to have early incorrect community mental states
I think this is very costly, so I mainly think about how I could make the process cheaper; but that is something I think about.
I have also recently been thinking a lot about “how should we want to deal with a scandal”, but mostly in terms of how much time is being devoted to each of these scandals by a community that really advocates for using our limited resources to do the most good. It makes me really disappointed.
<<i’m not sure the community even agrees internally on Bostrom, the Time article or Nonlinear>>
Forming a consensus additionally seems against the values of the EA community, particularly on quite complicated topics where there is a lot of information asymmetry and people often update (as they should) based on new evidence, which is again an important part of the EA community, to me at least. So I think I disagree, and think it’s unrealistic for a community as large as EA “to find agreement on these”, and I’m not sure how this would help.
But I fully agree it would be great if we had a better process or strategy for how to deal with scandals.
I think we talk about scandals too much without making progress, but I’m not sure we spend too much time on them. Often it’s about trust, and communities need trust. If you are a not-thick-skinned person of colour, an outspoken autist, someone who runs an unconventional org, or a normal person new to the job market, how these events are handled affects how much you can trust the community if they happen to you.
It seems to me that most people want confidence that bad things won’t happen to them. If they don’t have that, they will probably leave. And that has its own, large, costs.
Yes, sorry, I think we are actually saying the same thing here; I meant your former statement, not the latter. I’m not saying we shouldn’t investigate things, but the 300-plus comments on the 3-4 Nonlinear posts don’t seem an optimal use of time and could probably be dealt with more efficiently; plus the thousands of people who have probably read the posts and comments adds up to a lot of time! Maybe these things shouldn’t be handled in forum posts but in a different format.
I fully agree that these things have to be dealt with better; my main concern about your point is the consensus idea, which I think is unrealistic in a community that tries to avoid groupthink, and on topics (FTX aside) where there doesn’t seem to be a clear right or wrong.
This also seems right to me. I feel like there is a lot of unproductive conflict in the comments on these kinds of posts, which could be somehow prevented and would also be more productive if the conflict instead occurred between a smaller number of representative EA forum members, or something like that.
A very random idea in that direction, which won’t work for many reasons, is some kind of “EA Forum jury” where you get randomly chosen to be one of the users in the comment section of a contentious post, and then you fight it out until you reach some kind of consensus, or at least the discussion dies down.
I do think the most standard way people have handled this in various contexts is to have panels, or courts, or boards or some kind of other system where some small subset of chosen representatives have the job of deciding on some tricky subject matter. I do kind of wish there was some kind of court system in EA that could do this.
One challenge with a “drama jury” is that the people who are most motivated to be heavy participants aren’t necessarily the people you want driving the discussion. But I guess that’s equally true in open posts. The solution in classical Athens was to pay jurors a wage; IIRC, many jurors were semiretired old men who had a lot of bandwidth.
Potentially, you’d have a few nonvoting neutrals in the mix to help facilitate discussion. It’s easier to be in a facilitating frame of mind when you are not simultaneously being asked to vote on a verdict.
I’ve had similar thoughts. I think the biggest questions are:
- Who would organise them?
- What powers would they have?
- What would happen if people kept posting it outside of the “courtroom”?
Not sure whether this is what you were implying, but I wasn’t thinking of private courts. My current guess is that it is important for courts to be at least observable, so that people can build trust in them (observable in the sense of how modern courts are observable, i.e. anyone can show up to the courtroom, but you might not be allowed to record it).
I think John meant that non-participants might keep commenting on the situation while the trial was in progress, and then after the trial. That might weaken some of the gains from having a trial in the first place (e.g., the hope that people will accept the verdict and move on to more productive things).
Ah, thanks, that makes sense
You could “sequester” the jury by making them promise not to read the non-courtroom threads until the jury had delivered a verdict. You could also have a norm that disputants would not comment in other threads while trial was ongoing. Not having the disputants in the non-courtroom thread would probably slow its velocity down considerably. You could even hide the courtroom thread from non-participants until the trial was over. That’s not a complete answer, but would probably help some.
Do you think you’ll build and test this on LessWrong? Feels doable.
The bottleneck feels more social than technological.
Also, I feel like someone else needs to do investigations for it to make sense for me to build the courtroom, since it does seem bad for one person to do both.
If you have anonymous feedback I’m happy to hear it. In fact I welcome it.
I will note that I’m not made of stone however and don’t promise to be perfect. But I always appreciate more information.
Some behaviours I’ve changed recently:
I am more cautious about posting polls around sensitive topics where there is no way to express that the poll is misframed
I generally try to match the amount of text of the person I’m talking to, and resist an urge to keep adding additional replies
In formal settings I might have previously touched people on the upper arm or shoulders in conversation, a couple of people said they didn’t like that, so I do it less and ask before I do
If you have issues (or compliments), even ones you are sure I am aware of, I would appreciate hearing them. We are probably more alien than you imagine.
https://www.admonymous.co/
Utilitarianism.net is currently down.
Looks okay to me now. How is it for you?
I do not upvote articles on here merely because they are about EA.
Personally I want to read articles that update me in a certain direction. Merely an article that’s gonna make me sad or be like “shrug accurate” is not an article I’m gonna upvote on here.
I get the desire to share them. I feel that too.
Every time I want to find quick takes, it takes longer than I expect.
Isn’t this film about the end of the world? Also yes
Does the Long Term Future Fund generally prefer many well defined project applications or one which gives a number of possible projects?
I think y’all need to iterate on using the forum more. It could be so much better if only we could figure out how
Could you clarify?
A couple of times I’ve probably been too defensive about people saying things behind my back. That’s not how I want to behave. I’m sorry.
I quite strongly dislike “drama” around things, rather than just trying to figure them out. Much of the HLI “drama” seems to be reading various comments and sharing that there is disagreement rather than attempts to turn uncertainty into clarity.
My response to this is “what are we doing”? Why aren’t there more attempts to figure out what we should actually believe as a group here? I really don’t understand why there is much discussion but so little (to my mind) attempt at synthesis.
I don’t see a clear path forward to consensus here. The best I can see, which I have tried to nudge in my last two long posts on the main thread, is “where do we go from here given the range of opinions held?”
As I see it, the top allegation that has been levied is intentional research misconduct,[1] with lesser included allegations of reckless research misconduct, grossly negligent research (mis)conduct, and negligent research conduct. A less legal-metaphory way to put it is: the biggest questions are whether HLI had something on the scale in favor of SM, if so was it a finger or a fist on the scale, and if so did HLI know (or should it have known) that the body part was on the scale.
It’s unsurprising that most people don’t want to openly deliberate about misconduct allegations, especially not in front of the accusers and the accused. There’s a reason juries deliberate in secret in an attempt to reach consensus.
I think that hesitation to publicly deliberate is particularly pronounced for those who fall in the middle part of the continuum,[2] which unfortunately contributes to the “pretty serious misconduct” and “this is way overblown” positions being overrepresented in comments compared to where I think they truly fall among the Forum community. Moreover, most of us lack the technical background and experience to lead a deliberation process.
What procedures would you suggest to move toward consensus?[3]
In my view, this allegation has been made in a slightly veiled manner, but clearly enough that it counts as having been alleged.
If someone thinks HLI is guilty of deceptive conduct (or conduct that is so reckless to be hard to distinguish from intentional deception), they are likely going to feel less discomfort raking HLI over the coals (“because they deserve it” and because maintaining epistemic defense against that kind of conduct is particularly important). If someone thinks this whole thing is a nothingburger, saying so wouldn’t seem emotionally difficult.
Properly used, anonymous polling can reveal a consensus that exists (as long as there’s no ballot stuffing) . . . but isn’t nearly as useful in developing a consensus. If you attempt to iterate the questions, you’re likely to find that more and more of the voting pool will be partisans on one side of the dispute or the other, so subsequent rounds will reflect community consensus less and less.
It seems plausible to me that those involved in Nonlinear have received more social sanction than those involved in FTX, even though the latter was obviously more harmful to this community and the world.
I think jailtime counts as social sanction!
1 person has received jail time. FTXFF had business practices that led to far more harm than Nonlinear’s.
Like what?
What does “involved in” mean? The most potentially plausible version of this compares people peripherally involved in FTX (under a broad definition) to the main players in Nonlinear.