Some things I don’t think I’ve seen around FTX, which are probably due to the investigation, but still seems worth noting. Please correct me if these things have been said.
I haven’t seen anyone at the FTXFF acknowledge fault for negligence in not noticing that a defunct phone company (north dimension) was paying out their grants.
This isn’t hugely judgemental from me, I think I’d have made this mistake too, but I would like it acknowledged at some point
The FTX Foundation grants were funded via transfers from a variety of bank accounts, including North Dimension-8738 and Alameda-4456 (Primary Deposit Accounts), as well as Alameda-4464 and FTX Trading-9018
I haven’t seen anyone at CEA acknowledge that they ran an investigation in 2019-2020 on someone who would turn out to be one of the largest fraudsters in the world and failed to turn up anything despite seemingly a number of flags.
I remain confused
As I’ve written elsewhere I haven’t seen engagement on this point, which I find relatively credible, from one of the Time articles:
“Bouscal recalled speaking to Mac Aulay immediately after one of Mac Aulay’s conversations with MacAskill in late 2018. “Will basically took Sam’s side,” said Bouscal, who recalls waiting with Mac Aulay in the Stockholm airport while she was on the phone. (Bouscal and Mac Aulay had once dated; though no longer romantically involved, they remain close friends.) “Will basically threatened Tara,” Bouscal recalls. “I remember my impression being that Will was taking a pretty hostile stance here and that he was just believing Sam’s side of the story, which made no sense to me.””
My comment on the above “While other things may have been bigger errors, this once seems most sort of “out of character” or “bad normsy”. And I know Naia well enough that this moves me a lot, even though it seems so out of character for [will] (maybe 30% that this is a broadly accurate account). This causes me consternation, I don’t understand and I think if this happened it was really bad and behaviour like it should not happen from any powerful EAs (or any EAs frankly).”
I haven’t read too much into this and am probably missing something.
Why do you think FTXFF was receiving grants via north dimension? The brief googling I did only mentioned north dimension in the context of FTX customers sending funds to FTX (specifically this SEC complaint). I could easily have missed something.
Grants were being made to grantees out of North Dimension’s account—at least one grant recipient confirmed receiving one on the Forum (would have to search for that). The trustee’s second interim report shows that FTXFF grants were being paid out of similar accounts that received customer funds.
It’s unclear to me whether FTX Philanthrophy (the actual 501c3) ever had any meaningful assets to its name, or whether (m)any of the grants even flowed through accounts that it had ownership of.
Certainly very concerning. Two possible mitigations though:
Any finding of negligence would only apply to those with duties or oversight responsibilities relating to operations. It’s not every employee or volunteer’s responsibility to be a compliance detective for the entire organization.
It’s plausible that people made some due dilligence efforts that were unsuccessful because they were fed false information and/or relied on corrupt experts (like “Attorney-1” in the second interim trustee report). E.g., if they were told by Legal that this had been signed off on and that it was necessary for tax reasons, it’s hard to criticize a non-lawyer too much for accepting that. Or more simply, they could have been told that all grants were made out of various internal accounts containing only corporate monies (again, with some tax-related justification that donating non-US profits through a US charity would be disadvantageous).
I know of at least 1 NDA of an EA org silencing someone for discussing what bad behaviour that happened at that org. Should EA orgs be in the practice of making people sign such NDAs?
“Chesterton’s TAP” is the most rationalist buzzword thing I’ve ever heard LOL, but I am putting together that what Chana said is that she’d like there to be some way for people to automatically notice (the trigger action pattern) when they might be adopting an abnormal/atypical governance plan and then reconsider whether the “normal” governance plan may be that way for a good reason even if we don’t immediately know what that reason is (the Chesterton’s fence)?
I have no idea, but would like to! With things like “organizational structure” and “nonprofit governance”, I really want to understand the reference class (even if everyone in the reference class does stupid bad things and we want to do something different).
Strongly agree that moving forward we should steer away from such organizational structures; much better that something bad is aired publicly before it has a chance to become malignant
Feels like we’ve had about 3 months since the FTX collapse with no kind of leadership comment. Uh that feels bad. I mean I’m all for “give cold takes” but how long are we talking.
I am pretty sure there is no strong legal reason for people to not talk at this point. Not like totally confident but I do feel like I’ve talked to some people with legal expertise and they thought it would probably be fine to talk, in addition to my already bullish model.
I often see people thinking that this is bragading or something when actually most people just don’t want to write a response, they either like or dislike something
If it were up to me I might suggest an anonymous “I don’t know” button and an anonymous “this is poorly framed” button.
When I used to run a lot of facebook polls, it was overwhelmingly men who wrote answers, but if there were options to vote, the gender was much more even. My hypothesis was that a kind of argumentative usually man tended to enjoy writing long responses more. And so blocking lower effort/less antagonistic/ more anonymous responses meant I heard more from this kind of person.
I don’t know if that is true on the forum, but I would guess that the higher effort it is to respond the more selective the responses become in some direction. I guess I’d ask if you think that the people spending the most effort are likely to be the most informed. In my experience, they aren’t.
More broadly I think it would be good if the forum optionally took some information about users—location, income, gender, cause area, etc and on answers with more than say 10 votes would display some kind of breakdown. I imagine it would sometimes be interesting to find out how exactly agreement and disagreement cut on different issues.
Also I think it’s good to be able to anonymously express unpopular views. For most of human history it’s been unpopular to express support for LGBT+, the rights of women, animals. But if anonymous systems had existed we might have seen more support for such views. Likewise, pushing back against powerful people is easier if you can do it anonymously.
It seems like we could use the new reactions for some of this. At the moment they’re all positive but there could be some negative ones. And we’d want to be able to put the reactions on top level posts (which seems good anyway).
I think that it is generally fine to vote without explanations, but it would be nice to know why people are disagreeing or disliking something. Two scenarios come to mind:
If I write a comment that doesn’t make any claim/argument/proposal and it gets downvotes, I’m unclear what those downvotes mean.
If I make a post with a claim/argument/proposal and it gets downvoted without any comments, it isn’t clear what aspect of the post people have a problem with.
I remember writing in a comment several months ago about how I think that theft from an individual isn’t justified even if many people benefit from it, and multiple people disagreed without continuing the conversation. So I don’t know why they disagreed, or what part of the argument they through was wrong. Maybe I made a simple mistake, but nobody was willing to point it out.
I also think that you raise good points regarding demographics and the willingness of different groups of people to voice their perspectives.
I agree it would be nice to know, but in every case someone has decided they do want to vote but don’t want to comment. Sometimes I try and cajole an answer, but ultimately I’m glad they gave me any information at all.
Sam Harris takes Giving What We Can pledge for himself and for his meditation company “Waking Up”
Harris references MacAksill and Ord as having been central to his thinking and talks about Effective Altruism and exstential risk. He publicly pledges 10% of his own income and 10% of the profit from Waking Up. He also will create a series of lessons on his meditation and education app around altruism and effectiveness.
Harris has 1.4M twitter followers and is a famed Humanist and New Athiest. The Waking Up app has over 500k downloads on android, so I guess over 1 million overall.
Harris is a marmite figure—in my experience people love him or hate him.
It is good that he has done this.
Newswise, it seems to me it is more likely to impact the behavior of his listeners, who are likely to be well-disposed to him. This is a significant but currently low-profile announcement. As will the courses be on his app.
I don’t think I’d go spreading this around more generally, many don’t like Harris and for those who don’t like him, it could be easy to see EA as more of the same (callous superior progessivism).
In the low probability (5%?) event that EA gains traction in that space of the web (generally called the Intellectual Dark Web—don’t blame me, I don’t make the rules) I would urge caution for EA speakers who might pulled into polarising discussion which would leave some groups feeling EA ideas are “not for them”.
This seems quite likely given EA Survey data where, amongst people who indicated they first heard of EA from a Podcast and indicated which podcast, Sam Harris’ strongly dominated all other podcasts.
More speculatively, we might try to compare these numbers to people hearing about EA from other categories. For example, by any measure, the number of people in the EA Survey who first heard about EA from Sam Harris’ podcast specifically is several times the number who heard about EA from Vox’s Future Perfect. As a lower bound, 4x more people specifically mentioned Sam Harris in their comment than selected Future Perfect, but this is probably dramatically undercounting Harris, since not everyone who selected Podcast wrote a comment that could be identified with a specific podcast. Unfortunately, I don’t know the relative audience size of Future Perfect posts vs Sam Harris’ EA podcasts specifically, but that could be used to give a rough sense of how well the different audiences respond.
Notably, Harris has interviewed several figures associated with EA; Ferriss only did MacAskill, while Harris has had MacAskill, Ord, Yudkowsky, and perhaps others.
This is true, although for whatever reason the responses to the podcast question seemed very heavily dominated by references to MacAskill.
This is the graph from our original post, showing every commonly mentioned category, not just the host (categories are not mutually exclusive). I’m not sure what explains why MacAskill really heavily dominated the Podcast category, while Singer heavily dominated the TED Talk category.
How are we going to deal emotionally with the first big newspaper attack against EA?
EA is pretty powerful in terms of impact and funding.
It seems only an amount of time before there is a really nasty article written about the community or a key figure.
Last year the NYT wrote a hit piece on Scott Alexander and while it was cool that he defended himself, I think he and the rationalist community overreacted and looked bad.
I would like us to avoid this.
If someone writes a hit piece about the community, Givewell, Will MacAskill etc, how are we going to avoid a kneejerk reaction that makes everything worse?
I suggest if and when this happens:
individuals largely don’t respond publicly unless they are very confident they can do so in a way that leads to deescalation.
articles exist to get clicks. It’s worth someone (not necessarily me or you) responding to an article in the NYT, but if, say a niche commentator goes after someone, fewer people will hear it if we let it go.
let the comms professionals deal with it. All EA orgs and big players have comms professionals. They can defend themselves.
if we must respond (we often needn’t) we should adopt a stance of grace, curiosity and humility. Why do they think these things are true? What would convince us?
Personally I hate being attacked and am liable to feel defensive and respond badly. I assume you are no different. I’d like to think about this so that if and when it happens we can avoid embarrassing ourselves and the things we care about.
Yeah, I think the community response to the NYT piece was counterproductive, and I’ve also been dismayed at how much people in the community feel the need to respond to smaller hit pieces, effectively signal boosting them, instead of just ignoring them. I generally think people shouldn’t engage with public attacks unless they have training in comms (and even then, sometimes the best response is just ignoring).
The Scout Mindset deserved 1/10th of the marketing campaign of WWOTF. Galef is a great figurehead for rational thinking and it would have been worth it to try and make her a public figure.
I think much of the issue is that: 1. It took a while to ramp up to being able to do things such as the marketing campaign for WWOTF. It’s not trivial to find the people and buy-in necessary. Previous EA books haven’t had similar. 2. Even when you have that capacity, it’s typically much more limited than we’d want.
You are an EA, if you want to be. Reading this forum is enough. Giving a little of your salary effectively is enough. Trying to get an impactful job is enough. If you are trying even with a fraction of your resources to make the world better and chatting with other EAs about it, you are one too.
I am really not the person to do it, but I still think there needs to be some community therapy here. Like a truth and reconciliation committee. Working together requires trust and I’m not sure we have it.
Curious if you have examples of this being done well in communities you’ve been aware of? I might have asked you this before.
I’ve been part of an EA group where some emotionally honest conversations were had, and I think they were helpful but weren’t a big fix. I think a similar group later did a more explicit and formal version and they found it helpful.
I think the strategy fortnight worked really well. I suggest that another one is put in the calendar (for say 3 months time) and then rather than dripfeeding comment we sort of wait and then burst it out again.
It felt better to me, anyway to be like “for these two weeks I will engage”
I hope Will MacAskill is doing well. I find it hard to predict how he’s doing as a person. While there have been lots of criticisms (and I’ve made some) I think it’s tremendously hard to be the Shelling person for a movement. There is a seperate axis however, and I hope in himself he’s doing well and I imagine many feel that way. I hope he has an accurate picture here.
I notice some people (including myself) reevaluating their relationship with EA.
This seems healthy.
When I was a Christian it was extremely costly for me to reduce my identification and resulted in a delayed and much more final break than perhaps I would have wished[1]. My general view is that people should update quickly, and so if I feel like moving away from EA, I do it when I feel that, rather than inevitably delaying and feeling ick.
Notably, reducing one’s identification with the EA community need not change one’s poise towards effective work/donations/earn to give. I doubt it will change mine. I just feel a little less close to the EA community than once I did, and that’s okay.
I don’t think I can give others good advice here, because we are all so different. But the advice I would want to hear is “be part of things you enjoy being part of, choose an amount of effort to give to effectiveness and try to be a bit more effective with that each month, treat yourself kindly because you too are a person worthy of love”
I think a slow move away from Christianity would have been healthier for me. Strangely I find it possible to imagine still being a Christian, had things gone differently, even while I wouldn’t switch now.
The vibe at EAG was chill, maybe a little downbeat, but fine. I can get myself riled up over the forum, but it’s not representative! Most EAs are just getting on with stuff.
(This isn’t to say that forum stuff isn’t important, its just as important as it is rather than what should define my mood)
We have thought about that. Probably the main reason we haven’t done this is because of this reason, on which I’ll quote myself on from an internal slack message:
Currently if someone makes an anon account, they use an anonymous email address. There’s usually no way for us, or, by extension, someone who had full access to our database, to deanonymize them. However, if we were to add this feature, it would tie the anonymous comments to a primary account. Anyone who found a vulnerability in that part of the code, or got an RCE on us, would be able post a dump that would fully deanonymize all of those accounts.
“Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on.”
It reads to me as arrogant, and epitomises the worst caracatures my friends do of EAs. Read it in a snarky voice (such as one might if they struggled with the movement and were looking to do research) “Rather that just doing what feels right...”
I suggest it gets changed to one of the following:
“We use evidence and careful analysis to find the very best causes to work on.”
“It’s great when anyone does a kind action no matter how small or effective. We have found value in using evidence and careful analysis to find the very best causes to work on.”
I am genuinely sure whoever wrote it meant well, so thank you for your hard work.
I also thought this when I first read that sentence on the site, but I find it difficult (as I’m sure its original author does) to communicate its meaning in a subtler way. I like your proposed changes, but to me the contrast presented in that sentence is the most salient part of EA. To me, the thought is something like this:
“Doing good feels good, and for that reason, when we think about doing charity, we tend to use good feeling as a guide for judging how good our act is. That’s pretty normal, but have you considered that we can use evidence and analysis to make judgments about charity?”
The problem IMHO is that without the contrast, the sentiment doesn’t land. No one, in general, disagrees in principle with the use of evidence and careful analysis: it’s only in contrast with the way things are typically done that the EA argument is convincing.
I would choose your statement over the current one.
I think the sentiment lands pretty well even with a very toned down statement. The movement is called “effective altruism”. I think often in groups are worried that outgroups will not get their core differences when generally that’s all outgroups know about them.
I don’t think that anyone who visits that website won’t think that effectiveness isn’t a core feature. And I don’t think we need to be patronising (as EAs are charactured as being in conversations I have) in order to make known something that everyone already knows.
Several journalists (including those we were happy to have write pieces about WWOTF) have contacted me but I think if I talk to them, even carefully, my EA friends will be upset with me. And to be honest that upsets me.
We are in the middle of a mess of our own making. We deserve scrutiny. Ugh, I feel dirty and ashamed and frustrated.
To be clear, I think it should be your own decision to talk to journalists, but I do also just think that it’s just better for us to tell our own story on the EA Forum and write comments, and not give a bunch of journalists the ability to greatly distort the things we tell them in a call, with a platform and microphone that gives us no opportunity to object or correct things.
I have been almost universally appalled at the degree to which journalists straightforwardly lie in interviews, take quotes massively out of context, or make up random stuff related to what you said, and I do think it’s better that if you want to help the world understand what is going on, that you write up your own thoughts in your own context, instead of giving that job to someone else.
I think 90% of the answer to this is risk aversion from funders, especially LTFF and OpenPhil, see here. As such many things struggled for funding, see here.
We should acknowledge that doing good policy research often involves actually talking to and networking with policy people. It involves running think tanks and publishing policy reports, not just running academic institutions and publishing papers. You cannot do this kind of research well in a vacuum.
That fact combined with funders who were (and maybe still are) somewhat against funding people (except for people they knew extremely well) to network with policy makers in any way, has lead to (maybe is still leading to) very limited policy research and development happening.
I am sure others could justify this risk averse approach, and there are totally benefits to being risk averse. However in my view this was a mistake (and is maybe an ongoing mistake). I think was driven by the fact that funders were/are: A] not policy people, so do/did not understand the space so are were hesitant to make grants; B] heavily US centric, so do/did not understand the non-US policy space; and C] heavily capacity constrained, so do/did not have time to correct for A or B.
– –
(P.S. I would also note that I am very cautious about saying there is “a lack of concrete policy suggestions” or at least be clear what is meant by this. This phrase is used as one of the reasons for not funding policy engagement and saying we should spend a few more years just doing high level academic work before ever engaging with policy makers. I think this is just wrong. We have more than enough policy suggestions to get started and we will never get very very good policy design unless we get started and interact with the policy world.)
My current model is that actually very few people who went to DC and did “AI Policy work” chose a career that was well-suited to proposing policies that help with existential risk from AI. In-general people tried to choose more of a path of “try to be helpful to the US government” and “become influential in the AI-adjacent parts of the US government”, but there are almost no people working in DC whose actual job it is to think about the intersection of AI policy and existential risk. Mostly just people whose job it is to “become influential in the US government so that later they can steer the AI existential risk conversation in a better way”.
I find this very sad and consider it one of our worst mistakes, though I am also not confident in that model, and am curious whether people have alternative models.
but there are almost no people working in DC whose actual job it is to think about the intersection of AI policy and existential risk.
That’s probably true because it’s not like jobs like that just happen to exist within government (unfortunately), and it’s hard to create your own role descriptions (especially with something so unusual) if you’re not already at the top.
That said, I think the strategy you describe EAs to have been doing can be impactful? For instance, now that AI risk has gone mainstream, some groups in government are starting to work on AI policy more directly, and if you’re already working on something kind of related and have a bunch of contacts and so on, you’re well-positioned to get into these groups and even get a leading role.
What’s challenging is that you need to make career decisions very autonomously and have a detailed understanding of AI risk and related levers to carve out your own valuable policy work at some point down the line (and not be complacent with “down the line never comes until it’s too late”). I could imagine that there are many EA-minded individuals who went into DC jobs or UK policy jobs with the intent to have an impact on AI later, but they’re unlikely to do much with that because they’re not proactive enough and not “in the weeds” enough with thinking about “what needs to happen, concretely, to avert an AI catastrophe?.”
Even so, I think I know several DC EAs who are exceptionally competent and super tuned in and who’ll likely do impactful work down the line, or are already about to do such things. (And I’m not even particularly connected to that sphere, DC/policy, so there are probably many more really cool EAs/EA-minded folks there that I’ve never talked to or read about.)
The slide Nathan is referring to. “We didn’t listen” feels a little strong; lots of people were working on policy detail or calling for it, it just seems ex post like it didn’t get sufficient attention. I agree directionally though, and Richard’s guesses at the causes (expecting fast take-off + business-as-usual politics) seem reasonable to me.
I talked to someone outside EA the other day who said that in a competive tender they wouldn’t apply to EA funders because they thought the process would likely to go to someone with connections to OpenPhil.
Please post your jobs to Twitter and reply with @effective_jobs. Takes 5 minutes. and the jobs I’ve posted and then tweeted have got 1000s of impressions.
Or just DM me on twitter (@nathanpmyoung) and I’ll do it. I think it’s a really cheap way of getting EAs to look at your jobs. This applies to impactful roles in and outside EA.
Here is an example of some text:
-tweet 1
Founder’s Pledge Growth Director
@FoundersPledge are looking for someone to lead their efforts in growing the amount that tech entrepreneurs give to effective charities when they IPO.
I get why I and other give to Givewell rather than catastrophic risk—sometimes it’s good to know your “Impact account” is positive even if all the catastrophic risk work was useless.
But why do people not give to animal welfare in this case? Seems higher impact?
And if it’s just that we prefer humans to animals that seems like something we should be clear to ourselves about.
Also I don’t know if I like my mental model of an “impact account”. Seems like my giving has maybe once again become about me rather than impact.
This is exactly why I mostly give to animal charities. I do think there’s higher uncertainty of impact with animal charities compared to global health charities so I still give a bit to AMF. So roughly 80% animal charities, 20% global health.
Thanks for brining our convo here! As context for others, Nathan and I had a great discussion about this which was supposed to be recorded...but I managed to mess up and didn’t capture the incoming audio (i.e. everything Nathan said) 😢
Guess I’ll share a note I made about this (sounds AI written because it mostly was, generated from a separate rambly recording). A few lines are a little spicier than I’d ideally like but 🤷
Donations and Consistency in Effective Altruism
I believe that effective altruists should genuinely strive to practice effective altruism. By this, I mean that there are individuals who earnestly and seriously agree with the core arguments that animal welfare charities deserve significant financial support, both in relative and absolute terms. However, they do not always follow through on these convictions when it comes to donations.
Many, for example, will eagerly nod along with introductory presentations for university effective altruism groups often highlight the fact that a tiny fraction of all donations go toward animal welfare causes, even within EA.
And, as far as I can tell, very few if any EAs affirmatively dispute that animal welfare as a cause is simply more important and neglected, and similarly as tractable, as global poverty. But their donations do not seem to reflect this, going to GiveWell-type charities like GiveDirectly or Against Malaria Foundation instead of animal welfare organizations.
While supporting poverty alleviation efforts is commendable in its own right – after all we want poor people having more money and fewer dying from preventable diseases – it seems incongruous given their professed beliefs.
Without delving too deeply into speculation or psychoanalysis regarding individual motivations behind these donation choices; one possibility is simply an emotional preference for contributing toward human-centric causes over those focused on animals’ well-being.
To be clear: I am not claiming any personal moral superiority here; my own charitable giving record is awfully small in relative terms. Nonetheless I encourage fellow EAs who share concerns about factory farming’s abhorrent nature and have resources available for philanthropy to seriously consider allocating their donations toward animal welfare causes.
Thanks for posting this. I had branching out my giving strategy to conclude some animal-welfare organizations on the to-do list, but this motivated me to actually pull the trigger on that.
I think most of the animal welfare neglect comes from the fact that if people are deep enough into EA to accept all of its “weird” premises they will donate to AI safety instead. Animal welfare is really this weird midway spot between “doesn’t rest on controversial claims” and “maximal impact”.
Definitely part of the explanation, but my strong impression from interaction irl and on Twitter is that many (most?) AI-safety-pilled EAs donate to GiveWell and much fewer to anything animal related.
I think ~literally except for Eliezer (who doesn’t think other animals are sentient), this isn’t what you’d expect from the weirdness model implied.
Assuming I’m not badly mistaken about others’ beliefs and the gestalt (sorry) of their donations, I just don’t think they’re trying to do the most good with their money. Tbc this isn’t some damning indictment—it’s how almost all self-identified EAs’ money is spent and I’m not at all talking about ‘normal person in rich country consumption.’
I continue to think that a community this large needs mediation functions to avoid lots of harm with each subsequent scandal.
People asked for more details. so I wrote the below.
Let’s look at some recent scandals and I’ll try and point out some different groups that existed.
FTX—longtermists and non-lontermists, those with greater risk tolerance and less
Bostrom—rationalists and progressives
Owen Cotton-Barrett—looser norms vs more robust, weird vs normie
Nonlinear—loyalty vs kindness, consent vs duty of care
In each case, the community disagrees on who we should be and what we should be. People write comments to signal that they are good and want good things and shouldn’t be attacked. Other people see these and feel scared that they aren’t what the community wants.
This is tiring and anxiety inducing for all parties. In all cases here there are well intentioned, hard working people who have given a lot to try and make the world better who are scared they cannot trust their community to support them if push comes to shove. There are people horrified at the behaviour of others, scared that this behaviour will repeat itself, with all the costs attached. I feel this way, and I don’t think I am alone.
I think we need the community equivalent of therapy and mediation. We have now got to the stage where national media articles get written about our scandals and people threaten litigation. I just don’t think that a community of 3000 apes can survive this without serious psychological costs which in turn affect work and our lives. We all don’t want to be chucked out of a community which is safety and food and community for us. We all don’t want that community to become a hellhole. I don’t, SBF doesn’t, the woman hurt by OCB doesn’t, Kat and Emerson and Chloe and Alice don’t.
That’s not to say that all behaviour is equal, but that I think the frame here is empathy, boundary setting and safety, not conflict, auto-immune responses and exile.
What do I suggest?
After each scandal we have spaces to talk about our feelings, then we discuss what we think the norms of the community should be. Initially there will be disagreement but in time as we listen to those we disagree with we may realise how we differ. Then we can try and reintegrate this understanding to avoid it happening again. That’s what trust is—the confidence that something won’t happen above tolerance.
A concrete example
After the Bostrom stuff we had rationalist and progressive EAs in disagreement. Some thought he’d responded well, others badly. I think there was room for a discussion, to hear how unsafe his behaviour had left people feeling “do people judge my competence based on the colour of my skin?” “will my friends be safe here?”. I don’t think these feelings can be dismissed as wokery gone mad. But I think the other group had worries too “Will I be judged for things I said years ago?” “Seemingly even an apology isn’t enough”. I find I can empathise with both groups.
And I suggest what we want is some norms around this. Norms about things we do and don’t do. The aim should be to reduce community stress through there being bright lines and costs for behaviour we deem bad. And ways for those who do unacceptable things to come back to the community. I think there could be mutually agreeable ones, but I think the process would be tough.
We’d have to wrestle with how Bostrom and Hanson’s productivity seems related to their ability to think weird or ugly thoughts. We’d have to think about if mailing lists 20 years ago were public or private. We’d have to think about what value we put on safety. And we’d have to be willing not to pick up the sword if it didn’t go our way.
But I think there are acceptable positions here. Where people acknowledge harmful patterns of behaviour, perhaps even voluntarily leave for a time. Where people talk about the harm and the benefit created by those they disagree with. Where others see that some value weirdness/creativity more/less than they do. Where we rejoice in what we have achieved and mourn over how we have hurt one another. Where we grow to be a kinder, more mature community.
Intermission
This stuff breaks my heart. Not because I am good, but because I have predictably hurt people and been hurt by people in the past. And I’d like the cycle to stop. In my own life, conflict has never been the way out of this. Either I should leave people I cannot work with, or share and listen to those I can. And it is so hard and I fail often, but it’s better than becoming jaded and cruel or self-hating and perfectionist. I am broken, I am enough, I can be better. EA is flawed, EA is good, EA can improve. The world is awful, the world is better that it used to be, the world can improve.
As it is
Currently, I think we aren’t doing this work, so every subsequent scandal adds another grievance to the pile. And I guess people are leaving the community. If we spend millions a year trying to get graduates, isn’t it worth spending the same to keep long time members? I don’t know if there is a way to keep Kat and Emerson, Alice and Chloe, the concerned global healthy worker and the person who thinks SBF did nothing wrong, and me and you, but currently I don’t see us spending nearly the appropriate amount of mental effort or resources.
Oh and I’m really not angling to do this work. I have suggestions, sure, but I think the person should be widely trusted by the community as neutral and mature.
Community health is also like the legal system in that they enforce sanctions so I wonder if that reduces the chance that someone reaches out to them to mediate.
A previous partner and I did a sex and consent course together online I think it’s helped me be kinder in relationships.
Useful in general.
More useful if, you:
- have sex casually— see harm in your relationships and want to grow - are poly
As I’ve said elsewhere I think a very small proportion of people in EA are responsible for most of the relationship harms. Some of bad actors, who need to be removed, some are malefactors, who have either lots of interactions or engage in high risk behaviours and accidentally cause harm. I would guess I have more traits of the second category than almost all of you. So people like me should do the most work to change.
So most of you probably don’t need this, but if you are in some of the above groups, I’d recommend a course like this. Save yourself the heartache of upsetting people you care about.
Can we have some people doing AI Safety podcast/news interviews as well as Yud?
I am concerned that he’s gonna end up being the figurehead here. I assume someone is thinking of this, but I’m posting here to ensure that it is said. I am pretty sure that people are working on this, but I think it’s good to say this anyway.
We aren’t a community who says “I guess he deserves it” we say “who is the best person for the job?”. Yudkowsky, while he is an expert isn’t a median voice. His estimates of P(doom) are on the far tail of EA experts here. So if I could pick 1 person I wouldn’t pick him and frankly I wouldn’t pick just one person.
Some other voices I’d like to see on podcasts/ interviews:
Toby Ord
Paul Christiano
Ajeya Cotra
Amanda Askell
Will MacAskill
Joe Carlsmith*
Katja Grace*
Matthew Barnett*
Buck Schlegeris
Luke Meulhauser
Again, I’m not saying noone has thought of this (80%) they have. But I’d like to be 97% sure, so I’m flagging it.
I am a bit confused by your inclusion of Will MacAskill. Will has been on a lot of podcasts, while for Eliezer I only remember 2. But your text sounds a bit like you worry that Eliezer will be too much on podcasts and MacAskill too little (I don’t want to stop MacAskill from going on podcasts btw. I agree that having multiple people present different perspectives on AGI safety seems like a good thing).
I don’t think you should be optimizing to avoid extreme views, but in favor of those with the most robust models, who can also communicate them effectively to the desired audience. I agree that if we’re going to be trying anything resembling public outreach it’d be good to have multiple voices for a variety of reasons.
On the first half of the criteria I’d feel good about Paul, Buck, and Luke. On the second half I think Luke’s blog is a point of evidence in favor. I haven’t read Paul’s blog, and I don’t think that LessWrong comments are sufficiently representative for me to have a strong opinion on either Paul or Buck.
I notice I am pretty skeptical of much longtermist work and the idea that we can make progress on this stuff just by thinking about it.
I think future people matter, but I will be surprised if, after x-risk reduction work, we can find 10s of billions of dollars of work that isn’t busywork and shouldn’t be spent attempting to learn how to get eg nations out of poverty.
I have heard one anecdote of an EA saying that they would be less likely to hire someone on the basis of their religion because it would imply they were less good at their job less intelligent/epistemically rigorous. I don’t think they were involved in hiring, but I don’t think anyone should hold this view.
Here is why:
As soon as you are in a hiring situation, you have much more information than priors. Even if it were true that, say, ADHD[1] were less rational then the interview process should provide much more information than such a prior. If that’s not the case, get a better interview process, don’t start being prejudiced!
People don’t mind meritocracy, but they want a fair shake. If I heard that people had a prior that ADHD folks were less likely to be hard working, regardless of my actual performance in job tests, I would be less likely to want to be part of this community. You might lose my contributions. It seems likely to me that we come out ahead by ignoring small differences in groups so people don’t have to worry about this. People are very sensitive to this. Let’s agree not to defect. We judge on our best guess of your performance, not on appearances.
I would be unsurprised if this kind of thinking cut only one way. Is anyone suggesting they wouldn’t hire poly people because of the increased drama or men because of the increased likelihood of sexual scandal? No! We already think some information is irrelevant/inadmissible as a prior in hiring. Because we are glad of people’s right to be different or themselves. To me, race and religion clearly fall in this space. I want people to feel they can be human and still have a chance of a job.
I wouldn’t be surprised if this cashed out to “I hire people like me”. In this example was the individual really hiring on the basis of merit or did they just find certain religious people hard to deal with. We are not a social club, we are trying to do the most good. We want the best, not the people who are like us.
This pattern matches to actual racism/sexism. Like “sometimes I don’t get hired because people think Xs are worse at jobs”. How is that not racism? Seems bad.
Counterpoints:
Sometime gut does play a play a role. We think someone would get better on our team. Some might argue that it’s fine to use this as a tiebreaker. Or that its better to be honest that this is what’s going on.
Personally I think they points outweigh the counterpoints.
Hiring processes should hire the person who seems most likely to do the best job. And candidates should be confident this is happening. But for both predictive reasons, community welfare reasons and avoiding obvious pitfalls reasons I think small priors around race, religion, sexuality, gender, sexual practice should be discounted[2]. If you think the candidate is better or worse, it should show in the interview process. And yes, I get that gut plays a role, but I’d be really wary of gut that feeds clear biases. I think a community where we don’t do that comes out ahead and does more good.
I would be unsurprised if this kind of thinking cut only one way. Is anyone suggesting they wouldn’t hire poly people because of the increased drama or men because of the increased likelihood of sexual scandal?
In the wake of the financial crisis it was not uncommon to see suggestions that banks etc. should hire more women to be traders and risk managers because they would be less temperamentally inclined towards excessive risk taking.
I think that we have more-or-less agreed as societies that there are some traits that is is okay to use to make choices about people (mainly: their actions/behaviors), and there are some traits that is is not okay to use (mainly: things that the person didn’t choose and isn’t responsible for). Race, religion, gender, and the like are widely accepted[1] as not socially acceptable traits to use when evaluating people’s ability to be a member of a team.[2] But there are other traits that we commonly treat as acceptable to use as the basis of treating people differently, such as what school someone went to, how many years of work experience they have, if they have a similar communication style as us, etc.
I think I might split this into two different issues.
One issue is: it isn’t very fair to give or withhold jobs (and other opportunities) based on things that people didn’t really have much choice in (such as where they were born, how wealthy their parents were, how good of an education they got in their youth, etc.)
A separate issue is: it is ineffective to employment decisions (hiring, promotions, etc.) based on things that don’t predict on-the-job success.
Sometimes these things line up nicely (such as how it isn’t fair to base employment decisions on hair color, and it is also good business to not base employment decisions on hair color). But sometimes they don’t line up so nicely: I think there are situations where it makes sense to use “did this person go to a prestigious school” to make employment decisions because that will get you better on-the-job performance; but it also seems unfair because we are in a sense rewarding this person for having won the lottery.[3]
In a certain sense I suppose this is just a mini rant about how the world is unfair. Nonetheless, I do think that a lot of conversations about hiring and discriminations get the two different issues conflated.
Employment is full of laws, but even in situations where there isn’t any legal issue (such as inviting friends over for a movie party, or organizing a book club) I view it as somewhat repulsive to include/exclude people based on gender/race/religion/etc. Details matter a lot, and I can think of exceptions, but that is more or less my starting point.
I’ve heard the phrase “genetic lottery,” and I suspect genes to contribute a lot to academic/career success. But lots of other things outside a person’s control affect how well they perform: being born in a particular place, how good your high school teachers were, stability of the household, if your parents had much money, and all the other things that we can roughly describe as “fortune” or “luck” or “happenstance.”
I know lots of people with lots of dispositions experience friction with just declining their parents’ religions, but that doesn’t mean I “get it” i.e., conflating religion with birth lotteries and immutability seems a little unhinged to me.
There may be a consensus that it’s low status to say out loud “we only hire harvard alum” or maybe illegal (or whatever), but there’s not a lot of pressure to actually try reducing implicit selection effects that end up in effect quite similar to a hardline rule. And I think harvard undergrad admissions have way more in common with lotteries than religion does!
I think the old sequencesy sort of “being bad at metaphysics (rejecting reductionism) is a predictor of unclear thinking” is fine! The better response to that is “come on, no one’s actually talking about literal belief in literal gods, they’re moreso saying that the social technologies are valuable or they’re uncomfortable just not stewarding their ancestors’ traditions” than like a DEI argument.
There is more to get into here but two main things:
I guess some EAs, and some who I think do really good work do literally believe in literal gods
I don’t actually think this is that predictive. I know some theists who are great at thinking carefully and many athiests who aren’t. I reckon I could distinguish the two in a discussion better than rejecting the former out of hand.
...they would be less likely to hire someone on the basis of their religion because it would imply they were less good at their job.
Some feedback on this post: this part was confusing. I assume that what this person said was something like “I think a religious person would probably be harder to work with because of X”, or “I think a religious person would be less likely to have trait Y”, rather than “religious people are worse at jobs”.
The specifics aren’t very important here, since the reasons not to discriminate against people for traits unrelated to their qualifications[1] are collectively overwhelming. But the lack of specifics made me think to myself: “is that actually what they said?”. It also made it hard to understand the context of your counterarguments, since there weren’t any arguments to counter.
Religion can sometimes be a relevant qualification, of course; if my childhood synagogue hired a Christian rabbi, I’d have some questions. But I assume that’s not what the anecdotal person was thinking about.
The person who was told this was me, and the person I was talking to straight up told me he’d be less likely to hire Christians because they’re less likely to be intelligent
Please don’t assume that EAs don’t actually say outrageously offensive things—they really do sometimes!
Edit: A friend told me I should clarify this was a teenage edgelord—I don’t want people to assume this kind of thing gets said all the time!
And since posting this I’ve said this to several people and 1 was like “yeah no I would downrate religious people too”
I think a poll on this could be pretty uncomfortable reading. If you don’t, run it and see.
Put it another way, would EAs discriminate against people who believe in astrology? I imagine more than the base rate. Part of me agrees with that, part of me thinks its norm harming to do. But I don’t think this one is “less than the population”
“I think religious people are less likely to have trait Y” was one form I thought that comment might have taken, and it turns out “trait Y” was “intelligence”.
Now that I’ve heard this detail, it’s easier to understand what misguided ideas were going through the speaker’s mind. I’m less confused now.
“Religious people are bad at jobs” sounds to me like “chewing gum is dangerous” — my reaction is “What are you talking about? That sounds wrong, and also… huh?”
By comparison, “religious people are less intelligent” sounds to me like “chewing gum is poisonous” — it’s easier to parse that statement, and compare it to my experience of the world, because it’s more specific.
*****
As an aside: I spend a lot of time on Twitter. My former job was running the EA Forum. I would never assume that any group has zero members who say offensive things, including EA.
I think the strongest reason to not do anything that even remotely looks like employer discrimination based on religion is that it’s illegal, at least for the US, UK, and European Union countries, which likely jointly encompasses >90% of employers in EA.
(I wouldn’t be surprised if this is true for most other countries as well, these are just the ones I checked).
There’s also the fact that, as a society and subject to certain exceptions, we’ve decided that employers shouldn’t be using an employee’s religious beliefs or lack thereof as an assessment factor in hiring. I think that’s a good rule from a rule-utilitarian framework. And we can’t allow people to utilize their assumptions about theists, non-theists, or particular theists in hiring without the rule breaking down.
The exceptions generally revolve around personal/family autonomy or expressive association, which don’t seem to be in play in the situation you describe.
I think that I generally agree with what you are suggesting/proposing, but there are all kinds of tricky complications. The first thing that jumps to my mind is that sometimes hiring the person who seems most likely to do the best job ends up having a disparate impact, even if there was no disparate treatment. This is not a counterargument, of course, but more so a reminder that you can do everything really well and still end up with a very skewed workforce.
I generally agree with the meritocratic perspective. It seems a good way (maybe the best?) to avoid tit-for-tat cycles of “those holding views popular in some context abuse power → those who don’t like the fact that power was abused retaliate in other contexts → in those other contexts, holding those views results in being harmed by people in those other contexts who abuse power”.
Good point about the priors. Strong priors about these things seem linked to seeing groups as monoliths with little within-group variance in ability. Accounting for the size of variance seems under-appreciated in general. E.g., if you’ve attended multiple universities, you might notice that there’s a lot of overlap between people’s “impressiveness”, despite differences in official university rankings. People could try to be less confused by thinking in terms of mean/median, variance, and distributions of ability/traits more, rather than comparing groups by their point estimates.
Some counter-considerations:
Religion and race seem quite different. Religion seems to come with a bunch of normative and descriptive beliefs that could affect job performance—especially in EA—and you can’t easily find out about those beliefs in a job interview. You could go from one religion to another, from no religion to some religion, or some religion to no religion. The (non)existence of that process might give you valuable information about how that person thinks about/reflects on things and whether you consider that to be good thinking/reflection.
For example, from a irreligious perspective, it might be considered evidence of poor thinking if a candidate thinks the world will end in ways consistent with those described in the Book of Revelation, or think that we’re less likely to be in a simulation because a benevolent, omnipotent being wouldn’t allow that to happen to us.
Anecdotally, on average, I find that people who have gone through the process of abandoning the religion they were raised with, especially at a young age, to be more truth-seeking and less influenced by popular, but not necessarily true, views.
Religion seems to cover too much. Some forms of it seems to offer immunity to act in certain ways, and the opportunity to cheaply attack others if they disagree with it. In other communities, religion might be used to justify poor material/physical treatment of some groups of people, e.g. women and gay people. While I don’t think being accepting of those religions will change the EA community too much, it does say something to/negatively affect the wider world if there’s sufficient buy-in/enough of an alliance/enough comfort with them.
But yeah, generally, sticking to the Schelling point of “don’t discriminate by religion (or lack-thereof)” seems good. Also, if someone is religious and in EA (i.e., being in an environment that doesn’t have too many people who think like them), it’s probably good evidence that they really want to do good and are willing to cooperate with others to do so, despite being different in important ways. It seems a shame to lose them.
Oh, another thought. (sorry for taking up so much space!) Sometimes something looks really icky, such as evaluating a candidate via religion, but is actually just standing in for a different trait. We care about A, and B is somewhat predictive of A, and A is really hard to measure, then maybe people sometimes use B as a rough proxy for A.
I think that this is sometimes used as the justification for sexism/racism/etc, where the old-school racist might say “I want a worker who is A, and B people are generally not A.” If the relationship between A and B is non-existent or fairly weak, then we would call this person out for discriminating unfairly. But now I’m starting to think of what we should do if there really is a correlation between A and B (such as sex and physical strength). That is what tends to happen if a candidate is asked to do an assessment that seems to have nothing to do with the job, such as clicking on animations of colored balloons: it appears to have nothing to do with the job, but it actually measures X, which is correlated with Y, which predicts on-the-job success.
I’d rather be evaluated as an individual than as a member of a group, and I suspect that in-group variation is greater than between-group variation, echoing what you wrote about the priors being weak.
As with many statements people make about people in EA, I think you’ve identified something that is true about humans in general.
I think it applies less to the average person in EA than to the average human. I think people in EA are more morally scrupulous and prone to feeling guilty/insufficiently moral than the average person, and I suspect you would agree with me given other things you’ve written. (But let me know if that’s wrong!)
I find statements of the type “sometimes we are X” to be largely uninformative when “X” is a part of human nature.
Compare “sometimes people in EA are materialistic and want to buy too many nice things for themselves; EA has a materialism problem” — I’m sure there are people in EA like this, and perhaps this condition could be a “problem” for them. But I don’t think people would learn very much about EA from the aforementioned statements, because they are also true of almost every group of people.
I sense that it’s good to publicly name serial harassers who have been kicked out of the community, even if the accuser doesn’t want them to be. Other people’s feeling matter too and I sense many people would like to know who they are.
I think there is a difference between different outcomes, but if you’ve been banned from EA events then you are almost certainly someone I don’t want to invite to parties etc.
It does not. There are a small number of co-funding situations where money from other donors might flow through Open Philanthropy operated mechanisms, but it isn’t broadly possible to donate to Open Philanthropy itself (either for opex or regranting).
Unbalanced karma is good actually. it means that the moderators have to do less. I like the takes of the top users more than the median user and I want them to have more but not total influence.
Appeals to fairness don’t interest me—why should voting be fair?
A friend asked about effective places to give. He wanted to donate through his payroll in the UK. He was enthusiastic about it, but that process was not easy.
It wasn’t particularly clear whether GiveWell or EA Development Fund was better and each seemed to direct to the other in a way that felt at times sketchy.
It wasn’t clear if payroll giving was an option
He found it hard to find GiveWell’s spreadsheet of effectiveness
Feels like making donations easy should be a core concern of both GiveWell and EA Funds and my experience made me a little embarrassed to be honest.
Has anyone ever run a competition for EA related short stories?
Why would this be a good idea? * Narratives resonate with people and have been used to convey ideas for 1000s of years * It would be low cost and fun * Using voting on this forum there is the same risk of “bad posts” as for any other post
How could it work? * Stories submitted under a tag on the EA forum. * Rated by upvotes * Max 5000 words (I made this up, dispute it in the comments) * If someone wants to give a reward, then there could be a prize for the highest rated * If there is a lot of interest/quality they could be collated and even published * Since it would be measured by upvotes it seems unlikely a destructive story would be highly rated (or as likely as any other destructive post on the forum)
Upvote if you think it’s a good idea. If it gets more than 40 karma I’ll write one.
Looking forward to how it plays out! LessWrong made the intentional decision to not do it, because I thought posts are too large and have too many claims and agreement/disagreement didn’t really have much natural grounding any more, but we’ll see how it goes. I am glad to have two similar forums so we can see experiments like this play out.
My hope would be that it would allow people to decouple the quality of the post and whether they agree with it or not. Hopefully people could even feel better about upvoting posts they disagreed with (although based on comments that may be optimistic).
Perhaps combined with a possible tweak in what upvoting means (as mentioned by a few people), someone mentioned we could change “how much do you like this overall” to something that moves away form basing the reaction on an emotions. I think someone suggested something like “Do you think this post adds value” (That’s just a real hack at the alternative, I’m sure there are far better ones)
I guess African, Indian and Chinese voices are underrepresented in the AI Governance discussion. And in the unlikely case we die, we all die and it think it’s weird that half the people who will die have noone loyal to them in the discussion.
We want AI that works for everyone and it seems likely you want people who can represent billions who aren’t currently with a loyal representative.
I’m actually more concerned about the underrepresentation of certain voices as it applies to potential adverse effects of AGI (or even near-AGI) on society that don’t involve all of us dying. In the everyone-dies scenario, I would at least be similarly situated to people from Africa, India, and China in terms of experiencing the exact same bad thing that happens. But there are potential non-fatal outcomes, like locking in current global power structures and values, that affect people from non-Western countries much differently (and more adversely) than they’d affect people like me.
Yeah, in a scenario with “nation-controlled” AGI, it’s hard to see people from the non-victor sides not ending up (at least) as second-class citizens—for a long time. The fear/lack of guarantee of not ending up like this makes cooperation on safety more difficult, and the fear also kind of makes sense? Great if governance people manage to find a way to alleviate that fear—if it’s even possible. Heck, even allies of the leading state might be worried—doesn’t feel too good to end up as a vassal state. (Added later (2023-06-02): It may be a question that comes up as AGI discussions become mainstream.)
Wouldn’t rule out both American and Chinese outside of respective allied territory being caught in the crossfire of a US-China AI race.
Political polarization on both sides in the US is also very scary.
This strikes me as another variation of “EA has a diversity problem.” Good to keep in mind that is it not just about progressive notions of inclusivity, though. There may be VERY significant consequences for the people in vast swaths of the world if a tiny group of people make decisions for all of humanity. But yeah, I also feel that it is a super weird aspect of the anarchic system (in the international relations sense of anarchy) that most of the people alive today have no one representing their interests.
It also seems to echo consistent critiques of development aid not including people in decision-making (along the lines of Ivan Illich’s To Hell with Good Intentions, or more general post-colonial narratives).
What means “have noone loyal to them” and “with a loyal representative”? Are you talking about the indian government? Or are you talking about EAs talking part in discussions such as yourself? (In which case, who are you loyal to?)
And I don’t think I’m good here. I think I try to be loyal to them, but I don’t know what the chinese people want and I think if I try and guess I’ll get it wrong in some key areas.
I’m reminded of when givewell?? asked recipients how they would trade money for children’s lives and they really fucking loved saving children’s lives. If we are doing things for others benefit we should take their weightings into account.
I wish the forum had a better setting for “I wrote this post and maybe people will find it interesting but I don’t want it on the front page unless they do because that feels pretenious”
Give Directly has a President (Rory Stewart) paid $600k, and is hiring a Managing Director. I originally thought they had several other similar roles (because I looked on the website) but I talked to them an seemingly that is not the case. Below is the tweet that tipped me off but I think it is just mistaken.
Once could still take issue with the $600k (though I don’t really)
A ore important question for me though, is to ask Is it right? and Is it a good idea? I think the answer to both of these is a resounding no for a number of reasons.
- (For GiveDirectly). The premise of your entire organisation is that dollars do more good in the hands of the poor than the rich. For your organisation to then spend a huge amount of money on a CEO is arguably going against what the organisation stands for.
- Bad press for the organisation. After SBF and the Abbey etc. this shouldn’t take too much explaining
- Might reflect badly on the organisation when applying for grants
- (My personal gripe) what kind of person working to help the poorest people on earth could live with themselves earning so much given what their organisation. You have become part of the industrial aid complex which makes inequality worse—the kind of thing givedirectly almost seemed to be riling against in the first place.
High NGO salaries make me angry though, so maybe this is a bit too ranty ;).
The expectation of low salaries is one of the biggest problems hobbling the nonprofit sector. It makes it incredibly difficult to hire people of the caliber you need to run a high-performance organization.
what kind of person working to help the poorest people on earth could live with themselves earning so much given what their organisation
This is classic Copenhagen interpretation of ethics stuff. Someone making that kind of money as a nonprofit CEO could almost always make much more money in the private sector while receiving significantly less grief. You’re creating incentives that get us worse nonprofits and a worse world.
I’m interested in the evidence behind the idea that low salaries hobble the nonprofit sector. Is there research to support this outside of the for-profit market? I’m unconvinced that higher salaries (past a certain point) would lead to a better calibre of employee in the NGO field. I would have assumed that the attractiveness of running an effective and high profile org like Give directly might be enough to attract amazing candidates regardless of salary. It would be amazing to do AB testing, or even a RCT on this front but I would be imagine that would be hard to convince organisations to get involved in this research. Personally I think there are enough great leaders out there (especially for an org like givedirectly) who would happily work on 100,000 a year. the salary difference between 100k and 600k might make barely any difference at all in the pool of candidates you attract—but of course this is conjecture.
On the moral side of things, there’s a difference between taking a healthy salary of 100,000 dollars a year—enough to be in the top 0.5% of earners in the world and taking $600,000. We’re not looking for a masochist to run the best orgs, just someone who appreciates the moral weight of that degree of inequality within an organisation that purports to be supporting the world’s poorest.
If earning 600,000 rather than 100,000 is a strong incentive for a person running a non-profit, I probably don’t want them in charge. First I think that this kind of salary might lead someone to be less efficient with spending both in the American base and in distant company operations. NGOs need lean operations as they rely on year to year donations which are never secure—NGOs can’t expect to continue high growth rates of funding year on year like good businesses. Also leaders on high pay are probably likely to feel morally obligated to pay other admin staff more because of their own salary, rather than maximising the amount of money given directly to the poorest.
It may also affect the whole ethos of the organisation and respect from other staff especially in places like Kenya where staff will be getting paid far far less. Imagine you are earning a decent local wage in Kenya, which is still 100x less than your boss in America? Motivating yourself to do your job well becomes difficult. I’ve seen this personally in organisations here in Uganda where Western bosses earn far higher salaries. Local staff see the injustice within their own system then can’t get on board with the vision of the organisation. This kind of salary inequality is likely to affect organisational morale.
At least in the US, Cabinet members, judges, senior career civil servants, and state governors tend to make on average half that. I have heard of some people who would be good federal judges, mainly at the district-court level, turning down nominations because they couldn’t stomach the 85-90% pay cut from being a big-firm partner. The quality of some of these senior political and judicial leaders varies . . . but I don’t think money is the real limiting factor in US leader quality. That is, I don’t get the sense that the US would generally have better leaders if the salaries at the top were doubled or tripled.
The non-salary “benefits” and costs of working at high levels in the government are different from the non-salary “benefits” and costs of working for a non-profit. But I think they differ in ways that some people would prefer the former over the latter (or vice versa).
In other words, a belief that charities should offer their senior leaders a significantly higher salary than senior leaders in world and regional governments potentially implies that almost every developed democracy in the world should be paying their senior leaders and civil servants significantly more than they do. Maybe they should?
I don’t have a firm opinion on salaries for charitable senior officials, but I think Nick is right insofar as high salaries can cause donor disillusionment and loss of morale within the organization. So while I’m willing to start with a presumption that government-comparable salaries for mid-level+ staff are appropriate (because they have been tested by the crucilble of the democratic process), it’s reasonable to ask for evidence that significantly higher salaries improve organizational effectiveness for non-profits.
No engagement: I’ve heard of effective altruism, but do not engage with effective altruism content or ideas at all
Mild engagement: I’ve engaged with a few articles, videos, podcasts, discussions, events on effective altruism (e.g. reading Doing Good Better or spending ~5 hours on the website of 80,000 Hours)
Moderate engagement: I’ve engaged with multiple articles, videos, podcasts, discussions, or events on effective altruism (e.g. subscribing to the 80,000 Hours podcast or attending regular events at a local group). I sometimes consider the principles of effective altruism when I make decisions about my career or charitable donations.
Considerable engagement: I’ve engaged extensively with effective altruism content (e.g. attending an EA Global conference, applying for career coaching, or organizing an EA meetup). I often consider the principles of effective altruism when I make decisions about my career or charitable donations.
High engagement: I am heavily involved in the effective altruism community, perhaps helping to lead an EA group or working at an EA-aligned organization. I make heavy use of the principles of effective altruism when I make decisions about my career or charitable donations.
To me “considerably engaged” EA people are doing a lot. Their median donation is $1000. They have “engaged extensively” and “often consider the principles of effective altruism” To me, they seem “highly engaged” in EA.
I’ve met people who are giving quite a lot of money, who have perhaps tried applied to EA jobs and not succeeded. And yet they are not allowed to consider themselves “highly engaged”. I guess this leads to them feeling disillusioned. It risks creating a privileged class of those who can get jobs at EA orgs and those who can’t. What about those who think they are doing an EA job but it’s not at an EA-aligned organisation? It seems wrong to me that they can’t consider themselves highly engaged.
I would prefer:
“Considerable engagement” → “high engagement”
“High engagement” → “maximum engagement”
And I would prefer the text read as follows:
High (previously considerable) engagement: I’ve engaged extensively with effective altruism content (e.g. attending an EA Global conference, applying for career coaching, or organizing an EA meetup). I often consider the principles of effective altruism when I make decisions about my career or charitable donations, but they are not the biggest factor to me.
Maximum (previously high) engagement: I am deeply involved in the effective altruism community. Perhaps I have chosen my career using the principles of effective altruism. I might earn to give or helping to lead an EA group or working at an EA-aligned organization. Maybe I tried for several years to gain such a career but have since moved to a plan B or Z. Regardless, I make my career or resource decisions on a primarily effective altruist basis.
It’s a bit rough, but I think it allows for people who are earning to give or deeply involved with the community to say they are maximally engaged and that those who are highly engaged to put a 4 without shame. Feel free to put your own drafts in the comments.
Currently, the idea that someone could be earning to give, donating $10,000s per year and perhaps still not consider themself highly engaged in EA seems like a flaw.
I think this is part of a more general problem that people say things like “I’m not totally EA” when they donate 1%+ of their income and are trying hard. Why create a club where so many are insecure about their membership.
I can’t speak for everyone, but if you donate even 1% of your income to charities which you think are effective, you’re EA in my book.
It is one of my deepest hopes, and one of my goals for my own work at CEA, that people who try hard and donate feel like they are certainly, absolutely a part of the movement. I think this is determined by lots of things, including:
The existence of good public conversations about donations, cause prioritization, etc., where anyone can contribute
The frequency of interesting news and stories about EA-related initiatives that make people feel happy about the progress their “team” is making
I hope that the EA Survey’s categories are a tiny speck compared to these.
Thanks for providing a detailed suggestion to go with this critique!
While I’m part of the team that puts together the EA Survey, I’m only answering for myself here.
I’ve met people who are giving quite a lot of money, who have perhaps tried applied to EA jobs and not succeeded. And yet they are not allowed to consider themselves “highly engaged”. I guess this leads to them feeling disillusioned.
People can consider themselves anything they want! It’s okay! You’re allowed! I hope that a single question on the survey isn’t causing major changes to how people self-identify. If this is happening, it implies a side-effect the Survey wasn’t meant to have.
Have you met people who specifically cited the survey (or some other place the question has showed up — I think CEA might have used it before?) as a source of disillusionment?
I’m not sure I understand why people would so strongly prefer being in a “highly engaged” category vs. a “considerably engaged” category if those categories occupy the same relative position on a list. Especially since people don’t use that language to describe themselves, in my experience. But I could easily be missing something.
I want someone who earns-to-give (at any salary) to feel comfortable saying “EA is a big part of my life, and I’m closely involved in the community”. But I don’t think this should determine how the EA Survey splits up its categories on this question, and vice-versa.
*****
One change I’d happily make would be changing “EA-aligned organization” to “impact-focused career” or something like that. But I do think it’s reasonable for the survey to be able to analyze the small group of people whose professional lives are closely tied to the movement, and who spend thousands of hours per year on EA-related work rather than hundreds.
(Similarly, in a survey about the climate movement, it would seem reasonable to have one answer aimed at full-time paid employees and one answer aimed at extremely active volunteers/donors. Both of those groups are obviously critical to the movement, but their answers have different implications.)
Earning-to-give is a tricky category. I think it’s a matter of degree, like the difference between “involved volunteer/group member” and “full-time employee/group organizer”. Someone who spends ~50 hours/year trying to allocate $10,000 is doing something extraordinary with their life, and EA having a big community of people like this is excellent, but I’d still like to be able to separate “active members of Giving What We Can” from “the few dozen people who do something like full-time grantmaking or employ people to do this for them”.
*****
Put another way: Before I joined CEA, I was an active GWWC member, read a lot of EA-related articles, did some contract work for MIRI/CFAR, and went to my local EA meetups. I’d been rejected from multiple EA roles and decided to pursue another path (I didn’t think it was likely I’d get an EA job until months later).
I was pretty engaged at this point, but the nature of my engagement now that I work for CEA is qualitatively different. The opinions of Aaron!2018 should mean something different to community leaders than the opinions of Aaron!2021 — they aren’t necessarily “less important” (I think Aaron!2018 would have a better perspective on certain issues than I do now, blinded as I am by constant exposure to everything), but they are “different”.
*****
All that said, maybe the right answer is to do away with this question and create clusters of respondents who fit certain criteria, after the fact, rather than having people self-define. e.g. “if two of A, B, or C are true, choose category X”.
It’s possible that this question is mean to measure something about non-monetary contribution size, not engagement. In which case, say that.
Call it, “non-financial contribution” and put 4 as ” I volunteer more than X hours” and 5 as “I work on a cause area directly or have taken a lower than salary rate jobs”.
My call: EA gets 3.9 out of 14 possible cult points.
The group is focused on a living leader to whom members seem to display excessively zealous, unquestioning commitment.
No
The group is preoccupied with bringing in new members.
Yes (+1)
The group is preoccupied with making money.
Partial (+0.8)
Questioning, doubt, and dissent are discouraged or even punished.
No
Mind-numbing techniques (such as meditation, chanting, speaking in tongues, denunciation sessions, debilitating work routines) are used to suppress doubts about the group and its leader(s).
No
The leadership dictates sometimes in great detail how members should think, act, and feel (for example: members must get permission from leaders to date, change jobs, get married; leaders may prescribe what types of clothes to wear, where to live, how to discipline children, and so forth).
No
The group is elitist, claiming a special, exalted status for itself, its leader(s), and members (for example: the leader is considered the Messiah or an avatar; the group and/or the leader has a special mission to save humanity).
Partial (+0.5)
The group has a polarized us- versus-them mentality, which causes conflict with the wider society.
Very weak (+0.1)
The group’s leader is not accountable to any authorities (as are, for example, military commanders and ministers, priests, monks, and rabbis of mainstream denominations).
No
The group teaches or implies that its supposedly exalted ends justify means that members would have considered unethical before joining the group (for example: collecting money for bogus charities).
Partial (+0.5)
The leadership induces guilt feelings in members in order to control them.
No
Members’ subservience to the group causes them to cut ties with family and friends, and to give up personal goals and activities that were of interest before joining the group.
No
Members are expected to devote inordinate amounts of time to the group.
Yes (+1)
Members are encouraged or required to live and/or socialize only with other group members.
The group is focused on a living leader to whom members seem to display excessively zealous, unquestioning commitment.
I think this is nonzero, I think subsets of the community do display “excessively zealous” commitment to a leader given “What would SBF do” stickers. Outside views of LW (or at least older versions of it would probably worry that this was an EY cult.
+0.1
The group is preoccupied with bringing in new members.
+1
The group is preoccupied with making money.
+1
Questioning, doubt, and dissent are discouraged or even punished.
I think this is probably partial, given claims in this post, and positive-agreevote concerns here (though clearly all of the agree voters might be wrong). +0.2
Mind-numbing techniques (such as meditation, chanting, speaking in tongues, denunciation sessions, debilitating work routines) are used to suppress doubts about the group and its leader(s).
No
The leadership dictates sometimes in great detail how members should think, act, and feel (for example: members must get permission from leaders to date, change jobs, get married; leaders may prescribe what types of clothes to wear, where to live, how to discipline children, and so forth).
No (outside of Leverage research, perhaps)
The group is elitist, claiming a special, exalted status for itself, its leader(s), and members (for example: the leader is considered the Messiah or an avatar; the group and/or the leader has a special mission to save humanity).
Yes for elitist, and yes for saving humanity. +0.5
The group has a polarized us- versus-them mentality, which causes conflict with the wider society.
+0.1
The group’s leader is not accountable to any authorities (as are, for example, military commanders and ministers, priests, monks, and rabbis of mainstream denominations).
No
The group teaches or implies that its supposedly exalted ends justify means that members would have considered unethical before joining the group (for example: collecting money for bogus charities).
+1
The leadership induces guilt feelings in members in order to control them.
No (if we only consider “intentional” inducement
Members’ subservience to the group causes them to cut ties with family and friends, and to give up personal goals and activities that were of interest before joining the group.
+0.5
Members are expected to devote inordinate amounts of time to the group.
+0.8
Members are encouraged or required to live and/or socialize only with other group members.
Questioning, doubt, and dissent are discouraged or even punished.
I think this is probably partial, given claims in this post, and positive-agreevote concerns here (though clearly all of the agree voters might be wrong).
I think you may have very high standards? By these standards, I don’t think there are any communities at all that would score 0 here.
~
I think this is nonzero, I think subsets of the community do display “excessively zealous” commitment to a leader given “What would SBF do” stickers. Outside views of LW (or at least older versions of it would probably worry that this was an EY cult.
I was not aware of “What would SBF do” stickers. Hopefully those people feel really dumb now. I definitely know about EY hero worship but I was going to count that towards a separate rationalist/LW cult count instead of the EA cult count.
I think you may have very high standards? By these standards, I don’t think there are any communities at all that would score 0 here.
I think where we differ is that I’m not making a comparison of whether EA is worse than this compared to other groups, if every group scores in the range of 0.5-1 I’ll still score 0.5 as 0.5, and not scale 0.5 down to 0 and 0.75 down to 0.5. Maybe that’s the wrong way to approach it but I think the least culty organization can still have cult-like tendencies, instead of being 0 by definition.
Also if it’s true that someone working at GPI was facing these pressures from “senior scholars in the field”, then that does seem like reason for others to worry. There also has been a lot of discussion on the forum about the types of critiques that seem like they are acceptable and the ones that aren’t etc. Your colleague also seems to believe this is a concern, for example, so I’m currently inclined to think that 0.2 is pretty reasonable and I don’t think I should update much based on your comment-but happy for more pushback!
The group is elitist, claiming a special, exalted status for itself, its leader(s), and members (for example: the leader is considered the Messiah or an avatar; the group and/or the leader has a special mission to save humanity).
has to get more than 0.2, right? Being elitist and on a special mission to save humanity is a concerningly good descriptor of at least a decent chunk of EA.
>> The group teaches or implies that its supposedly exalted ends justify means that members would have considered unethical before joining the group (for example: collecting money for bogus charities).
> Partial (+0.5)
This seems too high to me, I think 0.25 at most. We’re pretty strong on “the ends don’t justify the means”.
>>The leadership induces guilt feelings in members in order to control them.
I don’t think it makes sense to say that the group is “preoccupied with making money”. I expect that there’s been less focus on this in EA than in other groups, although not necessarily due to any virtue, but rather because of how lucky we have been in having access to funding.
Nuclear risk is in the news. I hope: - if you are an expert on nuclear risk, you are shopping around for interviews and comment - if you are an EA org that talks about nuclear risk, you are going to publish at least one article on how the current crisis relates to nuclear risk or find an article that you like and share it - if you are an EA aligned journalist, you are looking to write an article on nuclear risk and concrete actions we can take to reduce it
[epistemic status—low, probably some element are wrong]
tl;dr - communities have a range of dispute resolution mechanisms, whether voting to public conflict to some kind of civil war - some of these are much better than others - EA has disputes and resources and it seems likely that there will be a high profile conflict at some point - What mechanisms could we put in place to handle that conflict constructively and in a positive sum way?
When a community grows as powerful as EA is, there can be disagreements about resource allocation. In EA these are likely to be significant.
There are EAs who think that the most effective cause area is AI safety. There are EAs who think it’s global dev. These people do not agree, though there can be ways to coordinate between them.
The spat between GiveWell and GiveDirectly is the beginning of this. Once there are disagreements on the scale of $10millions then some of that is gonna be sorted out over twitter. People may badmouth each other and damage the reputation of EA as a whole.
The way around this is to make solving problems easier than creating them. As in a political coalition, people need to have more benefits being inside the movement than outside it.
The EA forum already does good work here, allowing everyone to upvote posts they like.
Here are some other power sharing mechanisms: - a fund where people can either vote on cause areas, expected value, or moral weights, so that it moves based on the community’s values as a whole - a focus on “we disagree, but we respect” looking at how different parts of the community disagree but respect the effort of others - a clear mechanism of bargains, where animal EAs donate to longtermist charities in exchange for longtermists to go vegan and vice versa - some videos from key figures from different parts discussing their disagreements in a kind and human way - “I would change if” a series of posts from people saying what would make them work on different cause areas. How cheap would chicken welfare have to be before Yudkowsky moved to work on it? How cheap would AI safety had to be before it became Singer’s key talking point
Call me a pessimist, but I can’t see how a community managing $50Bn across deeply dividided prioritites will stay chummy without proper dispute resolution systems. And I suggest we should start building them now.
Beyond these, one could build a community around finding forecasts of public figures. Alternatively, I guess GPT-3 has a good shot of being able to turn verbal forecasts into data which could then be checked.
What’s the impact
I’m only gonna sketch my argument here. As above, if this gets 20 karma I’ll write a full post (but only upvote if it’s good, let’s not waste any of our time).
We seem to think forecasting improves the accuracy of commentator
If we could build a high-status award for forecasting, more commentators would hear about it and it would serve as a nudge for others to make their forecasts more visible
I am confident this would lead to better commentary (this seems arrogant, but honestly the people I know who forecast more are more epistemically humble—I think celebrities could really benefit from more humility about their predictions)
Better commentary leads to better outcomes. Effective Altruism implicitly holds that many have priority orderings that don’t match reality. The world at large underrates the best charities, the chance of biorisk, etc. Journalism which was more accurate would be more accurate about these things too which would be a massive win
Wouldn’t the winners just be superforecasters
Not currently. I don’t think it’s too hard to make pretty robust boundaries on what a public figure is. Most superforecasters are not well enough known (and sorry to the 5 EAs I can count in metaculus’ top 50). But Yglesias is well known enough. Scott Alexander, I’m less sure but I think we could come up with some minimum amount of hits, followers, etc for someone to be eligible.
How much resource would this take
Depends on a couple of things (I have pulled these numbers out of thin air) please criticise them:
Who is giving this award its prestige? If it’s a lot of money, fine. If it’s an existing org, then it’s cheaper ( 0 - $50k)
How deeply are we looking. I think you could pay someone $50k to find say 100 public sets of forecasts and maybe another $10k to make a nice website. If you want to scrape twitter using GPT3 or crowdsource that’s maybe another $50-100k
Is there an award ceremony? If so I imagine that costs as much as a wedding so maybe $10k
That looks like $60 - $220k
If this failed, why did it fail?
It got embroiled in controversy over who was included
It was attached to some existing EA org and looked badly for them
It became a niche award that no one changed their behaviour based on
I’ve been musing about some critiques of EA and one I like is “what’s the biggest thing that we are missing”
In general, I don’t think we are missing things (lol) but here are my top picks:
It seems possible that we reach out to sciency tech people because they are most similar to us. While this may genuinely be the cheapest way to get people now there may be costs to the community in terms of diversity of thought (most Sci/tech people are more similar than the general population)
I’m glad to see more outreach to people in developing nations
It seems obvious that science/tech people have the most to contribute to AI safety, but.. maybe not?
Also science/tech people have a particular racial/gender makeup and there is the hidden assumption that there isn’t an effective way to reach a more broader group. (Personally I hope that a load of resources in India, Nigeria, Brazil, etc will go some way here, but I dunno still feels like a legitimate question)
People are scared what the future might look like if it is only in the view of MacAskill/Bostrom/SBF. Yeah, in fact my (poor) model of MacAskill is scared of this too. But we can surface more that we wish we had a larger group making these decisions too.
We could build better ways of outsiders feeding into decisionmaking. I read a piece about the effectiveness of community vegan meals being underrated in EA. Now I’m not saying it should be funded, but I was surprsied to read some of these conferences are 5000+ people (iirc). Maybe that genuinely is oversight. But it’s really hard for high signal information to get to decisionmakers. That really is a problem we could work on. If it’s hard for people who speak EA-ese, how much much harder is it for those who speak different community languages, whose concepts seem frustrating to us.
It seems obvious that science/tech people have the most to contribute to AI safety, but.. maybe not?
More likely to me is a scenario of diminishing returns. Ie, tech people might be the most important to first order, but there’s already a lot of brilliant tech people working on the problem, so one more won’t make much of a difference. Whereas a few brilliant policy people could devise a regulatory scheme that penalises reckless AI deployment, etc, making more differences on a marginal basis.
I would like to see posts give you more karma than comments (which would hit me hard). Seems like a highly upvtoed post is waaaaay more valuable than 3 upvoted comments on that post, but it’s pretty often the latter gives more karma than the former.
I know you can figure them out, but I don’t see them presented separately on users pages. Am I missing something? Is it shown on the website somewhere?
For good or ill, while there are posters on twitter who talk about EA, there isn’t a “scene” (a space where people use loads of EA jargon and assume everyone is EA) or at least not that I’ve seen.
UK government will pay for organisations to hire 18-24 year olds who are currently unemployed, for 6 months. This includes minimum wage and national insurance.
I imagine many EA orgs are people constrained rather than funding constrained but it might be worth it.
tl;dr EA books have a positive externality. The response should be to subsidise them
If EA thinks that certain books (doing good better, the precipice) have greater benefits than they seem, they could subsidise them.
There could be an EA website which has amazon coupons for EA books so that you can get them more cheaply if buying for a friend, or advertise said coupon to your friends to encourage them to buy the book.
I think this could cost $50,000 to $300,000 or so depending on when this is done and how popular it is expected to be, but I expect it to be often worth it.
I like this idea and think it’s worth you taking further. My initial reactions are:
Getting more EA books into peoples hands seems great and worth much more per book than the cost of a book.
I don’t know how much of a bottleneck the price of a book is to buying them for friends/club members. I know EA Oxford has given away many books, I’ve also bought several for friends (and one famous person I contacted on instagram as a long shot who actually replied.
I’d therefore be interested in something which aimed to establish whether making books cheaper was a better or worse idea than just encouraging people to gift them.
John Behar/TLYCS probably have good thoughts on this.
Do you have any thoughts as to what the next step would be. It’s not obvious to me what you’d do to research the impact of this.
Perhaps have a questionnaire asking people how many people they’d give books to at different prices. Do we know the likelihood of people reading a book they are given?
Being open minded and curious is different from holding that as part of my identity.
Perhaps I never reach it. But it seems to me that “we are open minded people so we probably behave open mindedly” is false.
Or more specifically, I think that it’s good that EAs want to be open minded, but I’m not sure that we are purely because we listen graciously, run criticism contests, talk about cruxes.
The problem is the problem. And being open minded requires being open to changing one’s mind in difficult or set situations. And I don’t have a way that’s guaranteed to get us over that line.
Someone told me they don’t bet as a matter of principle. And that this means EA/Rats take their opinions less seriously as a result. Some thoughts
I respect individual EAs preferences. I regularly tell friends to do things they are excited about, to look after themselves, etc etc. If you don’t want to do something but feel you ought to, maybe think about why, but I will support you not doing it. If you have a blanket ban on gambling, fair enough. You are allowed to not do things because you don’t want to
Gambling is addictive, if you have a problem with it, don’t do it
Betting is a useful tool. I just do take opinions a bit less seriously if people don’t do the simple thing to put their money where their mouths are. And so a blanket ban is a slight cost. Imagine if I said I had a blanket ban on double cruxxing, or giving to animal welfare charities. It’s a thing I am allowed to do, but it does just seem a bit worse
To me, this seems like something else is actually going on. Perhaps it feels like “will you bet on it” is a way that certain people can twist my arm in a way that makes me feel uncomfortable? Perhaps the people who say this have been cruel to me in the past. I don’t know, but I sense there is something else going on. If you don’t bet as a blanket policy, could you tell me why?
I don’t bet because I feel it’s a slippery slope. I also strongly dislike how opinions and debates in EA are monetised, as this strengthens even more the neoliberal vibe EA already has, so my drive for refraining to do this in EA is stronger than outside.
Edit: and I too have gotten dismissed by EAs for it in the past.
I don’t bet because it’s not a way to actually make money given the frictional costs to set it up, including my own ignorance about the proper procedure and having to remember it and keep enough capital for it. Ironically, people who are betting in this subculture are usually cargo culting the idea of wealth-maximization with the aesthetics of betting with the implicit assumption that the stakes of actual money are enough to lead to more correct beliefs when following the incentives really means not betting at all. If convenient, universal prediction markets weren’t regulated into nonexistence then I would sing a different tune.
I guess I do think the “wrong beliefs should cost you” is a lot of the gains. I guess I also think that bets should be able to be at scale of the disagreement is important, but I think that’s a much more niche view.
There are a number of possible reasons that the individual might not want to talk about publicly:
A concern about gambling being potentially addictive for them;
Being relatively risk-averse in their personal capacity (and/or believing that their risk tolerance is better deployed for more meaningful things than random bets);
Being more financially constrained than their would-be counterparts; and
Awareness of, and discomfort with, the increased power the betting norm could give people with more money.
On the third point: the bet amount that would be seen as meaningful will vary based on the person’s individual circumstances. It is emotionally tough to say—no, I don’t have much money, $10 (or whatever) would be a meaningful bet for me even though it might take $100 (or whatever) to be meaningful to you.
On the fourth point: if you have more financial resources, you can feel freer with your bets while other people need to be more constrained. That gives you more access to bet-offers as a rhetorical tool to promote your positions than people with fewer resources. It’s understandable that people with fewer resources might see that as a financial bludgeon, even if not intended as such.
I have yet to see anyone in the EA/rat world make a bet for sums that matter, so I really don’t take these bets very seriously. They also aren’t a great way to uncover people’s true probabilities because if you are betting for money that matters you are obviously incentivized to try to negotiate what you think are the worst possible odds for the person on the other side that they might be dumb enough to accept.
If anything… I probably take people less seriously if they do bet (not saying that’s good or bad, but just being honest), especially if there’s a bookmaker/platform taking a cut.
I think if I knew that I could trade “we all obey some slightly restrictive set of romance norms” for “EA becomes 50% women in the next 5 years” then that’s a trade I would advise we take.
That’s a big if. But seems trivially like the right thing to do—women do useful work and we should want more of them involved.
To say the unpopular reverse statement, if I knew that such a set of norms wouldn’t improve wellbeing in some average of women in EA and EA as a whole then I wouldn’t take the trade.
Seems worth acknowledging there are right answers here, if only we knew the outcomes of our decisions.
In defence of Will MacAskill and Nick Beckstead staying on the board of EVF
While I’ve publicly said that on priors they should be removed unless we hear arguments otherwise, I was kind of expecting someone to make those arguments. If noone will, I will.
MacAskill
MacAskill is very clever, personally kind, is a superlative networker and communicator. Imo he oversold SBF, but I guess I’d do much worse in his place. It seems to me that we should want people who have made mistakes and learned from them. Seems many EA orgs would be glad to have someone like him on the board. If anything, the question is if we don’t want too many people duplicated across EA orgs (do we want this?) which board is it most valuable to have MacAskill on? I guess EVF?
Beckstead
Beckstead is, I sense, extremely clever (generally I find OpenPhil people to be powerhouses), personally kind. I guess I think that he dropped the ball on running FTXFF well—feels like had they hired more people to manage OPS they might have queried why money was going from strange accounts, but again I don’t know the particulars (though I want to give the benefit of the doubt here). But again, it was a complicated project and I guess he sensed that speed of ramp up was the proirity. In many world’s he’d have been right.
I guess perhaps the two of them seem to have pretty similar blindspots (kind intelligent academicish EAs who scaled things really fast) so perhaps it is worth only having one on the board. Maybe it’s worth having someone who can say “hmm that seems likely too odd or shifty to be worth us doing it”. But this isn’t as much of a knockdown argument.
Feels like there should be some kind of community discussion and research in the wake of FTX, especially if no leadership is gonna do it. But I don’t know how that discussion would have legitimacy. I’m okay at such things, but honestly tend to fuck them up somehow. Any ideas?
If I were king
Use the ideas from all the varous posts
Have a big google doc where anyone can add research and also put a comment for each idea and allow people to discuss
Then hold another post where we have a final vote on what should happen
then EA orgs can see at least what some kind of community concensus things
I wrote a post on possible next steps but it got little engagement—unclear if it was a bad post or people just needed a break from the topic. On mobile, so not linking it—but it’s my only post besides shortform.
The problem as I see it is that the bulk of proposals are significantly underdeveloped, risking both applause light support and failure to update from those with skeptical priors. They are far too thin to expect leaders already dealing with the biggest legal, reputational, and fiscal crisis in EA history to do the early development work.
Thus, I wouldn’t credit a vote at this point as reflecting much more than a desire for a more detailed proposal. The problem is that it’s not reasonable to expect people to write more fleshed-out proposals for free without reason to believe the powers-that-be will adopt them.
I suggested paying people to write up a set of proposals and then voting on those. But that requires both funding and a way to winnow the proposals and select authors. I suggested modified quadratic funding as a theoretical ideal, but a jury of pro-reform posters as a more practical alternative. I thought that problem was manageable, but it is a problem. In particular, at the proposal-development stage, I didn’t want tactical voting by reform skeptics.
Strong +1 to paying people for writing concrete, actionable proposals with clear success criteria etc. - but I also think that DEI / reform is just really, really hard, and I expect relatively few people in the community to have 1) the expertise 2) the knowledge of deeper community dynamics / being able to know the current stsances on things.
Let’s assume that the time article is right about the amount of sexual harassment in EA. How big a problem is this relative to other problems? If we spend $10mn on EAGs (a guess) how much should we spend if we could halve sexual harassment in the community.
The whole sexual harassment issue isn’t something that can be easily fixed with money I think. It’s more a project of changing norms and what’s acceptable within the EA community.
The issue is it seems like many folks at the top of orgs, especially in SF, have deeply divergent views from the normal day-to-day folks joining/hearing about EA. This is going to be a huge problem moving forward from a public relations standpoint IMO.
Money can’t fix everything, but it can help some stuff, like hiring professionals outside of EA and supporting survivors who fear retaliation if they choose to speak out.
I’ll sort of publicly flag that I sort of break the karma system. Like the way I like to post comments is little and often and this is just overpowered in getting karma.
eg I recently overtook Julia Wise and I’ve been on the forum for years less than anyone else.
I don’t really know how to solve this—maybe someone should just 1 time nuke my karma? But yeah it’s true.
Note that I don’t do this deliberately—it’s just how I like to post and I think it’s honestly better to split up ideas into separate comments. But boy is it good at getting karma. And soooo much easier than writing posts.
Having EA Forum karma tells you two things about a person:
They had the potential to have had a high impact in EA-relevant ways
They chose not to.
I wouldn’t worry too much about the karma system. If you’re worried about having undue power in the discourse, one thing I’ve internalized is to use the strong upvote/downvote buttons very sparingly (e.g. I only strong-upvoted one post in 2022 and I think I never strong-downvoted any post, other than obvious spam).
I don’t think you need to start with zero karma again. The karma system is not supposed to mean very much. It is heavily favoured in certain aspects than a true representation of your skill or trustworthiness as a user on this forum. It is more or less a xp bar for social situations and is an indicator that someone posts good content here.
Aaron Gertler retired from the forum, someone who is in high regard, which got a lot of attention and sympathy. Many people were interested in the post, and it’s an easy topic to participate. So many were scrolling down to the comments to write something nice and thanking him for his work.
JP Addison did so too. He works for CEA and as a developer for the forum. His comment got more Karma than any post he made so far.
Karma is used in many places with different concepts behind it. The sum of it gives you no clear information. What I would think in your case: you are an active member of the forum, participate positively with only one post with negative karma. You participated in the FTX crisis discussion, which was an opportunity to gain or lose significant amounts of karma, but you survived it, probably with a good score.
Internetpoints can make you feel fantastic, they are a system to motivate for social interaction and to follow the community norms (in positive and negative ways).
Your modesty suits you well, but there is no need to. Stand upwards. There will always be those with few points but really good content, and those who overshoot the gems by far with activity.
Does EA have a clearly denoted place for exit interviews? Like if someone who was previously very involved was leaving, is there a place they could say why?
When answering questions, I recommend people put each separate point as a separate answer. The karma ranking system is useful to see what people like/don’t like and having a whole load of answers together muddies the water.
1) Why is EA global space constrained? Why not just have a larger venue?
I assume there is a good reason for this which I don’t know.
2) It’s hard to invite friends to EA global. Is this deliberate?
I have a close friend who finds EA quite compelling. I figured I’d invite them to EA global. They were dissuaded by the fact they had to apply and that it would cost $400.
I know that’s not the actual price, but they didn’t know that. I reckon they might have turned up for a couple of talks. Now they probably won’t apply.
Is there no way that this event could be more welcoming or is that not the point?
Re 1) Is there a strong reason to believe that EA Global is constrained by physical space? My impression is that they try to optimize pretty hard to have a good crowd and for there to be a high density of high-quality connections to be formed there.
Re 2) I don’t think EA Global is the best way for newcomers to EA to learn about EA.
EDIT: To be clear, neither 1) nor 2) are necessarily endorsements of the choice to structure EA Global in this way, just an explanation of what I think CEA is optimizing for.
EDIT 2 2021/10/11: This explanation may be wrong, see Amy Labenz’s comment here.
Personal anecdote possibly relevant for 2): EA Global 2016 was my first EA event. Before going, I had lukewarm-ish feelings towards EA, due mostly to a combination of negative misconceptions and positive true-conceptions; I decided to go anyway somewhat on a whim, since it was right next to my hometown, and I noticed that Robin Hanson and Ed Boyden were speaking there (and I liked their academic work). The event was a huge positive update for me towards the movement, and I quickly became involved – and now I do direct EA work.
I’m not sure that a different introduction would have led to a similar outcome. The conversations and talks at EAG are just (as a general rule) much better than at local events, and reading books or online material also doesn’t strike me as naturally leading to being part of a community in the same way.
It’s possible my situation doesn’t generalizes to others (perhaps I’m unusual in some way, or perhaps 2021 is different from 2016 in a crucial way such that the “EAG-first” strategy used to make sense but doesn’t anymore), and there may be other costs with having more newcomers at EAG (eg diluting the population of people more familiar with EA concepts), but I also think it’s possible my situation does generalize and that we’d be better off nudging more newcomers to come to EAG.
1) We’d like to have a larger capacity at EA Global, and we’ve been trying to increase the number of people who can attend. Unfortunately, this year it’s been particularly difficult; we had to roll over our contract with the venue from 2020 and we are unable to use the full capacity of the venue to reduce the risk from COVID. We’re really excited that we just managed to add 300 spots (increasing capacity to 800 people), and we’re hoping to have more capacity in 2022.
There will also be an opportunity for people around the world to participate in the event online. Virtual attendees will be able to enjoy live streamed content as well as networking opportunities with other virtual attendees. More details will be published on the EA Global website the week of October 11.
2) We try to have different events that are welcoming to people who are at different points in their EA engagement. For someone earlier in their exploration of EA, the EAGx conferences are going to be a better fit. From the EA Global website:
Effective altruism conferences are a good fit for anyone who is putting EA principles into action through their donations, volunteering, or career plans. All community members, new or experienced, are welcome to apply.
EA Global: London will be selecting for highly-engaged members of the community.
EAGxPrague (3-5 December) will be more suitable for those who have less experience with effective altruism.
We’ll have lots more EAGx events in 2022, including Boston, Oxford, Singapore, and Australia, as well as EA Globals in San Francisco and London as usual. We may add additional events to this plan. The dates for those events and any additional events will go up on eaglobal.org when they’re confirmed.
In the meantime, if your friend is interested in seeing some talks, they can check out hundreds of past EA Global talks on the CEA YouTube channel.
It’s a site which gets you to guess what other political groups (republicans and democrats) think about issues.
Why is it good:
1) It gets people thinking and predicting. They are asked a clear question about other groups and have to answer it. 2) It updates views in a non-patronising way—it turns out dems and repubs are much less polarised than most people think (the stat they give is that people predict 50% of repubs hold extreme views, when actually it’s 30). But rather than yelling this, or an annoying listicle, it gets people’s consent and teachest something. 3) It builds consensus. If we are actually closer to those we disagree with than we think, perhaps we could work with them. 4) It gives quick feedback. People learn best when given feedback which is close to the action. In this case, people are rapidly rewarded for thoughts like “probably most of X group” are more similar to me that I first think.
Imagine:
What percentage of neocons want insitutional reform? What % of libertarians want an end to factory farming? What % of socialists want an increase in foreign direct aid?
Conlusion
If you want to change people’s minds, don’t tell them stuff, get them to guess trustworthy values as a cutesy game.
It is worth noting when systems introduce benefits in a few obvious ways but many small harms. An example is blocking housing. It benefits the neighbours a lot—they don’t have to have construction nearby—and the people who are harmed are just random marginal people who could have afforded a home but just can’t.
But these harms are real and should be tallied.
Much recent discussion in EA has suggested common sense risk reduction strategies which would stop clear bad behavior. Often we all agree on the clear bad behaviour.
But the risk reduction strategies would also often set norms against a range of greyer behaviour that the suggestors don’t engage in or doesn’t seem valuable to them. If you don’t work with your coworkers, then suggesting it be normed against, seems fine—it would make it hard for people to end up in weird living situations. But I know people who have loved living with coworkers. That’s a diffuse harm.
Mainly I think this involves acknowledging people are a lot weirder than you think. People want things I don’t expect them to want, they consent in business, housing and relationships to things I’d never expect them to. People are wild. And I think it’s worth there being bright lines against some kinds of behaviour that is bad or nearly always bad—I’d suggest dating your reports is ~ very unwise—but a lot is about human preferences and to understand that we need to elicit both wholesome and illicit preferences or consider harms that are diffuse.
Note that I’m not saying which way the balance of harms falls, but that both types should be counted.
I suggest there is waaaay to much to be on top of in EA and noone knows who is checking what. So some stuff goes unchecked. If there were a narrower set of “core things we study” then it seems more likely that those things would have been gone over by someone in detail and hence fewer errors in core facts.
One of the downsides of EA being so decentralized, I guess. I’m imagining an alternative history EA in which is was all AI alignment or it was all tropical disease prevention, and in those worlds the narrowing of “core things we study” would possibly result in more eyeballs on each thing.
I think the wiki should be about summarising and synthesising articles on this forum.
- There are lots of great articles which will be rarely reread - Many could do with more links to eachother and to other key peices - Many could be better edited, combined etc - The wiki could take all content and aim to turn it into a minimal viable form of itself
I think that the forum wiki should focus on taking chunks of article text and editing it, rather than pointing people to articles. So take all of the articles on global dev, squish them together or shorten them.
So there would be a page on “research debt” which would contain this article and also any more text that seemed relevant, but maybe without the introduction. Then a preface on how it links to other EA topics, a link to the original article and links to ways it interacts with other EA topics. It might turn out that that page had 3 or 4 articles squished into one or was broken into 3 or 4 pages. But like Wikipedia you could then link to “research debt” and someone could easily read it.
[Epistemic Status: low, I think this is probably wrong, but I would like to debug it publicly]
If I have a criticism of EA along Institutional Decision Making lines, it is this:
For a movement that wants to change how decisions get made, we should make those changes in our own organisations first.
Examples of good progress: - prizes—EA orgs have offered prizes for innovation - voting systems—it’s good that the forum is run on upvotes and that often I think EA uses the right tool for the job in terms of voting
Things I would like to see more of: - an organisation listening to prediction markets/polls. If we believe nations should listen to forecasting can we make clearer which markets our orgs are looking and and listening to? - an organisation run by prediction markets. The above but taking it further - removing siloes in EA. If you have confidence to email random people it’s relatively easy to get stuff done, but can we lower the friction to allow good ideas to spread further? - etc
It’s fine if we think these things will never work, but it seems weird to me that we think improvements would work elsewhere but that we don’t want them in our orgs. That’s like being NIMBY about our own suggested improvements.
Counterarguments - these aren’t solutions people are actually arguing for. Yeah this is an okay point. But I think the seeds of them exist.
- prediction markets work in big orgs not small ones. Maybe, but isn’t it worth running one small inefficient organisation to try and learn the failure modes before we suggest this for nation states
A set of EA jobs twitter bots which each retweet a specific set of hashtags eg #AISafety#EAJob, #AnimalSuffering#EAJob, etc etc. Please don’t get hung up on these, we’d actually need to brainstorm the right hashtags.
Does anyone know people working on reforming the academic publishing process?
Coronavirus has caused journalists to look for scientific sources. There are no journal articles because of the lag time. So they have gone to preprint servers like bioRxiv (pronounced bio-archive). These servers are not peer reviewed so some articles are of low quality. So people have gone to twitter asking for experts to review the papers.
This is effectively a new academic publishing paradigm. If there were support for good papers (somehow) you would have the key elements of a new, perhaps better system.
With Coronavirus providing a lot of impetus for change, those working in this area could find this an important time to increase visibility of their work.
It has an emotional impact on me to note that FTX claims are now trading at 50%. This means that on expectation, people are gonna get about half of what their assets were worth, had they help them until this time.
I don’t really understand whether it should change the way we understand the situation, but I think a lot of people’s life savings were wrapped up here and half is a lot better than nothing.
I am not confident on the reasons why this is, but I think it’s because Anthropic and the cryptocurrency Solana are now trading a lot higher. My last memory (bad do not trust) is that FTX has about 11bn in debt against 4bn in assets. I think Anthropic and the Sol they hold have both gone up by about a billion since then.
I dunno folks, but I hope people get their money back—and I know that includes some of you.
Lots of discussion, a reasonable amount of new information, but what should our final update be:
Have HLI acted fine or badly?
Is there a pattern of misquoting and bad scholarship?
Have global health orgs in general moved towards Self-reported WellBeing (SWB) as a way to measure interventions?
Has HLI generally done good/cost effective work?
I think that the forum comments model is very poor at this. After all, if there were widespread agreement (as I think there could be) then I think that would be a load of all our minds. We could have a discussion once and then not need to have it again.
As it is, I’m sure many people have taken away different things from this and I we’ll probably discsuss it again the next time the Happier Lives Institute or StrongMinds posts to the forum and I guess there has been some more bad blood created in the meantime.
Consensus is good and we don’t even try to reach it after big discussions.
If you’re commenting on a post, it helps to start off with points of agreement and genuine compliments about things you liked. Try to be honest and non-patronizing: a comment where the only good thing you say is “your english is very good” will not be taken well, or a statement that “we both agree that murder is bad”. And don’t overthink it, a simple “great post” (if honest) is never unappreciated.
Another point is that the forum tends to have a problem with “nitpicking”, where the core points of a post are ignored in favor of pointing out minor, unimportant errors. Try to engage with the core points of an argument, or if you are pointing out a small error, preface it with “this is a minor nitpick”, and put it at the end of your comment.
So a criticism would look like:
“Very interesting post! I think X is a great point that more people should be talking about. However, I strongly disagree with core point Y, for [reasons]. Also, a minor nitpick: statement Z is wrong because [reasons]”
I think the above is way less likely to feel like an “attack”, even though the strong disagreements and critiques are still in there.
I agree that it’s worth saying something about sexual behaviour. Here are my broad thoughts:
I am sad about women having bad experiences, I think about it a lot
I want to be accurate in communication
I think it’s easy to reduce harms a lot without reducing benefits
Firstly, I’m sad about the current situation. Seems like too many women in EA have bad experiences. There is a discussion about what happens in other communities or tradeoffs. But first it’s really sad.
More than this, it seems worth dwelling on what it *feels* like. I guess for many it’s fine. But for some it can be exhausting or sad or uncomfortable. Women in EA complain to me about their treatment as women at lot, men much less. Seems notable.
But I don’t know what norms should be. I don’t know what’s best for EA women, for EA in general, for the world in general. In short, I don’t know how to optimise norms.
But harms seem easier to understand. It does seem to me there are some low cost, high benefit improvements. Particularly in people who have patterns of upsetting women.
Personally, I have really upset 2 or 3 women in EA around romance. I’ve said or done things that have left them sad for months. And I don’t think this is okay.
To them, I am sorry.
How do they feel? Well I sense, really sad. We’re not talking Time magazine stuff here, but I think they felt belittled, disrespected, judged and, briefly, unsafe. I don’t want anyone to feel like this, let alone because of me.
And compared to their suffering, and my sadness at it, it just seems pretty cheap to change my behaviour. To go on dates with a smaller group of people in EA, to create patterns to avoid situations I handle poorly, to spend time imagining women’s lives.
So I’m not gonna give a blanket pronouncement or say we are the worst. But personally, I am pretty flawed and I would prefer to change rather than hurt other people. And if you see that pattern in your life then I suggest taking real, actual steps.
I’d suggest you ask yourself. “Are there any women who, as a result of my actions in the last 2 years are seething or deeply upset.”
For most people the answer is no. Like seriously, the answer can be “no, you’re fine”. But if it’s yes, women are people right? Do you really believe that there aren’t some improvements possible here?
Some suggestions to yesses:
Talk to a trusted friend. How do they think you do here?
Imagine how much you would do to avoid the last woman being upset. Spend at least that much time avoiding the next woman being upset
I dislike the tribal nature of this discussion, that on some level it feels culture war-ey. So again, I don’t think this for everyone, but it is for me
But I really would recommend going to quality sex and relationship courses. I went to one run by a tantra group and I think it just made me a lot kinder and helped me reduce risks
Talk to women you’ve dated. How did they feel?
If you struggle with empathy with women, perhaps start with empathy for me. Trust me, you don’t want to feel like this. It’s horrible to have people who are upset as a result of my actions.
Most of all, I would recommend building empathy. I wish I had sat down and just written how the women I fancied felt, even for 5 minutes. And talked it over with a friend.
Take an interest in the mental lives of people you care about.
So I guess, the thing I could say was “If you continue patterns of romantic behaviour that frequently upset women that you could easily make less risky then I’ll be really upset with you and sad” as, if I were to continue I’d be so angry at myself.
Romance is not without risk—I don’t think this is a purely harm reducing question (though I could move to that opinion). But I think it’s possible to just reduce risks a lot while maintaining benefit. And if I have the option to do that and I choose not to, that’s basically my definition of bad.
Daniel’s Heavy Tail Hypothesis (HTH) vs. this recent comment from Brian saying that he thinks that classic piece on ‘Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness’ is still essentially valid.
Seems like Brian is arguing that there are at most 3-4 OOM differences between interventions whereas Daniel seems to imply there could be 8-10 OOM differences?
Here is my first draft, basically there will be a plan money prediction market predicting what they community will vote on a central question (here “are the top 1% more than 10,000x as efffective as the median”) then we have a discussion and we vote and then resolve.
It is unclear to me that if we chose cause areas again, we would choose global developement
The lack of a focus on global development would make me sad
This issue should probably be investigated and mediated to avoid a huge community breakdown—it is naïve to think that we can just swan through this without careful and kind discussion
With better wiki features and a way to come to consensus on numbers I reckon this forum can write a career guide good enough to challenge 80k. They do great work, but we are many.
There were too few parties on the last night of EA global in london which led to overcrowding, stressed party hosts and wasting a load of people’s time.
I suggest in future that there should be at least n/200 parties where n is the number of people attending the conference.
I don’t think CEA should legislate parties, but I would like to surface in people’s minds that if there are fewer than n/200 parties, then you should call up your friend with most amenable housemates and tell them to organise!
Has rethink priorities ever thought of doing a survey of non-EAs? Perhaps paying for a poll? I’d be interested in questions like “What do you think of Effective Altruism? What do you think of Effective Altruists?”
Only asking questions of those who are currently here is survivorship bias. Likewise we could try and find people who left and ask why.
I hold that there could be a well maintained wiki article on top EA orgs and then people could anonymously have added many non-linear stories a while ago. I would happily have added comments about their move fast and break things approach and maybe had a better way to raise it with them.
There would have been edit wars and an earlier investigation.
How much would you pay to have brought this forward 6 months or a year. And likewise for whatever other startling revelations there are. In which case, I suggest a functional wiki is worth 5% − 10% of that amount, per case.
My question is “Who would want to run an EA org or project in that kind of environment?”. Presumably, you’d be down, but my bet is that the vast majority of people wouldn’t.
It was pointed out to me that I probably vote a bit wrong on posts.
I generally just up and downvote how I feel, but occasionally if I think a post is very overrated or underrated I will strong upvote or downvote even though I feel less strong than that.
But this is I think the wrong behaviour and a defection. Since if we all did that then we’d all be manipulating the post to where we think it ought to be and we’d lose the information held in the median of where all our votes leave it.
Withholding the current score of a post till after a vote is cast (but the casting is committal) should be enough to prevent strategic behavior. But it comes with many downsides (I think feed ordering / recsys could work with private information, so the scores may be in principle inferrable from patterns in your feed, but you probably won’t actually do it. The worse problem is commitment, I do like to edit my votes quite a bit after initial impressions).
I imagine there’s a more subtle instrument, withholding the current score until committal votes have been cast seems almost like a limit case.
Although this isn’t in response to your specific case (correcting for overrated or underrated posts), but in response to
Since if we all did that then we’d all be manipulating the post to where we think it ought to be and we’d lose the information held in the median of where all our votes leave it.
I think it’s okay to “defect” to correct the results of others’ apparent defection or to keep important information from being hidden. I’ve used upvotes correctively when I think people are too harsh with downvotes or when the downvotes will make important information/discussion much less visible. To elaborate, I’ve sometimes done this for cases like these:
When a comment or post is at low or negative karma due to downvotes, despite being made in good faith (especially if it makes plausible, relevant and useful claims), and without being uncivil or breaking other norms, even if it expresses an unpopular view (e.g. opinion or ethical view) or makes some significant errors in reasoning. I don’t think we should disincentivize or censor such comments, and I think that’s what disagreement voting and explanations should be used for. I find when people use downvotes like this without explanation to be especially unfair. This also includes when downvotes crush well-intentioned and civil but poorly executed newbie posts/comments, which I think is unkind and unwelcoming. (I’ve used upvotes correctively like this even before we had disagree voting.)
For posts with low or negative karma due to downvotes, if they contain (imo) important information, possibly even if poorly framed, with bad argument in them or made in apparently bad faith, if there’s substantial valuable discussion on the issue or it isn’t being discussed visibly somewhere else on the EA Forum. Low karma risks effectively hiding (making much less visible) that information and surrounding discussion through the ranking algorithm. This is usually for community controversies and criticism.
I very rarely downvote at all, but maybe I’d refrain from downvoting something I would otherwise downvote because its karma is already low or negative.
Right—in my view, net-negative karma conveys a particular message (something like “this post would be better off not existing”) that is meaningfully stronger than the median voter’s standard for downvoting. It can therefore easily exist in circumstances where the median voter would not have endorsed that conclusion.
FWIW, I don’t think this is against the explicit EA Forum norms around voting, and using upvotes and strong upvotes this way seems in line with some of their “suggestions” in the table from that section. In particular, they suggest it’s appropriate to strong upvote if
You think many more people might benefit from seeing it.
You want to signal that this sort of behavior adds a lot of value.
These could be more or less true depending on the karma of the post or comment and how visible you think it is.
I don’t think using downvotes against overrated posts or comments falls under the suggestions, though, but doing it only for upvotes and not downvotes could bias the karma.
Any EA leadership have my permission to put scandal on the back burner until we have a strategy on bing by the way. feels like a big escalation to have an ML reading it’s own past messages and running a search engine.
EA internal issues matter but only if we are alive.
Reasons I would disagree: (1) Bing is not going to make us ‘not alive’ on a coming-year time scale. It’s (in my view) a useful and large-scale manifestation of problems with LLMs that can certainly be used to push ideas and memes around safety etc, but it’s not a direct global threat. (2) The people best-placed to deal with EA ‘scandal’ issues are unlikely to perfectly overlap with the people best-placed to deal wit the opportunities/challenges Bing poses. (3) I think it’s bad practice for a community to justify backburnering pressing community issues with an external issue, unless the case for the external issue is strong; it’s a norm that can easily become self-serving.
I think the community health team should make decisions on the balance of harms rather than beyond reasonable doubt. If it seems likely someone did something bad they can be punished a bit until we don’t think they’ll do it again. But we have to actually take all the harms into account.
“beyond reasonable doubt” is a very high standard of proof, which is reasonable when the effect of a false conviction is being unjustly locked in a prison. It comes at a cost: a lot of guilty people go free and do more damage.
Theres no reason to use that same standard for a situation where the punishments are things like losing a job or being kicked out of a social community. A high standard of proof should still be used, but it doesn’t need to be “beyond reasonable doubt” level. I would hate to be falsely kicked out of an EA group, but at the end of the day I can just do something else.
I agree that the magnitude of the proposed deprivation is highly relevant to the burden of proof. The social benefit from taking the action on a true positive, and the individual harm from acting on a false positive also weigh in the balance.
In my view, the appropriate burden of proof also takes into account the extent of other process provided. A heightened burden of proof is one procedure for reducing the risk of erroneous deprivations, but it is not the only or even the most important one.
In most cases, I would say that the thinner the other process, the higher the BOP needs to be. For example, discipline by the bar, medical board, etc is usually more likely than not . . . but you get a lot of process like an independent adjudicator, subpoena power, and judicial review. So we accept 51 percent with other procedural protections in play. (And as a practical matter, the bar generally wouldnt prosecute a case it thought was 51 percent anyway due to resource constraints). With significantly fewer protections, I’d argue that a higher BOP would be required—both as a legal matter (these are government agencies) and a practical one. Although not beyond a reasonable doubt.
Of course, more process has costs both financial and on those involved. But it’s a possible way to deal with some situations where the current evidence seems too strong to do nothing and too uncertain to take significant action.
I listened to this episode today Nathan, I thought it was really good, and you came across well. I think EAs should consider doing more podcasts, including those not created/hosting by EA people or groups. They’re an accessible medium with the potential for a lot of outreach (the 80k podcast is a big reason why I got directly involved with the community).
I know you didn’t want to speak for EA has a whole, but I think it was a good example with EA talking to the leftist community in good faith,[1] which is (imo) one of our biggest sources of criticism at the moment. I’d recommend others check out the rest of Rabbithole’s series on EA—it’s a good piece of data on what the American Left thinks of EA at the moment.
Summary:
+1 to Nathan for going on this podcast
+1 for people to check out the other EA-related Rabbithole episodes
Any time that you read a wiki page that is sparse or has mistakes, consider adding what you were trying to find. I reckon in a few months we could make the wiki really good to use.
I sense that conquest’s law is true → that organisations that are not specifically right wing move to the left.
I’m not concerned about moving to the left tbh but I am concerned with moving away from truth, so it feels like it would be good to constantly pull back towards saying true things.
I think the forum should have a retweet function but for the equivalent of github forks. So you can make changes to someone’s post and offer them the ability to incorporate them. If they don’t, you can just remake the article with the changes and an acknolwedgement that you did.
I don’t think people would actually do that that often, because they’d get no karma most of the time, but it would give karma, attribution trail for: - summaries - significant corrections/reframings - and the author could still accept the edits later
My very quick improving institutional decision-making (IIDM) thoughts
Epistemic status: Weak 55% confidence. I may delete. Feel free to call me out or DM me etc etc.
I am saying these so that someone has said them. I would like them to be better phrased but then I’d probably never share them. Please feel free to criticise them though I might modify them a lot and I’m sorry if they are blunt:
I don’t understand what concrete learnings there are from IIDM, except forecasting (which I am biased on). The EIP produced a report which said that in the institutions you’d expect to matter do. That was cheap falsification so I guess worth it. Beyond that, I don’t know. And I was quite involved for a while and I didn’t pick these up by osmosis. I assume that many people know even less than I do.
Is forecasting IIDM? Yes. But people know what forecasting is so it’s easier to use those words. Are humans primates, yes, but one of those words is easier to understand.
Does IIDM exist in the wild? Yes?? I know lots of EA-aligned people who work in institutions who to improve them. That seems like IIDM to me.
What ideas would I brainstorm, low confidence:
Connect EA networks across institutions. EAs in different institutions probably know things. Do they pass those around?
Try and improve EA knowledge tranfer How can someone get a high signal feed of information via email, WhatsApp, podcast app. If we had this then it would be easier to share to institutional colleagues
What has worked in EA orgs? I’m surprised we think we can improve institutions when we haven’t solved those problems internally
How does an org make forecasting really easy and low friction?
How can EA institutions share detailed knowledge in real treal-timeime across institutions?
Haha I don’t know what IIDM is but I do know what forecasting is. If I had lots of money one of the things I’d do is create a forecasting news organization. They don’t talk about what happened, they talk about what’s going to happen. The knowlege transfer is important. People are too spread apart to use one platform, but if there was a list of people who were readily available to share information on certain topics and their contact info that would be valuble.
This forum is not user-friendly. Took a bit to arrive.
I am not! I applied and didn’t get it, I think the movement is bigger than available tickets in a convention. I’m on a few EA discords if you’d like to chat.
I have strong “social security number” associations with the acronym SSN.
Setting those aside, I feel “scale” and “solvability” are simpler and perhaps less jargon-y words than “impact” and “tractability” (which is probably good), but I hear people use “impact” much more frequently than “scale” in conversation, and it feels broader in definition, so I lean towards “ITN” over “SSN”.
I am gonna do a set of polls and get a load of karma for it (70% >750). I’m currently ~20th overall on the forum despite writing few posts of note. I think polls I write create a lot of value and I like the way it incentivises me to think about questions the community wants to answer.
I am pretty happy with the current karma payment but I’m not sure everyone will be so I thought I’d surface it. I’ve considered saying that polls delivery half the karma, but that feels kind of messy and I do think polls are currently underrated on the forum.
Each EA org should pay $10 bounty to the best twitter thread talking about any episode. If you could generate 100 quality twitter threads on 80,000 hours episodes that for $1000 that would be really cheap. People would quote tweet and discuss and it would make the whole set of knowledge much more legible.
Cool idea, I’ll have a think about doing this for Hear This Idea. I expect writing the threads ourselves could take less time than setting up a bounty, finding the threads, paying out etc. But a norm of trying to summarise (e.g. 80K) episodes in 10 or so tweets sounds hugely valuable. Maybe they could all use a similar hashtag to find them — something like #EAPodcastRecap or #EAPodcastSummary
I edited the of wikipedia on Doing Good Better to try and make it more reflective of the book and Will’s current views. Let me know how you think I did.
I was reading this article about Nuclear winter a couple of days ago and I struggled. It’s a good article but there isn’t an easy slot in my worldview for it. The main thrust was something like “maybe nuclear winter is worse than other people think”. But I don’t really know how bad other people think it is.
Compare this to community articles, I know how the community functions and I have opinions on things. Each article fits neatly into my brain.
If a had a globe my worldview the EA community section is like very well mapped out. And so when I hear oh, you know, Adelaide is near Sydney or something, I know where those places are, and I can make some sort of judgment on the comment. But my views on nuclear winter are like if I learn that the mountains near Drachmore are taller than people think. Where is drachmore? Which mountains? How tall do people think they are.
My suggestion here is better wikis, but mainly I think the problem is an interesting one. I think often the community section is well supported because we all have some prior structure. I think it’s hard to comment on air purity, AI minutiae or nuclear winter because I don’t have that prior space.
I wouldn’t recommend people tweet about the nonlinear stuff a lot.
There is an appropriate level of publicity for things and right now I think the forum is the right level for this. Seems like there is room for people to walk back and apologise. Posting more widely and I’m not sure there will be.
If you think that appropriate actions haven’t been taken in say a couple months then I get tweeting a bit more.
I think the substance of your take may be right, but there is something that doesn’t sit well with me about an EA suggesting to other EAs (essentially) “I don’t think EAs should talk about this publicly to non-EAs.” (I take it that is the main difference between discussing this on the Forum vs. Twitter—like, “let’s try to have EA address this internally at least for now.”) Maybe it’s because I don’t fully understand your justification—”there is room for people to walk back and apologize”—but the vibe here feels a bit to me like “as EAs, we need to control the narrative around this (‘there is an appropriate level of publicity,’)” and that always feels a bit antithetical to people reasoning about these issues and reaching their own conclusions.
I think I would’ve reacted differently if you had said: “I don’t plan to talk about this publicly for a while because of x, y, and z” without being prescriptive about how others should communicate about this stuff.
I think in general people don’t really understand how virality works in community dynamics. Like there are actions that when taken cannot be reversed.
I don’t say “never share this” but I think sharing publicly early will just make it much harder to have a vulnerable discussion.
I don’t mind EAs talking about this with non-EAs but I think twitter is sometimes like a feeding frenzy, particularly around EA stuff. And no, I don’t want that.
Notably, more agree with me than disagree (though some big upvotes on agreement obscure this—I generally am not wild about big agreeevotes).
As I’ve written elsewhere I think there is a spectrum from private to public. Some things should be more public than they are and other things more private. Currrently I am arguing this is about right. I thought that it turned out many issues with FTX were too private.
I think that a mature understanding of sharing things is required for navigating vulnerable situations (an I imagine you agree—many disliked the sharing of victims names around the time article why because that was too public for that information in their opinion)
I appreciate that you said it didn’t sit well with you. It doesn’t really sit well with me either. I welcome someone writing it better
Yeah, again, I think you might well be right on the substance. I haven’t tweeted about this and don’t plan to (in part because I think virality can often lead to repercussions for the affected parties that are disproportionate to the behavior—or at least, this is something a tweeter has no control over). I just think EA has kind of a yucky history when it comes to being prescriptive about where/when/how EAs talk about issues facing the EA community. I think this is a bad tendency—for instance, I think it has, ironically, contributed to the perception that EA is “culty” and also led to certain problematic behaviors getting pushed under the rug—and so I think we should strongly err on the side of not being prescriptive about how EAs talk about issues facing the community. Again, I think it’s totally fine to explain why you yourself are choosing to talk or not talk about something publicly.
I guess I plan for the future, not the past. But I agree that my stance is generally more public than most EAs. I talk to journalists about stuff, for instance, and I think more people should.
I imagine that it has cost and does cost 80k to push for AI safety stuff even when it was wierd and now it seems mainstream.
Like, I think an interesting metric is when people say something which shifts some kind of group vibe. And sure, catastrophic risk folks are into it, but many EAs aren’t and would have liked a more holistic approach (I guess).
I am frustrated and hurt when I take flack for criticism.
It seems to me that people think I’m just stirring shit by asking polls or criticising people in power.
Maybe I am a bit. I can’t deny I take some pleasure in it.
But there are a reasonable amount of personal costs too. There is a reason why 1-5 others I’ve talked to have said they don’t want to crticise because they are concerned about their careers.
I more or less entirely criticise on the forum. Believe me, if I wanted to actually stir shit, I could do it a lot more effectively than shortform comments.
I’m relatively pro casual sex as a person, but I will say that EA isn’t about being a sex-positive community—it’s about effectively doing good. And if one gets in the way of the other, I know what I’m choosing (doing good).
I think there is a positive sum compromise possible, but it seems acknowledging how I will trade off if it comes to it.
when you can comment on an article and it shows as a little speech bubble to the side of the text. I’ve opted into experimental features but I still can’t.
I think you just normally quote a section of the article, clicking “Block quote”
Some people use hypothes.is , which in theory gives the same functionality on any web page, but we’re very few and only people that have installed it can see the comments or add new ones
Some thoughts - Utilitiarianism but being cautious around the weird/unilateral stuff is still good - We shouldn’t be surprised that we didn’t figure out SBF was fraudulent quicker than billions of dollars of cryto money… and Michael Lewis - Scandal prediction markets are the solution here and one day they will be normal. But not today. Don’t boo me, I’m right - Everyone wants whistleblowing, no one wants the correctly incentivised decentralised form of whistleblowing. - Gotta say, I feel for many random individual people who knew or interacted closely with SBF but weren’t at FTX who are gonna get caught up in that - We were fundamentally unserious about avoiding reputational risk from crypto. I hope we are more serious about not dying from AI - I like you all a lot - I don’t mind taking the money of some retired non-EA oil baron, but I think not returning FTX’s money perhaps incentivises future pro-crime EAs. I would like a credible signal - The community does not need democratised funding (though I’d happily test it at a small scale) though we aren’t getting enough whistleblowing so we should work on that - We deserve to be scrutinised and mocked, we messed up. We should own that - X-risk is still extremely compelling - I am uncertain how impactful my work is - Our critics are usually very low signal but have a few key things of value to say. It is hard to listen to find those things without wasting loads of time, but missing them is bad too - People knew SBF was a bully who broke promises. That that information didn’t flow to where it needed/ was ignored was a problem— I think we shouldn’t say we want criticism, because we don’t. We didn’t want it about FTX and we don’t in any other places. We want very specific criticism. Everyone does, because the world is big and we have limited time. So how do we get the criticism that’s most useful to us - The community should seek to make the best funding decisions it can over time. I think that’s with orgs doing it and prediction markets to remove bad apples, but you can think what you want. But democratisation isn’t a goal in and of itself—good sustainable decisionmaking is. Perhaps there should be a jury of randomly chosen community member, perhaps we should have elections. I don’t know, but I do feel we haven’t been taking governance seriously enough
I remain confused about “utilitarianism, but use good judgement”. IMO, it’s amongst the more transparent motte-and-baileys I’ve seen. Here are two tweets from Eliezer that I see are regularly re-shared:
The rules say we must use consequentialism, but good people are deontologists, and virtue ethics is what actually works.
Go three-quarters of the way from deontology to utilitarianism and then stop. You are now in the right place. Stay there at least until you have become a god.
This describes Aristotelian Virtue Ethics—finding the golden mean between excess and deficiency. So are people here actually virtue ethicists who sometimes use math as a means of justification and explanation? Or do they continue to take utilitarianism to some of its weirder places, privately and publicly, but strategically seek shelter under other moral frameworks when criticized?
I’m finding it harder to take people who put “consequentialist” and “utilitarian” in their profiles and about mes seriously. If people abandon their stated moral framework on big important and consequential questions, then either they’re deluding themselves on what their moral framework actually is, or they really will act out the weird conclusions—but are being manipulative and strategic by saying “trust us, we have checks and balances”
And what happens when that double-checking comes back negative? And how much weight do you choose to give it? The answer seems to be rooted in matters of judgement and subjectivity. And if you’re doing it often enough, especially on questions of consequence, then that moral framework is better described as virtue ethics.
Out of curiosity, how would you say your process differs from a virtue ethicist trying to find the golden mean between excess and deficiency?
I notice that sometimes I want to post on something that’s on both the EA forum and lesswrong. And ideally, clicking “see lesswrong comments” would just show them on the current forum page and if I responded, it would calculate EA forum karma for the forum and LessWrong karma for lessWrong.
When someone says of your organisation “I want you to do X” do not say “You are wrong to want X”
This rudely discourages them from giving you feedback in future. Instead, there are a number of options:
If you want their feedback “Why do you want X?” “How does a lack of X affect you?”
If you don’t want their feedback “Sorry, we’re not taking feedback on that right now” or “Doing X isn’t a priority for us”
If you think they fundamentally misunderstand something “Can I ask you a question relating to X?”
None of these options tell them they are wrong.
I do a lot of user testing. Sometimes a user tells me something I disagree with. But they are the user. They know what they want. If I disagree, it’s either because they aren’t actually a user I want to support, they misunderstand how hard something is, or they don’t know how to solve their own problems.
None of these are solved by telling them they are wrong.
Often I see people responding to feedback with correction. I often do it myself. I think it has the wrong incentives. Rather than trying to tell someone they are wrong, now I try to either react with curiosity or to explain that I’m not taking feedback right now. That’s about me rather than them.
I sense new stuff on the forum is probably overrated. Surely we should assume that most of the most valuable things for most people to read have already been written?
The difference between the criticism contest and openphil’s cause prioritisation contest is pretty interesting. 60% I’m gonna think OpenPhil’s created more value in terms of changes in a 10 years time.
Causes which are much more pressing under longtermism than other belief systems
Longtermist causes are:
Those which are a high priority for marginal resources, whether they are under other belief systems or not.
The fact that biorisk and AI risk are high priority without longtermism doesn’t make them not “longtermist causes” just as it doesn’t make the not “causes that affect people alive today”
An open question for me (for EA Israel? For EA?) is whether we can talk about economic-politics publicly in our group.
For example, can we discuss openly that “regulating prices is bad”. This is considered an open political debate in Israel, politicians keep wanting to regulate prices (and sometimes they do, and then all the obvious things happen)
I mean I’d like to chat about that, and maybe happy to on this shortform? But I wouldn’t write a post on it. I guess it doesn’t seem that neglected to me.
In Israel, it is controversial to suggest not regulating prices, or to suggest lowering import taxes, or similar things. I could say a lot about this, but my points are:
I remember I was really jealous of the U.S when Biden suggested some very expensive program (UBI? Some free-medical-care reform?), but he SHOWED where the money is supposed to come from, there was a chart!
I’ve decided I’m going to just edit the wiki to be like the wiki I want.
Currently the wiki feels meticulously referenced but lacking in detail. I’d much prefer it to have more synthesised content which is occasionally just someone’s opinion. If you dislike this approach, let me know.
I do think that many of the entries are rather superficial, because so far we’ve been prioritizing breadth over depth. You are welcome to try to make some of these entries more substantive. I can’t tell, in the abstract, if I agree with your approach to resolving the tradeoff between having more content and having a greater fraction of content reflect just someone’s opinion. Maybe you can try editing a few articles and see if it attracts any feedback, via comments or karma?
Why do you think the summary got more upvotes. I’m not upset, I like a summary too, but in my mind, a question that anyone can submit answers to or upvote current answers is much more useful. So I am confused. Can any suggest why?
Anyone can comment on a post and upvote comments so I don’t see why a question would be better in that regard.
Also the post contained a lot of information on potential megaprojects which is not only quite interesting and educational but also prompts discussion.
Can you think of any examples of other movements which have this? I have not heard of such for e.g. the environmentalist or libertarian movements. Large companies might have whistleblowing policies, but I’ve not heard of any which make use of an independent organization for complaint processing.
I’m sorry to hear this (and grateful that you’re reporting them). We have systems for flagging when a user’s DM pattern is suspicious, but it’s imperfect (I’m not sure if it’s too permissive right now).
In case it’s useful for you to have a better picture of what’s going on, I think you get more of the DM spam because you’re very high up in the user list.
“I don’t think drinking is bad, but we have a low-alcohol culture so the fact you host parties with alcohol is bad”
Often the easiest mark of bad behaviour is that it breaks a norm we’ve agreed. Is it harmful in a specific case to be shoplift? Depends on what was happening to the things you stole. But it seems easier just to appeal to our general norm that shoplifting is bad. On average it is harmful and so even if it wasn’t in this specific case, being willing to shoplift is a bad sign. Even if you’re stealing meds to give to your gran, it may be good to have a general norm against this behaviour.
But if the norm is bad that weakens norms in general. Lots of people in the UK speed in their cars. But this teaches many people that twice a day, the laws aren’t actually laws. It encourages them that many government rules are stupid and needless as opposed to wise and reasonable
But how broadly should this norm apply? 99% of cases, 95%? I don’t know.
But it’s clear to me that if a norm only applies in 50% of cases it’s a bad norm. It’s gonna leave everyone trusting the values of the community less, because half the time it will punish or reward people incorrectly.
That’s right, you should be able to mention users with @ and posts with #. However, it does seem like they’re both currently broken, likely because we recently updated our search software. Thanks for flagging this! We’ll look into it.
I strongly dislike the “further reading” sections of the forum wiki/forum tags.
They imply that the right way to know more about things is to read a load of articles. It seems clear to me that instead we should sythesise these points and then link them where relevant. Then if you wanted more context you could read the links.
The ‘Further reading’ sections are a time-cheap way of helping readers learn more about a topic, given our limited capacity to write extended entries on those topics.
1) Clubhouse is a new social media platform, but you need an invite to join 2) It allows chat in rooms, and networking 3) Seems some people could deliver sooner value by having a clubhouse invite 4) People who are on clubhouse have invites to give 5) If you think an invite would be valuable or heck you’d just like one, comment below and then if anyone has invites to give they can see EAs who want them. 6) I have some invites to give away.
It is reasonable that 5- 20% of the community are scared that their harmless sexual behaviour will become unacceptable and that they will be seen as bad/unsafe if they support it.
It’s fair that they are upset and see this as something that might hurt them and fear the outcome.
There are two main models I have for many of these discussions:
Rationalist EAs—like truth-seeking, think a set of discourse norms should be obeyed at all times
Progressive EAs—think that some discussions require much more energy from some than others and need to be handled differently/more carefully. Want an environement where they feel safe
I think it’s easy to see these groups as against one another, but I think that’s not true. There are positive sum improvements.
Women being sad matters. And yes there are tradeoffs here, but it’s really sad that the women in the time article and all the other women who have been sad are sad.
If we could have a community where everyone says “EA does romantic relationships a lot better than the outside world” that would be worth spending $10 − 100mn on purely in community building terms, let alone in just welfare of individual EAs.
We spend millions each year of EAGs + 80k. Imagine if everyone just was like “Yeah EA is just a great safe fun place”
It is pretty reasonable for 5 − 20% of the community to have a boundary about not being caught up in coversations about sex in houses they need to stay in in foreigh countries. Or similarly bad conversations.
It’s reasonable they want to be sure this is taken really seriously, because they don’t want it to happen to them or their friends.
It’s complicated that this might lead to unintended consequences, but their desire seems very comprehensible.
It was very likely bad that Owen Cotton-Barrett upset a couple of women and then didn’t drastically change his behaviour, such that there were other instances.
That’s not to say other things weren’t bad. But this feels like something we can agree on.
The forum should hire mediators who’s job it is to try and surface concensus and allow discussion to flow better. Many discussion are a lot of different positions at once.
I think in SBF we farmed out our consciences. Like people who say “there need to be atrocities in war so that people who live in peace” we thought “SBF can do trade dodgy coins stuff so that we can help, but let’s not think about it”. I don’t think we could have known about the fraud, but I do think there were plenty of warning signs we ignored as “SBF is the man in the arena”. No, either we should have been cogent and open about what he was doing or we should have said we didn’t like it and begun pulling away reputationally.
If you have anonymous feedback I’m happy to hear it. In fact I welcome it.
I will note that I’m not made of stone however and don’t promise to be perfect. But I always appreciate more information.
Some behaviours I’ve changed recently:
I am more cautious about posting polls around sensitive topics where there is no way to express that the poll is misframed
I generally try to match the amount of text of the person I’m talking to, and resist an urge to keep adding additional replies
In formal settings I might have previously touched people on the upper arm or shoulders in conversation, a couple of people said they didn’t like that, so I do it less and ask before I do
If you have issues (or compliments), even ones you are sure I am aware of, I would appreciate hearing them. We are probably more alien than you imagine.
I do not upvote articles on here merely because they are about EA.
Personally I want to read articles that update me in a certain direction. Merely an article that’s gonna make me sad or be like “shrug accurate” is not an article I’m gonna upvote on here.
I quite strongly dislike “drama” around things, rather than just trying to figure them out. Much of the HLI “drama” seems to be reading various comments and sharing that there is disagreement rather than attempts to turn uncertainty into clarity.
My response to this is “what are we doing”? Why aren’t there more attempts to figure out what we should actually believe as a group here? I really don’t understand why there is much discussion but so little (to my mind) attempt at synthesis.
I don’t see a clear path forward to consensus here. The best I can see, which I have tried to nudge in my last two long posts on the main thread, is “where do we go from here given the range of opinions held?”
As I see it, the top allegation that has been levied is intentional research misconduct,[1] with lesser included allegations of reckless research misconduct, grossly negligent research (mis)conduct, and negligent research conduct. A less legal-metaphory way to put it is: the biggest questions are whether HLI had something on the scale in favor of SM, if so was it a finger or a fist on the scale, and if so did HLI know (or should it have known) that the body part was on the scale.
It’s unsurprising that most people don’t want to openly deliberate about misconduct allegations, especially not in front of the accusers and the accused. There’s a reason juries deliberate in secret in an attempt to reach consensus.
I think that hesitation to publicly deliberate is particularly pronounced for those who fall in the middle part of the continuum,[2] which unfortunately contributes to the “pretty serious misconduct” and “this is way overblown” positions being overrepresented in comments compared to where I think they truly fall among the Forum community. Moreover, most of us lack the technical background and experience to lead a deliberation process.
What procedures would you suggest to move toward consensus?[3]
If someone thinks HLI is guilty of deceptive conduct (or conduct that is so reckless to be hard to distinguish from intentional deception), they are likely going to feel less discomfort raking HLI over the coals (“because they deserve it” and because maintaining epistemic defense against that kind of conduct is particularly important). If someone thinks this whole thing is a nothingburger, saying so wouldn’t seem emotionally difficult.
Properly used, anonymous polling can reveal a consensus that exists (as long as there’s no ballot stuffing) . . . but isn’t nearly as useful in developing a consensus. If you attempt to iterate the questions, you’re likely to find that more and more of the voting pool will be partisans on one side of the dispute or the other, so subsequent rounds will reflect community consensus less and less.
Some things I don’t think I’ve seen around FTX, which are probably due to the investigation, but still seems worth noting. Please correct me if these things have been said.
I haven’t seen anyone at the FTXFF acknowledge fault for negligence in not noticing that a defunct phone company (north dimension) was paying out their grants.
This isn’t hugely judgemental from me, I think I’d have made this mistake too, but I would like it acknowledged at some point
Since writing this it’s been pointed out that there were grants paid from FTX and Alameda accounts also. Ooof.
I haven’t seen anyone at CEA acknowledge that they ran an investigation in 2019-2020 on someone who would turn out to be one of the largest fraudsters in the world and failed to turn up anything despite seemingly a number of flags.
I remain confused
As I’ve written elsewhere I haven’t seen engagement on this point, which I find relatively credible, from one of the Time articles:
My comment on the above “While other things may have been bigger errors, this once seems most sort of “out of character” or “bad normsy”. And I know Naia well enough that this moves me a lot, even though it seems so out of character for [will] (maybe 30% that this is a broadly accurate account). This causes me consternation, I don’t understand and I think if this happened it was really bad and behaviour like it should not happen from any powerful EAs (or any EAs frankly).”
Extremely likely that the lawyers have urged relevant people to remain quiet on the first two points and probably the third as well.
Yeah seems right, but uh still seems worth saying.
Did you mean for the second paragraph of the quoted section to be in the quote section?
I can’t remember but you’re right that it’s unclear.
I haven’t read too much into this and am probably missing something.
Why do you think FTXFF was receiving grants via north dimension? The brief googling I did only mentioned north dimension in the context of FTX customers sending funds to FTX (specifically this SEC complaint). I could easily have missed something.
Grants were being made to grantees out of North Dimension’s account—at least one grant recipient confirmed receiving one on the Forum (would have to search for that). The trustee’s second interim report shows that FTXFF grants were being paid out of similar accounts that received customer funds.
It’s unclear to me whether FTX Philanthrophy (the actual 501c3) ever had any meaningful assets to its name, or whether (m)any of the grants even flowed through accounts that it had ownership of.
Seems pretty bad, no?
Certainly very concerning. Two possible mitigations though:
Any finding of negligence would only apply to those with duties or oversight responsibilities relating to operations. It’s not every employee or volunteer’s responsibility to be a compliance detective for the entire organization.
It’s plausible that people made some due dilligence efforts that were unsuccessful because they were fed false information and/or relied on corrupt experts (like “Attorney-1” in the second interim trustee report). E.g., if they were told by Legal that this had been signed off on and that it was necessary for tax reasons, it’s hard to criticize a non-lawyer too much for accepting that. Or more simply, they could have been told that all grants were made out of various internal accounts containing only corporate monies (again, with some tax-related justification that donating non-US profits through a US charity would be disadvantageous).
Ah, thank you!
I searched for that comment. I think this is probably the one you’re referencing.
I know of at least 1 other case.
I know of at least 1 NDA of an EA org silencing someone for discussing what bad behaviour that happened at that org. Should EA orgs be in the practice of making people sign such NDAs?
I suggest no.
I think I want a Chesterton’s TAP for all questions like this that says “how normal are these and why” whenever we think about a governance plan.
What’s a “Chesterton’s TAP”?
Not a generally used phrase, just my attempting to point to “a TAP for asking Chesterton’s fence-style questions”
What’s a TAP? I’m still not really sure what you’re saying.
“Trigger action pattern”, a technique for adopting habits proposed by CFAR <https://www.lesswrong.com/posts/wJutA2czyFg6HbYoW/what-are-trigger-action-plans-taps>.
Thanks!
“Chesterton’s TAP” is the most rationalist buzzword thing I’ve ever heard LOL, but I am putting together that what Chana said is that she’d like there to be some way for people to automatically notice (the trigger action pattern) when they might be adopting an abnormal/atypical governance plan and then reconsider whether the “normal” governance plan may be that way for a good reason even if we don’t immediately know what that reason is (the Chesterton’s fence)?
Oh, sorry! TAPs are a CFAR / psychology technique. https://www.lesswrong.com/posts/wJutA2czyFg6HbYoW/what-are-trigger-action-plans-taps
I am unsure what you mean? As in, because other orgs do this it’s probably normal?
I have no idea, but would like to! With things like “organizational structure” and “nonprofit governance”, I really want to understand the reference class (even if everyone in the reference class does stupid bad things and we want to do something different).
Strongly agree that moving forward we should steer away from such organizational structures; much better that something bad is aired publicly before it has a chance to become malignant
Feels like we’ve had about 3 months since the FTX collapse with no kind of leadership comment. Uh that feels bad. I mean I’m all for “give cold takes” but how long are we talking.
Do you think this is not due to “sound legal advice”?
I am pretty sure there is no strong legal reason for people to not talk at this point. Not like totally confident but I do feel like I’ve talked to some people with legal expertise and they thought it would probably be fine to talk, in addition to my already bullish model.
People voting without explaining is good.
I often see people thinking that this is bragading or something when actually most people just don’t want to write a response, they either like or dislike something
If it were up to me I might suggest an anonymous “I don’t know” button and an anonymous “this is poorly framed” button.
When I used to run a lot of facebook polls, it was overwhelmingly men who wrote answers, but if there were options to vote, the gender was much more even. My hypothesis was that a kind of argumentative usually man tended to enjoy writing long responses more. And so blocking lower effort/less antagonistic/ more anonymous responses meant I heard more from this kind of person.
I don’t know if that is true on the forum, but I would guess that the higher effort it is to respond the more selective the responses become in some direction. I guess I’d ask if you think that the people spending the most effort are likely to be the most informed. In my experience, they aren’t.
More broadly I think it would be good if the forum optionally took some information about users—location, income, gender, cause area, etc and on answers with more than say 10 votes would display some kind of breakdown. I imagine it would sometimes be interesting to find out how exactly agreement and disagreement cut on different issues.
Also I think it’s good to be able to anonymously express unpopular views. For most of human history it’s been unpopular to express support for LGBT+, the rights of women, animals. But if anonymous systems had existed we might have seen more support for such views. Likewise, pushing back against powerful people is easier if you can do it anonymously.
It seems like we could use the new reactions for some of this. At the moment they’re all positive but there could be some negative ones. And we’d want to be able to put the reactions on top level posts (which seems good anyway).
I think that it is generally fine to vote without explanations, but it would be nice to know why people are disagreeing or disliking something. Two scenarios come to mind:
If I write a comment that doesn’t make any claim/argument/proposal and it gets downvotes, I’m unclear what those downvotes mean.
If I make a post with a claim/argument/proposal and it gets downvoted without any comments, it isn’t clear what aspect of the post people have a problem with.
I remember writing in a comment several months ago about how I think that theft from an individual isn’t justified even if many people benefit from it, and multiple people disagreed without continuing the conversation. So I don’t know why they disagreed, or what part of the argument they through was wrong. Maybe I made a simple mistake, but nobody was willing to point it out.
I also think that you raise good points regarding demographics and the willingness of different groups of people to voice their perspectives.
I agree it would be nice to know, but in every case someone has decided they do want to vote but don’t want to comment. Sometimes I try and cajole an answer, but ultimately I’m glad they gave me any information at all.
What is bragading?
Think he was referring to “brigading”, referred to in this thread
Generally, it is voting more out of allegiance or affinity to a particular person, rather than an assessment of the quality of the post/comment.
Sam Harris takes Giving What We Can pledge for himself and for his meditation company “Waking Up”
Harris references MacAksill and Ord as having been central to his thinking and talks about Effective Altruism and exstential risk. He publicly pledges 10% of his own income and 10% of the profit from Waking Up. He also will create a series of lessons on his meditation and education app around altruism and effectiveness.
Harris has 1.4M twitter followers and is a famed Humanist and New Athiest. The Waking Up app has over 500k downloads on android, so I guess over 1 million overall.
https://dynamic.wakingup.com/course/D8D148
I like letting personal thoughts be up or downvoted, so I’ve put them in the comments.
Harris is a marmite figure—in my experience people love him or hate him.
It is good that he has done this.
Newswise, it seems to me it is more likely to impact the behavior of his listeners, who are likely to be well-disposed to him. This is a significant but currently low-profile announcement. As will the courses be on his app.
I don’t think I’d go spreading this around more generally, many don’t like Harris and for those who don’t like him, it could be easy to see EA as more of the same (callous superior progessivism).
In the low probability (5%?) event that EA gains traction in that space of the web (generally called the Intellectual Dark Web—don’t blame me, I don’t make the rules) I would urge caution for EA speakers who might pulled into polarising discussion which would leave some groups feeling EA ideas are “not for them”.
My guess is people who like Sam Harris are disproportionately likely to be potentially interested in EA.
This seems quite likely given EA Survey data where, amongst people who indicated they first heard of EA from a Podcast and indicated which podcast, Sam Harris’ strongly dominated all other podcasts.
More speculatively, we might try to compare these numbers to people hearing about EA from other categories. For example, by any measure, the number of people in the EA Survey who first heard about EA from Sam Harris’ podcast specifically is several times the number who heard about EA from Vox’s Future Perfect. As a lower bound, 4x more people specifically mentioned Sam Harris in their comment than selected Future Perfect, but this is probably dramatically undercounting Harris, since not everyone who selected Podcast wrote a comment that could be identified with a specific podcast. Unfortunately, I don’t know the relative audience size of Future Perfect posts vs Sam Harris’ EA podcasts specifically, but that could be used to give a rough sense of how well the different audiences respond.
Notably, Harris has interviewed several figures associated with EA; Ferriss only did MacAskill, while Harris has had MacAskill, Ord, Yudkowsky, and perhaps others.
This is true, although for whatever reason the responses to the podcast question seemed very heavily dominated by references to MacAskill.
This is the graph from our original post, showing every commonly mentioned category, not just the host (categories are not mutually exclusive). I’m not sure what explains why MacAskill really heavily dominated the Podcast category, while Singer heavily dominated the TED Talk category.
The address (in the link) is humbling and shows someone making a positive change for good reasons. He is clear and coherent.
Good on him.
How are we going to deal emotionally with the first big newspaper attack against EA?
EA is pretty powerful in terms of impact and funding.
It seems only an amount of time before there is a really nasty article written about the community or a key figure.
Last year the NYT wrote a hit piece on Scott Alexander and while it was cool that he defended himself, I think he and the rationalist community overreacted and looked bad.
I would like us to avoid this.
If someone writes a hit piece about the community, Givewell, Will MacAskill etc, how are we going to avoid a kneejerk reaction that makes everything worse?
I suggest if and when this happens:
individuals largely don’t respond publicly unless they are very confident they can do so in a way that leads to deescalation.
articles exist to get clicks. It’s worth someone (not necessarily me or you) responding to an article in the NYT, but if, say a niche commentator goes after someone, fewer people will hear it if we let it go.
let the comms professionals deal with it. All EA orgs and big players have comms professionals. They can defend themselves.
if we must respond (we often needn’t) we should adopt a stance of grace, curiosity and humility. Why do they think these things are true? What would convince us?
Personally I hate being attacked and am liable to feel defensive and respond badly. I assume you are no different. I’d like to think about this so that if and when it happens we can avoid embarrassing ourselves and the things we care about.
Yeah, I think the community response to the NYT piece was counterproductive, and I’ve also been dismayed at how much people in the community feel the need to respond to smaller hit pieces, effectively signal boosting them, instead of just ignoring them. I generally think people shouldn’t engage with public attacks unless they have training in comms (and even then, sometimes the best response is just ignoring).
We’ve had multiple big newspaper attacks now. How’d we do compared to your expectations?
I think we did better externally than I expected but I think internally I didn’t really write enough here.
The Scout Mindset deserved 1/10th of the marketing campaign of WWOTF. Galef is a great figurehead for rational thinking and it would have been worth it to try and make her a public figure.
I think much of the issue is that:
1. It took a while to ramp up to being able to do things such as the marketing campaign for WWOTF. It’s not trivial to find the people and buy-in necessary. Previous EA books haven’t had similar.
2. Even when you have that capacity, it’s typically much more limited than we’d want.
I imagine EAs will get better at this over time.
Dear reader,
You are an EA, if you want to be. Reading this forum is enough. Giving a little of your salary effectively is enough. Trying to get an impactful job is enough. If you are trying even with a fraction of your resources to make the world better and chatting with other EAs about it, you are one too.
The OpenAI stuff has hit me pretty hard. If that’s you also, look after yourself.
I don’t really know what accurate thought looks like here.
Yeah, same
I hope you’re doing ok Nathan. Happy to chat in DM’s if you like ❤️
It will settle down soon enough. Not much will change as for most breaking news story. But I am thinking if I should switch to Claude.
Post I spent 4 hours writing on a topic I care deeply about: 30 karma
Post I spent 40 minutes writing on a topic that the community vibes with: 120 karma
I guess this is fine—iys just people being interested but it can feel weird at times.
This is not fine
I dunno. I thought I’d surface.
Yeah, this is an unfortunate gradient, you have to decide not to follow it :-/
But there is more long-term glory in it.
I am really not the person to do it, but I still think there needs to be some community therapy here. Like a truth and reconciliation committee. Working together requires trust and I’m not sure we have it.
Poll: https://viewpoints.xyz/polls/ftx-impact-on-ea
Results: https://viewpoints.xyz/polls/ftx-impact-on-ea/results
Curious if you have examples of this being done well in communities you’ve been aware of? I might have asked you this before.
I’ve been part of an EA group where some emotionally honest conversations were had, and I think they were helpful but weren’t a big fix. I think a similar group later did a more explicit and formal version and they found it helpful.
I’ve never seen this done well. I guess I’d read about the truth and reconciliation committees in South Africa and Ireland.
I think the strategy fortnight worked really well. I suggest that another one is put in the calendar (for say 3 months time) and then rather than dripfeeding comment we sort of wait and then burst it out again.
It felt better to me, anyway to be like “for these two weeks I will engage”
I also thought it was pretty decent, and it caused me to get a post out that had been sitting in my drafts for quite a while.
I hope Will MacAskill is doing well. I find it hard to predict how he’s doing as a person. While there have been lots of criticisms (and I’ve made some) I think it’s tremendously hard to be the Shelling person for a movement. There is a seperate axis however, and I hope in himself he’s doing well and I imagine many feel that way. I hope he has an accurate picture here.
I notice some people (including myself) reevaluating their relationship with EA.
This seems healthy.
When I was a Christian it was extremely costly for me to reduce my identification and resulted in a delayed and much more final break than perhaps I would have wished[1]. My general view is that people should update quickly, and so if I feel like moving away from EA, I do it when I feel that, rather than inevitably delaying and feeling ick.
Notably, reducing one’s identification with the EA community need not change one’s poise towards effective work/donations/earn to give. I doubt it will change mine. I just feel a little less close to the EA community than once I did, and that’s okay.
I don’t think I can give others good advice here, because we are all so different. But the advice I would want to hear is “be part of things you enjoy being part of, choose an amount of effort to give to effectiveness and try to be a bit more effective with that each month, treat yourself kindly because you too are a person worthy of love”
I think a slow move away from Christianity would have been healthier for me. Strangely I find it possible to imagine still being a Christian, had things gone differently, even while I wouldn’t switch now.
The vibe at EAG was chill, maybe a little downbeat, but fine. I can get myself riled up over the forum, but it’s not representative! Most EAs are just getting on with stuff.
(This isn’t to say that forum stuff isn’t important, its just as important as it is rather than what should define my mood)
Feels like there should be a “comment anonymously” feature. Would save everyone having to manage all these logins.
We have thought about that. Probably the main reason we haven’t done this is because of this reason, on which I’ll quote myself on from an internal slack message:
Touche
I strongly dislike the following sentence on effectivealtruism.org:
“Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on.”
It reads to me as arrogant, and epitomises the worst caracatures my friends do of EAs. Read it in a snarky voice (such as one might if they struggled with the movement and were looking to do research) “Rather that just doing what feels right...”
I suggest it gets changed to one of the following:
“We use evidence and careful analysis to find the very best causes to work on.”
“It’s great when anyone does a kind action no matter how small or effective. We have found value in using evidence and careful analysis to find the very best causes to work on.”
I am genuinely sure whoever wrote it meant well, so thank you for your hard work.
Are the two bullet points two alternative suggestions? If so, I prefer the first one.
I also thought this when I first read that sentence on the site, but I find it difficult (as I’m sure its original author does) to communicate its meaning in a subtler way. I like your proposed changes, but to me the contrast presented in that sentence is the most salient part of EA. To me, the thought is something like this:
“Doing good feels good, and for that reason, when we think about doing charity, we tend to use good feeling as a guide for judging how good our act is. That’s pretty normal, but have you considered that we can use evidence and analysis to make judgments about charity?”
The problem IMHO is that without the contrast, the sentiment doesn’t land. No one, in general, disagrees in principle with the use of evidence and careful analysis: it’s only in contrast with the way things are typically done that the EA argument is convincing.
I would choose your statement over the current one.
I think the sentiment lands pretty well even with a very toned down statement. The movement is called “effective altruism”. I think often in groups are worried that outgroups will not get their core differences when generally that’s all outgroups know about them.
I don’t think that anyone who visits that website won’t think that effectiveness isn’t a core feature. And I don’t think we need to be patronising (as EAs are charactured as being in conversations I have) in order to make known something that everyone already knows.
Several journalists (including those we were happy to have write pieces about WWOTF) have contacted me but I think if I talk to them, even carefully, my EA friends will be upset with me. And to be honest that upsets me.
We are in the middle of a mess of our own making. We deserve scrutiny. Ugh, I feel dirty and ashamed and frustrated.
To be clear, I think it should be your own decision to talk to journalists, but I do also just think that it’s just better for us to tell our own story on the EA Forum and write comments, and not give a bunch of journalists the ability to greatly distort the things we tell them in a call, with a platform and microphone that gives us no opportunity to object or correct things.
I have been almost universally appalled at the degree to which journalists straightforwardly lie in interviews, take quotes massively out of context, or make up random stuff related to what you said, and I do think it’s better that if you want to help the world understand what is going on, that you write up your own thoughts in your own context, instead of giving that job to someone else.
<3
Richard Ngo just gave a talk at EAG berlin about errors in AI governance. One being a lack of concrete policy suggestions.
Matt Yglesias said this a year ago. He was even the main speaker at EAG DC https://www.slowboring.com/p/at-last-an-ai-existential-risk-policy?utm_source=%2Fsearch%2Fai&utm_medium=reader2
Seems worth asking why we didn’t listen to top policy writers when they warned that we didn’t have good proposals.
What do you think of Thomas Larson’s bill? It seems pretty concrete to me, do you just think it is not good?
I am going on what Ngo said. So I guess, what does he think of it?
This sounds like the sort of question you should email Richard to ask before you make blanket accusations.
Ehhh, not really. I think it’s not a crazy view to hold and I wrote it on a shortform.
I think 90% of the answer to this is risk aversion from funders, especially LTFF and OpenPhil, see here. As such many things struggled for funding, see here.
We should acknowledge that doing good policy research often involves actually talking to and networking with policy people. It involves running think tanks and publishing policy reports, not just running academic institutions and publishing papers. You cannot do this kind of research well in a vacuum.
That fact combined with funders who were (and maybe still are) somewhat against funding people (except for people they knew extremely well) to network with policy makers in any way, has lead to (maybe is still leading to) very limited policy research and development happening.
I am sure others could justify this risk averse approach, and there are totally benefits to being risk averse. However in my view this was a mistake (and is maybe an ongoing mistake). I think was driven by the fact that funders were/are: A] not policy people, so do/did not understand the space so are were hesitant to make grants; B] heavily US centric, so do/did not understand the non-US policy space; and C] heavily capacity constrained, so do/did not have time to correct for A or B.
– –
(P.S. I would also note that I am very cautious about saying there is “a lack of concrete policy suggestions” or at least be clear what is meant by this. This phrase is used as one of the reasons for not funding policy engagement and saying we should spend a few more years just doing high level academic work before ever engaging with policy makers. I think this is just wrong. We have more than enough policy suggestions to get started and we will never get very very good policy design unless we get started and interact with the policy world.)
My current model is that actually very few people who went to DC and did “AI Policy work” chose a career that was well-suited to proposing policies that help with existential risk from AI. In-general people tried to choose more of a path of “try to be helpful to the US government” and “become influential in the AI-adjacent parts of the US government”, but there are almost no people working in DC whose actual job it is to think about the intersection of AI policy and existential risk. Mostly just people whose job it is to “become influential in the US government so that later they can steer the AI existential risk conversation in a better way”.
I find this very sad and consider it one of our worst mistakes, though I am also not confident in that model, and am curious whether people have alternative models.
That’s probably true because it’s not like jobs like that just happen to exist within government (unfortunately), and it’s hard to create your own role descriptions (especially with something so unusual) if you’re not already at the top.
That said, I think the strategy you describe EAs to have been doing can be impactful? For instance, now that AI risk has gone mainstream, some groups in government are starting to work on AI policy more directly, and if you’re already working on something kind of related and have a bunch of contacts and so on, you’re well-positioned to get into these groups and even get a leading role.
What’s challenging is that you need to make career decisions very autonomously and have a detailed understanding of AI risk and related levers to carve out your own valuable policy work at some point down the line (and not be complacent with “down the line never comes until it’s too late”). I could imagine that there are many EA-minded individuals who went into DC jobs or UK policy jobs with the intent to have an impact on AI later, but they’re unlikely to do much with that because they’re not proactive enough and not “in the weeds” enough with thinking about “what needs to happen, concretely, to avert an AI catastrophe?.”
Even so, I think I know several DC EAs who are exceptionally competent and super tuned in and who’ll likely do impactful work down the line, or are already about to do such things. (And I’m not even particularly connected to that sphere, DC/policy, so there are probably many more really cool EAs/EA-minded folks there that I’ve never talked to or read about.)
The slide Nathan is referring to. “We didn’t listen” feels a little strong; lots of people were working on policy detail or calling for it, it just seems ex post like it didn’t get sufficient attention. I agree directionally though, and Richard’s guesses at the causes (expecting fast take-off + business-as-usual politics) seem reasonable to me.
Also, *EAGxBerlin.
I talked to someone outside EA the other day who said that in a competive tender they wouldn’t apply to EA funders because they thought the process would likely to go to someone with connections to OpenPhil.
Seems bad.
EAs please post your job posting to twitter
Please post your jobs to Twitter and reply with @effective_jobs. Takes 5 minutes. and the jobs I’ve posted and then tweeted have got 1000s of impressions.
Or just DM me on twitter (@nathanpmyoung) and I’ll do it. I think it’s a really cheap way of getting EAs to look at your jobs. This applies to impactful roles in and outside EA.
Here is an example of some text:
-tweet 1
Founder’s Pledge Growth Director
@FoundersPledge are looking for someone to lead their efforts in growing the amount that tech entrepreneurs give to effective charities when they IPO.
Salary: $135 - $150k
Location: San Francisco
https://founders-pledge.jobs.personio.de/job/378212
-tweet 2, in reply
@effective_jobs
-end
I suggest it should be automated but that’s for a different post.
Confusion
I get why I and other give to Givewell rather than catastrophic risk—sometimes it’s good to know your “Impact account” is positive even if all the catastrophic risk work was useless.
But why do people not give to animal welfare in this case? Seems higher impact?
And if it’s just that we prefer humans to animals that seems like something we should be clear to ourselves about.
Also I don’t know if I like my mental model of an “impact account”. Seems like my giving has maybe once again become about me rather than impact.
ht @Aaron Bergman for surfacing this
This is exactly why I mostly give to animal charities. I do think there’s higher uncertainty of impact with animal charities compared to global health charities so I still give a bit to AMF. So roughly 80% animal charities, 20% global health.
Thanks for brining our convo here! As context for others, Nathan and I had a great discussion about this which was supposed to be recorded...but I managed to mess up and didn’t capture the incoming audio (i.e. everything Nathan said) 😢
Guess I’ll share a note I made about this (sounds AI written because it mostly was, generated from a separate rambly recording). A few lines are a little spicier than I’d ideally like but 🤷
Thanks for posting this. I had branching out my giving strategy to conclude some animal-welfare organizations on the to-do list, but this motivated me to actually pull the trigger on that.
I think most of the animal welfare neglect comes from the fact that if people are deep enough into EA to accept all of its “weird” premises they will donate to AI safety instead. Animal welfare is really this weird midway spot between “doesn’t rest on controversial claims” and “maximal impact”.
Definitely part of the explanation, but my strong impression from interaction irl and on Twitter is that many (most?) AI-safety-pilled EAs donate to GiveWell and much fewer to anything animal related.
I think ~literally except for Eliezer (who doesn’t think other animals are sentient), this isn’t what you’d expect from the weirdness model implied.
Assuming I’m not badly mistaken about others’ beliefs and the gestalt (sorry) of their donations, I just don’t think they’re trying to do the most good with their money. Tbc this isn’t some damning indictment—it’s how almost all self-identified EAs’ money is spent and I’m not at all talking about ‘normal person in rich country consumption.’
If you type “#” follwed by the title of a post and press enter it will link that post.
Example:
Examples of Successful Selective Disclosure in the Life Sciences
This is wild
OMG
I continue to think that a community this large needs mediation functions to avoid lots of harm with each subsequent scandal.
People asked for more details. so I wrote the below.
Let’s look at some recent scandals and I’ll try and point out some different groups that existed.
FTX—longtermists and non-lontermists, those with greater risk tolerance and less
Bostrom—rationalists and progressives
Owen Cotton-Barrett—looser norms vs more robust, weird vs normie
Nonlinear—loyalty vs kindness, consent vs duty of care
In each case, the community disagrees on who we should be and what we should be. People write comments to signal that they are good and want good things and shouldn’t be attacked. Other people see these and feel scared that they aren’t what the community wants.
This is tiring and anxiety inducing for all parties. In all cases here there are well intentioned, hard working people who have given a lot to try and make the world better who are scared they cannot trust their community to support them if push comes to shove. There are people horrified at the behaviour of others, scared that this behaviour will repeat itself, with all the costs attached. I feel this way, and I don’t think I am alone.
I think we need the community equivalent of therapy and mediation. We have now got to the stage where national media articles get written about our scandals and people threaten litigation. I just don’t think that a community of 3000 apes can survive this without serious psychological costs which in turn affect work and our lives. We all don’t want to be chucked out of a community which is safety and food and community for us. We all don’t want that community to become a hellhole. I don’t, SBF doesn’t, the woman hurt by OCB doesn’t, Kat and Emerson and Chloe and Alice don’t.
That’s not to say that all behaviour is equal, but that I think the frame here is empathy, boundary setting and safety, not conflict, auto-immune responses and exile.
What do I suggest?
After each scandal we have spaces to talk about our feelings, then we discuss what we think the norms of the community should be. Initially there will be disagreement but in time as we listen to those we disagree with we may realise how we differ. Then we can try and reintegrate this understanding to avoid it happening again. That’s what trust is—the confidence that something won’t happen above tolerance.
A concrete example
After the Bostrom stuff we had rationalist and progressive EAs in disagreement. Some thought he’d responded well, others badly. I think there was room for a discussion, to hear how unsafe his behaviour had left people feeling “do people judge my competence based on the colour of my skin?” “will my friends be safe here?”. I don’t think these feelings can be dismissed as wokery gone mad. But I think the other group had worries too “Will I be judged for things I said years ago?” “Seemingly even an apology isn’t enough”. I find I can empathise with both groups.
And I suggest what we want is some norms around this. Norms about things we do and don’t do. The aim should be to reduce community stress through there being bright lines and costs for behaviour we deem bad. And ways for those who do unacceptable things to come back to the community. I think there could be mutually agreeable ones, but I think the process would be tough.
We’d have to wrestle with how Bostrom and Hanson’s productivity seems related to their ability to think weird or ugly thoughts. We’d have to think about if mailing lists 20 years ago were public or private. We’d have to think about what value we put on safety. And we’d have to be willing not to pick up the sword if it didn’t go our way.
But I think there are acceptable positions here. Where people acknowledge harmful patterns of behaviour, perhaps even voluntarily leave for a time. Where people talk about the harm and the benefit created by those they disagree with. Where others see that some value weirdness/creativity more/less than they do. Where we rejoice in what we have achieved and mourn over how we have hurt one another. Where we grow to be a kinder, more mature community.
Intermission
This stuff breaks my heart. Not because I am good, but because I have predictably hurt people and been hurt by people in the past. And I’d like the cycle to stop. In my own life, conflict has never been the way out of this. Either I should leave people I cannot work with, or share and listen to those I can. And it is so hard and I fail often, but it’s better than becoming jaded and cruel or self-hating and perfectionist. I am broken, I am enough, I can be better. EA is flawed, EA is good, EA can improve. The world is awful, the world is better that it used to be, the world can improve.
As it is
Currently, I think we aren’t doing this work, so every subsequent scandal adds another grievance to the pile. And I guess people are leaving the community. If we spend millions a year trying to get graduates, isn’t it worth spending the same to keep long time members? I don’t know if there is a way to keep Kat and Emerson, Alice and Chloe, the concerned global healthy worker and the person who thinks SBF did nothing wrong, and me and you, but currently I don’t see us spending nearly the appropriate amount of mental effort or resources.
Oh and I’m really not angling to do this work. I have suggestions, sure, but I think the person should be widely trusted by the community as neutral and mature.
I’d bid for you to explain more what you mean here—but it’s your quick take!
I’m very keen for more details as well.
The CEA community health team does serve as a mediation function sometimes, I think. Maybe that’s not enough, but it seems worth mentioning.
Community health is also like the legal system in that they enforce sanctions so I wonder if that reduces the chance that someone reaches out to them to mediate.
I think this is the wrong frame tbh
How so?
I think I want them to be a mediation and boundary setting org, not just legal system
A previous partner and I did a sex and consent course together online I think it’s helped me be kinder in relationships.
Useful in general.
More useful if, you:
- have sex casually—
see harm in your relationships and want to grow
- are poly
As I’ve said elsewhere I think a very small proportion of people in EA are responsible for most of the relationship harms. Some of bad actors, who need to be removed, some are malefactors, who have either lots of interactions or engage in high risk behaviours and accidentally cause harm. I would guess I have more traits of the second category than almost all of you. So people like me should do the most work to change.
So most of you probably don’t need this, but if you are in some of the above groups, I’d recommend a course like this. Save yourself the heartache of upsetting people you care about.
Happy to DM.
https://dandelion.events/e/pd0zr?fbclid=IwAR0cIXFowU7R4dHZ4ptfpqsnnhdnLIJOfM_DjmS_5HR-rgQTnUzBdtQEnjE
Can we have some people doing AI Safety podcast/news interviews as well as Yud?
I am concerned that he’s gonna end up being the figurehead here. I assume someone is thinking of this, but I’m posting here to ensure that it is said. I am pretty sure that people are working on this, but I think it’s good to say this anyway.
We aren’t a community who says “I guess he deserves it” we say “who is the best person for the job?”. Yudkowsky, while he is an expert isn’t a median voice. His estimates of P(doom) are on the far tail of EA experts here. So if I could pick 1 person I wouldn’t pick him and frankly I wouldn’t pick just one person.
Some other voices I’d like to see on podcasts/ interviews:
Toby Ord
Paul Christiano
Ajeya Cotra
Amanda Askell
Will MacAskill
Joe Carlsmith*
Katja Grace*
Matthew Barnett*
Buck Schlegeris
Luke Meulhauser
Again, I’m not saying noone has thought of this (80%) they have. But I’d like to be 97% sure, so I’m flagging it.
*I am personally fond of this person so am biased
I am a bit confused by your inclusion of Will MacAskill. Will has been on a lot of podcasts, while for Eliezer I only remember 2. But your text sounds a bit like you worry that Eliezer will be too much on podcasts and MacAskill too little (I don’t want to stop MacAskill from going on podcasts btw. I agree that having multiple people present different perspectives on AGI safety seems like a good thing).
I think in the current discourse I’d like to see more of Will, who is a blanaced and clear communicator.
I don’t think you should be optimizing to avoid extreme views, but in favor of those with the most robust models, who can also communicate them effectively to the desired audience. I agree that if we’re going to be trying anything resembling public outreach it’d be good to have multiple voices for a variety of reasons.
On the first half of the criteria I’d feel good about Paul, Buck, and Luke. On the second half I think Luke’s blog is a point of evidence in favor. I haven’t read Paul’s blog, and I don’t think that LessWrong comments are sufficiently representative for me to have a strong opinion on either Paul or Buck.
I notice I am pretty skeptical of much longtermist work and the idea that we can make progress on this stuff just by thinking about it.
I think future people matter, but I will be surprised if, after x-risk reduction work, we can find 10s of billions of dollars of work that isn’t busywork and shouldn’t be spent attempting to learn how to get eg nations out of poverty.
I have heard one anecdote of an EA saying that they would be less likely to hire someone on the basis of their religion because it would imply they were
less good at their jobless intelligent/epistemically rigorous. I don’t think they were involved in hiring, but I don’t think anyone should hold this view.Here is why:
As soon as you are in a hiring situation, you have much more information than priors. Even if it were true that, say, ADHD[1] were less rational then the interview process should provide much more information than such a prior. If that’s not the case, get a better interview process, don’t start being prejudiced!
People don’t mind meritocracy, but they want a fair shake. If I heard that people had a prior that ADHD folks were less likely to be hard working, regardless of my actual performance in job tests, I would be less likely to want to be part of this community. You might lose my contributions. It seems likely to me that we come out ahead by ignoring small differences in groups so people don’t have to worry about this. People are very sensitive to this. Let’s agree not to defect. We judge on our best guess of your performance, not on appearances.
I would be unsurprised if this kind of thinking cut only one way. Is anyone suggesting they wouldn’t hire poly people because of the increased drama or men because of the increased likelihood of sexual scandal? No! We already think some information is irrelevant/inadmissible as a prior in hiring. Because we are glad of people’s right to be different or themselves. To me, race and religion clearly fall in this space. I want people to feel they can be human and still have a chance of a job.
I wouldn’t be surprised if this cashed out to “I hire people like me”. In this example was the individual really hiring on the basis of merit or did they just find certain religious people hard to deal with. We are not a social club, we are trying to do the most good. We want the best, not the people who are like us.
This pattern matches to actual racism/sexism. Like “sometimes I don’t get hired because people think Xs are worse at jobs”. How is that not racism? Seems bad.
Counterpoints:
Sometime gut does play a play a role. We think someone would get better on our team. Some might argue that it’s fine to use this as a tiebreaker. Or that its better to be honest that this is what’s going on.
Personally I think they points outweigh the counterpoints.
Hiring processes should hire the person who seems most likely to do the best job. And candidates should be confident this is happening. But for both predictive reasons, community welfare reasons and avoiding obvious pitfalls reasons I think small priors around race, religion, sexuality, gender, sexual practice should be discounted[2]. If you think the candidate is better or worse, it should show in the interview process. And yes, I get that gut plays a role, but I’d be really wary of gut that feeds clear biases. I think a community where we don’t do that comes out ahead and does more good.
I have a diagnosis so feel comfortable using this example.
And I think large priors are incorrect
In the wake of the financial crisis it was not uncommon to see suggestions that banks etc. should hire more women to be traders and risk managers because they would be less temperamentally inclined towards excessive risk taking.
I have not heard for such calls in EA, which was my point.
But neat example
These thoughts are VERY rough and hand wavy.
I think that we have more-or-less agreed as societies that there are some traits that is is okay to use to make choices about people (mainly: their actions/behaviors), and there are some traits that is is not okay to use (mainly: things that the person didn’t choose and isn’t responsible for). Race, religion, gender, and the like are widely accepted[1] as not socially acceptable traits to use when evaluating people’s ability to be a member of a team.[2] But there are other traits that we commonly treat as acceptable to use as the basis of treating people differently, such as what school someone went to, how many years of work experience they have, if they have a similar communication style as us, etc.
I think I might split this into two different issues.
One issue is: it isn’t very fair to give or withhold jobs (and other opportunities) based on things that people didn’t really have much choice in (such as where they were born, how wealthy their parents were, how good of an education they got in their youth, etc.)
A separate issue is: it is ineffective to employment decisions (hiring, promotions, etc.) based on things that don’t predict on-the-job success.
Sometimes these things line up nicely (such as how it isn’t fair to base employment decisions on hair color, and it is also good business to not base employment decisions on hair color). But sometimes they don’t line up so nicely: I think there are situations where it makes sense to use “did this person go to a prestigious school” to make employment decisions because that will get you better on-the-job performance; but it also seems unfair because we are in a sense rewarding this person for having won the lottery.[3]
In a certain sense I suppose this is just a mini rant about how the world is unfair. Nonetheless, I do think that a lot of conversations about hiring and discriminations get the two different issues conflated.
People’s perspectives vary, of course, but among my own social groups and peers “discrimination based on race/sex/etc. = bad” is widely accepted.
Employment is full of laws, but even in situations where there isn’t any legal issue (such as inviting friends over for a movie party, or organizing a book club) I view it as somewhat repulsive to include/exclude people based on gender/race/religion/etc. Details matter a lot, and I can think of exceptions, but that is more or less my starting point.
I’ve heard the phrase “genetic lottery,” and I suspect genes to contribute a lot to academic/career success. But lots of other things outside a person’s control affect how well they perform: being born in a particular place, how good your high school teachers were, stability of the household, if your parents had much money, and all the other things that we can roughly describe as “fortune” or “luck” or “happenstance.”
I know lots of people with lots of dispositions experience friction with just declining their parents’ religions, but that doesn’t mean I “get it” i.e., conflating religion with birth lotteries and immutability seems a little unhinged to me.
There may be a consensus that it’s low status to say out loud “we only hire harvard alum” or maybe illegal (or whatever), but there’s not a lot of pressure to actually try reducing implicit selection effects that end up in effect quite similar to a hardline rule. And I think harvard undergrad admissions have way more in common with lotteries than religion does!
I think the old sequencesy sort of “being bad at metaphysics (rejecting reductionism) is a predictor of unclear thinking” is fine! The better response to that is “come on, no one’s actually talking about literal belief in literal gods, they’re moreso saying that the social technologies are valuable or they’re uncomfortable just not stewarding their ancestors’ traditions” than like a DEI argument.
There is more to get into here but two main things:
I guess some EAs, and some who I think do really good work do literally believe in literal gods
I don’t actually think this is that predictive. I know some theists who are great at thinking carefully and many athiests who aren’t. I reckon I could distinguish the two in a discussion better than rejecting the former out of hand.
Some feedback on this post: this part was confusing. I assume that what this person said was something like “I think a religious person would probably be harder to work with because of X”, or “I think a religious person would be less likely to have trait Y”, rather than “religious people are worse at jobs”.
The specifics aren’t very important here, since the reasons not to discriminate against people for traits unrelated to their qualifications[1] are collectively overwhelming. But the lack of specifics made me think to myself: “is that actually what they said?”. It also made it hard to understand the context of your counterarguments, since there weren’t any arguments to counter.
Religion can sometimes be a relevant qualification, of course; if my childhood synagogue hired a Christian rabbi, I’d have some questions. But I assume that’s not what the anecdotal person was thinking about.
The person who was told this was me, and the person I was talking to straight up told me he’d be less likely to hire Christians because they’re less likely to be intelligent
Please don’t assume that EAs don’t actually say outrageously offensive things—they really do sometimes!
Edit: A friend told me I should clarify this was a teenage edgelord—I don’t want people to assume this kind of thing gets said all the time!
And since posting this I’ve said this to several people and 1 was like “yeah no I would downrate religious people too”
I think a poll on this could be pretty uncomfortable reading. If you don’t, run it and see.
Put it another way, would EAs discriminate against people who believe in astrology? I imagine more than the base rate. Part of me agrees with that, part of me thinks its norm harming to do. But I don’t think this one is “less than the population”
That’s exactly what I mean!
“I think religious people are less likely to have trait Y” was one form I thought that comment might have taken, and it turns out “trait Y” was “intelligence”.
Now that I’ve heard this detail, it’s easier to understand what misguided ideas were going through the speaker’s mind. I’m less confused now.
“Religious people are bad at jobs” sounds to me like “chewing gum is dangerous” — my reaction is “What are you talking about? That sounds wrong, and also… huh?”
By comparison, “religious people are less intelligent” sounds to me like “chewing gum is poisonous” — it’s easier to parse that statement, and compare it to my experience of the world, because it’s more specific.
*****
As an aside: I spend a lot of time on Twitter. My former job was running the EA Forum. I would never assume that any group has zero members who say offensive things, including EA.
I think the strongest reason to not do anything that even remotely looks like employer discrimination based on religion is that it’s illegal, at least for the US, UK, and European Union countries, which likely jointly encompasses >90% of employers in EA.
(I wouldn’t be surprised if this is true for most other countries as well, these are just the ones I checked).
There’s also the fact that, as a society and subject to certain exceptions, we’ve decided that employers shouldn’t be using an employee’s religious beliefs or lack thereof as an assessment factor in hiring. I think that’s a good rule from a rule-utilitarian framework. And we can’t allow people to utilize their assumptions about theists, non-theists, or particular theists in hiring without the rule breaking down.
The exceptions generally revolve around personal/family autonomy or expressive association, which don’t seem to be in play in the situation you describe.
I think that I generally agree with what you are suggesting/proposing, but there are all kinds of tricky complications. The first thing that jumps to my mind is that sometimes hiring the person who seems most likely to do the best job ends up having a disparate impact, even if there was no disparate treatment. This is not a counterargument, of course, but more so a reminder that you can do everything really well and still end up with a very skewed workforce.
I generally agree with the meritocratic perspective. It seems a good way (maybe the best?) to avoid tit-for-tat cycles of “those holding views popular in some context abuse power → those who don’t like the fact that power was abused retaliate in other contexts → in those other contexts, holding those views results in being harmed by people in those other contexts who abuse power”.
Good point about the priors. Strong priors about these things seem linked to seeing groups as monoliths with little within-group variance in ability. Accounting for the size of variance seems under-appreciated in general. E.g., if you’ve attended multiple universities, you might notice that there’s a lot of overlap between people’s “impressiveness”, despite differences in official university rankings. People could try to be less confused by thinking in terms of mean/median, variance, and distributions of ability/traits more, rather than comparing groups by their point estimates.
Some counter-considerations:
Religion and race seem quite different. Religion seems to come with a bunch of normative and descriptive beliefs that could affect job performance—especially in EA—and you can’t easily find out about those beliefs in a job interview. You could go from one religion to another, from no religion to some religion, or some religion to no religion. The (non)existence of that process might give you valuable information about how that person thinks about/reflects on things and whether you consider that to be good thinking/reflection.
For example, from a irreligious perspective, it might be considered evidence of poor thinking if a candidate thinks the world will end in ways consistent with those described in the Book of Revelation, or think that we’re less likely to be in a simulation because a benevolent, omnipotent being wouldn’t allow that to happen to us.
Anecdotally, on average, I find that people who have gone through the process of abandoning the religion they were raised with, especially at a young age, to be more truth-seeking and less influenced by popular, but not necessarily true, views.
Religion seems to cover too much. Some forms of it seems to offer immunity to act in certain ways, and the opportunity to cheaply attack others if they disagree with it. In other communities, religion might be used to justify poor material/physical treatment of some groups of people, e.g. women and gay people. While I don’t think being accepting of those religions will change the EA community too much, it does say something to/negatively affect the wider world if there’s sufficient buy-in/enough of an alliance/enough comfort with them.
But yeah, generally, sticking to the Schelling point of “don’t discriminate by religion (or lack-thereof)” seems good. Also, if someone is religious and in EA (i.e., being in an environment that doesn’t have too many people who think like them), it’s probably good evidence that they really want to do good and are willing to cooperate with others to do so, despite being different in important ways. It seems a shame to lose them.
Oh, another thought. (sorry for taking up so much space!) Sometimes something looks really icky, such as evaluating a candidate via religion, but is actually just standing in for a different trait. We care about A, and B is somewhat predictive of A, and A is really hard to measure, then maybe people sometimes use B as a rough proxy for A.
I think that this is sometimes used as the justification for sexism/racism/etc, where the old-school racist might say “I want a worker who is A, and B people are generally not A.” If the relationship between A and B is non-existent or fairly weak, then we would call this person out for discriminating unfairly. But now I’m starting to think of what we should do if there really is a correlation between A and B (such as sex and physical strength). That is what tends to happen if a candidate is asked to do an assessment that seems to have nothing to do with the job, such as clicking on animations of colored balloons: it appears to have nothing to do with the job, but it actually measures X, which is correlated with Y, which predicts on-the-job success.
I’d rather be evaluated as an individual than as a member of a group, and I suspect that in-group variation is greater than between-group variation, echoing what you wrote about the priors being weak.
You don’t need to apologise for taking up space! It’s a short form, write what you like.
I think EAs have a bit of an entitlement problem.
Sometimes we think that since we are good we can ignore the rules. Seems bad
As with many statements people make about people in EA, I think you’ve identified something that is true about humans in general.
I think it applies less to the average person in EA than to the average human. I think people in EA are more morally scrupulous and prone to feeling guilty/insufficiently moral than the average person, and I suspect you would agree with me given other things you’ve written. (But let me know if that’s wrong!)
I find statements of the type “sometimes we are X” to be largely uninformative when “X” is a part of human nature.
Compare “sometimes people in EA are materialistic and want to buy too many nice things for themselves; EA has a materialism problem” — I’m sure there are people in EA like this, and perhaps this condition could be a “problem” for them. But I don’t think people would learn very much about EA from the aforementioned statements, because they are also true of almost every group of people.
I sense that it’s good to publicly name serial harassers who have been kicked out of the community, even if the accuser doesn’t want them to be. Other people’s feeling matter too and I sense many people would like to know who they are.
I think there is a difference between different outcomes, but if you’ve been banned from EA events then you are almost certainly someone I don’t want to invite to parties etc.
I would appreciate being able to vote forum articles as both agree disagree and upvote downvote.
Lots of things where I think they are false but interesting or true but boring.
Relative Value Widget
It gives you sets of donations and you have to choose which you prefer. If you want you can add more at the bottom.
https://allourideas.org/manifund-relative-value
so far:
This is neat, kudos!
I imagine it might be feasible to later add probability distributions, though that might unnecessarily slow people down.
Also, some analysis would likely be able to generate a relative value function, after which you could do the resulting visualizations and similar.
Note I didn’t build the app, I just added the choices. Do you think geting the full relative values is worth it?
Why do people give to EA funds and not just OpenPhil?
does OpenPhil accept donations? I would have guessed not
It does not. There are a small number of co-funding situations where money from other donors might flow through Open Philanthropy operated mechanisms, but it isn’t broadly possible to donate to Open Philanthropy itself (either for opex or regranting).
Lol well no wonder then. Thanks both.
Unbalanced karma is good actually. it means that the moderators have to do less. I like the takes of the top users more than the median user and I want them to have more but not total influence.
Appeals to fairness don’t interest me—why should voting be fair?
I have more time for transparency.
A friend asked about effective places to give. He wanted to donate through his payroll in the UK. He was enthusiastic about it, but that process was not easy.
It wasn’t particularly clear whether GiveWell or EA Development Fund was better and each seemed to direct to the other in a way that felt at times sketchy.
It wasn’t clear if payroll giving was an option
He found it hard to find GiveWell’s spreadsheet of effectiveness
Feels like making donations easy should be a core concern of both GiveWell and EA Funds and my experience made me a little embarrassed to be honest.
EA short story competition?
Has anyone ever run a competition for EA related short stories?
Why would this be a good idea?
* Narratives resonate with people and have been used to convey ideas for 1000s of years
* It would be low cost and fun
* Using voting on this forum there is the same risk of “bad posts” as for any other post
How could it work?
* Stories submitted under a tag on the EA forum.
* Rated by upvotes
* Max 5000 words (I made this up, dispute it in the comments)
* If someone wants to give a reward, then there could be a prize for the highest rated
* If there is a lot of interest/quality they could be collated and even published
* Since it would be measured by upvotes it seems unlikely a destructive story would be highly rated (or as likely as any other destructive post on the forum)
Upvote if you think it’s a good idea. If it gets more than 40 karma I’ll write one.
Being able to agree and disagreevote on posts feels like it might be great. Props to the forum team.
Looking forward to how it plays out! LessWrong made the intentional decision to not do it, because I thought posts are too large and have too many claims and agreement/disagreement didn’t really have much natural grounding any more, but we’ll see how it goes. I am glad to have two similar forums so we can see experiments like this play out.
My hope would be that it would allow people to decouple the quality of the post and whether they agree with it or not. Hopefully people could even feel better about upvoting posts they disagreed with (although based on comments that may be optimistic).
Perhaps combined with a possible tweak in what upvoting means (as mentioned by a few people), someone mentioned we could change “how much do you like this overall” to something that moves away form basing the reaction on an emotions. I think someone suggested something like “Do you think this post adds value” (That’s just a real hack at the alternative, I’m sure there are far better ones)
I think another option is to have reactions on a paragraph level. That would be interesting.
I guess African, Indian and Chinese voices are underrepresented in the AI Governance discussion. And in the unlikely case we die, we all die and it think it’s weird that half the people who will die have noone loyal to them in the discussion.
We want AI that works for everyone and it seems likely you want people who can represent billions who aren’t currently with a loyal representative.
I’m actually more concerned about the underrepresentation of certain voices as it applies to potential adverse effects of AGI (or even near-AGI) on society that don’t involve all of us dying. In the everyone-dies scenario, I would at least be similarly situated to people from Africa, India, and China in terms of experiencing the exact same bad thing that happens. But there are potential non-fatal outcomes, like locking in current global power structures and values, that affect people from non-Western countries much differently (and more adversely) than they’d affect people like me.
Yeah, in a scenario with “nation-controlled” AGI, it’s hard to see people from the non-victor sides not ending up (at least) as second-class citizens—for a long time. The fear/lack of guarantee of not ending up like this makes cooperation on safety more difficult, and the fear also kind of makes sense? Great if governance people manage to find a way to alleviate that fear—if it’s even possible. Heck, even allies of the leading state might be worried—doesn’t feel too good to end up as a vassal state. (Added later (2023-06-02): It may be a question that comes up as AGI discussions become mainstream.)
Wouldn’t rule out both American and Chinese outside of respective allied territory being caught in the crossfire of a US-China AI race.
Political polarization on both sides in the US is also very scary.
Sorry, yes. I think that ideally we don’t all die. And in those situations voices loyal to representative groups seem even more important.
This strikes me as another variation of “EA has a diversity problem.” Good to keep in mind that is it not just about progressive notions of inclusivity, though. There may be VERY significant consequences for the people in vast swaths of the world if a tiny group of people make decisions for all of humanity. But yeah, I also feel that it is a super weird aspect of the anarchic system (in the international relations sense of anarchy) that most of the people alive today have no one representing their interests.
It also seems to echo consistent critiques of development aid not including people in decision-making (along the lines of Ivan Illich’s To Hell with Good Intentions, or more general post-colonial narratives).
What means “have noone loyal to them” and “with a loyal representative”? Are you talking about the indian government? Or are you talking about EAs talking part in discussions such as yourself? (In which case, who are you loyal to?)
I think that’s part of the problem.
Who is loyal to the chinese people?
And I don’t think I’m good here. I think I try to be loyal to them, but I don’t know what the chinese people want and I think if I try and guess I’ll get it wrong in some key areas.
I’m reminded of when givewell?? asked recipients how they would trade money for children’s lives and they really fucking loved saving children’s lives. If we are doing things for others benefit we should take their weightings into account.
I notice we are great at discussing stuff but not great at coming to conclusions.
I wish the forum had a better setting for “I wrote this post and maybe people will find it interesting but I don’t want it on the front page unless they do because that feels pretenious”
edited
Give Directly has a President (Rory Stewart) paid $600k, and is hiring a Managing Director. I originally thought they had several other similar roles (because I looked on the website) but I talked to them an seemingly that is not the case. Below is the tweet that tipped me off but I think it is just mistaken.
Once could still take issue with the $600k (though I don’t really)
https://twitter.com/carolinefiennes/status/1600067781226950656?s=20&t=wlF4gg_MsdIKX59Qqdvm1w
Seems in line with CEO pay for US nonprofits with >100M in budget, at least when I spot check random charities near the end of this list.
I feel confused about the president/CEO distinction however.
Is it Normal? Uncertain
A ore important question for me though, is to ask Is it right? and Is it a good idea? I think the answer to both of these is a resounding no for a number of reasons.
- (For GiveDirectly). The premise of your entire organisation is that dollars do more good in the hands of the poor than the rich. For your organisation to then spend a huge amount of money on a CEO is arguably going against what the organisation stands for.
- Bad press for the organisation. After SBF and the Abbey etc. this shouldn’t take too much explaining
- Might reflect badly on the organisation when applying for grants
- (My personal gripe) what kind of person working to help the poorest people on earth could live with themselves earning so much given what their organisation. You have become part of the industrial aid complex which makes inequality worse—the kind of thing givedirectly almost seemed to be riling against in the first place.
High NGO salaries make me angry though, so maybe this is a bit too ranty ;).
The expectation of low salaries is one of the biggest problems hobbling the nonprofit sector. It makes it incredibly difficult to hire people of the caliber you need to run a high-performance organization.
This is classic Copenhagen interpretation of ethics stuff. Someone making that kind of money as a nonprofit CEO could almost always make much more money in the private sector while receiving significantly less grief. You’re creating incentives that get us worse nonprofits and a worse world.
Thanks Will
I’m interested in the evidence behind the idea that low salaries hobble the nonprofit sector. Is there research to support this outside of the for-profit market? I’m unconvinced that higher salaries (past a certain point) would lead to a better calibre of employee in the NGO field. I would have assumed that the attractiveness of running an effective and high profile org like Give directly might be enough to attract amazing candidates regardless of salary. It would be amazing to do AB testing, or even a RCT on this front but I would be imagine that would be hard to convince organisations to get involved in this research. Personally I think there are enough great leaders out there (especially for an org like givedirectly) who would happily work on 100,000 a year. the salary difference between 100k and 600k might make barely any difference at all in the pool of candidates you attract—but of course this is conjecture.
On the moral side of things, there’s a difference between taking a healthy salary of 100,000 dollars a year—enough to be in the top 0.5% of earners in the world and taking $600,000. We’re not looking for a masochist to run the best orgs, just someone who appreciates the moral weight of that degree of inequality within an organisation that purports to be supporting the world’s poorest.
If earning 600,000 rather than 100,000 is a strong incentive for a person running a non-profit, I probably don’t want them in charge. First I think that this kind of salary might lead someone to be less efficient with spending both in the American base and in distant company operations. NGOs need lean operations as they rely on year to year donations which are never secure—NGOs can’t expect to continue high growth rates of funding year on year like good businesses. Also leaders on high pay are probably likely to feel morally obligated to pay other admin staff more because of their own salary, rather than maximising the amount of money given directly to the poorest.
It may also affect the whole ethos of the organisation and respect from other staff especially in places like Kenya where staff will be getting paid far far less. Imagine you are earning a decent local wage in Kenya, which is still 100x less than your boss in America? Motivating yourself to do your job well becomes difficult. I’ve seen this personally in organisations here in Uganda where Western bosses earn far higher salaries. Local staff see the injustice within their own system then can’t get on board with the vision of the organisation. This kind of salary inequality is likely to affect organisational morale.
I’ve always thought the salaries of chief executives of various countries may provide an external vantage point on the reasonableness of charity-executive salaries. They tend to top out at 400K USD: https://en.wikipedia.org/wiki/List_of_salaries_of_heads_of_state_and_government.
At least in the US, Cabinet members, judges, senior career civil servants, and state governors tend to make on average half that. I have heard of some people who would be good federal judges, mainly at the district-court level, turning down nominations because they couldn’t stomach the 85-90% pay cut from being a big-firm partner. The quality of some of these senior political and judicial leaders varies . . . but I don’t think money is the real limiting factor in US leader quality. That is, I don’t get the sense that the US would generally have better leaders if the salaries at the top were doubled or tripled.
The non-salary “benefits” and costs of working at high levels in the government are different from the non-salary “benefits” and costs of working for a non-profit. But I think they differ in ways that some people would prefer the former over the latter (or vice versa).
In other words, a belief that charities should offer their senior leaders a significantly higher salary than senior leaders in world and regional governments potentially implies that almost every developed democracy in the world should be paying their senior leaders and civil servants significantly more than they do. Maybe they should?
I don’t have a firm opinion on salaries for charitable senior officials, but I think Nick is right insofar as high salaries can cause donor disillusionment and loss of morale within the organization. So while I’m willing to start with a presumption that government-comparable salaries for mid-level+ staff are appropriate (because they have been tested by the crucilble of the democratic process), it’s reasonable to ask for evidence that significantly higher salaries improve organizational effectiveness for non-profits.
I talked to someone there and they pointed out that Stewart hasn’t taken his salary yet, so it’s not clear that he will take all of it.
Thanks Nathan. That’s a nice potential gesture (and potentially a retrospective PR move). But this doesn’t help answer all my critcisms ;).
I dislike the framing of “considerable” and “high engagement” on the EA survey.
This copied from the survey:
To me “considerably engaged” EA people are doing a lot. Their median donation is $1000. They have “engaged extensively” and “often consider the principles of effective altruism” To me, they seem “highly engaged” in EA.
I’ve met people who are giving quite a lot of money, who have perhaps tried applied to EA jobs and not succeeded. And yet they are not allowed to consider themselves “highly engaged”. I guess this leads to them feeling disillusioned. It risks creating a privileged class of those who can get jobs at EA orgs and those who can’t. What about those who think they are doing an EA job but it’s not at an EA-aligned organisation? It seems wrong to me that they can’t consider themselves highly engaged.
I would prefer:
“Considerable engagement” → “high engagement”
“High engagement” → “maximum engagement”
And I would prefer the text read as follows:
High (previously considerable) engagement: I’ve engaged extensively with effective altruism content (e.g. attending an EA Global conference, applying for career coaching, or organizing an EA meetup). I often consider the principles of effective altruism when I make decisions about my career or charitable donations, but they are not the biggest factor to me.
Maximum (previously high) engagement: I am deeply involved in the effective altruism community. Perhaps I have chosen my career using the principles of effective altruism. I might earn to give or helping to lead an EA group or working at an EA-aligned organization. Maybe I tried for several years to gain such a career but have since moved to a plan B or Z. Regardless, I make my career or resource decisions on a primarily effective altruist basis.
It’s a bit rough, but I think it allows for people who are earning to give or deeply involved with the community to say they are maximally engaged and that those who are highly engaged to put a 4 without shame. Feel free to put your own drafts in the comments.
Currently, the idea that someone could be earning to give, donating $10,000s per year and perhaps still not consider themself highly engaged in EA seems like a flaw.
I think this is part of a more general problem that people say things like “I’m not totally EA” when they donate 1%+ of their income and are trying hard. Why create a club where so many are insecure about their membership.
I can’t speak for everyone, but if you donate even 1% of your income to charities which you think are effective, you’re EA in my book.
It is one of my deepest hopes, and one of my goals for my own work at CEA, that people who try hard and donate feel like they are certainly, absolutely a part of the movement. I think this is determined by lots of things, including:
The existence of good public conversations about donations, cause prioritization, etc., where anyone can contribute
The frequency of interesting news and stories about EA-related initiatives that make people feel happy about the progress their “team” is making
I hope that the EA Survey’s categories are a tiny speck compared to these.
Thanks for providing a detailed suggestion to go with this critique!
While I’m part of the team that puts together the EA Survey, I’m only answering for myself here.
People can consider themselves anything they want! It’s okay! You’re allowed! I hope that a single question on the survey isn’t causing major changes to how people self-identify. If this is happening, it implies a side-effect the Survey wasn’t meant to have.
Have you met people who specifically cited the survey (or some other place the question has showed up — I think CEA might have used it before?) as a source of disillusionment?
I’m not sure I understand why people would so strongly prefer being in a “highly engaged” category vs. a “considerably engaged” category if those categories occupy the same relative position on a list. Especially since people don’t use that language to describe themselves, in my experience. But I could easily be missing something.
I want someone who earns-to-give (at any salary) to feel comfortable saying “EA is a big part of my life, and I’m closely involved in the community”. But I don’t think this should determine how the EA Survey splits up its categories on this question, and vice-versa.
*****
One change I’d happily make would be changing “EA-aligned organization” to “impact-focused career” or something like that. But I do think it’s reasonable for the survey to be able to analyze the small group of people whose professional lives are closely tied to the movement, and who spend thousands of hours per year on EA-related work rather than hundreds.
(Similarly, in a survey about the climate movement, it would seem reasonable to have one answer aimed at full-time paid employees and one answer aimed at extremely active volunteers/donors. Both of those groups are obviously critical to the movement, but their answers have different implications.)
Earning-to-give is a tricky category. I think it’s a matter of degree, like the difference between “involved volunteer/group member” and “full-time employee/group organizer”. Someone who spends ~50 hours/year trying to allocate $10,000 is doing something extraordinary with their life, and EA having a big community of people like this is excellent, but I’d still like to be able to separate “active members of Giving What We Can” from “the few dozen people who do something like full-time grantmaking or employ people to do this for them”.
*****
Put another way: Before I joined CEA, I was an active GWWC member, read a lot of EA-related articles, did some contract work for MIRI/CFAR, and went to my local EA meetups. I’d been rejected from multiple EA roles and decided to pursue another path (I didn’t think it was likely I’d get an EA job until months later).
I was pretty engaged at this point, but the nature of my engagement now that I work for CEA is qualitatively different. The opinions of Aaron!2018 should mean something different to community leaders than the opinions of Aaron!2021 — they aren’t necessarily “less important” (I think Aaron!2018 would have a better perspective on certain issues than I do now, blinded as I am by constant exposure to everything), but they are “different”.
*****
All that said, maybe the right answer is to do away with this question and create clusters of respondents who fit certain criteria, after the fact, rather than having people self-define. e.g. “if two of A, B, or C are true, choose category X”.
It’s possible that this question is mean to measure something about non-monetary contribution size, not engagement. In which case, say that.
Call it, “non-financial contribution” and put 4 as ” I volunteer more than X hours” and 5 as “I work on a cause area directly or have taken a lower than salary rate jobs”.
Seems worth considering that
A) EA has a number of characteristic of a “High Demand Group” (cult). This is a red flag and you should wrestle with it yourself.
B) Many of the “Sort of”s are peer pressure. You don’t have to do these things. And if you don’t want to, don’t!
In what sense is it “sort of” true that members need to get permission from leaders to date, change jobs, or marry?
I think there is starting to be social pressure on who to date. And there has been social pressure for which jobs to take for a while.
I think that one’s a reach, tbh.
(I also think the one about using guilt to control is a stretch.)
My call: EA gets 3.9 out of 14 possible cult points.
No
Yes (+1)
Partial (+0.8)
No
No
No
Partial (+0.5)
Very weak (+0.1)
No
Partial (+0.5)
No
No
Yes (+1)
No
I went through and got 5.2/14 cult points:
I think this is nonzero, I think subsets of the community do display “excessively zealous” commitment to a leader given “What would SBF do” stickers. Outside views of LW (or at least older versions of it would probably worry that this was an EY cult.
+0.1
+1
+1
I think this is probably partial, given claims in this post, and positive-agreevote concerns here (though clearly all of the agree voters might be wrong).
+0.2
No
No (outside of Leverage research, perhaps)
Yes for elitist, and yes for saving humanity.
+0.5
+0.1
No
+1
No (if we only consider “intentional” inducement
+0.5
+0.8
No
I think you may have very high standards? By these standards, I don’t think there are any communities at all that would score 0 here.
~
I was not aware of “What would SBF do” stickers. Hopefully those people feel really dumb now. I definitely know about EY hero worship but I was going to count that towards a separate rationalist/LW cult count instead of the EA cult count.
I think where we differ is that I’m not making a comparison of whether EA is worse than this compared to other groups, if every group scores in the range of 0.5-1 I’ll still score 0.5 as 0.5, and not scale 0.5 down to 0 and 0.75 down to 0.5. Maybe that’s the wrong way to approach it but I think the least culty organization can still have cult-like tendencies, instead of being 0 by definition.
Also if it’s true that someone working at GPI was facing these pressures from “senior scholars in the field”, then that does seem like reason for others to worry. There also has been a lot of discussion on the forum about the types of critiques that seem like they are acceptable and the ones that aren’t etc. Your colleague also seems to believe this is a concern, for example, so I’m currently inclined to think that 0.2 is pretty reasonable and I don’t think I should update much based on your comment-but happy for more pushback!
I think
has to get more than 0.2, right? Being elitist and on a special mission to save humanity is a concerningly good descriptor of at least a decent chunk of EA.
Ok updated to 0.5. I think “the leader is considered the Messiah or an avatar” being false is fairly important.
>> The group teaches or implies that its supposedly exalted ends justify means that members would have considered unethical before joining the group (for example: collecting money for bogus charities).
> Partial (+0.5)
This seems too high to me, I think 0.25 at most. We’re pretty strong on “the ends don’t justify the means”.
>>The leadership induces guilt feelings in members in order to control them.
> No
This on the other hand deserves at least 0.25...
I don’t think it makes sense to say that the group is “preoccupied with making money”. I expect that there’s been less focus on this in EA than in other groups, although not necessarily due to any virtue, but rather because of how lucky we have been in having access to funding.
Nuclear risk is in the news. I hope:
- if you are an expert on nuclear risk, you are shopping around for interviews and comment
- if you are an EA org that talks about nuclear risk, you are going to publish at least one article on how the current crisis relates to nuclear risk or find an article that you like and share it
- if you are an EA aligned journalist, you are looking to write an article on nuclear risk and concrete actions we can take to reduce it
Factional infighting
[epistemic status—low, probably some element are wrong]
tl;dr
- communities have a range of dispute resolution mechanisms, whether voting to public conflict to some kind of civil war
- some of these are much better than others
- EA has disputes and resources and it seems likely that there will be a high profile conflict at some point
- What mechanisms could we put in place to handle that conflict constructively and in a positive sum way?
When a community grows as powerful as EA is, there can be disagreements about resource allocation. In EA these are likely to be significant.
There are EAs who think that the most effective cause area is AI safety. There are EAs who think it’s global dev. These people do not agree, though there can be ways to coordinate between them.
The spat between GiveWell and GiveDirectly is the beginning of this. Once there are disagreements on the scale of $10millions then some of that is gonna be sorted out over twitter. People may badmouth each other and damage the reputation of EA as a whole.
The way around this is to make solving problems easier than creating them. As in a political coalition, people need to have more benefits being inside the movement than outside it.
The EA forum already does good work here, allowing everyone to upvote posts they like.
Here are some other power sharing mechanisms:
- a fund where people can either vote on cause areas, expected value, or moral weights, so that it moves based on the community’s values as a whole
- a focus on “we disagree, but we respect” looking at how different parts of the community disagree but respect the effort of others
- a clear mechanism of bargains, where animal EAs donate to longtermist charities in exchange for longtermists to go vegan and vice versa
- some videos from key figures from different parts discussing their disagreements in a kind and human way
- “I would change if” a series of posts from people saying what would make them work on different cause areas. How cheap would chicken welfare have to be before Yudkowsky moved to work on it? How cheap would AI safety had to be before it became Singer’s key talking point
Call me a pessimist, but I can’t see how a community managing $50Bn across deeply dividided prioritites will stay chummy without proper dispute resolution systems. And I suggest we should start building them now.
By and large I think this aspect is going surprisingly well, largely because people have adopted a “disagree but respect” ethos.
I’m a bit unsure of such a fund—I guess that would pit different cause areas against each other more directly, which could be a conflict framing.
Regarding the mechanism of bargains, it’s a bit unclear to me what problem that solves.
EA infrastructure idea: Best Public Forecaster Award
Gather all public forecasting track records
Present them in an easily navigable form
Award prizes one for best brier score of forecasts resolving in the last year
If this gets more than 20 karma, I’ll write a full post on it. This is rough.
Questions that come to mind
Where would we find these forecasts
To begin with I would look at those with public records:
Scott Alexander
Bryan Caplan
Matthew Yglesias
Many such cases
Beyond these, one could build a community around finding forecasts of public figures. Alternatively, I guess GPT-3 has a good shot of being able to turn verbal forecasts into data which could then be checked.
What’s the impact
I’m only gonna sketch my argument here. As above, if this gets 20 karma I’ll write a full post (but only upvote if it’s good, let’s not waste any of our time).
We seem to think forecasting improves the accuracy of commentator
If we could build a high-status award for forecasting, more commentators would hear about it and it would serve as a nudge for others to make their forecasts more visible
I am confident this would lead to better commentary (this seems arrogant, but honestly the people I know who forecast more are more epistemically humble—I think celebrities could really benefit from more humility about their predictions)
Better commentary leads to better outcomes. Effective Altruism implicitly holds that many have priority orderings that don’t match reality. The world at large underrates the best charities, the chance of biorisk, etc. Journalism which was more accurate would be more accurate about these things too which would be a massive win
Wouldn’t the winners just be superforecasters
Not currently. I don’t think it’s too hard to make pretty robust boundaries on what a public figure is. Most superforecasters are not well enough known (and sorry to the 5 EAs I can count in metaculus’ top 50). But Yglesias is well known enough. Scott Alexander, I’m less sure but I think we could come up with some minimum amount of hits, followers, etc for someone to be eligible.
How much resource would this take
Depends on a couple of things (I have pulled these numbers out of thin air) please criticise them:
Who is giving this award its prestige? If it’s a lot of money, fine. If it’s an existing org, then it’s cheaper ( 0 - $50k)
How deeply are we looking. I think you could pay someone $50k to find say 100 public sets of forecasts and maybe another $10k to make a nice website. If you want to scrape twitter using GPT3 or crowdsource that’s maybe another $50-100k
Is there an award ceremony? If so I imagine that costs as much as a wedding so maybe $10k
That looks like $60 - $220k
If this failed, why did it fail?
It got embroiled in controversy over who was included
It was attached to some existing EA org and looked badly for them
It became a niche award that no one changed their behaviour based on
I’ve been musing about some critiques of EA and one I like is “what’s the biggest thing that we are missing”
In general, I don’t think we are missing things (lol) but here are my top picks:
It seems possible that we reach out to sciency tech people because they are most similar to us. While this may genuinely be the cheapest way to get people now there may be costs to the community in terms of diversity of thought (most Sci/tech people are more similar than the general population)
I’m glad to see more outreach to people in developing nations
It seems obvious that science/tech people have the most to contribute to AI safety, but.. maybe not?
Also science/tech people have a particular racial/gender makeup and there is the hidden assumption that there isn’t an effective way to reach a more broader group. (Personally I hope that a load of resources in India, Nigeria, Brazil, etc will go some way here, but I dunno still feels like a legitimate question)
People are scared what the future might look like if it is only in the view of MacAskill/Bostrom/SBF. Yeah, in fact my (poor) model of MacAskill is scared of this too. But we can surface more that we wish we had a larger group making these decisions too.
We could build better ways of outsiders feeding into decisionmaking. I read a piece about the effectiveness of community vegan meals being underrated in EA. Now I’m not saying it should be funded, but I was surprsied to read some of these conferences are 5000+ people (iirc). Maybe that genuinely is oversight. But it’s really hard for high signal information to get to decisionmakers. That really is a problem we could work on. If it’s hard for people who speak EA-ese, how much much harder is it for those who speak different community languages, whose concepts seem frustrating to us.
More likely to me is a scenario of diminishing returns. Ie, tech people might be the most important to first order, but there’s already a lot of brilliant tech people working on the problem, so one more won’t make much of a difference. Whereas a few brilliant policy people could devise a regulatory scheme that penalises reckless AI deployment, etc, making more differences on a marginal basis.
+1 for policy people
I would like to see posts give you more karma than comments (which would hit me hard). Seems like a highly upvtoed post is waaaaay more valuable than 3 upvoted comments on that post, but it’s pretty often the latter gives more karma than the former.
Sometimes comments are better, but I think I agree they shouldn’t be worth exactly the same.
People might also have a lower bar for upvoting comments.
There you go, 3 mana. Easy peasy.
simplest first step would be just showing both separately like Reddit
You can see them separately, but it’s how they combine that matters.
I know you can figure them out, but I don’t see them presented separately on users pages. Am I missing something? Is it shown on the website somewhere?
They aren’t currently shown separately anywhere. I added it to the ForumMagnum feature-ideas repo but not sure whether we’ll wind up doing it.
They are shown separately here: https://eaforum.issarice.com/userlist?sort=karma
Is there a link to vote to show interest?
There is no EA “scene” on twitter.
For good or ill, while there are posters on twitter who talk about EA, there isn’t a “scene” (a space where people use loads of EA jargon and assume everyone is EA) or at least not that I’ve seen.
This surprised me.
UK government will pay for organisations to hire 18-24 year olds who are currently unemployed, for 6 months. This includes minimum wage and national insurance.
I imagine many EA orgs are people constrained rather than funding constrained but it might be worth it.
And here is a data science org which will train them as well https://twitter.com/John_Sandall/status/1315702046440534017
Note: applications have to be for 30 jobs, but you can apply over a number of organisations or alongside a local authority etc.
https://www.gov.uk/government/collections/kickstart-scheme
Is there a way to sort shortform posts?
EA Book discount codes.
tl;dr EA books have a positive externality. The response should be to subsidise them
If EA thinks that certain books (doing good better, the precipice) have greater benefits than they seem, they could subsidise them.
There could be an EA website which has amazon coupons for EA books so that you can get them more cheaply if buying for a friend, or advertise said coupon to your friends to encourage them to buy the book.
From 5 mins of research the current best way would be for a group to buys EA books and sell them at the list price but provide coupons as here—https://www.passionintopaychecks.com/how-to-create-single-use-amazon-coupons-promo-codes/
Alternatively, you could just sell them at the coupon price.
I think people have been taking up the model of open sourcing books (well, making them free). This has been done for [The Life You can Save](https://en.wikipedia.org/wiki/The_Life_You_Can_Save) and [Moral Uncertainty](https://www.williammacaskill.com/info-moral-uncertainty).
I think this could cost $50,000 to $300,000 or so depending on when this is done and how popular it is expected to be, but I expect it to be often worth it.
Seems that the Ebook/audiobook is free. Is that correct?
I imagine being able to give a free physcial copy would have more impact.
Yes, it’s free.
I like this idea and think it’s worth you taking further. My initial reactions are:
Getting more EA books into peoples hands seems great and worth much more per book than the cost of a book.
I don’t know how much of a bottleneck the price of a book is to buying them for friends/club members. I know EA Oxford has given away many books, I’ve also bought several for friends (and one famous person I contacted on instagram as a long shot who actually replied.
I’d therefore be interested in something which aimed to establish whether making books cheaper was a better or worse idea than just encouraging people to gift them.
John Behar/TLYCS probably have good thoughts on this.
Do you have any thoughts as to what the next step would be. It’s not obvious to me what you’d do to research the impact of this.
Perhaps have a questionnaire asking people how many people they’d give books to at different prices. Do we know the likelihood of people reading a book they are given?
Being open minded and curious is different from holding that as part of my identity.
Perhaps I never reach it. But it seems to me that “we are open minded people so we probably behave open mindedly” is false.
Or more specifically, I think that it’s good that EAs want to be open minded, but I’m not sure that we are purely because we listen graciously, run criticism contests, talk about cruxes.
The problem is the problem. And being open minded requires being open to changing one’s mind in difficult or set situations. And I don’t have a way that’s guaranteed to get us over that line.
Someone told me they don’t bet as a matter of principle. And that this means EA/Rats take their opinions less seriously as a result. Some thoughts
I respect individual EAs preferences. I regularly tell friends to do things they are excited about, to look after themselves, etc etc. If you don’t want to do something but feel you ought to, maybe think about why, but I will support you not doing it. If you have a blanket ban on gambling, fair enough. You are allowed to not do things because you don’t want to
Gambling is addictive, if you have a problem with it, don’t do it
Betting is a useful tool. I just do take opinions a bit less seriously if people don’t do the simple thing to put their money where their mouths are. And so a blanket ban is a slight cost. Imagine if I said I had a blanket ban on double cruxxing, or giving to animal welfare charities. It’s a thing I am allowed to do, but it does just seem a bit worse
To me, this seems like something else is actually going on. Perhaps it feels like “will you bet on it” is a way that certain people can twist my arm in a way that makes me feel uncomfortable? Perhaps the people who say this have been cruel to me in the past. I don’t know, but I sense there is something else going on. If you don’t bet as a blanket policy, could you tell me why?
I don’t bet because I feel it’s a slippery slope. I also strongly dislike how opinions and debates in EA are monetised, as this strengthens even more the neoliberal vibe EA already has, so my drive for refraining to do this in EA is stronger than outside.
Edit: and I too have gotten dismissed by EAs for it in the past.
I don’t want you to do something you don’t want to.
A slippery slope to what?
To gambling on anything else and taking an actual financial risk.
Yeah, I guess if you think there is a risk of gambling addiction, don’t do it.
But I don’t know that that’s a risk for many.
Also I think many of us take a financial risk by being involved in EA. We are making big financial choices.
There’s a difference between using money to help others and using it for betting?
Yes obviously, but not in the sense that you are investing resources.
Is there a difference between the financial risk of a bet and of a standard investment? Not really, no.
I don’t bet because it’s not a way to actually make money given the frictional costs to set it up, including my own ignorance about the proper procedure and having to remember it and keep enough capital for it. Ironically, people who are betting in this subculture are usually cargo culting the idea of wealth-maximization with the aesthetics of betting with the implicit assumption that the stakes of actual money are enough to lead to more correct beliefs when following the incentives really means not betting at all. If convenient, universal prediction markets weren’t regulated into nonexistence then I would sing a different tune.
I guess I do think the “wrong beliefs should cost you” is a lot of the gains. I guess I also think that bets should be able to be at scale of the disagreement is important, but I think that’s a much more niche view.
There are a number of possible reasons that the individual might not want to talk about publicly:
A concern about gambling being potentially addictive for them;
Being relatively risk-averse in their personal capacity (and/or believing that their risk tolerance is better deployed for more meaningful things than random bets);
Being more financially constrained than their would-be counterparts; and
Awareness of, and discomfort with, the increased power the betting norm could give people with more money.
On the third point: the bet amount that would be seen as meaningful will vary based on the person’s individual circumstances. It is emotionally tough to say—no, I don’t have much money, $10 (or whatever) would be a meaningful bet for me even though it might take $100 (or whatever) to be meaningful to you.
On the fourth point: if you have more financial resources, you can feel freer with your bets while other people need to be more constrained. That gives you more access to bet-offers as a rhetorical tool to promote your positions than people with fewer resources. It’s understandable that people with fewer resources might see that as a financial bludgeon, even if not intended as such.
I think the first one is good, the not so much.
I think there is something else going on here.
I have yet to see anyone in the EA/rat world make a bet for sums that matter, so I really don’t take these bets very seriously. They also aren’t a great way to uncover people’s true probabilities because if you are betting for money that matters you are obviously incentivized to try to negotiate what you think are the worst possible odds for the person on the other side that they might be dumb enough to accept.
kind of fair. I’m pretty sure I’ve seen $1000s
If anything… I probably take people less seriously if they do bet (not saying that’s good or bad, but just being honest), especially if there’s a bookmaker/platform taking a cut.
I think this is more about 1-1 bets.
I guess it depends if they win or lose on average. I still think knowing I barely win is useful self knowledge.
I think if I knew that I could trade “we all obey some slightly restrictive set of romance norms” for “EA becomes 50% women in the next 5 years” then that’s a trade I would advise we take.
That’s a big if. But seems trivially like the right thing to do—women do useful work and we should want more of them involved.
To say the unpopular reverse statement, if I knew that such a set of norms wouldn’t improve wellbeing in some average of women in EA and EA as a whole then I wouldn’t take the trade.
Seems worth acknowledging there are right answers here, if only we knew the outcomes of our decisions.
In defence of Will MacAskill and Nick Beckstead staying on the board of EVF
While I’ve publicly said that on priors they should be removed unless we hear arguments otherwise, I was kind of expecting someone to make those arguments. If noone will, I will.
MacAskill
MacAskill is very clever, personally kind, is a superlative networker and communicator. Imo he oversold SBF, but I guess I’d do much worse in his place. It seems to me that we should want people who have made mistakes and learned from them. Seems many EA orgs would be glad to have someone like him on the board. If anything, the question is if we don’t want too many people duplicated across EA orgs (do we want this?) which board is it most valuable to have MacAskill on? I guess EVF?
Beckstead
Beckstead is, I sense, extremely clever (generally I find OpenPhil people to be powerhouses), personally kind. I guess I think that he dropped the ball on running FTXFF well—feels like had they hired more people to manage OPS they might have queried why money was going from strange accounts, but again I don’t know the particulars (though I want to give the benefit of the doubt here). But again, it was a complicated project and I guess he sensed that speed of ramp up was the proirity. In many world’s he’d have been right.
I guess perhaps the two of them seem to have pretty similar blindspots (kind intelligent academicish EAs who scaled things really fast) so perhaps it is worth only having one on the board. Maybe it’s worth having someone who can say “hmm that seems likely too odd or shifty to be worth us doing it”. But this isn’t as much of a knockdown argument.
Feels like there should be some kind of community discussion and research in the wake of FTX, especially if no leadership is gonna do it. But I don’t know how that discussion would have legitimacy. I’m okay at such things, but honestly tend to fuck them up somehow. Any ideas?
If I were king
Use the ideas from all the varous posts
Have a big google doc where anyone can add research and also put a comment for each idea and allow people to discuss
Then hold another post where we have a final vote on what should happen
then EA orgs can see at least what some kind of community concensus things
And we can see what each other think
I wrote a post on possible next steps but it got little engagement—unclear if it was a bad post or people just needed a break from the topic. On mobile, so not linking it—but it’s my only post besides shortform.
The problem as I see it is that the bulk of proposals are significantly underdeveloped, risking both applause light support and failure to update from those with skeptical priors. They are far too thin to expect leaders already dealing with the biggest legal, reputational, and fiscal crisis in EA history to do the early development work.
Thus, I wouldn’t credit a vote at this point as reflecting much more than a desire for a more detailed proposal. The problem is that it’s not reasonable to expect people to write more fleshed-out proposals for free without reason to believe the powers-that-be will adopt them.
I suggested paying people to write up a set of proposals and then voting on those. But that requires both funding and a way to winnow the proposals and select authors. I suggested modified quadratic funding as a theoretical ideal, but a jury of pro-reform posters as a more practical alternative. I thought that problem was manageable, but it is a problem. In particular, at the proposal-development stage, I didn’t want tactical voting by reform skeptics.
Strong +1 to paying people for writing concrete, actionable proposals with clear success criteria etc. - but I also think that DEI / reform is just really, really hard, and I expect relatively few people in the community to have 1) the expertise 2) the knowledge of deeper community dynamics / being able to know the current stsances on things.
(meta point: really appreciate your bio Jason!)
I really liked Nate’s post and hope there can be more like it in the future.
Let’s assume that the time article is right about the amount of sexual harassment in EA. How big a problem is this relative to other problems? If we spend $10mn on EAGs (a guess) how much should we spend if we could halve sexual harassment in the community.
The whole sexual harassment issue isn’t something that can be easily fixed with money I think. It’s more a project of changing norms and what’s acceptable within the EA community.
The issue is it seems like many folks at the top of orgs, especially in SF, have deeply divergent views from the normal day-to-day folks joining/hearing about EA. This is going to be a huge problem moving forward from a public relations standpoint IMO.
Money can’t fix everything, but it can help some stuff, like hiring professionals outside of EA and supporting survivors who fear retaliation if they choose to speak out.
I’ll sort of publicly flag that I sort of break the karma system. Like the way I like to post comments is little and often and this is just overpowered in getting karma.
eg I recently overtook Julia Wise and I’ve been on the forum for years less than anyone else.
I don’t really know how to solve this—maybe someone should just 1 time nuke my karma? But yeah it’s true.
Note that I don’t do this deliberately—it’s just how I like to post and I think it’s honestly better to split up ideas into separate comments. But boy is it good at getting karma. And soooo much easier than writing posts.
https://eaforum.issarice.com/userlist?sort=karma
To modify a joke I quite liked:
I wouldn’t worry too much about the karma system. If you’re worried about having undue power in the discourse, one thing I’ve internalized is to use the strong upvote/downvote buttons very sparingly (e.g. I only strong-upvoted one post in 2022 and I think I never strong-downvoted any post, other than obvious spam).
Hey Nathan,
thank you for the ranking list. :)
I don’t think you need to start with zero karma again. The karma system is not supposed to mean very much. It is heavily favoured in certain aspects than a true representation of your skill or trustworthiness as a user on this forum. It is more or less a xp bar for social situations and is an indicator that someone posts good content here.
Let’s look at an example:
Aaron Gertler retired from the forum, someone who is in high regard, which got a lot of attention and sympathy. Many people were interested in the post, and it’s an easy topic to participate. So many were scrolling down to the comments to write something nice and thanking him for his work.
JP Addison did so too. He works for CEA and as a developer for the forum. His comment got more Karma than any post he made so far.
Karma is used in many places with different concepts behind it. The sum of it gives you no clear information. What I would think in your case: you are an active member of the forum, participate positively with only one post with negative karma. You participated in the FTX crisis discussion, which was an opportunity to gain or lose significant amounts of karma, but you survived it, probably with a good score.
Internetpoints can make you feel fantastic, they are a system to motivate for social interaction and to follow the community norms (in positive and negative ways).
Your modesty suits you well, but there is no need to. Stand upwards. There will always be those with few points but really good content, and those who overshoot the gems by far with activity.
Does EA have a clearly denoted place for exit interviews? Like if someone who was previously very involved was leaving, is there a place they could say why?
The amount of content on the forum is pretty overwhelming at the moment and I wonder if there is a better way to sort it.
Question answers
When answering questions, I recommend people put each separate point as a separate answer. The karma ranking system is useful to see what people like/don’t like and having a whole load of answers together muddies the water.
EA global
1) Why is EA global space constrained? Why not just have a larger venue?
I assume there is a good reason for this which I don’t know.
2) It’s hard to invite friends to EA global. Is this deliberate?
I have a close friend who finds EA quite compelling. I figured I’d invite them to EA global. They were dissuaded by the fact they had to apply and that it would cost $400.
I know that’s not the actual price, but they didn’t know that. I reckon they might have turned up for a couple of talks. Now they probably won’t apply.
Is there no way that this event could be more welcoming or is that not the point?
Re 1) Is there a strong reason to believe that EA Global is constrained by physical space? My impression is that they try to optimize pretty hard to have a good crowd and for there to be a high density of high-quality connections to be formed there.
Re 2) I don’t think EA Global is the best way for newcomers to EA to learn about EA.
EDIT: To be clear, neither 1) nor 2) are necessarily endorsements of the choice to structure EA Global in this way, just an explanation of what I think CEA is optimizing for.
EDIT 2 2021/10/11: This explanation may be wrong, see Amy Labenz’s comment here.
Personal anecdote possibly relevant for 2): EA Global 2016 was my first EA event. Before going, I had lukewarm-ish feelings towards EA, due mostly to a combination of negative misconceptions and positive true-conceptions; I decided to go anyway somewhat on a whim, since it was right next to my hometown, and I noticed that Robin Hanson and Ed Boyden were speaking there (and I liked their academic work). The event was a huge positive update for me towards the movement, and I quickly became involved – and now I do direct EA work.
I’m not sure that a different introduction would have led to a similar outcome. The conversations and talks at EAG are just (as a general rule) much better than at local events, and reading books or online material also doesn’t strike me as naturally leading to being part of a community in the same way.
It’s possible my situation doesn’t generalizes to others (perhaps I’m unusual in some way, or perhaps 2021 is different from 2016 in a crucial way such that the “EAG-first” strategy used to make sense but doesn’t anymore), and there may be other costs with having more newcomers at EAG (eg diluting the population of people more familiar with EA concepts), but I also think it’s possible my situation does generalize and that we’d be better off nudging more newcomers to come to EAG.
Hi Nathan,
Thank you for bringing this up!
1) We’d like to have a larger capacity at EA Global, and we’ve been trying to increase the number of people who can attend. Unfortunately, this year it’s been particularly difficult; we had to roll over our contract with the venue from 2020 and we are unable to use the full capacity of the venue to reduce the risk from COVID. We’re really excited that we just managed to add 300 spots (increasing capacity to 800 people), and we’re hoping to have more capacity in 2022.
There will also be an opportunity for people around the world to participate in the event online. Virtual attendees will be able to enjoy live streamed content as well as networking opportunities with other virtual attendees. More details will be published on the EA Global website the week of October 11.
2) We try to have different events that are welcoming to people who are at different points in their EA engagement. For someone earlier in their exploration of EA, the EAGx conferences are going to be a better fit. From the EA Global website:
Effective altruism conferences are a good fit for anyone who is putting EA principles into action through their donations, volunteering, or career plans. All community members, new or experienced, are welcome to apply.
EA Global: London will be selecting for highly-engaged members of the community.
EAGxPrague (3-5 December) will be more suitable for those who have less experience with effective altruism.
We’ll have lots more EAGx events in 2022, including Boston, Oxford, Singapore, and Australia, as well as EA Globals in San Francisco and London as usual. We may add additional events to this plan. The dates for those events and any additional events will go up on eaglobal.org when they’re confirmed.
In the meantime, if your friend is interested in seeing some talks, they can check out hundreds of past EA Global talks on the CEA YouTube channel.
Thanks for taking the time to answer. That all makes sense.
This perception gap site would be a good form for learning and could be used in altruism. It reframes correcting biases as a fun prediction game.
https://perceptiongap.us/
It’s a site which gets you to guess what other political groups (republicans and democrats) think about issues.
Why is it good:
1) It gets people thinking and predicting. They are asked a clear question about other groups and have to answer it.
2) It updates views in a non-patronising way—it turns out dems and repubs are much less polarised than most people think (the stat they give is that people predict 50% of repubs hold extreme views, when actually it’s 30). But rather than yelling this, or an annoying listicle, it gets people’s consent and teachest something.
3) It builds consensus. If we are actually closer to those we disagree with than we think, perhaps we could work with them.
4) It gives quick feedback. People learn best when given feedback which is close to the action. In this case, people are rapidly rewarded for thoughts like “probably most of X group” are more similar to me that I first think.
Imagine:
What percentage of neocons want insitutional reform?
What % of libertarians want an end to factory farming?
What % of socialists want an increase in foreign direct aid?
Conlusion
If you want to change people’s minds, don’t tell them stuff, get them to guess trustworthy values as a cutesy game.
Clear benefits, diffuse harms
It is worth noting when systems introduce benefits in a few obvious ways but many small harms. An example is blocking housing. It benefits the neighbours a lot—they don’t have to have construction nearby—and the people who are harmed are just random marginal people who could have afforded a home but just can’t.
But these harms are real and should be tallied.
Much recent discussion in EA has suggested common sense risk reduction strategies which would stop clear bad behavior. Often we all agree on the clear bad behaviour.
But the risk reduction strategies would also often set norms against a range of greyer behaviour that the suggestors don’t engage in or doesn’t seem valuable to them. If you don’t work with your coworkers, then suggesting it be normed against, seems fine—it would make it hard for people to end up in weird living situations. But I know people who have loved living with coworkers. That’s a diffuse harm.
Mainly I think this involves acknowledging people are a lot weirder than you think. People want things I don’t expect them to want, they consent in business, housing and relationships to things I’d never expect them to. People are wild. And I think it’s worth there being bright lines against some kinds of behaviour that is bad or nearly always bad—I’d suggest dating your reports is ~ very unwise—but a lot is about human preferences and to understand that we need to elicit both wholesome and illicit preferences or consider harms that are diffuse.
Note that I’m not saying which way the balance of harms falls, but that both types should be counted.
I suggest there is waaaay to much to be on top of in EA and noone knows who is checking what. So some stuff goes unchecked. If there were a narrower set of “core things we study” then it seems more likely that those things would have been gone over by someone in detail and hence fewer errors in core facts.
One of the downsides of EA being so decentralized, I guess. I’m imagining an alternative history EA in which is was all AI alignment or it was all tropical disease prevention, and in those worlds the narrowing of “core things we study” would possibly result in more eyeballs on each thing.
I think we could still be better in this universe though no idea how.
It is frustrating that I cannot reply to comments from the notification menu. Seems like a natural thing to be able to do.
I think the EA forum wiki should allow longer and more informative articles. I think that it would get 5x traffic. So I’ve created a market to bet.
I think the wiki should be about summarising and synthesising articles on this forum.
- There are lots of great articles which will be rarely reread
- Many could do with more links to eachother and to other key peices
- Many could be better edited, combined etc
- The wiki could take all content and aim to turn it into a minimal viable form of itself
Sounds interesting. Can you flesh out a bit more what this should look like, in your view?
I think that the forum wiki should focus on taking chunks of article text and editing it, rather than pointing people to articles. So take all of the articles on global dev, squish them together or shorten them.
So there would be a page on “research debt” which would contain this article and also any more text that seemed relevant, but maybe without the introduction. Then a preface on how it links to other EA topics, a link to the original article and links to ways it interacts with other EA topics. It might turn out that that page had 3 or 4 articles squished into one or was broken into 3 or 4 pages. But like Wikipedia you could then link to “research debt” and someone could easily read it.
Thanks, makes sense. I’d be interested in, e.g. Pablo’s view.
If only we had tagging.
EA criticism
[Epistemic Status: low, I think this is probably wrong, but I would like to debug it publicly]
If I have a criticism of EA along Institutional Decision Making lines, it is this:
For a movement that wants to change how decisions get made, we should make those changes in our own organisations first.
Examples of good progress:
- prizes—EA orgs have offered prizes for innovation
- voting systems—it’s good that the forum is run on upvotes and that often I think EA uses the right tool for the job in terms of voting
Things I would like to see more of:
- an organisation listening to prediction markets/polls. If we believe nations should listen to forecasting can we make clearer which markets our orgs are looking and and listening to?
- an organisation run by prediction markets. The above but taking it further
- removing siloes in EA. If you have confidence to email random people it’s relatively easy to get stuff done, but can we lower the friction to allow good ideas to spread further?
- etc
It’s fine if we think these things will never work, but it seems weird to me that we think improvements would work elsewhere but that we don’t want them in our orgs. That’s like being NIMBY about our own suggested improvements.
Counterarguments
- these aren’t solutions people are actually arguing for. Yeah this is an okay point. But I think the seeds of them exist.
- prediction markets work in big orgs not small ones. Maybe, but isn’t it worth running one small inefficient organisation to try and learn the failure modes before we suggest this for nation states
EA twitter bots
A set of EA jobs twitter bots which each retweet a specific set of hashtags eg #AISafety #EAJob, #AnimalSuffering #EAJob, etc etc. Please don’t get hung up on these, we’d actually need to brainstorm the right hashtags.
You follow the bots and hear about the jobs.
Rather than using Facebook as a way to collect EA jobs we should use an airtable form
1) Individuals finding jobs could put all the details in, saving time for whoever would have to do this process at 80k time.
2) Airtable can post directly to facebook, so everyone would still see it https://community.airtable.com/t/posting-to-social-media-automatically/20987
3) Some people would find it quicker. Personally, I’d prefer an airtable form to inputting it to facebook manually every time.
Ideally we should find websites which often publish useful jobs and then scrape them regularly.
It would be good to easily be able to export jobs from the EA job board.
I suggest at some stage having up and downvoting of jobs would be useful.
Does anyone know people working on reforming the academic publishing process?
Coronavirus has caused journalists to look for scientific sources. There are no journal articles because of the lag time. So they have gone to preprint servers like bioRxiv (pronounced bio-archive). These servers are not peer reviewed so some articles are of low quality. So people have gone to twitter asking for experts to review the papers.
https://twitter.com/ryneches/status/1223439143503482880?s=19
This is effectively a new academic publishing paradigm. If there were support for good papers (somehow) you would have the key elements of a new, perhaps better system.
Some thoughts here too: http://physicsbuzz.physicscentral.com/2012/08/risks-and-rewards-of-arxiv-reporting.html?m=1
With Coronavirus providing a lot of impetus for change, those working in this area could find this an important time to increase visibility of their work.
HaukeHillebrandt has recommended supporting Prof Chris Chambers to do this: https://lets-fund.org/better-science/
It has an emotional impact on me to note that FTX claims are now trading at 50%. This means that on expectation, people are gonna get about half of what their assets were worth, had they help them until this time.
I don’t really understand whether it should change the way we understand the situation, but I think a lot of people’s life savings were wrapped up here and half is a lot better than nothing.
src: https://www.bloomberg.com/news/articles/2023-10-25/ftx-claims-rise-after-potential-bidders-for-shuttered-exchange-emerge
I am not confident on the reasons why this is, but I think it’s because Anthropic and the cryptocurrency Solana are now trading a lot higher. My last memory (bad do not trust) is that FTX has about 11bn in debt against 4bn in assets. I think Anthropic and the Sol they hold have both gone up by about a billion since then.
I dunno folks, but I hope people get their money back—and I know that includes some of you.
We are good at discussion but bad at finding the new thing to update to.
Look at the recent Happier Lives Institute discussion; https://forum.effectivealtruism.org/posts/g4QWGj3JFLiKRyxZe/the-happier-lives-institute-is-funding-constrained-and-needs
Lots of discussion, a reasonable amount of new information, but what should our final update be:
Have HLI acted fine or badly?
Is there a pattern of misquoting and bad scholarship?
Have global health orgs in general moved towards Self-reported WellBeing (SWB) as a way to measure interventions?
Has HLI generally done good/cost effective work?
I think that the forum comments model is very poor at this. After all, if there were widespread agreement (as I think there could be) then I think that would be a load of all our minds. We could have a discussion once and then not need to have it again.
As it is, I’m sure many people have taken away different things from this and I we’ll probably discsuss it again the next time the Happier Lives Institute or StrongMinds posts to the forum and I guess there has been some more bad blood created in the meantime.
Consensus is good and we don’t even try to reach it after big discussions.
It is just really hard to write comments that challenge without seeming to attack people. Anyone got any tips?
If you’re commenting on a post, it helps to start off with points of agreement and genuine compliments about things you liked. Try to be honest and non-patronizing: a comment where the only good thing you say is “your english is very good” will not be taken well, or a statement that “we both agree that murder is bad”. And don’t overthink it, a simple “great post” (if honest) is never unappreciated.
Another point is that the forum tends to have a problem with “nitpicking”, where the core points of a post are ignored in favor of pointing out minor, unimportant errors. Try to engage with the core points of an argument, or if you are pointing out a small error, preface it with “this is a minor nitpick”, and put it at the end of your comment.
So a criticism would look like:
“Very interesting post! I think X is a great point that more people should be talking about. However, I strongly disagree with core point Y, for [reasons]. Also, a minor nitpick: statement Z is wrong because [reasons]”
I think the above is way less likely to feel like an “attack”, even though the strong disagreements and critiques are still in there.
Some thoughts on: https://twitter.com/FreshMangoLassi/status/1628825657261146121?s=20
I agree that it’s worth saying something about sexual behaviour. Here are my broad thoughts:
I am sad about women having bad experiences, I think about it a lot
I want to be accurate in communication
I think it’s easy to reduce harms a lot without reducing benefits
Firstly, I’m sad about the current situation. Seems like too many women in EA have bad experiences. There is a discussion about what happens in other communities or tradeoffs. But first it’s really sad.
More than this, it seems worth dwelling on what it *feels* like. I guess for many it’s fine. But for some it can be exhausting or sad or uncomfortable. Women in EA complain to me about their treatment as women at lot, men much less. Seems notable.
But I don’t know what norms should be. I don’t know what’s best for EA women, for EA in general, for the world in general. In short, I don’t know how to optimise norms.
But harms seem easier to understand. It does seem to me there are some low cost, high benefit improvements. Particularly in people who have patterns of upsetting women.
Personally, I have really upset 2 or 3 women in EA around romance. I’ve said or done things that have left them sad for months. And I don’t think this is okay.
To them, I am sorry.
How do they feel? Well I sense, really sad. We’re not talking Time magazine stuff here, but I think they felt belittled, disrespected, judged and, briefly, unsafe. I don’t want anyone to feel like this, let alone because of me.
And compared to their suffering, and my sadness at it, it just seems pretty cheap to change my behaviour. To go on dates with a smaller group of people in EA, to create patterns to avoid situations I handle poorly, to spend time imagining women’s lives.
So I’m not gonna give a blanket pronouncement or say we are the worst. But personally, I am pretty flawed and I would prefer to change rather than hurt other people. And if you see that pattern in your life then I suggest taking real, actual steps.
I’d suggest you ask yourself. “Are there any women who, as a result of my actions in the last 2 years are seething or deeply upset.”
For most people the answer is no. Like seriously, the answer can be “no, you’re fine”. But if it’s yes, women are people right? Do you really believe that there aren’t some improvements possible here?
Some suggestions to yesses:
Talk to a trusted friend. How do they think you do here?
Imagine how much you would do to avoid the last woman being upset. Spend at least that much time avoiding the next woman being upset
I dislike the tribal nature of this discussion, that on some level it feels culture war-ey. So again, I don’t think this for everyone, but it is for me
But I really would recommend going to quality sex and relationship courses. I went to one run by a tantra group and I think it just made me a lot kinder and helped me reduce risks
Talk to women you’ve dated. How did they feel?
If you struggle with empathy with women, perhaps start with empathy for me. Trust me, you don’t want to feel like this. It’s horrible to have people who are upset as a result of my actions.
Most of all, I would recommend building empathy. I wish I had sat down and just written how the women I fancied felt, even for 5 minutes. And talked it over with a friend.
Take an interest in the mental lives of people you care about.
So I guess, the thing I could say was “If you continue patterns of romantic behaviour that frequently upset women that you could easily make less risky then I’ll be really upset with you and sad” as, if I were to continue I’d be so angry at myself.
Romance is not without risk—I don’t think this is a purely harm reducing question (though I could move to that opinion). But I think it’s possible to just reduce risks a lot while maintaining benefit. And if I have the option to do that and I choose not to, that’s basically my definition of bad.
What is a big open factual non community question in EA. I have a cool discussion tool I want to try out.
Daniel’s Heavy Tail Hypothesis (HTH) vs. this recent comment from Brian saying that he thinks that classic piece on ‘Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness’ is still essentially valid.
Seems like Brian is arguing that there are at most 3-4 OOM differences between interventions whereas Daniel seems to imply there could be 8-10 OOM differences?
Similarly here: Valuing research works by eliciting comparisons from EA researchers—EA Forum (effectivealtruism.org)
And Ben Todd just tweeted about this as well.
Here is my first draft, basically there will be a plan money prediction market predicting what they community will vote on a central question (here “are the top 1% more than 10,000x as efffective as the median”) then we have a discussion and we vote and then resolve.
https://docs.google.com/document/d/14WpLjsS6idm8Ma-izKFOwkzy-B2F6RDpZ0xlc8aHlXg/edit
Should we want openai to turn off bing for a bit? We should, right? Should we create memes to that effect?
It is unclear to me that if we chose cause areas again, we would choose global developement
The lack of a focus on global development would make me sad
This issue should probably be investigated and mediated to avoid a huge community breakdown—it is naïve to think that we can just swan through this without careful and kind discussion
If I find this forum exhausting to post on some times I can only imagine how many people bounce off entirely.
The forum has a wiki (like wikipedia)
The “Criticism of EA Community” wiki post is here.
I think it would be better as a summary of criticisms rather than links to documents containing criticisms.
This is a departure from the current wiki style, so after talking to moderators we agreed to draft externally.
Collaborative Draft:
https://docs.google.com/document/d/1RetcAA7D94y6v3qxoKi_Ven-xF98FjirokvI-g8cKI4/edit#
Upvote this post if you think the “Criticism of EA Community” post will be better as a collaboratively-written summary.
Downvote if you like the current style.
Comments appreciated.
With better wiki features and a way to come to consensus on numbers I reckon this forum can write a career guide good enough to challenge 80k. They do great work, but we are many.
There were too few parties on the last night of EA global in london which led to overcrowding, stressed party hosts and wasting a load of people’s time.
I suggest in future that there should be at least n/200 parties where n is the number of people attending the conference.
I don’t think CEA should legislate parties, but I would like to surface in people’s minds that if there are fewer than n/200 parties, then you should call up your friend with most amenable housemates and tell them to organise!
Has rethink priorities ever thought of doing a survey of non-EAs? Perhaps paying for a poll? I’d be interested in questions like “What do you think of Effective Altruism? What do you think of Effective Altruists?”
Only asking questions of those who are currently here is survivorship bias. Likewise we could try and find people who left and ask why.
We are definitely planning on doing this kind of research, likely sometime in 2021.
This could have been a wiki
I hold that there could be a well maintained wiki article on top EA orgs and then people could anonymously have added many non-linear stories a while ago. I would happily have added comments about their move fast and break things approach and maybe had a better way to raise it with them.
There would have been edit wars and an earlier investigation.
How much would you pay to have brought this forward 6 months or a year. And likewise for whatever other startling revelations there are. In which case, I suggest a functional wiki is worth 5% − 10% of that amount, per case.
My question is “Who would want to run an EA org or project in that kind of environment?”. Presumably, you’d be down, but my bet is that the vast majority of people wouldn’t.
Given that people are suggesting a length set of org norms, I’m not sure that avoiding taxing orgs is their top concern.
While I support your right to disagreevote anonymously, I also challenge someone to articulate the disagreement.
It was pointed out to me that I probably vote a bit wrong on posts.
I generally just up and downvote how I feel, but occasionally if I think a post is very overrated or underrated I will strong upvote or downvote even though I feel less strong than that.
But this is I think the wrong behaviour and a defection. Since if we all did that then we’d all be manipulating the post to where we think it ought to be and we’d lose the information held in the median of where all our votes leave it.
Sorry.
Withholding the current score of a post till after a vote is cast (but the casting is committal) should be enough to prevent strategic behavior. But it comes with many downsides (I think feed ordering / recsys could work with private information, so the scores may be in principle inferrable from patterns in your feed, but you probably won’t actually do it. The worse problem is commitment, I do like to edit my votes quite a bit after initial impressions).
I imagine there’s a more subtle instrument, withholding the current score until committal votes have been cast seems almost like a limit case.
Although this isn’t in response to your specific case (correcting for overrated or underrated posts), but in response to
I think it’s okay to “defect” to correct the results of others’ apparent defection or to keep important information from being hidden. I’ve used upvotes correctively when I think people are too harsh with downvotes or when the downvotes will make important information/discussion much less visible. To elaborate, I’ve sometimes done this for cases like these:
When a comment or post is at low or negative karma due to downvotes, despite being made in good faith (especially if it makes plausible, relevant and useful claims), and without being uncivil or breaking other norms, even if it expresses an unpopular view (e.g. opinion or ethical view) or makes some significant errors in reasoning. I don’t think we should disincentivize or censor such comments, and I think that’s what disagreement voting and explanations should be used for. I find when people use downvotes like this without explanation to be especially unfair. This also includes when downvotes crush well-intentioned and civil but poorly executed newbie posts/comments, which I think is unkind and unwelcoming. (I’ve used upvotes correctively like this even before we had disagree voting.)
For posts with low or negative karma due to downvotes, if they contain (imo) important information, possibly even if poorly framed, with bad argument in them or made in apparently bad faith, if there’s substantial valuable discussion on the issue or it isn’t being discussed visibly somewhere else on the EA Forum. Low karma risks effectively hiding (making much less visible) that information and surrounding discussion through the ranking algorithm. This is usually for community controversies and criticism.
I very rarely downvote at all, but maybe I’d refrain from downvoting something I would otherwise downvote because its karma is already low or negative.
Right—in my view, net-negative karma conveys a particular message (something like “this post would be better off not existing”) that is meaningfully stronger than the median voter’s standard for downvoting. It can therefore easily exist in circumstances where the median voter would not have endorsed that conclusion.
FWIW, I don’t think this is against the explicit EA Forum norms around voting, and using upvotes and strong upvotes this way seems in line with some of their “suggestions” in the table from that section. In particular, they suggest it’s appropriate to strong upvote if
These could be more or less true depending on the karma of the post or comment and how visible you think it is.
I don’t think using downvotes against overrated posts or comments falls under the suggestions, though, but doing it only for upvotes and not downvotes could bias the karma.
Confidence 60%
Any EA leadership have my permission to put scandal on the back burner until we have a strategy on bing by the way. feels like a big escalation to have an ML reading it’s own past messages and running a search engine.
EA internal issues matter but only if we are alive.
Reasons I would disagree:
(1) Bing is not going to make us ‘not alive’ on a coming-year time scale. It’s (in my view) a useful and large-scale manifestation of problems with LLMs that can certainly be used to push ideas and memes around safety etc, but it’s not a direct global threat.
(2) The people best-placed to deal with EA ‘scandal’ issues are unlikely to perfectly overlap with the people best-placed to deal wit the opportunities/challenges Bing poses.
(3) I think it’s bad practice for a community to justify backburnering pressing community issues with an external issue, unless the case for the external issue is strong; it’s a norm that can easily become self-serving.
Strongly upvoted
I think the community health team should make decisions on the balance of harms rather than beyond reasonable doubt. If it seems likely someone did something bad they can be punished a bit until we don’t think they’ll do it again. But we have to actually take all the harms into account.
“beyond reasonable doubt” is a very high standard of proof, which is reasonable when the effect of a false conviction is being unjustly locked in a prison. It comes at a cost: a lot of guilty people go free and do more damage.
Theres no reason to use that same standard for a situation where the punishments are things like losing a job or being kicked out of a social community. A high standard of proof should still be used, but it doesn’t need to be “beyond reasonable doubt” level. I would hate to be falsely kicked out of an EA group, but at the end of the day I can just do something else.
I agree that the magnitude of the proposed deprivation is highly relevant to the burden of proof. The social benefit from taking the action on a true positive, and the individual harm from acting on a false positive also weigh in the balance.
In my view, the appropriate burden of proof also takes into account the extent of other process provided. A heightened burden of proof is one procedure for reducing the risk of erroneous deprivations, but it is not the only or even the most important one.
In most cases, I would say that the thinner the other process, the higher the BOP needs to be. For example, discipline by the bar, medical board, etc is usually more likely than not . . . but you get a lot of process like an independent adjudicator, subpoena power, and judicial review. So we accept 51 percent with other procedural protections in play. (And as a practical matter, the bar generally wouldnt prosecute a case it thought was 51 percent anyway due to resource constraints). With significantly fewer protections, I’d argue that a higher BOP would be required—both as a legal matter (these are government agencies) and a practical one. Although not beyond a reasonable doubt.
Of course, more process has costs both financial and on those involved. But it’s a possible way to deal with some situations where the current evidence seems too strong to do nothing and too uncertain to take significant action.
Should I tweet this? I’m very on the margin. Agree disagreevot (which doesn’t change karma)
I did a podcast where we talked about EA, would be great to hear your criticisms of it. https://pca.st/i0rovrat
Should I do more podcasts?
I listened to this episode today Nathan, I thought it was really good, and you came across well. I think EAs should consider doing more podcasts, including those not created/hosting by EA people or groups. They’re an accessible medium with the potential for a lot of outreach (the 80k podcast is a big reason why I got directly involved with the community).
I know you didn’t want to speak for EA has a whole, but I think it was a good example with EA talking to the leftist community in good faith,[1] which is (imo) one of our biggest sources of criticism at the moment. I’d recommend others check out the rest of Rabbithole’s series on EA—it’s a good piece of data on what the American Left thinks of EA at the moment.
Summary:
+1 to Nathan for going on this podcast
+1 for people to check out the other EA-related Rabbithole episodes
A similar podcast for those interested would be Habiba’s appearance on Garrison’s podcast The Most Interesting People I Know
Any time that you read a wiki page that is sparse or has mistakes, consider adding what you were trying to find. I reckon in a few months we could make the wiki really good to use.
I sense that conquest’s law is true → that organisations that are not specifically right wing move to the left.
I’m not concerned about moving to the left tbh but I am concerned with moving away from truth, so it feels like it would be good to constantly pull back towards saying true things.
I think the forum should have a retweet function but for the equivalent of github forks. So you can make changes to someone’s post and offer them the ability to incorporate them. If they don’t, you can just remake the article with the changes and an acknolwedgement that you did.
I don’t think people would actually do that that often, because they’d get no karma most of the time, but it would give karma, attribution trail for:
- summaries
- significant corrections/reframings
- and the author could still accept the edits later
My very quick improving institutional decision-making (IIDM) thoughts
Epistemic status: Weak 55% confidence. I may delete. Feel free to call me out or DM me etc etc.
I am saying these so that someone has said them. I would like them to be better phrased but then I’d probably never share them. Please feel free to criticise them though I might modify them a lot and I’m sorry if they are blunt:
I don’t understand what concrete learnings there are from IIDM, except forecasting (which I am biased on). The EIP produced a report which said that in the institutions you’d expect to matter do. That was cheap falsification so I guess worth it. Beyond that, I don’t know. And I was quite involved for a while and I didn’t pick these up by osmosis. I assume that many people know even less than I do.
Is forecasting IIDM? Yes. But people know what forecasting is so it’s easier to use those words. Are humans primates, yes, but one of those words is easier to understand.
Does IIDM exist in the wild? Yes?? I know lots of EA-aligned people who work in institutions who to improve them. That seems like IIDM to me.
What ideas would I brainstorm, low confidence:
Connect EA networks across institutions. EAs in different institutions probably know things. Do they pass those around?
Try and improve EA knowledge tranfer How can someone get a high signal feed of information via email, WhatsApp, podcast app. If we had this then it would be easier to share to institutional colleagues
What has worked in EA orgs? I’m surprised we think we can improve institutions when we haven’t solved those problems internally
How does an org make forecasting really easy and low friction?
How can EA institutions share detailed knowledge in real treal-timeime across institutions?
How do EAs avoid duplicating work?
Haha I don’t know what IIDM is but I do know what forecasting is. If I had lots of money one of the things I’d do is create a forecasting news organization. They don’t talk about what happened, they talk about what’s going to happen. The knowlege transfer is important. People are too spread apart to use one platform, but if there was a list of people who were readily available to share information on certain topics and their contact info that would be valuble.
Benjamin, I think you and I are gonna be friends. You at EAG SF?
This forum is not user-friendly. Took a bit to arrive.
I am not! I applied and didn’t get it, I think the movement is bigger than available tickets in a convention. I’m on a few EA discords if you’d like to chat.
Do we prefer
impact tractability neglectedness
scale solvability neglectedness
ITN
SSN
I have strong “social security number” associations with the acronym SSN.
Setting those aside, I feel “scale” and “solvability” are simpler and perhaps less jargon-y words than “impact” and “tractability” (which is probably good), but I hear people use “impact” much more frequently than “scale” in conversation, and it feels broader in definition, so I lean towards “ITN” over “SSN”.
In my head, “impact” seems to mix together scale + neglectedness + tractability, unless I’m missing something.
I actually prefer “scale, tractability, neglectedness” but nobody uses that lol
ITN.
I am gonna do a set of polls and get a load of karma for it (70% >750). I’m currently ~20th overall on the forum despite writing few posts of note. I think polls I write create a lot of value and I like the way it incentivises me to think about questions the community wants to answer.
I am pretty happy with the current karma payment but I’m not sure everyone will be so I thought I’d surface it. I’ve considered saying that polls delivery half the karma, but that feels kind of messy and I do think polls are currently underrated on the forum.
Any ideas?
https://eaforum.issarice.com/userlist?sort=karma
What? Polls?
Do you mean “Questions”?
EA podcasts and videos
Each EA org should pay $10 bounty to the best twitter thread talking about any episode. If you could generate 100 quality twitter threads on 80,000 hours episodes that for $1000 that would be really cheap. People would quote tweet and discuss and it would make the whole set of knowledge much more legible.
Cool idea, I’ll have a think about doing this for Hear This Idea. I expect writing the threads ourselves could take less time than setting up a bounty, finding the threads, paying out etc. But a norm of trying to summarise (e.g. 80K) episodes in 10 or so tweets sounds hugely valuable. Maybe they could all use a similar hashtag to find them — something like #EAPodcastRecap or #EAPodcastSummary
I recommend a thread of them. I rarely see poeple using hashtags currently.
And I probably agree you could/should write them yourselves but:
- other people might think different things are interesting than you do
Thanks! Sounds right on both fronts.
I edited the of wikipedia on Doing Good Better to try and make it more reflective of the book and Will’s current views. Let me know how you think I did.
https://en.wikipedia.org/w/index.php?title=William_MacAskill&editintro=Template%3ABLP_editintro#Doing_Good_Better
Plant-based meat. Fun video from a youtuber which makes a strong case. Very sharable. https://youtu.be/-k-V3ESHcfA
Top Forecaster Podcast
I talked to Peter Wildeford, who is a top forecaster and Head of Rethink Priorities, about the US 2024 General Election.
We try to pin down specific probabilities.
Youtube: https://www.youtube.com/watch?v=M7jJxPfOdAo
Spotify: https://open.spotify.com/episode/4xJw9af9SMSmX5N2UZTpRD?si=Dh9RqPwqSDuHj7VpEx_nwg&nd=1
Pocketcasts: https://pca.st/ytt7guj0
Space in my brain.
I was reading this article about Nuclear winter a couple of days ago and I struggled. It’s a good article but there isn’t an easy slot in my worldview for it. The main thrust was something like “maybe nuclear winter is worse than other people think”. But I don’t really know how bad other people think it is.
Compare this to community articles, I know how the community functions and I have opinions on things. Each article fits neatly into my brain.
If a had a globe my worldview the EA community section is like very well mapped out. And so when I hear oh, you know, Adelaide is near Sydney or something, I know where those places are, and I can make some sort of judgment on the comment. But my views on nuclear winter are like if I learn that the mountains near Drachmore are taller than people think. Where is drachmore? Which mountains? How tall do people think they are.
My suggestion here is better wikis, but mainly I think the problem is an interesting one. I think often the community section is well supported because we all have some prior structure. I think it’s hard to comment on air purity, AI minutiae or nuclear winter because I don’t have that prior space.
That seems notable.
For those that disagree, what’s your experience?
Again, in general feel free to disagree anonymously.
I wouldn’t recommend people tweet about the nonlinear stuff a lot.
There is an appropriate level of publicity for things and right now I think the forum is the right level for this. Seems like there is room for people to walk back and apologise. Posting more widely and I’m not sure there will be.
If you think that appropriate actions haven’t been taken in say a couple months then I get tweeting a bit more.
I think the substance of your take may be right, but there is something that doesn’t sit well with me about an EA suggesting to other EAs (essentially) “I don’t think EAs should talk about this publicly to non-EAs.” (I take it that is the main difference between discussing this on the Forum vs. Twitter—like, “let’s try to have EA address this internally at least for now.”) Maybe it’s because I don’t fully understand your justification—”there is room for people to walk back and apologize”—but the vibe here feels a bit to me like “as EAs, we need to control the narrative around this (‘there is an appropriate level of publicity,’)” and that always feels a bit antithetical to people reasoning about these issues and reaching their own conclusions.
I think I would’ve reacted differently if you had said: “I don’t plan to talk about this publicly for a while because of x, y, and z” without being prescriptive about how others should communicate about this stuff.
Yeah i get that.
I think in general people don’t really understand how virality works in community dynamics. Like there are actions that when taken cannot be reversed.
I don’t say “never share this” but I think sharing publicly early will just make it much harder to have a vulnerable discussion.
I don’t mind EAs talking about this with non-EAs but I think twitter is sometimes like a feeding frenzy, particularly around EA stuff. And no, I don’t want that.
Notably, more agree with me than disagree (though some big upvotes on agreement obscure this—I generally am not wild about big agreeevotes).
As I’ve written elsewhere I think there is a spectrum from private to public. Some things should be more public than they are and other things more private. Currrently I am arguing this is about right. I thought that it turned out many issues with FTX were too private.
I think that a mature understanding of sharing things is required for navigating vulnerable situations (an I imagine you agree—many disliked the sharing of victims names around the time article why because that was too public for that information in their opinion)
I appreciate that you said it didn’t sit well with you. It doesn’t really sit well with me either. I welcome someone writing it better
Yeah, again, I think you might well be right on the substance. I haven’t tweeted about this and don’t plan to (in part because I think virality can often lead to repercussions for the affected parties that are disproportionate to the behavior—or at least, this is something a tweeter has no control over). I just think EA has kind of a yucky history when it comes to being prescriptive about where/when/how EAs talk about issues facing the EA community. I think this is a bad tendency—for instance, I think it has, ironically, contributed to the perception that EA is “culty” and also led to certain problematic behaviors getting pushed under the rug—and so I think we should strongly err on the side of not being prescriptive about how EAs talk about issues facing the community. Again, I think it’s totally fine to explain why you yourself are choosing to talk or not talk about something publicly.
I guess I plan for the future, not the past. But I agree that my stance is generally more public than most EAs. I talk to journalists about stuff, for instance, and I think more people should.
I imagine we might agree in cases.
I am so impressed at the speed with which Sage builds forecasting tools.
Props @Adam Binks and co.
Fatebook: the fastest way to make and track predictions looks great.
I still don’t really like the idea of CEA being democratically elected but I like it more than I once did.
89 people responded to my strategy poll so far.
Here are the areas of biggest uncertainty.
Seems we could try and understand these better.
Poll link: https://viewpoints.xyz/polls/ea-strategy-1
Analytics like: https://viewpoints.xyz/polls/ea-strategy-1/analytics
A: “Agree”, D: “Disagree”, S: “Skip”, ?: “It’s complicated”.
is
viewpoints.xyz
on github?I imagine that it has cost and does cost 80k to push for AI safety stuff even when it was wierd and now it seems mainstream.
Like, I think an interesting metric is when people say something which shifts some kind of group vibe. And sure, catastrophic risk folks are into it, but many EAs aren’t and would have liked a more holistic approach (I guess).
So it seems a notable tradeoff.
I would quite like Will MacAskill back right about now. I think he was generally a great voice in the discourse.
I am frustrated and hurt when I take flack for criticism.
It seems to me that people think I’m just stirring shit by asking polls or criticising people in power.
Maybe I am a bit. I can’t deny I take some pleasure in it.
But there are a reasonable amount of personal costs too. There is a reason why 1-5 others I’ve talked to have said they don’t want to crticise because they are concerned about their careers.
I more or less entirely criticise on the forum. Believe me, if I wanted to actually stir shit, I could do it a lot more effectively than shortform comments.
I’m relatively pro casual sex as a person, but I will say that EA isn’t about being a sex-positive community—it’s about effectively doing good. And if one gets in the way of the other, I know what I’m choosing (doing good).
I think there is a positive sum compromise possible, but it seems acknowledging how I will trade off if it comes to it.
In general I want to empower experts who rarely take risks to take more (eg the forum is better if the team make changes a lot)
How come some people have access to inline comments and others don’t?
What do you mean by inline comments?
when you can comment on an article and it shows as a little speech bubble to the side of the text. I’ve opted into experimental features but I still can’t.
You can enable it on a per-post basis, by clicking on the … below the title
But how does one write them? feels like something should appear when I highlight text.
I think you just normally quote a section of the article, clicking “Block quote”
Some people use hypothes.is , which in theory gives the same functionality on any web page, but we’re very few and only people that have installed it can see the comments or add new ones
Do you have any idea why my shortform doesn’t have disagree and agreevotes?
Because it’s from before disagree and agreevotes were a thing, not sure if there’s a way to make a new one, I would file a feature request https://forum.effectivealtruism.org/posts/NhSBgYq55BFs7t2cA/ea-forum-feature-suggestion-thread
As in all old shortforms don’t have them?
Yes, I think that’s currently the case (speaking as a user)
wild.
Why do some shortforms have agree voting and others don’t?
Depends on when the shortform was created.
As in they’ve recently removed it? If not, that doesn’t seem true.
Some thoughts
- Utilitiarianism but being cautious around the weird/unilateral stuff is still good
- We shouldn’t be surprised that we didn’t figure out SBF was fraudulent quicker than billions of dollars of cryto money… and Michael Lewis
- Scandal prediction markets are the solution here and one day they will be normal. But not today. Don’t boo me, I’m right
- Everyone wants whistleblowing, no one wants the correctly incentivised decentralised form of whistleblowing.
- Gotta say, I feel for many random individual people who knew or interacted closely with SBF but weren’t at FTX who are gonna get caught up in that
- We were fundamentally unserious about avoiding reputational risk from crypto. I hope we are more serious about not dying from AI
- I like you all a lot
- I don’t mind taking the money of some retired non-EA oil baron, but I think not returning FTX’s money perhaps incentivises future pro-crime EAs. I would like a credible signal
- The community does not need democratised funding (though I’d happily test it at a small scale) though we aren’t getting enough whistleblowing so we should work on that
- We deserve to be scrutinised and mocked, we messed up. We should own that
- X-risk is still extremely compelling
- I am uncertain how impactful my work is
- Our critics are usually very low signal but have a few key things of value to say. It is hard to listen to find those things without wasting loads of time, but missing them is bad too
- People knew SBF was a bully who broke promises. That that information didn’t flow to where it needed/ was ignored was a problem—
I think we shouldn’t say we want criticism, because we don’t. We didn’t want it about FTX and we don’t in any other places. We want very specific criticism. Everyone does, because the world is big and we have limited time. So how do we get the criticism that’s most useful to us
- The community should seek to make the best funding decisions it can over time. I think that’s with orgs doing it and prediction markets to remove bad apples, but you can think what you want. But democratisation isn’t a goal in and of itself—good sustainable decisionmaking is. Perhaps there should be a jury of randomly chosen community member, perhaps we should have elections. I don’t know, but I do feel we haven’t been taking governance seriously enough
I remain confused about “utilitarianism, but use good judgement”. IMO, it’s amongst the more transparent motte-and-baileys I’ve seen. Here are two tweets from Eliezer that I see are regularly re-shared:
This describes Aristotelian Virtue Ethics—finding the golden mean between excess and deficiency. So are people here actually virtue ethicists who sometimes use math as a means of justification and explanation? Or do they continue to take utilitarianism to some of its weirder places, privately and publicly, but strategically seek shelter under other moral frameworks when criticized?
I’m finding it harder to take people who put “consequentialist” and “utilitarian” in their profiles and about mes seriously. If people abandon their stated moral framework on big important and consequential questions, then either they’re deluding themselves on what their moral framework actually is, or they really will act out the weird conclusions—but are being manipulative and strategic by saying “trust us, we have checks and balances”
I don’t think you have to abandon it, but you can look twice or ask trusted friends etc etc.
That doesn’t mean you can’t do the thing you intended to do.
And what happens when that double-checking comes back negative? And how much weight do you choose to give it? The answer seems to be rooted in matters of judgement and subjectivity. And if you’re doing it often enough, especially on questions of consequence, then that moral framework is better described as virtue ethics.
Out of curiosity, how would you say your process differs from a virtue ethicist trying to find the golden mean between excess and deficiency?
I notice that sometimes I want to post on something that’s on both the EA forum and lesswrong. And ideally, clicking “see lesswrong comments” would just show them on the current forum page and if I responded, it would calculate EA forum karma for the forum and LessWrong karma for lessWrong.
Probably not worth building, but still.
Someone being recommended to learn about EA by listening to 10 hours of podcasts in the wild
Maximise useful feedback, minimise rudeness
When someone says of your organisation “I want you to do X” do not say “You are wrong to want X”
This rudely discourages them from giving you feedback in future. Instead, there are a number of options:
If you want their feedback “Why do you want X?” “How does a lack of X affect you?”
If you don’t want their feedback “Sorry, we’re not taking feedback on that right now” or “Doing X isn’t a priority for us”
If you think they fundamentally misunderstand something “Can I ask you a question relating to X?”
None of these options tell them they are wrong.
I do a lot of user testing. Sometimes a user tells me something I disagree with. But they are the user. They know what they want. If I disagree, it’s either because they aren’t actually a user I want to support, they misunderstand how hard something is, or they don’t know how to solve their own problems.
None of these are solved by telling them they are wrong.
Often I see people responding to feedback with correction. I often do it myself. I think it has the wrong incentives. Rather than trying to tell someone they are wrong, now I try to either react with curiosity or to explain that I’m not taking feedback right now. That’s about me rather than them.
Other than my karma, this post got negative karma. Why?
I understand that sometimes I post controversial stuff, but this one is just straightforwardly valuable.
https://forum.effectivealtruism.org/posts/GshpbrBaCQjxmAKJG/cause-prioritsation-contest-who-bettors-think-will-win
I sense new stuff on the forum is probably overrated. Surely we should assume that most of the most valuable things for most people to read have already been written?
Have you seen the new features google docs has added recently?
Tick boxes
Project trackers
New types of tables
Feels like they are gunning for Notion.
The difference between the criticism contest and openphil’s cause prioritisation contest is pretty interesting. 60% I’m gonna think OpenPhil’s created more value in terms of changes in a 10 years time.
1 minute video summaries of my EA Criticism contest articles:
Summaries are underrated—https://www.loom.com/share/4781668372694c83a4e9feffe249469b—full text
Improving Karma—https://www.loom.com/share/6d0decef2bd14efc9b22e14d43693002 - full text
Common misconception I see:
Longtermists causes are not:
Causes which are much more pressing under longtermism than other belief systems
Longtermist causes are:
Those which are a high priority for marginal resources, whether they are under other belief systems or not.
The fact that biorisk and AI risk are high priority without longtermism doesn’t make them not “longtermist causes” just as it doesn’t make the not “causes that affect people alive today”
How much value is there in combining two EA slacks which discuss the same topic?
Probably $1,000s right?
Or maybe we should assume it will be a natural process that one will subsume the other?
Effective altruism and politics
Here is an app that lets you vote on other people’s comments (I’d like to see it installed in the forum so there is a lower barrier to entry)
You can add thoughts and try and make arguments that get broad agreement.
What are the different parties of opinion on EA and politics.
https://pol.is/283be3mcmj
An open question for me (for EA Israel? For EA?) is whether we can talk about economic-politics publicly in our group.
For example, can we discuss openly that “regulating prices is bad”. This is considered an open political debate in Israel, politicians keep wanting to regulate prices (and sometimes they do, and then all the obvious things happen)
I mean I’d like to chat about that, and maybe happy to on this shortform? But I wouldn’t write a post on it. I guess it doesn’t seem that neglected to me.
In Israel, it is controversial to suggest not regulating prices, or to suggest lowering import taxes, or similar things. I could say a lot about this, but my points are:
In Israel:
It is neglected
It means EA would be involved in local politics
I remember I was really jealous of the U.S when Biden suggested some very expensive program (UBI? Some free-medical-care reform?), but he SHOWED where the money is supposed to come from, there was a chart!
EA Wiki
I’ve decided I’m going to just edit the wiki to be like the wiki I want.
Currently the wiki feels meticulously referenced but lacking in detail. I’d much prefer it to have more synthesised content which is occasionally just someone’s opinion. If you dislike this approach, let me know.
I do think that many of the entries are rather superficial, because so far we’ve been prioritizing breadth over depth. You are welcome to try to make some of these entries more substantive. I can’t tell, in the abstract, if I agree with your approach to resolving the tradeoff between having more content and having a greater fraction of content reflect just someone’s opinion. Maybe you can try editing a few articles and see if it attracts any feedback, via comments or karma?
Why do posts get more upvotes than questions with the same info?
I wrote this question: https://forum.effectivealtruism.org/posts/ckcoSe3CS2n3BW3aT/what-ea-projects-could-grow-to-become-megaprojects
Some others wrote this post summarising it:
https://forum.effectivealtruism.org/posts/faezoENQwSTyw9iop/ea-megaprojects-continued
Why do you think the summary got more upvotes. I’m not upset, I like a summary too, but in my mind, a question that anyone can submit answers to or upvote current answers is much more useful. So I am confused. Can any suggest why?
Anyone can comment on a post and upvote comments so I don’t see why a question would be better in that regard.
Also the post contained a lot of information on potential megaprojects which is not only quite interesting and educational but also prompts discussion.
At what size of the EA movement should there be an independent EA whistleblowing organisation, which investigates allegations of corruption?
Can you think of any examples of other movements which have this? I have not heard of such for e.g. the environmentalist or libertarian movements. Large companies might have whistleblowing policies, but I’ve not heard of any which make use of an independent organization for complaint processing.
The UK police does.
It seems to me if you wanted to avoid a huge scandal you’d want to empower and incentivise an organisation to find small ones.
I’ve been getting more spam mail on the forum recently.
I realise you can report users, which I think is quicker than figuring out who to mail and then copying the nameove.
I’m sorry to hear this (and grateful that you’re reporting them). We have systems for flagging when a user’s DM pattern is suspicious, but it’s imperfect (I’m not sure if it’s too permissive right now).
In case it’s useful for you to have a better picture of what’s going on, I think you get more of the DM spam because you’re very high up in the user list.
I don’t really mind. It’s not hard for me to just report the user (which is what you’d like right)
This is like 1 minute a week, so not a big deal for me. Thanks again for your and the team’s work.
I really like saved posts.
I think if I save them and then read mainly from my saved feed, that’s a better, less addictive, more informative experience.
Norms are useful so let’s have useful norms.
“I don’t think drinking is bad, but we have a low-alcohol culture so the fact you host parties with alcohol is bad”
Often the easiest mark of bad behaviour is that it breaks a norm we’ve agreed. Is it harmful in a specific case to be shoplift? Depends on what was happening to the things you stole. But it seems easier just to appeal to our general norm that shoplifting is bad. On average it is harmful and so even if it wasn’t in this specific case, being willing to shoplift is a bad sign. Even if you’re stealing meds to give to your gran, it may be good to have a general norm against this behaviour.
But if the norm is bad that weakens norms in general. Lots of people in the UK speed in their cars. But this teaches many people that twice a day, the laws aren’t actually laws. It encourages them that many government rules are stupid and needless as opposed to wise and reasonable
But how broadly should this norm apply? 99% of cases, 95%? I don’t know.
But it’s clear to me that if a norm only applies in 50% of cases it’s a bad norm. It’s gonna leave everyone trusting the values of the community less, because half the time it will punish or reward people incorrectly.
Sorry, how do I tag users or posts? I’ve forgotten and can’t find a shortcuts section on the forum
It used to be done by just typing the @ symbol followed by the person’s name, but that doesn’t seem to work anymore.
That’s right, you should be able to mention users with @ and posts with #. However, it does seem like they’re both currently broken, likely because we recently updated our search software. Thanks for flagging this! We’ll look into it.
The fix for this is now live—thanks!
I strongly dislike the “further reading” sections of the forum wiki/forum tags.
They imply that the right way to know more about things is to read a load of articles. It seems clear to me that instead we should sythesise these points and then link them where relevant. Then if you wanted more context you could read the links.
The ‘Further reading’ sections are a time-cheap way of helping readers learn more about a topic, given our limited capacity to write extended entries on those topics.
Clubhouse Invite Thread
1) Clubhouse is a new social media platform, but you need an invite to join
2) It allows chat in rooms, and networking
3) Seems some people could deliver sooner value by having a clubhouse invite
4) People who are on clubhouse have invites to give
5) If you think an invite would be valuable or heck you’d just like one, comment below and then if anyone has invites to give they can see EAs who want them.
6) I have some invites to give away.
Fun UK innovative policy competition:
https://heywoodfoundation.com/contest/
Mailing list for the new UK Conservative Party group on China.
Will probably be worth signing up to if that’s your area of interest.
https://chinaresearchgroup.substack.com/p/coming-soon
Please comment any other places people could find mailing lists or good content for EA related areas.
Some attempts at concensus thoughts on sexual behaviour:
I’ll split them up into subcomments
It is reasonable that 5- 20% of the community are scared that their harmless sexual behaviour will become unacceptable and that they will be seen as bad/unsafe if they support it.
It’s fair that they are upset and see this as something that might hurt them and fear the outcome.
There are two main models I have for many of these discussions:
Rationalist EAs—like truth-seeking, think a set of discourse norms should be obeyed at all times
Progressive EAs—think that some discussions require much more energy from some than others and need to be handled differently/more carefully. Want an environement where they feel safe
I think it’s easy to see these groups as against one another, but I think that’s not true. There are positive sum improvements.
Women being sad matters. And yes there are tradeoffs here, but it’s really sad that the women in the time article and all the other women who have been sad are sad.
I guess CEA doesn’t want to push specific norms here because the more they engage the more they will get blamed when things go wrong.
There should be a process on the forum for contentious discussions where there are 3 types of post.
An emotions post, where people talk about how they feel and try and say uncontentious things we all agree wtih
A few days later, a discourse post, where we try and have all the discussion
Two weeks later, a concensus post where we try to come up with some widely agreed concusions.
If we could have a community where everyone says “EA does romantic relationships a lot better than the outside world” that would be worth spending $10 − 100mn on purely in community building terms, let alone in just welfare of individual EAs.
We spend millions each year of EAGs + 80k. Imagine if everyone just was like “Yeah EA is just a great safe fun place”
It is pretty reasonable for 5 − 20% of the community to have a boundary about not being caught up in coversations about sex in houses they need to stay in in foreigh countries. Or similarly bad conversations.
It’s reasonable they want to be sure this is taken really seriously, because they don’t want it to happen to them or their friends.
It’s complicated that this might lead to unintended consequences, but their desire seems very comprehensible.
It was very likely bad that Owen Cotton-Barrett upset a couple of women and then didn’t drastically change his behaviour, such that there were other instances.
That’s not to say other things weren’t bad. But this feels like something we can agree on.
The forum should hire mediators who’s job it is to try and surface concensus and allow discussion to flow better. Many discussion are a lot of different positions at once.
Does this seem like an acceptable addition to the AI safety EA forum wiki page?
(There is nothing after the question for me, maybe you tried to upload an image but submitted the comment before it fully uploaded?)
I think in SBF we farmed out our consciences. Like people who say “there need to be atrocities in war so that people who live in peace” we thought “SBF can do trade dodgy coins stuff so that we can help, but let’s not think about it”. I don’t think we could have known about the fraud, but I do think there were plenty of warning signs we ignored as “SBF is the man in the arena”. No, either we should have been cogent and open about what he was doing or we should have said we didn’t like it and begun pulling away reputationally.
If you have anonymous feedback I’m happy to hear it. In fact I welcome it.
I will note that I’m not made of stone however and don’t promise to be perfect. But I always appreciate more information.
Some behaviours I’ve changed recently:
I am more cautious about posting polls around sensitive topics where there is no way to express that the poll is misframed
I generally try to match the amount of text of the person I’m talking to, and resist an urge to keep adding additional replies
In formal settings I might have previously touched people on the upper arm or shoulders in conversation, a couple of people said they didn’t like that, so I do it less and ask before I do
If you have issues (or compliments), even ones you are sure I am aware of, I would appreciate hearing them. We are probably more alien than you imagine.
https://www.admonymous.co/
Isn’t this film about the end of the world? Also yes
I do not upvote articles on here merely because they are about EA.
Personally I want to read articles that update me in a certain direction. Merely an article that’s gonna make me sad or be like “shrug accurate” is not an article I’m gonna upvote on here.
I get the desire to share them. I feel that too.
Every time I want to find quick takes, it takes longer than I expect.
Does the Long Term Future Fund generally prefer many well defined project applications or one which gives a number of possible projects?
I think y’all need to iterate on using the forum more. It could be so much better if only we could figure out how
Could you clarify?
A couple of times I’ve probably been too defensive about people saying things behind my back. That’s not how I want to behave. I’m sorry.
I quite strongly dislike “drama” around things, rather than just trying to figure them out. Much of the HLI “drama” seems to be reading various comments and sharing that there is disagreement rather than attempts to turn uncertainty into clarity.
My response to this is “what are we doing”? Why aren’t there more attempts to figure out what we should actually believe as a group here? I really don’t understand why there is much discussion but so little (to my mind) attempt at synthesis.
I don’t see a clear path forward to consensus here. The best I can see, which I have tried to nudge in my last two long posts on the main thread, is “where do we go from here given the range of opinions held?”
As I see it, the top allegation that has been levied is intentional research misconduct,[1] with lesser included allegations of reckless research misconduct, grossly negligent research (mis)conduct, and negligent research conduct. A less legal-metaphory way to put it is: the biggest questions are whether HLI had something on the scale in favor of SM, if so was it a finger or a fist on the scale, and if so did HLI know (or should it have known) that the body part was on the scale.
It’s unsurprising that most people don’t want to openly deliberate about misconduct allegations, especially not in front of the accusers and the accused. There’s a reason juries deliberate in secret in an attempt to reach consensus.
I think that hesitation to publicly deliberate is particularly pronounced for those who fall in the middle part of the continuum,[2] which unfortunately contributes to the “pretty serious misconduct” and “this is way overblown” positions being overrepresented in comments compared to where I think they truly fall among the Forum community. Moreover, most of us lack the technical background and experience to lead a deliberation process.
What procedures would you suggest to move toward consensus?[3]
In my view, this allegation has been made in a slightly veiled manner, but clearly enough that it is counts as having been alleged.
If someone thinks HLI is guilty of deceptive conduct (or conduct that is so reckless to be hard to distinguish from intentional deception), they are likely going to feel less discomfort raking HLI over the coals (“because they deserve it” and because maintaining epistemic defense against that kind of conduct is particularly important). If someone thinks this whole thing is a nothingburger, saying so wouldn’t seem emotionally difficult.
Properly used, anonymous polling can reveal a consensus that exists (as long as there’s no ballot stuffing) . . . but isn’t nearly as useful in developing a consensus. If you attempt to iterate the questions, you’re likely to find that more and more of the voting pool will be partisans on one side of the dispute or the other, so subsequent rounds will reflect community consensus less and less.