Thanks for writing this up. It’d be nice to have a paragraph of bio for each of the board members on ev’s website. Google search didn’t give me much for some of the board members.
Matt Goodman
‘Trust’ can mean a few different things. Here it’s used like ‘trust someone has good intentions’. But it could also mean ‘trust someone’s judgement’.
Lack of the second kind of trust in EA leadership could make someone favour more/broader governance and transparency, even if they have the first kind of trust.
I really can’t express clearly how badly I think of FLI’s non-apology. Why on earth would they think a neo-nazi publication would ever be a good thing to fund?

> The Future of Life Institute makes no apologies for engaging with many people across the immensely diverse political spectrum, because our mission is so important that it needs broad support from all sectors of society

Why on earth would they put this in their response, rather than condemning neo-nazism? @Tegmark
....which makes no mention of the neo-nazi views of Nya Dagbladet, and does not condemn them. That section reads to me as almost an afterthought to their response, which is a rant about how Expo.se is unfairly criticising FLI, and how Nya Dagbladet is not neo-nazi. Here’s that quote in context:

> We will continue to engage the broadest sample of humankind, whether or not we are criticized by anyone who questions our motives, or who may have their own agendas. And in this effort, the Future of Life Institute stands and will always stand emphatically against racism, bigotry, bias, injustice and discrimination at all times and in all forms.

This is very vague and makes no mention of Nya Dagbladet! In fact, when read immediately after the sentence before it, it could appear to be a kind of hit back at Expo.se’s criticism of FLI, in a ‘those damn intolerant liberal bigots’ kind of way. This is why I take issue with FLI talking about engaging ‘across the immensely diverse political spectrum’ and standing against ‘discrimination at all times and in all forms’ - it’s ok to discriminate against neo-nazis! In fact, it’s completely necessary, in order for a tolerant society to survive. Platitudes like ‘we stand against injustice and discrimination’ do not cut it when your organisation has been accused of offering funding to neo-nazis. FLI needs to explicitly condemn and disavow Nya Dagbladet and neo-nazi ideas.
> Unless we actually are saying that talking with ‘bad people’ is automatically bad and something you should apologize to all your right thinking friends for having contaminated them with proximity to badness afterwards.

This is putting it very, very euphemistically, if you want to call ‘offering $100,000 in funding to a neo-Nazi publication’ ‘talking with bad people’.

> Is there a principled argument that thinking about funding a group like that, and then changing your mind is bad?

Yes. Even if they thankfully never granted the money, the question remains — why was Nya Dagbladet ever anywhere near a shortlist of things that FLI would consider funding? The fact remains that FLI has not disavowed Nya Dagbladet for their neo-nazi views. This is the most FLI gave as an explanation for rescinding the offer of funding:

> we ultimately decided to reject it because of what our subsequent due diligence uncovered

This is incredibly vague and could be talking about almost anything! Other parts of their non-apology seem to hint that they consider Nya Dagbladet’s political views acceptable, and ok to be engaging with. Again, this is taken from their apology:

> The Future of Life Institute makes no apologies for engaging with many people across the immensely diverse political spectrum, because our mission is so important that it needs broad support from all sectors of society... We will continue to engage the broadest sample of humankind, whether or not we are criticized by anyone who questions our motives, or who may have their own agendas.

I can’t believe I’m writing this, but some political views should be roundly rejected and never considered acceptable when thinking about the future of humankind. Holocaust deniers should be top of that list, and FLI needs to say as much ASAP.
I think this criticism can be extended beyond cryptocurrency to social media. Specifically, EA is heavily reliant on funding from Dustin Moskovitz, co-founder of Facebook. (I’m fairly ignorant as to the details of Moskovitz’s finances; I believe he still owns shares in Meta and so has at least some interest in the company, but I could be off-base here.)
Social media is criticised for a lot of things, but here I’m just going to link the following article, because it’s recent, and because it seems topical to a lot of EA global health/development stuff: Meta faces $1.6bn lawsuit over Facebook posts inciting violence in Tigray war.
There’s a story here that goes ‘Man who’s made billions in technology that significantly damages social and political institutions, including spreading misinformation about elections, covid, vaccines, and allowing people to spread abuse and incite violence, now wants to use that money for the good of society ’. And to the degree that you think that’s true, you might think that the harms done by Meta outweigh the good done by Open Philanthropy.
---
There’s a critique of EA, that goes ‘EA is more focused on individual donations than systemic change’. I used to think this was off-base, because there’s plenty of EAs who want to do system-changing things, like advocate for animal welfare laws, or work in government policy.
Now I read this criticism more like:
“By relying on one or two extremely rich donors for a large portion of EA funding, EA is less likely to advocate for the kind of systemic change that would be harmful to the financial interests of these donors”,
or (and I’m thinking of crypto and social media here):
“By relying on one or two extremely rich donors who’ve made their fortunes in ‘disruptive’ technology, EA is less likely to be critical of the harms that these technologies do to the world”.
And I actually think that’s quite a valid criticism.
What do you mean by:
“downplaying engaging in politics in order to make societal institutions better and more just”?
I can interpret it a couple of ways:
Criticism that EA doesn’t engage in politics enough
Warning about the risks of getting involved in politics
Either way, SBF was a major political donor. I’ve read that he was the 2nd biggest donor for the Democrats.
There’s a joke that whatever the question is in Bible Study, the correct answer is always ‘God’, ‘Jesus’, or ‘The Bible’. I think it would be bad if the EA equivalent to that became ‘AI’, ‘Existential risk’ and ‘Randomised controlled trials’.
On the other hand, discussion relies on people having a shared pool of information, and I think it’s very easy to overestimate how much common information people share. I’ve found in group discussions it’s common that someone who’s not a regular to the discussions will bring a whole set of talking points, articles, authors, ideas etc that I had no idea even existed till then. Which is great, except I don’t know what to say in response except ‘uh, what was the name of that? I’ll have to read into it’.
My sincere apologies, I had missed that it had been updated! Very embarrassing. Thank you for doing that.
Why aren’t we protesting AI acceleration in the street?
I’m not super up to date with the latest EA thinking on current AI capabilities. The takes I read on social media from Yudkowsky and the like are something along the lines of ‘We’re at a really dangerous time, various companies are engaged in an arms race to make more and more powerful AIs with little regard to safety, and this will directly lead to humanity being wiped out by AGI in the near future’. For people who really believe this to be true (especially if you live in San Francisco) - why aren’t you protesting on the street?
Some reasons this might work:
There are lots of precedents of public pressure leading to laws being passed or procedures changed that have increased safety standards across many industries
The companies working on AI alignment are based in San Francisco. There’s a big EA and rationalist community in SF. Protests could happen outside the HQ of AI companies.
Stories about silicon valley tech companies get lots of press coverage in mainstream media
There’s a prevailing anti-big-tech feeling in parts of society that could be tapped into
Specifically, there are criticisms of the newest AIs for things like ‘training AI models on artists’ work, then putting artists out of a job’ (DALL-E) or ‘making it much easier to cheat at university’ (ChatGPT). Whilst this isn’t directly related to AGI safety, it’s the kind of feeling that could be tapped into for the purpose of this protest
If an AI safety researcher could be interviewed on camera at the march, it would add credibility to the march by showing that experts are concerned
It adds credibility to the voices of experts warning about AI risk, if they’re so worried they’re willing to get out on the street to protest about it
How did you come to choose the name ‘neoliberal’? The first Google result for the term ‘neoliberalism’ gives the following Wikipedia definition:
“Neoliberalism is contemporarily used to refer to market-oriented reform policies such as “eliminating price controls, deregulating capital markets, lowering trade barriers” and reducing, especially through privatization and austerity, state influence in the economy.”
Which seems only partially aligned with your stated beliefs and contradictory to ‘a robust social safety net’
(Edited link formatting)
I’m quite skeptical of post-hoc articles with titles like ‘X was no surprise’; they’re usually full of hindsight bias. Like, if it was no surprise, did you predict it coming?
Although there’s almost nothing about SBF here, is this part 1 of a series?
throwaway790, what’s your reason for approving of Rob Wiblin’s statement about FTX, but not Will Macaskill’s, or Holden Karnofsky’s? I read all their statements as ‘we strongly condemn this’ in summary.
“This is despite EA significantly contributing to Biden’s win in 2020.”
What makes you think this?
Hi emmannaemeka, I don’t know how best to respond to this, but please know that you have my sympathies in this situation. I often read news of terrorism and kidnapping in Nigeria, and it is a terrible thing to see. I admire that you resist against this and continue providing education!
EA’s earlier relationship with a sketchy billionaire (and the degree to which this was covered up)
This is the first time I’m hearing about this. Am I right in understanding that EA has got involved with not one, but two crypto billionaires who are on the wrong side of the law?
Wikipedia link in the original quote is broken btw.
> I went through these experiences voluntarily and with the knowledge that I have the freedom to stop whenever I want. People suffering from painful disease, children dying of hunger, chickens being electrocuted to death, fish being asphyxiated to death — for these individuals, such experiences are a horrific reality, not an experiment.
I think this is a very important distinction that should be given more emphasis. When I’ve experienced severe pain, the no.1 thought in my mind was “oh god make it stop”. This makes complete sense if you think of pain as your body’s way of saying, “ok, whatever it is you’re doing, you need to stop doing it now.” And I think a lot of the psychological suffering I experienced was due to the stress of not being able to stop the thing that was causing pain, and not knowing how long the pain would go on for. I add the word ‘psychological’ for clarity here, but in reality I don’t think there’s a clear difference between ‘psychological’ and ‘physical’ sources of pain. All pain in a sense is psychological—all of it happens ‘in your mind’, and factors such as knowing the pain will end soon can have a big effect on the experience of pain.
This distinction could also have a big effect on how people rate their pain on the pain-track framework. The framework seems to define pain a lot in terms of ‘how long could a person endure this?’ And that answer probably varies a lot depending on whether you know the pain will go away soon, or not. ‘Disabling’ pain could literally be less disabling if you know it’s going to end soon. You might think something like, “ok, I know this will end in 5 minutes, for now I’m going to do this other job to distract myself”. And looking back at the experience, and your behaviour at the time, you might read the scale and think “ok, it wasn’t that disabling, I could still do stuff”.
Sure. I’ve written a short summary and my reaction to it, and made it a linkpost.
Update: FLI FAQ on the rejected grant proposal controversy.
Although I still think the original statement was not good, reading the FAQ and comments in the linked post have helped me have more empathy for the difficulties of releasing a PR when under public pressure to say something urgently.
I think my tone here was too confrontational and demanding, and I’m sorry if that caused additional stress for FLI.
Thank you to FLI for updating the initial statement and putting out the FAQ, which clears things up.
When EVF announced the new interim CEOs 3 months ago, I noted that there wasn’t a bio for EVF’s board members on their website, and that it was hard to find much information on Google. At this moment in time, it’s the most upvoted comment on that post, with 35 upvotes and 29 agreements. Howie agreed to update the website, but as of now it doesn’t look like anything has been added.
I’d like to raise this again: it would be good to update EVF’s website with board member bios for transparency, and maybe a contact email address. I like that this press release has bios for Zach and Eli, and a link to Becca’s forum account. Could you add a bio for Rebecca? Again, it’s hard to find much info; since there was no bio in the previous press release, I don’t know anything about her.