Thanks for the update on this! I don’t think I’d heard about it.
Howie_Lempel
Regulatory inquiry into Effective Ventures Foundation UK
“In 1993, he obtained a bachelor’s degree in radio from Emerson College in Boston,[4] where one of his professors was the writer David Foster Wallace”
Yes — since the first week of the crisis, Nick and Will have been recused from the relevant discussions / decisions on the boards of both EV entities to avoid any potential conflict of interest. Staff in both EV entities were informed about that decision in mid-November.
The 80k podcast also has some potentially relevant episodes, though they're probably not directly what you most want.
https://80000hours.org/podcast/episodes/phil-trammell-patient-philanthropy/
https://80000hours.org/podcast/episodes/will-macaskill-ambition-longtermism-mental-health/
Maybe especially the section on patient philanthropy.
https://80000hours.org/podcast/episodes/will-macaskill-what-we-owe-the-future/
Some bits of this. E.g. some of the bits on political donations.
My guess is that Part II, on trajectory changes, will have a bunch of relevant stuff. Maybe also a bit of Part 5. But unfortunately I don't remember too clearly.
It’s been a while since I read it but Joe Carlsmith’s series on expected utility might help some.
[My impression. I haven’t worked on grantmaking for a long time.] I think this depends on the topic, size of the grant, technicality of the grant, etc. Some grantmakers are themselves experts. Some grantmakers have experts in house. For technical/complicated grants, I think non-expert grantmakers will usually talk to at least some experts before pulling the trigger but it depends on how clearcut the case for the grant is, how big the grant is, etc.
I think parts of What We Owe the Future by Will MacAskill discuss this approach a bit.
Others, most of which I haven’t fully read and not always fully on topic:
Richard Posner. Catastrophe: Risk and Response. (Precursor)
Richard A. Clarke and R.P. Eddy. Warnings: Finding Cassandras to Stop Catastrophes
General Leslie Groves. Now It Can Be Told: The Story of the Manhattan Project (nukes)
A much narrower recommendation for nearby problems is Overcoming Perfectionism (~a CBT workbook).
I'd recommend it to some EAs who are already struggling with these feelings (and know some who've really benefitted from it). (It's not precisely aimed at this but I think it can be repurposed for a subset of people.)
I wouldn't recommend it to students recently exposed to EA who are worried about developing these feelings in the future.
If you haven’t come across it, a lot of EAs have found Nate Soares’ Replacing Guilt series useful for this. (I personally didn’t click with it but have lots of friends who did).
I like the way some of Joe Carlsmith’s essays touch on this.
FYI—subsamples of that survey were asked about this in other ways, which gave some evidence that “extremely bad outcome” was ~equivalent to extinction.
Explicit P(doom) = 5–10%

The levels of badness involved in that last question seemed ambiguous in retrospect, so I added two new questions about human extinction explicitly. The median respondent's probability of x-risk from humans failing to control AI[1] was 10%, weirdly more than the median chance of human extinction from AI in general,[2] at 5%. This might just be because different people got these questions and the median is quite near the divide between 5% and 10%. The most interesting thing here is probably that these are both very high—it seems the 'extremely bad outcome' numbers in the old question were not just catastrophizing merely disastrous AI outcomes.
Thanks for this! It was really useful and will save 80,000 Hours a lot of time.
I think the people responsible for EA Global admissions (including Amy Labenz, Eli Nathan, and others) have added a bunch of value to me over the years by making it more likely that a conversation or meeting with somebody at EA Global who I don't already know will end up being productive. Making admissions decisions at EAG (and being the public face of an exclusive admissions policy) sounds like a really thankless job, and I know a bunch of the people involved end up having to make decisions that make them pretty sad because they think it's best for the world. I mostly just wanted to express some appreciation for them and to mention that I've benefitted from their work, since it feels uncomfortable to say out loud and so is probably underexpressed.
One positive effect of selective admissions that I don’t often see discussed is that it makes me more likely to take meetings with folks I don’t already know. I’d guess that this increases the accessibility of EA leaders to a bunch of community members.
Fwiw, I've sometimes gotten overambitious with the number of meetings I take at EAG and ended up socially exhausted enough to be noticeably less productive for several days afterwards. This is a big enough cost that I've skipped some years. So, I think in the past I've probably been on the margin where if the people at EAG had not been selected for being people I could be helpful to, I'd have been less likely to go.
I’m curious whether there’s any answer AI experts could have given that would be a reasonably big update for you.
For example is there any level of consensus against ~AGI by 2070 (or some other date) that would be strong enough to move your forecast by 10 percentage points?
I definitely agree that takeaway would be a mistake. I think my view is more like: "if the specifics of what MT says on a particular topic don't feel like they really fit your organisation, you should not feel bound to them, especially if you're a small organisation with an unusual culture or if their advice seems to clash with conventional wisdom from other sources, especially in Silicon Valley."
I’d endorse their book as useful for managers at any org. A lot of the basic takeaways (especially having consistent one on ones) seem pretty robust and it would be surprising if you shouldn’t do them at all.
Agree with a lot of this post. I lived in DC from 2008 to 2010 and for various short periods before and after, and overall I liked it (though I'd probably like it a bit less today and expect a lot of EAs to like it less than I did).
The features of DC that most affected me:
- DC felt like a company town. This had advantages. I liked having tons of friends who were think tank analysts or worked on the Hill and were trying to change the world (though I suspect polarization has made the vibe a bit worse). It also had disadvantages. Relative to NYC (which I knew best at the time), I knew relatively few people living in DC because they wanted to make DC great, and this meant things like a worse music scene (despite the fact that I grew up on DC punk music).
- Lots of people, especially young people, only stay for a couple of years, so it was hard to maintain a friend group. I think this was a big deal.
- DC is small relative to a place like NY. Overall this felt like a disadvantage to me, though I expect it would be a feature to some other people. DC felt like more of a bubble and there were fewer places to explore. There was a concert I'd be interested in ~once a week instead of a couple per night. On the other hand, several houses full of friends and co-workers lived within a five-minute walk, which was great. That said, it's still one of the biggest metro areas in the US.
- I thought it was cool/exciting to live in a city where policy and politics were happening (though I think I'd enjoy it less today).
- I think there were some disadvantages to everybody being very networky and the culture being kind of conservative.
“I don’t think they would put out material that fails to apply to them.”
I think we mostly agree, but I don't think that's necessarily true. My impression is that they mainly study what's useful to their clients, and from what I can glean from their book, those clients are mostly big and corporate. I think small organisations might fall outside their main target audience.
+1 to Paul Graham's essays.
Hi Matt—thanks for the suggestion. I agree that we should have a page like this. I've asked someone to take this on, but we've got a lot of things to update at the moment, so it won't go up immediately. In the meantime, CEA's team page has links to bios for most of the trustees here.