Thanks a lot and good point, edited to include full names and links!
The Compendium, A full argument about extinction risk from AGI
Conjecture: A standing offer for public debates on AI
Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes
Retrospective on the 2022 Conjecture AI Discussions
AGI in sight: our look at the game board
Conjecture: Internal Infohazard Policy
For more information on the current funding situation, here is OpenPhil’s latest update indicating that their assets are down ~40%, limiting growth in their commitment compared to previous projections, and GiveWell’s latest funding projection update indicating that they “don’t expect to have enough funding to support all the cost-effective opportunities [they] find”.
Thank you very much for sharing this honest account of your experience!
Failure is an inherent byproduct of taking more risks, and it’s really hard to write openly about career near-misses like this one, as failing (really) hurts.
However, accounts of failures, near-misses, and missed opportunities are extremely valuable for all of us to learn from and move forward, especially as we embark on more ambitious high-risk, high-reward projects.
This post is especially valuable as it also sheds more light on paths towards high-impact political roles, so thanks again!
Completely agree! Although I imagine that the situation will change soon due to 1) the last funding decisions being finalized, 2) funded projects coming out of stealth mode, 3) more rejected applicants posting their applications publicly (when there are few downsides to doing so), and 4) the Future Fund publishing a progress report in the coming months.
So I expect the non-disclosure issue to be significantly reduced in the next few months.
Thanks for sharing the presentation, great work!
Regarding the third question from the audience, “What kind of resource could we share to a random person on the street if we want to introduce them to AI x-risk?”, in addition to the resources you mention I think Stuart Russell’s 2021 BBC Reith Lectures, “Living with Artificial Intelligence”, are an excellent introduction for a generalist audience.
In addition to being accessible, the talks carry the institutional gravitas of a prestigious BBC lecture series delivered by an established academic, which makes them more likely to persuade a generalist audience.
I think the marginal value of a pre-order of What We Owe the Future is much higher than a pre-order of Gates’s book, as Gates’s book has a much higher baseline probability of ending up as a bestseller and receiving significant press coverage thanks to Gates’s fame.
As usual, it would be great to see downvotes accompanied by reasons for downvoting, especially in the case of NegativeNuno’s comments, since it’s an account literally created to provide frank criticism with a clear disclaimer in its bio.
Thanks for this writeup! I definitely never thought about snakebites as a major issue before, despite its similarity to “obvious” global health issues like malaria.
This issue is gaining the attention of EU policymakers, including MEPs.
On April 20, an MEP from the Greens/EFA political group tabled a parliamentary question on the issue, citing recent research reviews to note that high-welfare octopus farming is impossible.
He asks whether the European Commission can “confirm the incompatibility of commercial octopus farming investments with the ‘do no significant harm’ principle, which underpins the EU’s sustainable finance policies and is the basis for EU taxonomy”.
The Fabian Society even went ahead with one of the megaprojects currently being discussed in EA: founding a new university.
In 1895, Fabian Society members Beatrice and Sidney Webb, Graham Wallas, and George Bernard Shaw established the London School of Economics and Political Science to improve social science education and address what they saw as the world’s most pressing problems of the time.
Thank you very much for writing this! It’s very timely advice for many people, as new organizations are being (or will soon be) set up thanks to the Future Fund’s funding decisions (e.g., another comment on this post).
I also couldn’t find much information on campus recruitment expenses for top firms. However, according to the US National Association of Colleges and Employers (NACE), in 2018 the average cost-per-hire from US universities was $6,110.
FAANG and other top-tier employers are likely to spend much more than the average.
The fact that current (highly engaged) EAs mostly come from well-off backgrounds can also be a good argument in favor of more funding for career building for students and recent graduates, though.
EAs from less affluent backgrounds are the ones who benefit most from career-building and exploration funding, as they are the people most likely to face financial and other bottlenecks that prevent them from doing impactful work. Reducing career-building funding would just reinforce the trend: only well-off EAs who can afford to take risks would stay engaged, while EAs from less affluent backgrounds would be more likely to drift out of the community and less likely to take riskier but more impactful career paths.
As you say, the solution would be to effectively assess whether career building has counterfactual impact, and ideally even to fine-tune the funding amount to specific circumstances, although that could create weird and undesirable incentives on the applicants’ side.
Thanks, great to hear you found it useful!
As you mention, the export controls are aimed at, and have the primary effect of, differentially slowing down a specific country’s AI development, rather than AGI development overall.
This has a few relevant side effects, such as reduced proliferation and competition, but doesn’t slow down the frontier of overall AGI development (nor does it aim to do so).