Forecasting Newsletter by Nuño Sempere
MathiasKB
Excerpt from the most recent update from the ALERT team:
Highly pathogenic avian influenza (HPAI) H5N1: What a week! The news, data, and analyses are coming in fast and furious.
Overall, ALERT team members feel that the risk of an H5N1 pandemic emerging over the coming decade is increasing. Team members estimate that the chance that the WHO will declare a Public Health Emergency of International Concern (PHEIC) within 1 year from now because of an H5N1 virus, in whole or in part, is 0.9% (range 0.5%-1.3%). The team sees the chance going up substantially over the next decade, with the 5-year chance at 13% (range 10%-15%) and the 10-year chance increasing to 25% (range 20%-30%).
Their estimated 10-year risk is a lot higher than I would have anticipated.
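For intuition, it helps to compare the team's multi-year estimates against what their 1-year estimate would imply if the annual risk stayed constant. A quick sketch (the 0.9%, 13%, and 25% figures come from the excerpt above; the constant-hazard assumption is mine, for comparison only):

```python
# Compare the ALERT team's multi-year PHEIC estimates against what a
# constant annual hazard (their 1-year estimate of 0.9%) would imply.
p1 = 0.009  # team's 1-year PHEIC probability

def cumulative_risk(annual_p: float, years: int) -> float:
    """Probability of at least one event in `years` years,
    assuming an independent, constant annual probability."""
    return 1 - (1 - annual_p) ** years

print(f"5-year at constant hazard:  {cumulative_risk(p1, 5):.1%}")   # ~4.4%, vs. the team's 13%
print(f"10-year at constant hazard: {cumulative_risk(p1, 10):.1%}")  # ~8.6%, vs. the team's 25%
```

The gap between ~8.6% and 25% shows the team isn't just compounding a flat annual risk; they expect the yearly hazard itself to rise substantially over the decade.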
write, write, write.
I suspect the primary reasons you want to break up DeepMind from Google are to:
Increase their autonomy, reducing pressure from Google to race
Reduce DeepMind's access to capital and compute, reducing their competitiveness
Perhaps that goes without saying, but I think it's worth explicitly mentioning. In a world without AI risk, I don't believe you would be citing various consumer harms to argue for a breakup.
The traditional argument for breaking up companies and preventing mergers is to reduce the company's market power, thereby increasing consumer surplus. In this case, the implicit reason for breaking up DeepMind is to decrease its competitiveness, thus reducing consumer surplus.
I think it’s perfectly fine to argue for this, I just really want us to be explicit about it.
I’m awestruck, that is an incredible track record. Thanks for taking the time to write this out.
These are concepts and ideas I regularly use throughout my week and which have significantly shaped my thinking. A deep thanks to everyone who has contributed to FHI, your work certainly had an influence on me.
I think I’m sympathetic to Oxford’s decision.
By the end, the line between genuine scientific inquiry and activist ‘research’ got quite blurry at FHI. I don’t think papers such as ‘Proposal for a New UK National Institute for Biological Security’ belong in an academic institution, even if I agree with the conclusion.
One thing that stood out to me reading the comments on Reddit was how much of the poor reception could have been avoided with slightly clearer communication.
For people such as MacAskill, who are deeply familiar with effective altruism, the question “Why would SBF pretend to be an effective altruist if he was just looking to commit fraud?” is quite the conundrum. Of all the types of altruism, why specifically pick EA as the vehicle for laundering your reputation? EA was already unlikeable and elitist before the scandal. Why not donate to puppies and Harvard like everyone else?
I actually admire MacAskill for asking that question. The easy out would be to say: “How could we have been so foolish? SBF was clearly never a real EA.” But he instead grapples with the fact that SBF seems to have been genuinely motivated by effective altruism, and that these ideals must have played some part in SBF’s decision to commit fraud.
But for any listener who is not as deeply familiar with the effective altruism movement, and doesn’t know its reputation, the question comes off as hopelessly naive. The emphasis they hear is: “Why would SBF, a fraudulent billionaire, pretend to be an Effective Altruist?” The answer to that is obvious—malicious actors pretend to be altruistic all the time!
I see EA communication make this mistake all the time. A question or idea whose merit is obvious to you might not be obvious to everyone else if you don’t spell out the assumptions it rests on.
I think I am misunderstanding the original question then?
I mean if you ask: “what you all think about the series as an entry point for talking about some of these EA issues with friends, family, colleagues, and students”
then the reach is not the 10 million people watching the show, it’s the people you get a chance to speak to.
Wasn’t the Future Fund quite explicitly about longtermist projects?
I mean, if you worked for an animal foundation and were on a call about GiveDirectly, I can understand that somebody might say: “Look, we are an animal fund; global poverty is outside our scope.”
Obviously saying “I don’t care about poverty”, or something sufficiently close that your counterpart remembers it as that, is not ideal, especially not when you’re speaking to an ex-minister of the United Kingdom.
But before we get mad at those who ran the Future Fund, please consider that there’s much context we don’t have. Why did this call get set up in the first place? I would expect there to be screening mechanisms in place to prevent this kind of mismatch. What Rory remembers might not be what the Future Fund grantmaker remembers, and there might have been a mismatch between the very blunt ‘SF culture’ the Future Fund operated by and what an ex-minister expects.
That said, I have a very positive impression of Rory Stewart, and it saddens me to hear our community gave him this perception. Had I been in his shoes, I’m not sure I would have thought any differently.
I’m working on an article about gene drives to eradicate malaria, and am looking for biology experts who can help me understand certain areas I’m finding confusing and fact check claims I feel unsure about.
If you are a masters or grad student in biology and would be interested in helping, I would be incredibly grateful.
An example of a question I’ve been trying to answer today:
How likely is successful crossbreeding between species within the Anopheles gambiae complex (such as Anopheles gambiae s.s. and Anopheles arabiensis), and how likely is successful crossbreeding between Anopheles gambiae and species outside the complex?
If you know the answer to questions like these, or would have an easy time finding it out, send me a DM! Happy to pay for your time.
a devastating argument, years of work wasted. Why oh why did I insist that the book’s front cover had to be a snowman?
I think it’s a travesty that so many valuable analyses are never publicly shared, but due to unreasonable external expectations it’s currently hard for any single organization to become more transparent without incurring enormous costs.
If Open Phil actually were to start publishing their internal analyses behind each grant, I will bet you at good odds that the following scenario plays out on the EA Forum:
Somebody digs deep into a specific analysis. It turns out Open Phil’s analysis has several factual errors that any domain expert could have alerted them to; additionally, they entirely failed to consider some important aspect which may change the conclusion.
Somebody in the comments accuses Open Phil of shoddy and irresponsible work. That they are making such large donation decisions based on work filled with errors proves their irresponsibility. Moreover, why have they still not responded to the criticism?
A new meta-post argues that the EA movement needs reform, and uses the above as one of several examples showing the incompetence of ‘EA leadership’.
Several things would be true about the above hypothetical example:
Open Phil’s analysis did, in fact, have errors.
It would have been better for Open Phil’s work not to have those errors.
The errors were only found because they chose to make the analysis public.
The costs for Open Phil to reduce the error rate of their analyses would not be worth the benefits.
These mistakes were found at no cost (outside of reputation) to the organization.
Criticism shouldn’t have to warrant a response if it takes time away from more important work. The internal analyses from Open Phil I’ve been privileged to see were pretty good. They were also made by humans, who make errors all the time.
In my ideal world, every one of these analyses would be open to the public. As with open-source programming, people would be able to contribute to every analysis, fixing bugs, adding new insights, and updating old analyses as new evidence comes out.
But like an open-source programming project, there has to be an understanding that no repository is ever going to be bug-free or have every feature.
If Open Phil shared all their analyses and nobody was able to discover important omissions or errors, my main conclusion would be that they are spending far too much time on each analysis.
Some EA organizations are held to impossibly high standards. Whenever somebody points this out, a common response is: “But the EA community should be held to a higher standard!” I’m not so sure! The bar is where it’s at because it takes significant effort to raise it. EA organizations are subject to the same constraints as the rest of the world.
More openness requires a lowering of expectations. We should strive for a culture that is high in criticism, but low in judgement.
Agree, I suspect most people downvoted it because they inferred it was a leading question.
I haven’t seen the series, but am currently halfway through the second book.
I think it really depends on the person. The person I imagine watching The Three-Body Problem, getting hooked, and subsequently pondering how it relates to the real world seems like someone who would also get hooked by just being sent a good LessWrong post.
But sure, if someone mentioned to me that they watched and liked the series and they don’t already know about EA, I think it could be a great way to start a conversation about EA and longtermism.
Relevant to the discussion is a recently released book by Dirk-Jan Koch, who was Chief Science Officer in the Dutch Foreign Ministry (which houses their development efforts). The book, Foreign Aid and Its Unintended Consequences, explores the second-order effects of aid and their implications for effective development assistance.
In some ways, the arguments for focusing more on second-order effects are similar to those of the famous ‘Growth and the case against randomista development’ forum post.
The West didn’t become wealthy through marginal health interventions, so why should we expect this for Sierra Leone or Bangladesh?
Second-order effects are important and should be given as much consideration as first-order effects. But arguing that second-order effects are more difficult to predict, and that we therefore shouldn’t do anything, falls prey to the Copenhagen Interpretation of Ethics.
Just FYI, Dean Karlan doesn’t run USAID; he’s Chief Economist. Samantha Power is the Administrator of USAID.
I think Bryan Caplan is directionally correct, but his argumentation in this post is incredibly sloppy.
A Marxist communist could make the exact same complaint as Bryan Caplan, but with the signs flipped. Why do all these economists focus on RCTs for educational interventions, and never once consider that the best educational intervention is to rise up in violent revolution and overthrow our capitalist oppressors?
I don’t recall any of the RCT papers I’ve read being particularly heavy on normative claims. Usually they’ll just say:
“this intervention had a measurable effect on X, so policy makers interested in improving X should consider it part of the tool kit”
or
“this intervention didn’t have an effect on X, so policy makers interested in improving X should not do this”
Which seems completely reasonable to me. They aren’t quietly rejecting the question; they largely are just not engaging with normative questions of what policy makers ought to do. RCTs are a way of taking ideology out and focusing strictly on empirical questions.
Consider joining hackathons such as the ones organized by Apart Research. Anyone can join and get to work on problems directly related to AI Safety.
If you do a good project, you can put that on your resume and have something to speak about at your next interview.
I think there are at least two categories:
The beginner, who is scared of ridicule.
The senior person, who doesn’t have time to write to the forum’s standard without risking their reputation.
I’m more interested in what we can do to encourage the latter group. My impression is that many senior people are reluctant to post, as they don’t have time to write something sufficiently well-argued and respond to the comments.
Instead, many good discussions take place in Signal groups, Google Docs, and email threads. In a perfect world, these discussions would happen on the forum. The issue right now is that if those conversations took place on the forum, too many people would chime in with long, eloquently written, but wrong arguments, which the subject matter experts would then have to spend additional time shutting down, or else look like they are dodging a hard question.
Additionally, lowering their bar for public engagement puts them at risk of attack. There are people reading this forum just to find ammunition for hit pieces. A poorly worded comment from the leader of an organization will be used against them.
I’m grappling with this exact issue. I think AI is the most important technology humanity will ever invent, but I’m skeptical of the EV of much work on the technology. It still seems like the only reasonable thing to spend all my time thinking about, but even then I’m not sure I’d arrive at anything useful.
And the opportunity cost is saving hundreds of lives. I don’t think there is any other question that has cost me as much sleep as this one.