If the claim was that this is the best among public outreach interventions, the title is misleading. The post also doesn’t really compare deep questioning to other public outreach methods; it just justifies it on its own terms.
Opportunity costs for attention and time are the other things people could be doing, and it is common, and I think basically justifiable, to value people’s time at a level similar to their work salary. The reasoning is that, typically, even if people can’t make money during their free time, they are willing to spend money and give up other opportunities to get free time. If they want to use that time for deep questioning, that’s great, but if and when they do, they are explicitly valuing that use of their time over other options.
And I agree that some grassroots organizations could push this forward, but I worry doing it on behalf of an organization with an explicit agenda, even as a volunteer, might undermine the personal connection of deep questioning. As you said, “the interlocutors do not have the impression that the public outreacher is from an organization and tries to persuade them of something.” If they are, in fact, coming from an organization, that seems to be deeply deceptive.
Scaling Laws and Likely Limits to AI
I think this is a great thing to do, and have a general policy of approving of net positive actions. I also think that there are several convincing claims about why this is potentially tractable, including the arguments that it is fully additive, and that it avoids the need for coordination. Unfortunately, I also think that it fails to clear the bar for what we should expect in effective altruism in a couple ways.
First, I remain unconvinced that it’s a “top-effective” intervention. It’s unreasonable to say that it is cost-effective when there are opportunity costs which are not explored, and the actual impact is not quantified. As Vasco lays out below, there should be a stronger case that this is better than alternatives, or a clearer case that it should be done as an effective but ancillary activity which people can do in their free time, rather than as something that is as effective as The Humane League or other campaigns. To change that, I think there should be a clear explanation of what is required for this to succeed in individual cases (training, experience), an estimate of the time required, an estimate of the actual impact (what proportion of people change their behavior, how much does it change, how long does the change persist), and an exploration of how and when this could be net-negative (if done poorly, if it generates pushback when done at scale, etc.).
Second, I also think that it’s not ambitious enough on its own terms; if it is as effective as claimed, how can it be scaled up effectively? Should there be volunteer training groups to teach people to do this more widely? Can this be done via existing networks? Could there be a trial designed to measure impact?
To conclude, overall, I think that this is admirable and potentially tractable, but presented with misleadingly strong claims, and, as I outlined above, neither as clear on several points as it could be nor as ambitious as would be beneficial.
Misnaming and Other Issues with OpenAI’s “Human Level” Superintelligence Hierarchy
This makes a number of non-trivial assumptions and unsourced claims about a range of different issues, from the relative moral value of animals to the carrying capacity of different biomes; I know that many of these are seen as common wisdom in EA, but I think failing to lay them out greatly weakens the conclusions.
Also, some questions to think about: Why are insects ignored? How does the transition happen, legally or economically? What are the impacts of land use changes, and do farmers sell the land? (To whom?) Do social norms around meat undermine the viability of a transition?
{making humanity more safe VS shortening AGI timelines} is itself a false dichotomy or false spectrum.
Why? Because in some situations, shortening AGI timelines could make humanity more safe, such as by avoiding an overhang of over-abundant computing resources that AGI could abruptly take advantage of if it’s invented too far in the future (the “compute overhang” argument).
I think this also ignores the counterfactual world with less safety research, where the equivalent advances, funded by commercial incentives, come from less generalizable safety research, and we end up with less well (prosaically) aligned but similarly capable systems. (And I haven’t really laid out this argument before, but I think it generalizes to the counterfactual world without OpenAI or even Deepmind being inspired by AI safety concerns.)
I think it’s fine, if orgs are set up to receive the donations. If they have a “donate” button or page, they are set up to get the money, less credit card fees, etc. The problem is that setting up that sort of thing is anywhere from easy to legally very complex.
As someone who runs an organization that does a lot of biorisk work, it’s incredibly expensive in staff time and logistics to receive small donations—but if you’re giving more than, say, $5,000, you could just email the organizations to ask, and I’m sure they could figure it out.
But as I answered, CHS does have a donation page. (And NTI does allow donations, with a box to indicate where you’d like the money to go, but it’s unclear to me if that actually lets you direct it only to bio.)
Any chance we’ll see some evaluation of prior grants?
ALTER Israel − 2024 Mid-Year Update
LLMs are not AGIs in the sense being discussed; they are at best proto-AGI. That means the logic fails at exactly the point where it matters.
When I ask a friend to give me a dollar when I’m short, they often do so. Is this evidence that I can borrow a billion dollars? Should I go on a spending spree on the basis that I’ll be able to get the money to pay for it from those friends?
When I lift, catch, or throw a 10 pound weight, I usually manage it without hurting myself. Is this evidence that weight isn’t an issue? Should I try to catch a 1,000 pound boulder?
No one is really suggesting that a unilateral “pause” would be effective, but there is growing support for some non-unilateral version as an important approach to be negotiated.
There was a quite serious discussion of the question, and different views, on the forum late last year (which I participated in), summarized by Scott Alexander here: https://forum.effectivealtruism.org/posts/7WfMYzLfcTyDtD6Gn/pause-for-thought-the-ai-pause-debate
Confirmed; he does work in this area, there’s independent reporting about his work on these topics, and he has a Substack about his very relevant legal work: https://www.nlrbedge.com/
Do you have any comment on the idea that nondisparagement clauses like this could be found invalid for being contrary to public policy? (How would that be established?)
I think there are useful analogies between specific aspects of bio, cyber, and AI risks, and it’s certainly the case that when the biorisk is based on information security, it’s very similar to cybersecurity, not least in that it requires cybersecurity! And the same is true for AI risk; to the extent that there is a risk of model weights leaking, this is in part a cybersecurity issue.
So yes, I certainly agree that many of the dissimilarities with AI are not present if analogizing to cyber. However, more generally, I’m not sure cybersecurity is a good analogy for biorisk, and have heard that computer security people often dislike the comparison of computer viruses and biological viruses for that reason, though they certainly share some features.
Biorisk is an Unhelpful Analogy for AI Risk
The gutting of the FTT token is customers losing money on their investments, not customer losses via FTX’s loss of custodial funds or tokens, though, isn’t it?
An Alameda exile told Time that SBF “didn’t have a distinction between firm capital and trading capital. It was all one pool.” That’s at least a badge of fraud (commingling).
Alameda was a prop trading firm, so there isn’t normally any distinction between those. The only reason this didn’t apply was that there was a third bucket of funds, pass-through custodial funds that belonged to FTX customers, which they evidently didn’t pass through due to poor record keeping. That’s not so much indicative of fraud as indicative of incompetence.
I think it’s useful to distinguish between industrial policy, regulation, and nationalization, and your new term seems to be somewhere in between. I think your model is generally useful, but at the same time, introducing a new term without being very clear about what it means in relation to existing terms is probably more confusing than clarifying.