There’s probably something that I’m missing here, but:
Given that the dangerous AI capabilities are generally stated to emerge from general-purpose and agentic AI models, why don’t people try to shift AI investment into narrower AI systems? Or try to specifically regulate those systems?
Possible reasons:
This is harder than it sounds
General-purpose and agentic systems are inevitably going to outcompete other systems
People are trying to do this, and I just haven’t noticed, because I’m not really an AI person
Something else
Which is it?
General-purpose and agentic systems are inevitably going to outcompete other systems
There’s some of this: see this Gwern post for the classic argument.
People are trying to do this, and I just haven’t noticed
LLMs seem less agentic by default than the previous end-to-end RL paradigm. Maybe the rise of LLMs was an exercise in deliberate differential technological development. I’m not sure about this; it’s personal speculation.
This is too tangential to the forecasting discussion to justify being a comment there, so I’m putting it here:
Forecasting makes no sense as a cause area, because cause areas are problems: something like “people lack resources/basic healthcare/etc.” or “we might be building superintelligent AI and we have no idea what we’re doing”. Forecasting is more like a tool. People use forecasting to address AI, global poverty, and all sorts of other problems, including ones that aren’t major EA focuses.
For instance, we could treat vaccines as a cause area. We could act as though all the funding to AI-x-biosecurity work, GAVI’s campaigns for existing vaccines, and bird flu vaccine research were going toward the same thing, and then argue about whether vaccines meet the funding bar. But that would be a pretty pointless argument, when really all those projects are trying to do different things with similar tools.
So I’d rather judge AI forecasting by AI standards, general-purpose forecasting by metascience standards, and global development forecasting by global development standards, rather than lumping them together as a single entity. That said, I do side with the view that too much money and enthusiasm is being spent on forecasting, but it’s a weakly held view, and it doesn’t mean that no forecasting project is worth funding, or even that they’re all equally inflated.
I wrote up something for my personal blog about my relationship with effective altruism. It’s intended for a non-EA audience—at this point my blog subscribers are mostly friends and family—so I didn’t think it was worth cross-posting, since I spend a lot of time explaining what exactly effective altruism is, but some people might still be interested. My blog is mostly about books and whatnot, not effective altruism, but if I do write some more detailed stuff on effective altruism I’ll try to post it to the forum as well.
Do you like SB 1047, the California AI bill? Do you live outside the state of California? If you answered “yes” to both of these questions, you can e-mail your state legislators and urge them to adopt a similar bill for your state. I’ve done this and am currently awaiting a response; it really wasn’t that difficult. All it takes is a few links to good news articles or opinion pieces about the bill and a paragraph or two summarizing what it does and why you care about it. You don’t have to be an expert on every provision of the bill, nor do you need a group of people backing you. It’s not nothing, but at least for me it was a lot easier than it sounded like it would be. I’ll keep y’all updated on whether I get a response.
Both my state senator and my state representative have responded to say that they’ll take a look at it. It’s non-committal, but it still shows how easy it is to contact these people.
Are there any organizations out there whose niche is advising small and medium-sized donors? I can’t think of any, and I’m wondering why not. I’m not exactly sure what organizations that advise large donors actually do, but it seems plausible that some of those services would also be effective for smaller donors, simply because there are more of them. I’m thinking of, for instance:
tax law advice for effective giving
will writing advice
compiling resources on charity evaluations
conducting charity evaluations
There was a post about this last year: https://forum.effectivealtruism.org/posts/oFcLqTETnC8rajxeg/advisors-for-smaller-major-donors
tl;dr is (1) a lot of evaluators will do this for their cause area (can’t speak to every one but Giving Green is happy to advise donors of any size, just shoot us an email); (2) look into giving circles inside or outside EA
I’d add that it’s probably worth seeking a financial advisor for the tax law and will-writing type questions—a lot of EA advisories offer free initial services, but I’ve been told that total assets >100k is generally the point at which it makes sense to find an advisor
Can you define the class of “small/medium-sized donors” you have in mind? That means different things to different people.
I was being purposely kind of vague, but let’s say people donating <100k a year? Whatever’s too small for the organizations that advise large donors.