Also, if you look at the current US administration and its priorities and … they’re certainly not Singaporean or particularly interested in x-risk mitigation
David T
Feels like the most straightforwardly rational argument for portfolio diversification is that your EV and probability estimates almost certainly aren’t the accurate (or at least unbiased) estimators they would need to be for the optimal strategy to be sticking everything on the highest-EV option. Even more so when the probability that a given EV estimate is accurate is unlikely to be uncorrelated with whether it scores particularly highly (the good old optimiser’s curse, with a dose of wishful thinking thrown in). Financiers don’t trust themselves to be perfectly impartial about stuff like commodity prices in central Asia or binary bets on the value of the yen on Thursday, and it seems unlikely that people who are extremely passionate about the causes they and their friends participate in, ahead of a vast range of other causes that nominally claim to do good, achieve a greater level of impartiality. Pascalian odds seem particularly unlikely to be representative of the true best option (in plain English, a 0.0001% subjective probability assessment of a one-shot event roughly means “I don’t really know what the outcome of this will be, and it seems like there could be many, many things more likely to achieve the same end”). You can make the assumption that if an option appears to be robustly positive and neglected it might deserve funding anyway, but that is a portfolio argument...
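To put a rough toy number on the optimiser’s curse point, here’s a minimal simulation (every parameter is a made-up assumption for illustration, not an estimate of anything real): if twenty causes are genuinely equally good but we rank them by noisy subjective EV estimates, the one that comes out on top looks substantially better than it actually is.

```python
import random

# Toy illustration of the optimiser's curse. Every cause has the same true EV,
# but we only see noisy estimates; picking the cause with the highest estimate
# systematically overstates how good the chosen option really is.
# All numbers below are arbitrary assumptions for illustration.

random.seed(0)
N_CAUSES = 20
TRUE_EV = 1.0       # every cause is actually equally good
NOISE_SD = 0.5      # spread of the error in our subjective EV estimates
TRIALS = 10_000

total_overestimate = 0.0
for _ in range(TRIALS):
    estimates = [TRUE_EV + random.gauss(0, NOISE_SD) for _ in range(N_CAUSES)]
    best_estimate = max(estimates)
    total_overestimate += best_estimate - TRUE_EV  # gap between the "winning" estimate and reality

print(f"Average overestimate of the top-ranked cause: {total_overestimate / TRIALS:.2f}")
# Typically prints roughly 0.9, i.e. the selected option appears nearly twice
# as valuable as it really is, purely because it won a noisy comparison.
```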
Doesn’t this depend on what you consider the “top tier areas for making AI go well” (which the post doesn’t seem to define)? If that happens to mean AI safety research institutes focused specifically on preventing “AI doom” via approaches you consider non-harmful, then naively I’d expect nearly all of them to be aligned with the movement focused on that priority, given that those are relatively small niches, the OP and their organisation and the wider EA movement are actively nudging people into them on the EA assumption that they’re the top-tier ones, and anyone looking more broadly at AI as a professional interest will find a whole host of lucrative alternatives where they won’t be scrutinised on their alignment at interview and can go and make cool tools and/or lots of money on options.
If you define it as “areas which have the most influence on how AI is built” then those are more the people @titotal was talking about, and yeah, they don’t seem particularly aligned with EA, not even the ones that say safety-ish things as a marketing strategy and took money from EA funds.
And if you define “safety” more broadly, there are plenty of other AI research areas focusing on stuff like cultural bias or job market impact. But you and your organisation and 80,000 Hours probably don’t consider them top tier for effectiveness, and (not coincidentally) I suspect these have very low proportions of EAs. Same goes for defence companies who’ve decided the “safest” approach to AI is to win the arms race. Similarly, it’s no surprise that people who happen to be very concerned about morality and utilitarianism and doing the best they can with their 80k hours of working life, but who get their advice from Brutger, don’t become AI researchers at all, despite the similarities of their moral views.
Got to agree with the AI “analysis” being pretty limited, even though it flatters me by describing my analysis as “rigorous”.[1] It’s not a positive sign that this news update and jobs listing is flagged as having particularly high “epistemic quality”.
That said, I enjoyed the bits in the ‘egregore’ section about the “ritualistic displays of humility”, “elevating developers to a priesthood” and the “compulsive need to model, quantify, and systematize everything, even with acknowledged high uncertainty and speculative inputs ⇒ illusion of rigor”.[2] Gemini seems to have absorbed the standard critiques of EA and rationalism better than many humans, including humans writing criticisms of and defences of those belief systems. It’s also not wrong.
Its poetry is still Vogon-level though.
- ^
For a start, I think most people reading our posts would conclude that Vasco and I disagree on far too much to be considered “intellectually aligned”, even if we do it mostly politely by drilling down to the details of each other’s arguments
- ^
OK, if my rigour is illusory maybe that compliment is more backhanded than I thought :)
Fair. I agree with this.
There are plenty of entities who aren’t EAs doing that sort of lobbying already anyway.
There are some good arguments that in some cases, developing countries can benefit from protecting some of their own nascent industries.
There are basically no arguments that the developed world putting tariffs (or anti-dumping duties) on imports helps the developing world, which is the harmful scenario Karthik discusses in his article as an example of Nunn’s argument that rich countries should stop doing things that harm poorer countries. Developed countries know full well these limit poorer countries’ ability to export to them… but that’s also why they impose them.
At face value that might seem the case. In practice, Reform is a party dominated by a single individual, who enjoys promoting hunting and deregulation and criticising the idea of vegan diets: he’s not exactly the obvious target for animal welfare arguments, particularly not when it’s equally likely a future coalition will include representatives of a Green Party.
The point in the original article about conservatives and country folk being potentially sympathetic to arguments for restrictions on importing meat from countries with lower animal welfare standards is a valid one, but it’s the actual Conservative Party (who will be present in any coalition Reform needs to win, and have a yawning policy void of their own) that fits that bracket, not the upstart “anti-woke”, pro-deregulation party whose core message is a howl of rage about immigration. Farage’s objections to the EU were around the rules, not protectionism, and he’s actually highly vocal on the need to reduce restrictions on the import of meat from the US, which has much lower standards in many areas. Funnily enough, Farage’s political parties have had positions on regulating stunning animals for slaughter, but the targeting of slaughtering practices associated with certain religions might have been for… other reasons, and Farage rowed back on it[1]
- ^
halal meat served in the UK is often pre-stunned, whereas kosher meat isn’t, so the culture war arguments for mandatory stunning hit the wrong target....
I thought I was reasonably clear in my post, but I will try again. As far as I understand, your argument is that the items in the tiers are heuristics people might use to determine how to make decisions, and the “tiers” represent how useful/trustworthy they are at doing that (with stuff in lower tiers like “folk wisdom” being not that useful and stuff in higher tiers like RCTs being more useful).
But I don’t really see “literacy” or “math”, broadly construed, as methods to reach any specific decision; they’re simply things I might need to understand actual arguments (and for that matter I am convinced that people can use good heuristics whilst being functionally illiterate or innumerate). The only real reason I can think of for putting them at the top is “many people argue that trusting (F-tier) folk wisdom is bad, there are some good arguments about not overindexing on (B-tier) RCTs, and there are few decent arguments on principle against (S-tier) reading or adding up”, despite the fact that literacy helps genocidal grudges as well as scientific knowledge to spread. I agree with this, but I don’t think it illustrates very much that can be used to help me make better decisions as an individual. Because what really matters, if I’m using my literacy to help me make a decision, is what I read and which of the things I read I trust, much more than whether I can trust that I’ve parsed it correctly. Likewise, I think which thought experiments I’m influenced by matters more than the idea that thought experiments are (possibly) less trustworthy at helping me make decisions than a full-blown philosophical framework, or more trustworthy than folk wisdom.
FWIW I think the infographic was fine and would suggest reinstating it (I don’t think the argument is clearer without it, and it’s certainly harder for people to suggest methods you might have missed if you don’t show the methods you included!)
Your linkpost also strips most of the key parts from the article, which I suspect some of the downvoters missed
But Gebru and Torres don’t object to “the entire ideology of progress and technology” so much as accuse a certain [loosely-defined] group of making nebulous fantasy arguments about progress and technology to support their own ends, suggest they’re bypassing a load of lower-level debates about how actual progress and technology is distributed, and accuse them of being racist. It’s a subset of the “TESCREALs” who want AI development stopped altogether, and I don’t think they’re subliminally influenced by ancient debates on divine purpose either.
It’s something of an understatement to suggest that it’s not just Catholics and Anglicans opposed to ideas they disagree with gaining too much power and influence,[1] and it would be even more tendentious to argue that secular TESCREALs’ interest in shaping the future and consequentialism is aligned in any way with Calvinist predestination.
If Calvin were to encounter any part of the EA movement he’d be far more scathing than Gebru and Torres or people writing essays about how utilitarianism is bunk.[2] Maybe TESCREALism is just anti-Calvinism ;) …
- ^
Calvin was opposed to them too, although he believed heretics should suffer the death penalty rather than merely being invited to read thousand word blogs and papers about how they were bad people.
- ^
and be equally convinced that the e-accelerationists and Timnit and Emile were condemned to eternal damnation.
I didn’t downvote or disagreevote, but I’m not sure the logic of the rankings is well explained. I get the idea that concepts in the lowest tiers are supposed to be of more limited value, but I’m not sure why the very top tiers are literacy/mathematics: by themselves they almost never point to any particular conclusions, but are merely prerequisites to using some other method to reach a decision. Is the argument that few people would dispute that literacy and mathematics should play some role in making decisions, whereas the value of ‘divine revelation’ is hotly disputed and the validity of natural experiments debatable? That makes sense, but it feels like it needs more explanation.
E.g., most members of the Democratic party in the US would endorse “social safety nets, universal health care, equal opportunity education, respect for minorities” but would not self-identify as socialist
Many mainstream European politicians would, though, whilst happily coexisting with capitalism. Treating “socialism” as an extremist concept which even people whose life mission is to expand social safety nets shy away from is US exceptionalism; in the rest of the world it’s a label embraced by a broad enough spectrum to include both Tony Blair and Pol Pot. So it’s certainly of value to narrow that definition down a bit. :)
It certainly reads better as satire than intellectual history. A valid criticism of the idea of “TESCREALism” is that bundling together a long list of niche ideas, just because they involve overlapping people hanging out in overlapping niche corners of the web (and in California) to debate related ideas about the future and their own cleverness, doesn’t actually make it a coherent *thing*, given that lots of the individual representatives of those groups have strong disagreements with each other and the average EA probably doesn’t know what cosmism is.
On the other hand, it’s difficult to take seriously the idea that secular intellectuals who find the Singularity and some of its loudest advocates a bit silly, and some of the related ideas pushed a bit sus, are covertly defending a particular side of a centuries-old debate in Christian theology...
Feels like the argument you’ve constructed is a better one than the one Thiel is actually making, which seems to be a very standard “evil actors often claim to be working for the greater good” argument with a libertarian gloss. Thiel doesn’t think redistribution is an obviously good idea that might backfire if it’s treated as too important, he actively loathes it.
I think trying too hard to do good and ending up doing harm is absolutely a failure mode worth considering, but one with far more value in the context of specific examples. It seems like quite a common theme in AGI discourse (it follows from standard assumptions like AGI being near and potentially either incredibly beneficial or destructive, research or public awareness either potentially solving the problem or starting a race, etc.), and the optimiser’s curse is a huge concern for EA cause prioritization overindexing on particular data points. Maybe that deserves (even) more discussion.
But I don’t think a guy who doubts we’re on the verge of an AI singularity and couldn’t care less whether EAs encourage people to make the wrong tradeoffs between malaria nets, education and shrimp welfare adds much to that debate, particularly not with a throwaway reference to EA in a list of philosophies popular with the other side of the political spectrum that he thinks are basically the sort of thing the Antichrist would say.
I mean, he is also committed to the somewhat less insane-sounding “growth is good even if it comes with risks” argument, but you can probably find more sympathetic and coherent and less interest-conflicted proponents of that view.
“Pro-natalists” do, although that tends to be more associated with specific ideas that the world needs more people like them (often linked to religious or nationalistic ideas) than EA. The average parent tends to think that bringing up a child is [one of] the most profound ways they can contribute to the world, but they’re thinking more in terms of effort and association than effect size.
I also think it’s pretty easy to make a case that having lots of children (who in turn have descendants) is the most impactful thing you could do, based on certain standard longtermist assumptions (large possible future, total utilitarian axiology, human lives generally net positive) and uncertainty about how to prevent human extinction, but I’m not aware of a strand of longtermism that actually preaches or practices this, and I don’t think it’s a particularly strong argument.
Yeah. Frankly, of all the criticisms of EA that might easily be turned into something more substantial, accurate and useful with a little bit of reframing, a liberalism-hating surveillance-tech investor dressing up his fundamental loathing of its principles, and his opposition to the limits it might impose on tech he actively promotes, in pretentious pseudo-Christian allusion seems least likely to add any value.[1]
Doesn’t take much searching of the forum to find outsider criticisms of aspects of the AI safety movement which are a little less oblique than comparing it with the Antichrist, written by people without conflicts of interest who’ve probably never written anything as dumb as this, most of which seem to get less sympathetic treatment.
- ^
and I say that as someone more in agreement with the selected Thiel pronouncements on how impactful and risky near-term AI is likely to be than the average EA
tbf to the AI 2027 article, whilst it makes a number of contentious arguments, its actual titles and subtitles seem quite low-key.
But I do agree with the meta point that norms of socially punishing only critics for the boldness of their claims are counterproductive, and that norms of careful hedging can result in actual sanewashing of nonsense (“RFK advances novel theory about causes of autism; some experts suggest other causes”).
Good post, Nick. I think the question mark about the timing of the experiment, considering cuts to many robustly good programmes, is a particularly good one.
I don’t think the Centre for Effective Aid Policy is a particularly accurate comparison, as I think there’s a significant difference between the likely effectiveness of a new org lobbying Western governments to give money to different causes (against sophisticated lobbyists for the status quo and government-defined “soft power” priorities) and orgs with established relationships providing technical recommendations to improve healthcare outcomes to LEDC governments that actually express interest in using them. I think the lack of positive findings in the wider literature you link to is more interesting, although I suspect the outcomes are highly variable depending on the level of government engagement, the competence of the organizations, the magnitude of the problems they purport to solve and whether the shifts they are promoting are even in the right direction. It would be interesting in that respect to see how GiveWell evaluated the individual organizations. I do agree that budgeting dashboards don’t necessarily seem like an area relatively highly paid outsiders are best placed to optimise.
I suspect the high cost reflects use of non-local staff, which of course has a mixture of advantages and disadvantages beyond the higher cost.
I’m sceptical of the value of RCTs between nations that have different healthcare policies and standards and bureaucracies to start with (particularly as I don’t think there’s a secular global trend in the sort of outcomes TSUs are supposed to achieve, and collecting data on some of them feels like it would involve nearly as much effort as actually providing the recommendations). A lot of policy and government optimization work—effective or otherwise—is hard to RCT, especially at national level. Which doesn’t mean there can’t be more transparency and non-RCT metrics.
By the “scope of longtermism”, I took Thorstad’s reference to a “class of decision situations” to mean the permutations to be evaluated (maximising welfare, maximising human proliferation, minimising suffering, etc.) rather than categories of basic actions (spending, voting, selecting clothing).[1] I’m not actually sure it makes a difference to my interpretation of the thrust of his argument (diminution, washing out and unawareness mean solutions whose far-future impact swamps short-term benefits are vanishingly rare and generally unknowable) either way.
Sure, Thorstad starts off by conceding that under certain assumptions about the long-term future,[2] a low-probability but robustly positive action like preparing to stop asteroids from hitting earth, which indirectly enables benefits to accrue over the very long term, absolutely can be a valid priority.[3] But it doesn’t follow that one should prioritise the long-term future in every decision-making situation in which money is given away. The funding needs of asteroid monitoring sufficient to alert us to impending catastrophe are plausibly already met,[4] and his core argument is that we’re otherwise almost always clueless about what the [near] best solution for the long-term future is. It’s not a particularly good heuristic to focus spending on outcomes you are most likely to be clueless about, and a standard approach to the accumulation of uncertainty is to discount for it, which of course privileges the short term.
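As a minimal sketch of that last point (the 1% annual “model failure” rate below is purely an assumption for illustration, not a claim about any real intervention): if there is some small chance each year that the causal chain you are relying on simply stops holding, the weight you can put on projected far-future benefits decays geometrically, which is exactly why discounting for accumulated uncertainty privileges the short term.

```python
# Rough sketch: geometric decay of confidence in a long causal chain.
# ANNUAL_MODEL_FAILURE_RATE is an arbitrary illustrative assumption.

ANNUAL_MODEL_FAILURE_RATE = 0.01

def weight(years_ahead: int) -> float:
    """Probability the assumed chain of impact is still intact after N years."""
    return (1 - ANNUAL_MODEL_FAILURE_RATE) ** years_ahead

for horizon in (10, 100, 1_000, 10_000):
    print(f"{horizon:>6} years ahead: weight on projected benefits = {weight(horizon):.3g}")

# Roughly: 10 years ~0.90, 100 years ~0.37, 1,000 years ~4e-5, 10,000 years ~2e-44,
# so very distant benefits need astronomically large payoffs just to break even.
```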
I mean, I agree that Thorstad makes no dent in arguments to the effect that if there is an action which leads to positive utility sustained over a very long period of time for a very large number of people it will result in very high utility relative to actions which don’t have that impact: I’m not sure that argument is even falsifiable within a total utilitarian framework.[5] But I don’t think his intention is to argue with [near] tautologies, so much as to insist that the set of decisions which credibly result in robustly positive long term impact is small enough to usually be irrelevant.
- ^
all of which can be reframed in terms of “making money available to spend on priorities” in classic “hardcore EA” style anyway...
- ^
Some of the implicit assumptions behind the salience of asteroid x-risk aren’t robust: if AI doomers are right, then that massive positive future we’re trying to protect looks a lot smaller. On the other hand, compared with almost any other x-risk scenario, asteroids are straightforward: we don’t have to factor in the possibility of asteroids becoming sneaky in response to us monitoring them, or attach much weight to the idea that informing people about asteroids will motivate them to try harder to make one hit the earth.
- ^
you correctly point out his choice of asteroid monitoring service is different from Greaves and MacAskill’s. I assume he does so partly to steelman the original, as the counterfactual impact of a government agency incubating the first large-scale asteroid monitoring programme is more robust than that of the marginal donation to NGOs providing additional analysis. And he doesn’t make this point, but I doubt the arguments that decided its funding actually depended on the very long term anyway....
- ^
this is possibly another reason for his choice of asteroid monitoring service...
- ^
Likewise, pretty much anyone familiar with total utilitarianism can conceive of a credible scenario in which the highest total utility outcome would be to murder a particular individual (baby Hitler etc.), and I don’t think it would be credible to insist such a situation could never occur or never be known. This would not, however, fatally weaken arguments against the principle of “murderism” that focused on doubting there were many decision situations where murder should be considered as a priority.
I don’t see how Thorstad’s claim that the Space Guard Survey is a “special case” in which a strong longtermist priority is reasonable (and that other longtermist proposals do not have the same justification) is “rebutted” by the fact that Greaves and MacAskill use the Space Guard Survey as their example. The actual scope of longtermism is clearly not restricted to observing exogenous risks with predictable regularity and identifiable and sustainable solutions, and thus is subject at least to some extent to the critiques Thorstad identified.
Even the case for the Space Guard Survey looks a lot weaker than Thorstad granted if one considers that the x-risk from AI in the near term is fairly significant, which most longtermists seem to agree with. Suddenly, instead of having favourable odds of enabling a vast future, it simply observes asteroids[1] for three decades before AI becomes so powerful that human ability to observe asteroids is irrelevant, and any positive value it supplies is plausibly swamped by alternatives like researching AI that doesn’t need big telescopes to predict asteroid trajectories and can prevent unfriendly AI and other x-risks. The problem, of course, is that we don’t know what that best-case solution looks like,[2] and most longtermists think many areas of spending on AI look harmful rather than near best case, but don’t have high certainty (or any consensus) about which areas those are. Which is Thorstad’s ‘washing out’ argument.
As far as I can see, Thorstad’s core argument is that even if it’s [trivially] true that the theoretical best possible course of action has most of its consequences in the future, we don’t know what that course of action is, or even what near-best solutions are. Given that most longtermists don’t think the canonical asteroid example is the best possible course of action, and there’s widespread disagreement over whether actions like accelerating “safe” AI research are increasing or reducing risk, I don’t see his concession that the Space Guard Survey might have merit under some assumptions as undermining that.
- ^
ex post, we know that so far it’s observed asteroids that haven’t hit us and won’t in the foreseeable future.
- ^
in theory it could even involve saving a child who grows up to be an AI researcher from malaria. This is improbable, but when you’re dealing with unpredictable phenomena with astronomical payoffs...
I’m more confused by how this apparently near-future, current-world-resource-base timeline interacts with the idea that this Dyson swarm is achieved clandestinely (I agree with your sentiment that the “disassemble Mercury within 31 years” scenario is even more unlikely, though close to Mercury is a much better location for a Dyson swarm). Most of the stuff in the tech tree doesn’t exist yet, and the entities working on it are separate and funding-starved: the relationship between entities writing papers about ISRU or designing rectennas for power transmission and an autonomous self-replicating deep space construction facility capable of acquiring unassailable dominance of the asteroid belt within a year is akin to the relationship between a medieval blacksmith and a gigafactory. You could close that gap more quickly with a larger-than-Apollo-scale joined-up research endeavour, but that’s the opposite of discreet.
Stuff like the challenges of transmitting power/data over planetary distances and the constant battle against natural factors like ionizing radiation don’t exactly point towards permanent dominance by a single actor either.