Hi, my name is James Fodor. I am a longtime student and EA organiser from Melbourne. I love science, history, philosophy, and using these to make a difference in the world.
I think it is appropriate for the movement to reflect at this time on whether there are systematic problems or failings within the community that might have contributed to this problem. I have publicly argued that there are, and though I might be wrong about that, I do think it’s entirely reasonable to explore these issues. I don’t think it’s reasonable to just continually assert that it was all down to a handful of bad actors and refuse to discuss the possibility of any deeper or broader problems. I like to think that the EA community can learn and grow from this experience.
I disagree that events can’t be evidence for or against philosophical positions. If empirical claims about human behaviour or the real-world operation of ethical principles are relevant to the plausibility of competing ethical theories, then I think events can provide evidential value for philosophical positions. Of course that raises a much broader set of issues and doesn’t really detract from the main point of this post, but I thought I would push back on that specific aspect.
I love the research focus of this piece and the lack of waffle. Very impressed.
“Is it really “grossly immoral” to do the same thing in crypto without telling depositors?”
Yes
Great point about ventilation. I am not aware of any evidence that hand sanitisation in particular is merely ‘safety theater’. Surface transmission may not be the major method of viral spread, but it still is a method, and hand sanitisation is a very simple intervention. Also, to emphasise something I mentioned in the post, masks are definitely not ‘safety theater’. It is good to see that the revised COVID protocol now mentions that mask use will be encouraged and widely available.
I don’t understand how Australia’s travel policy is relevant. I’m not asking for anything particularly unusual or onerous, I just would expect that a community of effective altruists would follow WHO guidelines regarding methods to reduce the spread of COVID. I honestly don’t understand the negative reaction.
Thanks Amy, I think these clarifications significantly improve the policy. I disagree on the decision not to mandate masks, but I understand there will be differences in views there. However, mentioning that they are encouraged may be just as effective at ensuring widespread use. That was part of my original concern: I did not feel this aspect of norm-setting was as evident in the original version of the policy.
It doesn’t seem to me this has much relevance to EA.
Hi David,
We deliberately only included information based on specific empirical evidence, not simply advice or recommendations. Of course, if readers of the review wish to incorporate additional information or assumptions when deciding how to run their groups, they are welcome to do so.
If you have any particular sources or documents outlining what has been effective in London I’d love to see them!
Hi everyone, thanks for your comments. I’m not much for debating in comments, but if you would like to discuss anything further with me or have any questions, please feel free to send me a message.
I just wanted to make one clarification that I feel didn’t come across strongly in the original post. Namely, I don’t think it’s a bad thing that EA is an ideology. I do personally disagree with some commonly believed assumptions, methodological preferences, etc., but the fact that EA itself is an ideology is, I think, a good thing, because it gives EA substance. If EA were merely a question, I think it would have very little to add to the world.
The point of this post was therefore not to argue that EA should try to avoid being an ideology, but that we should recognise the assumptions and methodological frameworks we typically adopt as an EA community, critically evaluate whether they are all justified, and, to the extent they are, defend them with the best arguments we can muster, while of course always remaining open-minded to new evidence or arguments that might change our minds.
People who aren’t “cool with utilitarianism / statistics / etc” already largely self-select out of EA. I think my post articulates some of the reasons why this is the case.
Thanks for the comment!
I agree that the probabilities matter, but then the question becomes how they are assessed and weighed against each other. On this basis, I don’t think it has been established that AGI safety research has strong claims to higher overall EV than other such potential mugging causes.
Regarding the Dutch book issue, I don’t really agree with the argument that ‘we may as well go with’ EV because it avoids these cases. Many people would argue that the limitations of the EV approach, such as having to give a precise probability for all beliefs and not being able to suspend judgement, also do not fit with our picture of ‘rational’. It’s not obvious why avoiding such hypothetical betting scenarios is more important than these considerations. I am not pretending to resolve this argument; I am just trying to raise the issue as relevant for assessing high-impact, low-probability events: EV is potentially problematic in such cases and we need to talk about this seriously.
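To make the worry about precise probabilities concrete, here is a toy illustration (the payoffs and probabilities are entirely invented for the example, and this is my own sketch rather than anything from the original exchange):

```python
# Toy illustration (invented numbers): for high-impact, low-probability
# interventions, the EV ranking can hinge entirely on which precise tiny
# probability we are forced to write down.

certain_benefit = 1_000            # utility of a well-understood intervention
speculative_payoff = 10**13        # utility if the speculative one succeeds

for p in (1e-6, 1e-9, 1e-12):      # hard-to-distinguish probability assignments
    ev_speculative = p * speculative_payoff
    preferred = "speculative" if ev_speculative > certain_benefit else "certain"
    print(f"p = {p:.0e}: EV(speculative) = {ev_speculative:,.0f} -> prefer {preferred}")
```

The point is not that any particular assignment is right, but that the decision flips depending on a probability judgement we have little basis for making precisely.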
Hi Zeke,
I give some reasons here why I think that such work won’t be very effective, namely that I don’t see how one can achieve sufficient understanding to control a technology without also attaining sufficient understanding to build that technology. Of course that isn’t a decisive argument so there’s room for disagreement here.
Hi Zeke!
Thanks for the link about the Fermi paradox. Obviously I could not hope to address all arguments about this issue in my critique here. All I meant to establish is that Bostrom’s argument does rely on particular views about the resolution of that paradox.
You say ‘it is tautologically true that agents are motivated against changing their final goals, this is just not possible to dispute’. Respectfully, I just don’t agree. It all hinges on what is meant by ‘motivation’ and ‘final goal’. You also say “it just seems clear that you can program an AI with a particular goal function and that will be all there is to it”, and again I disagree. A narrow AI, sure, or even a highly competent AI, but not an AI with human-level competence in all cognitive activities. Such an AI would have the ability to reflect on its own goals and motivations, because humans have that ability, and therefore it would not be ‘all there is to it’.
Regarding your last point, what I was getting at is that one can change a goal by explicitly rejecting it and choosing a new one, or by changing one’s interpretation of an existing goal. This latter method is an alternative path by which an AI could change its goals in practice even if it still regarded itself as following the same goals it was programmed with. My point isn’t that this makes goal alignment not a problem. My point was that this makes ‘the AI will never change its goals’ an implausible position.
Hi rohinmshah, I agree that our current methods for building AI do involve maximising particular functions and have nothing to do with common sense. The problem with extrapolating this to AGI is that 1) these sorts of techniques have been applied for decades and have never achieved anything close to human-level AI (of course that’s not proof they never can, but I am quite skeptical, and Bostrom doesn’t really make the case that such techniques are likely to lead to human-level AI), and 2) as I argue in part 2 of my critique, other parts of Bostrom’s argument rely upon much broader conceptions of intelligence that would entail the AI having common sense.
Thanks for these links, this is very useful material!
Hi Denkenberger, thanks for engaging!
Bostrom mentions this scenario in his book, and although I didn’t discuss it directly, I believe I address the key issues in my piece above. In particular, the amount of protein one can receive in the mail in a few days is small, and to achieve its goal of world domination an AI would need large quantities of such materials to produce the weapons, technology, or other infrastructure needed to compete with world governments and militaries. If the AI chose to produce the protein itself, which it would likely wish to do, it would need extensive laboratory space, which takes time to build and equip. The more expansive its operations become, the more time they take to build. It would likely need to hire lawyers to acquire legal permits to build the facilities needed to make the nanotech, and so on. I outline these sorts of practical issues in my article. None of them are insuperable, but I argue they aren’t things that can be solved ‘in a matter of days’.
Thanks for your thoughts. Regarding spreading my argument across 5 posts, I did this in part because I thought connected sequences of posts were encouraged?
Regarding the single quantity issue, I don’t think it is a red herring, because if there are multiple distinct quantities then the original argument for self-sustaining rapid growth becomes significantly weaker (see my responses to Flodorner and Lukas for more on this).
You say “Might the same thing be true of AI—that a few factors really do allow for drastic improvements in problem-solving across many domains? It’s not at all clear that it isn’t.” I believe we have good reason to think no such few factors exist: A) this does not seem to be how human intelligence works, and B) it does not seem consistent with the history of progress in AI research. Both, I would say, are characterised by many different functionalities and optimisations for particular tasks. That is not to say there are no general principles, but I think these are not as extensive as you seem to believe. Regardless of this point, though, if Bostrom’s argument is to succeed I think he needs to give some persuasive reasons or evidence as to why we should think such factors exist. It’s not sufficient just to argue that they might.
Thanks for your thoughts.
Regarding your first point, I agree that the situation you posit is a possibility, but it isn’t something Bostrom talks about (and remember, I only focused on what he argued, not other possible expansions of the argument). Also, when we consider the possibility of numerous distinct cognitive abilities, it is just as possible that there could be complex interactions which inhibit the growth of particular abilities. There could easily be dozens of separate abilities, and the full matrix of interactions becomes very complex. The original force of the ‘rate of growth of intelligence is proportional to current intelligence, leading to exponential growth’ argument is, in my view, substantively blunted.
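As a purely illustrative sketch of how the dynamics can differ (the growth rates and the bottleneck structure are invented assumptions of mine, not a model taken from Bostrom or from my critique):

```python
# Toy comparison (invented numbers): a single self-reinforcing quantity
# grows exponentially, but if overall capability is bottlenecked by the
# weakest of several distinct abilities, and that ability improves only
# slightly per step, overall growth is far slower.

steps = 100

# Single-quantity picture: intelligence compounds at 10% per step.
single = 1.0
for _ in range(steps):
    single *= 1.10

# Multi-ability picture: one ability compounds quickly, another barely
# improves; overall capability is taken to be the weakest ability.
fast, slow = 1.0, 1.0
for _ in range(steps):
    fast *= 1.10
    slow += 0.01 * min(fast, slow)   # weak spillover into the lagging ability

print(f"single-quantity capability after {steps} steps: {single:,.0f}")
print(f"bottlenecked capability after {steps} steps:    {min(fast, slow):,.2f}")
```

This is only one possible interaction structure, of course, but it shows how introducing multiple distinct quantities can break the simple exponential story.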
Regarding your second point, it seems unlikely to me because if an agent had all these abilities, I believe it would use them to uncover reasons to reject highly reductionistic goals like tiling the universe with paperclips. It might end up with goals that are still in opposition to human values, but I just don’t see how an agent with these abilities would not become dissatisfied with extremely narrow goals.
Hi David,
The point I was trying to communicate here was simply that our design was able to find a pattern of differences between the control and treatment groups which is interpretable (i.e. in terms of different ages and career stages). I think this provides some validation of the design, in that if large enough differences exist, our measures pick them up and we can statistically measure them. We don’t, for instance, see an unintelligible mess of results that would cast doubt on the validity of our measures or the design itself. Of course, if, as you point out, the effect size for attending the conference is smaller, then we won’t be able to detect it given our sample size; for most of our measures the detectable threshold was around 15-20%. But given we were able to measure sufficiently large effects using this design, I think it provides justification for thinking that a large enough sample size with a similar study design would be able to detect smaller effects, if they existed. Hope that clarifies a bit.
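For anyone who wants to see the sample-size logic concretely, here is a rough power-analysis sketch (the effect sizes, alpha, and power target are illustrative assumptions, not the actual parameters of our study):

```python
# Rough power-analysis sketch (illustrative values only): the sample size
# needed per group grows quickly as the effect we hope to detect shrinks.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.5, 0.3, 0.1):          # hypothetical standardised effect sizes (Cohen's d)
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: roughly {n:.0f} participants per group")
```

The general pattern is the relevant bit: halving the effect size you want to detect roughly quadruples the required sample size.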