The case for doing EA community building hinges on having significant probability on ‘long’ (>2040) AI timelines
Not sure it’s okay to say this, but I simply agree with Michael Dickens on this. If we expect to have AGI by 2038, or even, say, 2033 (8 years from now!), EA community building could still be very important. I know people who went full-time into AI safety / governance work less than a year after discovering the issue through EA.