Currently doing local AI safety Movement Building in Australia and NZ.
Happy to talk that through if you’d like, though I’m kind of biased, so probably better to speak to someone who doesn’t have a horse in the race.
I don’t know if this can be answered in full generality. I suppose it comes down to things like:
• Financial runway/back-up plans in case your prediction is wrong
• Importance of what you’re doing now
• Potential for impact in AI safety
I would love to see attempts at either a community-building fellowship or a community-building podcast.

With the community-building podcast, I suspect that people would prefer something that covers topics relatively quickly, as community builders are already pretty busy.
a) I suspect AI able to replace human labour will create such abundance that it will eliminate poverty (assuming that we don’t then allow the human population to increase to the maximum carrying capacity).
b) The connection the other way around is less interesting. Obviously, AI requires capital, but once AI is able to self-reproduce, the amount of capital required to kickstart economic development becomes minimal.
c) “I also wonder if you have the time to expand on why you think AI would solve or improve global poverty, considering it currently has the adverse effect?” How is it having an adverse effect?
Debating still takes time and energy which reduces the time and energy available elsewhere.
Yep, that’s the main one, but to a lesser extent Sora being ahead of schedule, plus realising what this means for AI agents.

It’s less about my median timeline moving down and more about the tail end not extending out as far.
I’d expect the natural functions of city and national groups to vary substantially.
I was previously very uncertain about this, but given the updates in the last week, I’m now feeling confident enough in my prediction of the future that I regret any money I put into my super (our equivalent of a pension). Please do not interpret this comment as financial advice; it is just a statement of where I am at.
A few questions that you might find helpful for thinking this through:
• What are your AI timelines?
• Even if you think AI will arrive by X, perhaps you’ll target a timeline of Y–Z years because you think you’re unlikely to be able to make a contribution by X
• What agendas are you most optimistic about? Do you think none of these are promising and what we need are outside ideas? What skills would you require to work on these agendas?
• Are you likely to be the kind of person who creates their own agenda or contributes to someone else’s?
• How enthusiastic are you about these subjects? Are you likely to be any good at them? Many people make a contribution without using things outside of computer science, but sometimes it takes a person with outside knowledge to really push things forward to the next level.
Do the intro fellowship completions only include the EA Intro Fellowship, not people doing the AI Safety Fundamentals course?
My gut feeling is that, putting to one side the question of which is the most effective strategy for reducing x-risk etc., the ‘narrow EA’ strategy is a mistake because there’s a good chance it is unethical to try to guide society without broader societal participation.
I suppose it depends on how much of an emergency you consider the current situation to be.
If you think it’s truly a dire situation, I expect almost no-one would reason as follows: “Well, we’re insufficiently diverse, it’d be immoral for us to do anything, we should just sit over here and wait for the end of the world”.
I suspect that, at least in these circumstances, a more productive lens is the lens of responsibility, where those who are afforded disproportionate influence are responsible for using it for the good of all and for striving to be conscious of potential blind spots due to selection biases.
Just to clarify, the above paragraphs are an argument against “it is unethical to try to guide society without broader societal participation” rather than an argument for narrow EA. I support the latter as well, but I haven’t made an argument for it here.
If EA decided to pursue the politics and civil society route, I would suggest that it would likely make sense to follow a strategy similar to what the Good Ancestors Project has been following in Australia. This project has done a combination of a) outreach to policy-makers, b) co-ordinating an open letter to the government, c) making a formal submission to a government inquiry, and d) walking EAs through the process of making their own submissions (you’d have to check with Greg to see if he still thinks all of these activities are worthwhile).
Even though AI policy seems like the highest priority at the moment, there are benefits to working on multiple cause areas, since a) you can only submit to an inquiry when one is happening, so more cause areas increases the chance that something relevant is open, and b) there’s a nice synergy that comes from getting EAs who have different cause areas as their main focus to submit to the inquiries for other areas.
Greg has a great explanation where he talks about EA having spent a lot of effort figuring out how to leverage our financial capital and our career capital to make the world better, while neglecting our political capital. Obviously there’s the question of whether we have good ways to deploy that capital, but I suspect that the answer is that we do.

I’m not claiming that this is necessarily the route forward, but it is likely worth exploring in countries with well-developed EA communities.
If this ends up succeeding, then it may be worthwhile asking whether there are any other sub-areas of EA that might deserve their own forum, but I suppose that’s more a question to ask in a few months.
To be honest, I don’t really see these kinds of comments criticising young organisations, which likely have access to limited amounts of funding, as helpful. I think there are some valid issues to be discussed, but I’d much rather see them discussed at an ecosystem level. Sure, it’s less than ideal that low-paid internships provide an advantage to those from a particular class, but it’s also easier for wealthier people to gain a college degree, and I think it’d be a mistake for us to criticise universities for offering degrees. At least with these internships you’re being paid something, as opposed to accruing debt, so they’re actually much more accessible than the alternative.
But I suppose this doesn’t address my real objection which is that there are people who are willing to work to make the world better and an organisation that is willing to provide them with some financial support to make it happen. In return, these people gain the opportunity to develop new skills and if these interns are particularly talented, they are likely to be referred on to further opportunities. They might even change the course of someone’s career: someone who was just going to go into the business world might end up having a highly impactful career instead.
So I guess it just feels like, given how many benefits there are, we should have a really high bar for standing in the way of things, and I don’t really feel that bar is met here. There’s so much that is horrible in the world, but we have the opportunity to change that. And if that involves a large number of €1,000/month internships, well, that seems like an incredibly low price to pay.
I’m not going to fully answer this question, because I have other work I should be doing, but I’ll toss in one argument. If different domains (cyber, bio, manipulation, etc.) have different offense-defense balances, a sufficiently smart attacker will pick the domain with the worst balance. This recurses down further for at least some of these domains, since they aren’t just a single thing, but a broad collection of vaguely related things.
Oh, I can see why it is ambiguous. I meant whether it is easier to attack or defend, which is separate from the “power” that attackers and defenders have.

“What incentive is there to destroy the world, as opposed to take it over? If you destroy the world, aren’t you sacrificing yourself at the same time?”

Some would be willing to do that if they can’t take it over.
Your argument in objection 1 doesn’t address the position of people who are worried about an absurd offense-defense imbalance.

Additionally: it may be that no agent can take over the world, but that an agent can destroy the world. Would someone build something like that? Sadly, I think the answer is yes.