Theoretical Computer Science MSc student at the University of [Redacted] in the United Kingdom.
I’m an aspiring alignment theorist; my research interests are descriptive formal theories of intelligent systems (and their safety properties), with a bias towards constructive theories.
I think it’s important that our theories of intelligent systems remain rooted in the characteristics of real world intelligent systems; we cannot develop adequate theory from the null string as input.
Thanks, yeah.
My main hesitation about this is that I probably want to go for a PhD, but I can only get the graduate visa once, and I may want to use it after completing the PhD.
But I’ve come around to the idea that it may be better to use it now, pursue a PhD afterwards, and try to secure employment before completing my program so I can transfer to the Skilled Worker visa.
Immigration is such a tight constraint for me.
My next career steps after I’m done with my TCS Master’s are primarily bottlenecked by “what allows me to remain in the UK” and then by “what keeps me on track to contribute to technical AI safety research”.
What I would like to do for the next 1–2 years (“independent research”/“further upskilling to get into a top ML PhD program”) is not all that viable a path given my visa constraints.
Above all, I want to avoid wasting N more years by taking a detour through software engineering again just to get visa sponsorship.
[I’m not conscientious enough to pursue AI safety research/ML upskilling while managing a full-time job.]
Might just try and see if I can pursue a TCS PhD at my current university and do TCS research that I think would be valuable for theoretical AI safety research.
The main drawback is that I’d have to spend N more years in <city>, and I was really hoping to move down to London.
Advice very, very welcome.
[Not sure who to tag.]
FWIW, I mostly read the core message of this post as: “you should start an AI safety lab. What are you waiting for? ;)”.
The post felt to me like debunking reasons people might feel they aren’t qualified to start an AI safety lab.
I don’t think this was the primary intention, though. I feel like I came away with that impression because of the Twitter contexts in which I saw this post referenced.
This is a good post.
There are counterarguments about how the real world is a much richer and more complex environment than chess (e.g. a superintelligence can’t beat knowledgeable humans at tic-tac-toe, but that doesn’t tell us anything interesting).
However, I don’t really feel compelled to elaborate on those counterarguments because I don’t genuinely believe them and don’t want to advocate a contrary position for the sake of it.
I wouldn’t be able to start until October (I’m a full-time student, and over the summer I might be working on my thesis and will have at least one exam to write); should I still apply?
I am otherwise very interested in the SERI MATS program and expect to be a strong applicant in other ways.
Beren’s “Deconfusing Direct vs Amortised Optimisation”
Orthogonality is Expensive
“Dangers of AI and the End of Human Civilization” Yudkowsky on Lex Fridman
Sparks of Artificial General Intelligence: Early experiments with GPT-4 | Microsoft Research
I notice that I am surprised and confused.
I’d have expected Holden to contribute much more to AI existential safety as CEO of Open Philanthropy (career capital, comparative advantage, specialisation, etc.) than via direct work.
I don’t really know what to make of this.
That said, it sounds like you’ve given this a lot of deliberation and have a clear plan/course of action.
I’m excited about your endeavours in this project!
Sorry. LW crosspost issues.
It wasn’t deliberate, but something about how the integration works wiped the comments while I was working on the post on LW (I made major edits and added a new section, so I republished it on LW and just didn’t think about the crosspost issue).
In future, I’ll probably just stop crossposting my work-in-progress AI safety pieces here (posting a tentative version first is better than letting a post languish unpublished in my drafts [10k+ words of unpublished AI safety writings]).
Maybe I’ll only crosspost after I feel I have a final version ready.
What are those reasons, please?
Google invests $300mn in artificial intelligence start-up Anthropic | FT
AI Risk Management Framework | NIST
I guess I don’t understand how slow takeoff can happen without economic consequences. Takeoff (in capabilities progress) may still be slow, but in that case the impact of AI is more likely to be discontinuous.
I was probably insufficiently clear on that point.
I liked this apology.
You’re welcome.
Happy to be useful!
Heretical Thoughts on AI | Eli Dourado
Your claim that “blacks are less intelligent…” is pretty much as widely discredited as Holocaust denial (2), and supported by evidence as sparse as the latter’s.
I think this is completely wrong as an empirical statement. The claim may very well be false, but the evidence supporting it isn’t as tenuous as the evidence supporting Holocaust denial.
Sad to hear this happened, but it seems the situation was irrecoverable, and the organisation had already been dead for a while before it officially shut down.
Glad for this post and all the comments.