Hi Yonatan, I actually got some 1:1 career advice from 80k recently, they were great! I’m also friends with someone in AI who’s local to Montréal and who’s trying to help me out. He works at MILA which has ties to a few universities in the city (that’s kind of what inspired the speculative master’s application). Thanks in advance for the referrals!
tcelferact
[Question] Career Advice: Philosophy + Programming → AI Safety
Now, I’ve always been very sceptical of these arguments because they seem to rely on nothing but intuition and go against historical precedent
What historical precedent do you have in mind here? The reason my intuitions initially would go in the opposite direction is a case study like invasive species in Australia.
The tl;dr: when an ecosystem has evolved with certain conditions held constant (in this case geographical isolation), and those conditions change fairly rapidly, even a seemingly tiny change like introducing the European rabbit can have negative consequences well beyond what the folks who made the change foresaw.
I won’t pretend to be an expert on how analogous climate change is to this example, but if someone wanted to shift my intuitions, a good way to start would be to convince me that, for some given optimistic economic forecast, the likelihood that it has missed significant negative knock-on consequences of an X-degree average rise in temperature is <50%.
Thanks for your suggestions! Some answers:
1. Robust decision making. And yes, pretty much, I was thinking of the interpretations covered here: https://plato.stanford.edu/entries/probability-interpret/
2. I think formalizing this properly would be part of the task, but if we take the Impact, Neglectedness, Tractability framework, I’m roughly thinking of a decision-making framework that boosts the weight given to impact and lowers the weight given to tractability.
3. I was roughly thinking of an analysis of the approach used by exceptional participants in forecasting tournaments like Tetlock’s. Most of them seem to be doing something Bayesian in flavor, if not strictly Bayesian updating, and with impressive results. I suspect that could have interesting implications for how we understand (the relation of subjectivity to) a Bayesian interpretation of probability.
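To illustrate the "Bayesian in flavor" updating I have in mind, here is a minimal sketch of an odds-form Bayesian update, the kind of calculation a forecaster might do on seeing new evidence. The function name and the numbers are just illustrative assumptions, not anything from Tetlock's work specifically:

```python
def update_probability(prior_prob: float, likelihood_ratio: float) -> float:
    """Bayesian update in odds form.

    posterior odds = prior odds * likelihood ratio,
    where the likelihood ratio is P(evidence | event) / P(evidence | no event).
    """
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A forecaster at 20% sees evidence they judge twice as likely
# in worlds where the event happens: they should move to ~33%.
p = update_probability(0.20, 2.0)
```

A strict Bayesian would also need well-specified likelihoods; in practice the impressive forecasters seem to estimate the likelihood ratio informally and update in roughly this direction and magnitude.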
Yes, this would also be useful, and thank you for the link!
Thank you, PMed!
I’m going to add some of this to my ‘done’ column, thanks for pointing it out.