My upskilling study plan:
1. Math
i) Calculus (derivatives, integrals, Taylor series)
ii) Linear Algebra (this video series)
iii) Probability Theory
2. Decision Theory
3. Microeconomics
i) Optimization of individual preferences
4. Computational Complexity
5. Information Theory
6. Machine Learning theory with a focus on deep neural networks
7. The Alignment Forum
8. Arbital
This is helpful, thank you. I’m in the early stages of some of these topics.
What is your goal? I.e., what are you optimising for? :)
Full-time research in AI Safety.
It’s more epistemically virtuous to make a wrong prediction than to make no prediction at all.
The Collingridge dilemma: a technology’s impacts are hard to predict until it is in widespread use, but by then it has become entrenched and hard to control or change.
A model of one’s own (or what I say to myself):
Defer a bit less today—think for yourself!
What would the world look like if X were not true?
Make a prediction—don’t worry if it turns out to be false.
Articulate an argument and find at least one objection to it.
Why bother with New Year’s resolutions when you can just start doing things today (and every today)?
“When they ask me about truth, I say, truth in which axiomatic system?” Teukros Michailides
“Find where the difficult thing hides, in its difficult cave, in the difficult dark.” Iain S. Thomas
Five types of people on AI risks:
1. Wants AGI as soon as possible; ignores safety.
2. Wants AGI, but primarily cares about alignment.
3. Doesn’t understand AGI, or doesn’t think it’ll happen anytime in her lifetime; thinks about robots that might take people’s jobs.
4. Understands AGI, but thinks the timelines are long enough not to worry about it right now.
5. Doesn’t worry about AGI; thinks lock-in of our current choices and “normal accidents” are both more important/risky/scary.
Hmm, I’d quibble with the idea that believing robots could take people’s jobs means a person doesn’t understand AGI...