# Eleni_A

Karma: 383

PhD-ing. I think and write about AI safety, cognitive science, history and philosophy of science/technology.

# I designed an AI safety course (for a philosophy department)

23 Sep 2023 21:56 UTC
26 points

# Confusions and updates on STEM AI

19 May 2023 21:34 UTC
7 points

# AI Alignment in The New Yorker

17 May 2023 21:19 UTC
23 points
(www.newyorker.com)

# A Study of AI Science Models

13 May 2023 19:14 UTC
12 points

# A Guide to Forecasting AI Science Capabilities

29 Apr 2023 6:51 UTC
19 points

# On taking AI risk seriously

13 Mar 2023 5:44 UTC
51 points
(www.nytimes.com)
• Helpful post, Zach! I think it’s more useful and concrete to ask about specific capabilities instead of asking about AGI/TAI etc., and I’m pushing myself to ask such questions (e.g., when do you expect LLMs that can emulate Richard Feynman-level text?). Also, I like the generality vs. capability distinction. We already have a generalist (Gato), but we don’t consider it to be an AGI (I think).

# Everything’s normal until it’s not

10 Mar 2023 1:42 UTC
6 points
• 7 Mar 2023 3:19 UTC
2 points

A model of one’s own (or what I say to myself):

• Defer a bit less today—think for yourself!

• What would the world look like if X were not true?

• Make a prediction—don’t worry if it turns out to be false.

• Articulate an argument and find at least one objection to it.

# Questions about AI that bother me

31 Jan 2023 6:50 UTC
33 points
• My upskilling study plan:

1. Math

i) Calculus (derivatives, integrals, Taylor series)

ii) Linear Algebra (this video series)

iii) Probability Theory

2. Decision Theory

3. Microeconomics

i) Optimization of individual preferences

4. Computational Complexity

5. Machine Learning theory with a focus on deep neural networks

6. Arbital

• “Find where the difficult thing hides, in its difficult cave, in the difficult dark.” Iain S. Thomas

• The Collingridge dilemma: a technology’s impacts are difficult to predict before it is widely developed and used, but by the time they become apparent, the technology has become difficult to control or change.

• The quick answer is that doing alignment-related work does not depend on a Philosophy PhD, or any graduate degree, tbh. I’d say start by thinking more specifically about what your interests are; there may be different paths to impact with or without the degree.

# Emerging Paradigms: The Case of Artificial Intelligence Safety

18 Jan 2023 5:59 UTC
16 points

# [Question] Should AI writers be prohibited in education?

16 Jan 2023 22:29 UTC
3 points