Hey Josh, I think this is a good point—it would be great to have some common knowledge of what sort of commitment this is.
Here’s where I am so far:
1. I read through the full report reasonably carefully (but only some of the appendices).
2. I spent some time thinking about potential counterexamples. It’s hard to say how much; this mostly wasn’t time I carved out, but rather something I was thinking about while taking a walk or the like.
3. At times I would reread specific parts of the writeup that seemed important for thinking about whether a particular idea was viable. I wrote up one batch of rough ideas for ARC and got feedback on it.
I would guess that I spent several hours on #1, several hours on #2, and maybe another 2-3 hours on #3. So maybe something like 10-15 hours so far?
At this point I don’t think I’m clearly on track to come up with anything that qualifies for a prize, but I think I understand the problem pretty well and why it’s hard for me to think of solutions. If I fail to submit a successful entry, I think it will feel more like “I saw what was hard about this and wasn’t able to overcome it” than like “I tried a bunch of random stuff, lacking understanding of the challenge, and none of it worked out.” This is the main benefit that I wanted.
My background is unfortunately hard to compare to anyone else’s. I have next to no formal technical education, but I have spent tons of time talking about AI timelines and AI safety, including with Paul (the head of ARC), and that has included asking questions and reading things about the aspects of machine learning I felt were important for those conversations. (I never wrote my own code or read through a textbook, though I did read Michael Nielsen’s guide to neural networks a while ago.) My subjective feeling was that the ELK writeup didn’t have a lot of prerequisites—mostly just a very basic understanding of what deep learning is about, and a vague understanding of what a Bayes net is. But I can’t be confident in that. (In particular, Bayes nets are generally only used to make examples concrete, and I was generally fine just going with my rough impression of what was going on; I sometimes found the more detailed appendices, with pseudocode and a Conway’s Game of Life analogy, clearer than the Bayes net diagrams anyway.)
Congrats on submitting proposals that would have won you a $15,000 prize if you had been eligible! How long did it take you to come up with these proposals?
Thanks! I’d estimate another 10-15 hours on top of the above, so 20-30 hours total. A good amount of this felt like leisure time and could be done while not in front of a computer, which was nice. I didn’t end up with “solutions” I’d actually be excited about as substantive progress on alignment, but I think I accomplished my goal of understanding the ELK writeup well enough to nitpick it.