I am somewhat familiar with it, yes :) Perhaps I have not fully appreciated its dangers yet (admittedly I should, given that I hang out in Oxford with the people who do this research). I’ll watch the video.
With a different discounting rate, different preferences, and my particular skillset, I see this focus as the most impactful thing for me right now.
I would like to hear from you whether (and how), on the margin, you believe the intervention we are working on makes any meaningful difference to AI being more or less of a threat to humanity!
If there are concerns, I have the ability to steer the training content, so if anything these will be the least dangerous software engineers out there. Maybe they will replace software engineers who are less AI-safety-aware, even as they take jobs from Western engineers who receive no such training at all?
I would be really surprised to see this intervention come out ahead purely based on impact, even with a very different discounting rate and skillset, but I think it’s okay to make decisions that take your personal preferences into account.
I don’t expect it to have much in the way of capability externalities unless you specifically include cutting-edge AI topics such as transformers. This seems easy to avoid, as there’s so much else in AI for students to explore.
One option would be to have an introductory talk on AI safety and then give people the option to work through the AGI Safety Fundamentals course if they’re interested in it (I wouldn’t waste the time of people who aren’t keen on it).
One framing that might be useful is that Western companies are making a very big decision for humanity by choosing to bring smarter-than-human intelligence into existence with the vast majority of humanity having no say in it.
As someone not too familiar with AI safety discourse… is there a $-value estimate benchmark one can use to compare “apples to apples”?
The number we have at the moment is about $130 in direct increased lifetime earnings (relative to the counterfactual) per $1 donated.
If we also include spillover effects, then this increases to about $300 per $1 donated.
Again, there are many detailed assumptions made to arrive at these numbers, and you are very welcome to point out which ones you believe are unreasonable!
And the purpose of our pilots and proposed RCT (later) is of course to test this in practice.
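To make the "per $1 donated" figures concrete for anyone outside this discourse, here is a minimal sketch of how a benefit-cost ratio like this is typically computed: discount the stream of extra earnings back to present value, then divide by the cost of training one participant. Every parameter value below (earnings, discount rate, cost per participant, spillover multiplier) is a hypothetical placeholder for illustration, not a figure from our actual model.

```python
# Hypothetical sketch of the arithmetic behind a "$X per $1 donated" figure.
# All numbers are placeholders chosen for illustration only.

def npv(cash_flows, discount_rate):
    """Net present value of a stream of annual cash flows."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

years = 30                       # working lifetime considered
counterfactual_earnings = 2_000  # annual earnings without training (USD)
post_training_earnings = 9_000   # annual earnings after the bootcamp (USD)
discount_rate = 0.04             # annual discounting of future earnings

extra_per_year = post_training_earnings - counterfactual_earnings
direct_gain = npv([extra_per_year] * years, discount_rate)

cost_per_participant = 1_000     # donation cost to train one person (USD)
spillover_multiplier = 2.3       # remittances, local hiring, teaching others

print(f"Direct:         ${direct_gain / cost_per_participant:,.0f} per $1 donated")
print(f"With spillover: ${direct_gain * spillover_multiplier / cost_per_participant:,.0f} per $1 donated")
```

The real calculation would replace these placeholders with pilot data, which is exactly what the proposed RCT is meant to test.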
I like the idea of having an introductory AI safety lecture. We’re actually planning out the guest lectures for the introductory bootcamp right now. Would you be interested in doing 1 hour on this topic? Or if not, could you refer me to someone?
Right now we do have one lecturer talking about how one can use AI as a tool in one’s work as a software/web developer. I think it would be great to also have something on the safety/dangers in conjunction.
Best,
Simon
I think there are other people in the community who could give a talk much better than I could, but I suppose I could give it a go if there weren’t any better options and it was during my waking hours.
Maybe ask in the AI alignment Slack? https://join.slack.com/t/ai-alignment/shared_invite/zt-1ug7qhc4h-OKXxhilyH9CQoK113L4IWg