Hi calebp.
If you have time to read the papers, let me know if you think they are actually useful.
Thanks a lot for giving more context. I really appreciate it.
These were not “AI Safety” grants
These grants come from Open Philanthropy’s focus area “Potential Risks from Advanced AI”. I think it’s fair to say they are “AI Safety” grants.
Importantly, the awarded grants were to be disbursed over several years for an academic institution, so much of the work which was funded may not have started or been published. Critiquing old or unrelated papers doesn’t accurately reflect the grant’s impact.
Fair point. I agree old papers might not accurately reflect the grant’s impact, but I’d expect them to be correlated with it.
Your criticisms of the papers lack depth … Do you do research in this area, …
I totally agree. That’s why I shared this post as a question. I’m not an expert in the area and I wanted an expert to give me context.
Could you please update your post to address these issues and provide a more accurate representation of the grants and the lab’s work?
I added an update linking to your answer.
Overall, I’m concerned about Open Philanthropy’s grantmaking. I have nothing against Thompson or his lab’s work.
Sorry, I should have attached this in my previous message.
where does it say that he is a guest author?
Here.
This paper is from Epoch. Thompson is a “Guest author”.
I think this paper and this article are interesting but I’d like to know why you think they are “pretty awesome from an x-risk perspective”.
Epoch AI has received much less funding from Open Philanthropy ($9.1M), yet they are producing world-class work that is widely read, used, and shared.
It’s MIT FutureTech: https://futuretech.mit.edu/
Agree. OP’s hits-based giving approach might justify the 2020 grant, but not the 2022 and 2023 grants.
Thanks for your thorough comment, Owen.
And do the amounts ($1M and $0.5M) seem reasonable to you?
As a point of reference, Epoch AI is hiring a “Project Lead, Mathematics Reasoning Benchmark”. This person will receive ~$100k for a 6-month contract.
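To put those amounts in rough perspective, here is a back-of-the-envelope sketch using only the figures above (my own extrapolation; it ignores academic overhead, compute, and other institutional costs):

```python
# Rough sketch: researcher-years each grant could buy at the rate implied by
# the Epoch posting (~$100k for 6 months ≈ $200k/year). My assumption only;
# real academic grants also cover overhead, compute, admin, etc.
cost_per_researcher_year = 100_000 * 2

for grant in (1_000_000, 500_000):
    print(f"${grant:,} ≈ {grant / cost_per_researcher_year:.1f} researcher-years")
```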
In the case of OpenDevin, it seems like the grant is directly funding an open-source project that advances capabilities.
I’d like more transparency on this.
Very good point. Yeah, it seems like a 1/10 life has to be net negative. But I’m not sure a 4/10 life is net negative.
The difference in subjective well-being is not as high as we might intuitively think.
(anecdotally: my grandparents were born in poverty and they say they had happy childhoods)
The average resident of a low-income country rated their satisfaction as 4.3 on a subjective 1-10 scale, while the average was 6.7 among residents of G8 countries.
Doing a naive calculation: 6.7 / 4.3 = 1.56 (+56%).
The difference in the cost of saving a life between a rich and a poor country is 10x-1000x.
It would probably be good to take this into account, but I don’t think it would change the outcomes that much.
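As a toy illustration of why (my own naive model, using only the numbers above, not a real cost-effectiveness analysis):

```python
# Naive sketch: weight a life saved by average life satisfaction (4.3 vs 6.7)
# and check whether the 10x-1000x cost difference still dominates.
weight = 4.3 / 6.7  # ≈ 0.64 if value scales linearly with satisfaction

for cost_ratio in (10, 1000):
    print(f"{cost_ratio}x cheaper -> still ~{cost_ratio * weight:.0f}x more value per dollar")
```

Even with the satisfaction adjustment, the cheaper intervention keeps a ~6x to ~640x advantage per dollar, so the ranking seems unlikely to flip.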
What is missing in terms of a GPU?
Something unknown.
I think given a big enough GPU, yes, it seems plausible to me. Our minds store memories and perform calculations.
Do you think it’s plausible that a GPU rendering graphics is conscious? Or do you think that a GPU can only be conscious when it runs a model that mimics human behavior?
I think bacteria are unlikely to be conscious due to a lack of processing power.
Potential counterargument: microbial intelligence.
That’s true for many CEOs (like Elon Musk), but Sam Altman did not over-hype any of the big OpenAI launches (ChatGPT, GPT-3.5, GPT-4, GPT-4o, DALL-E, etc.).
It’s possible that he’s doing it for the first time now, but I think it’s unlikely.
But let’s ignore Sam’s claims. Why do you think LLM progress is slowing down?
I think it’s likely we’ll be able to use matter to make other conscious minds.
Can you expand on this? Do you think that a model loaded onto a GPU could be conscious?
And do you think bacteria might be conscious?
I assume that ML skills are less in-supply however?
I think there’s enough demand for both.
I’m currently sitting at a desk at a SWE unpaid internship LOL.
Nice!
I don’t think I currently have the skills to start getting paid for SWE work sadly.
Gotcha. Probably combining your studies with internships is the best option for now.
An LLM capable of automating “mid-sized SWE jobs” would probably be able to accelerate AI research and carry out cyberattacks. My guess: AI labs would not release such a powerful model; they would just use it internally to reach ASI.
Thanks for the comment @aogara <3. I agree this paper seems very good from an academic point of view.
My main question: how does this research help in preventing existential risks from AI?
Other questions:
What are the practical implications of this paper?
What insights does this model provide regarding text-based task automation using LLMs?
Looking at one of the main computer vision tasks, self-driving cars: what insights does their model provide? (Tesla is probably ~3 years away from self-driving cars, and this won’t require any hardware update, so no extra cost.)