Thanks for clarifying, Ozzie! (Just to be clear, this post is not an attack on you or on your position, both of which I highly appreciate :). Instead, I was trying to raise a related point that seems extremely important to me and that I've been thinking about recently, and to make sure the discussion doesn't converge to a single point.)
With regard to the funding situation, I agree that many tech projects could be funded via traditional VCs, but some might not be, especially those that are not expected to be very financially rewarding or that are very risky (a few examples that come to mind: the research units of the HMOs in Israel, tech benefiting people in the developing world [e.g. Sella's teams at Google], and basic research enabling applications later [e.g. research on mental health]). An EA VC that funds projects based mostly on expected impact might be a good idea to consider!
this post is not an attack on you or on your position
Thanks! I didn't mean to say it was; I was just clarifying my position.
An EA VC which funds projects based mostly on expected impact might be a good idea to consider
Now that I think about it, the situation might be further along than you'd expect. I think I've heard about small "EA-adjacent" VCs starting in the last few years.[1] There are definitely socially-good-focused VCs out there, like 50 Years VC.
Anthropic recently raised $124 million in its first funding round. Dustin Moskovitz, Jaan Tallinn, and the Center for Emerging Risk Research were all funders (all longtermists). I assume this was done fairly altruistically.
I think Jaan has funded several altruistic EA projects, including ones that wouldn't have made sense on a purely financial level.
That’s great, thanks! I was aware of Anthropic, but not of the figures behind it.
Unfortunately, my impression is that most funding for such projects is centered on AI safety or longtermism (as I hinted in the post...). I might be wrong about this, though, and I will poke around these links and names.
Relatedly, I would love to see OPP/EA Funds fund such projects (at least a seed round or equivalent) unrelated to AI safety and longtermism, or hear their arguments against doing so.
https://pitchbook.com/profiles/company/466959-97#team
https://www.radiofreemobile.com/anthropic-open-ai-mission-impossible/
[1]: Sorry, I'm forgetting the one or two right names here.