“Profits for investors in this venture [ETA: OpenAI] were capped at 100 times their investment (though thanks to a rule change this cap will rise by 20% a year starting in 2025).”
I stumbled upon this quote in this recent Economist article [archived] about OpenAI. I couldn’t find any additional source supporting the claim, so it might not be accurate. The earliest mention I could find is from January 17th, 2023, although it only talks about OpenAI “proposing” the rule change.
If true, this would make the profit cap much less meaningful, especially under longer AI timelines. For example, a $1 billion investment made in 2023 would be capped at ~1,540× in 2040 (100 × 1.2^15, with the cap compounding at 20% per year from 2025).
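The compounding effect above can be sketched in a few lines. This is purely illustrative: the 100× base cap, 20% growth rate, and 2025 start year are taken from the (unconfirmed) Economist claim, not from any published terms.

```python
def cap_multiple(year, base_cap=100, growth=0.20, start_year=2025):
    """Profit-cap multiple in a given year, assuming the cap
    compounds at `growth` per year starting in `start_year`.
    All parameters are illustrative, not confirmed terms."""
    years_of_growth = max(0, year - start_year)
    return base_cap * (1 + growth) ** years_of_growth

print(round(cap_multiple(2040)))  # ~1541, matching the ~1,540x figure above
```

Under these assumptions the cap roughly doubles every four years, which is why the multiple balloons so quickly on longer timelines.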
I’ve talked to some people who are involved with OpenAI secondary markets, and they’ve broadly corroborated this.
One source told me that after a specific year (didn’t say when), the cap can increase 20% per year, and the company can further adjust the cap as they fundraise.
As of January 2023, the institutional markets were not predicting AGI within 30 years.
I’d like to be able to search the “80000 hours” and the “Effective Altruism” LinkedIn groups for members from my city. The group member lists are only searchable by name.
I think it could be a good way to contact local EA-aligned people who aren’t on our radar.
Is there any workaround for doing this?
I believe you can do this search with a paid LinkedIn subscription, like Recruiter Lite.
Yep, you can.
(I thought you could do it on the unpaid version too, but I just checked and can’t see it. I specifically remember being able to restrict search filters to people within certain groups when I had Recruiter Lite, though.)
The “Personal Blogposts” section has recently become swamped with [Event] posts.
Most of them are irrelevant to me. Is there a way to hide them in the “All Posts” view?
Moonshot EA Forum Feature Request
It would be awesome to be able to opt in to “within-text commenting” (similar to what happens when you enable commenting in a Google Doc) when posting on the EA Forum.
Ideally, those comments could also be voted on.
I have good news for you! LessWrong has developed this feature. You can access the feature by going to your settings and checking “opt-in to experimental features.”
You might think that this will lead to a “party-of-1” dynamic, but due to the way it’s implemented (check out the above post), quoted text in comments will lead to side comments for you.
Is this an April Fools’ joke?
I stumbled on this flow chart from 2015 about how different value and empirical judgements might change what cause areas we’d want to work on and with which methods:
http://globalprioritiesproject.org/2015/09/flowhart/
It’s a bit dated by now, but I think an updated version could be very valuable for newcomers to EA.