The FTX Future Fund recently finished a large round of regrants, meaning a lot of people are approved for grants that have not yet been paid out. At least one person has gotten word from them that these payouts are on hold for now. This seems very worrisome and suggests the legal structure of the fund is not as robust or isolated as you might have thought. I think a great community support intervention would be to get clarity on this situation and communicate it clearly. This would be helpful not only to grantees but to the EA community as a whole, since what is on many people’s minds is not so much what will happen to FTX as what will happen to the Future Fund. (From the few people I have talked to, many were under the impression that funds committed to the Future Fund were actually committed in a strict sense, e.g. transferred to a separate entity. If that turns out not to be the case, it’s really bad.)
For the same reason that e.g. net electricity generation from fusion power is not the “number one single factor debated in every single argument on any economic/political topic with medium-length scope”: Until it exists, it is fictional – why should everyone focus so much on fictional technology? It remains a narrow, academic field. The difference is that there is actual progress towards fusion.
I don’t have a view on that, but it would be cool if it were available as a forum setting (“Weight votes by account age”), and some people might like it better that way.
I wrote this in 2013; it might be of interest to those concerned:
Why would you want to help build AGI?
Should EA influence governments to enact more effective interventions?
I plan to post my reports on LessWrong and the Effective Altruism forum
Why would posting mainly in these tiny communities be the best approach? First, I think these communities are already far more familiar with the topics you plan to publish on than the average reader is. Second, they are – as I said – tiny. If you want to be a public intellectual, I think you should publish where public intellectuals generally publish. This is usually a combination of books, magazines, journals, and your own platforms (e.g. a personal website/blog, social media, etc.).
You could probably improve on your plan by making a much more in-depth analysis of what exactly your goals are and who exactly your audiences are. It seems to me a few steps are missing in this statement:
I believe such people provide considerable value to the world (and specifically to the project of improving the world).
What would probably be useful is, in a sense, a theory of change for how doing the things you want to do leads to the outcomes you want.
If you do decide to go ahead with this plan, I would also focus a lot on this part:
In contrast, I am quite below average on conscientiousness and related traits like diligence, perseverance, willpower, “work ethic”, etc.
You are going to need those in the massively competitive landscape you aim for.
If you speak to a stranger about your worries of unaligned AI, they’ll think you’re insane (and watch too many sci-fi films).
I’m not so sure this is true. In my own experience, a correct explanation of the problem with unaligned AI makes sense to almost everyone with some minimum of reasoning skill. Although this is anecdotal, I would not be surprised if an actual survey of “strangers” showed this too.
Commenting on your general point, I think the reason is that most people’s sense of when AGI could plausibly happen is “in the far future”, which makes it psychologically unremarkable at this point.
Something like extinction from climate change (even if literal extinction is unlikely), although possibly further off in time, might feel closer to a lot of people because climate change is already killing people.
You would essentially be a freelancer. Using that framing instead, you will find plenty of resources out there on how to build a life as a freelancer. For an EA-specific perspective, here’s a good starting point: https://resourceportal.antientropy.org/docs/receiving-grant-funds
This is very exciting and has huge potential. Please get in touch with the Altruistic Agency for tech needs (e.g. a website) when you are at that point; I’d love to help.
Since sociology is probably an underrepresented degree in effective altruism, maybe you can consider it a comparative advantage rather than “the wrong degree”. The way I see it, EA could use a lot more sociological inquiry.
I’m aware of this, and frankly, it raises more questions than it answers. For example, I wonder what the terms were when what was originally a grant to a non-profit turned into (?) an investment in a for-profit.
[Question] Did OpenPhil ever publish their in-depth review of their three-year OpenAI grant?
Anti Entropy are doing a lot of work towards this in the operations area, especially for new organisations. I think a lot of the things you ask for (especially in infrastructure) are currently provided ad hoc and informally (e.g. in various invite-only Slack workspaces) or by service providers and (EA) agencies that charge for it.
Well done! If anyone is ever interested, Nick has the transcript in Swedish here: https://nickbostrom.com/interviews/Sommarprat-P1.pdf
More views from two days ago: https://forum.effectivealtruism.org/posts/9rvpLqt6MyKCffdNE/jobs-at-ea-organizations-are-overpaid-here-is-why
I especially recommend this comment: https://forum.effectivealtruism.org/posts/9rvpLqt6MyKCffdNE/jobs-at-ea-organizations-are-overpaid-here-is-why?commentId=zbPE2ZiLGMgC7hkMf
What would be truly useful are annual (anonymous) salary statistics across EA organisations, so that people could actually observe the numbers and reason about them.
Great designs! I just have to ask… the “typo” is intentional, right?
I mentioned this in an email to you, but thought I’d leave it in a comment here as well, just to make other readers aware of the initiative: BOAS does something similar, in the niche of sustainable baby and kids’ products. They started out fairly recently and have already donated 2,000 EUR to effective charities. I will check if Vincent, who founded BOAS, has an account here on the forum, and if so, ask him to share some general comments in this thread on his experience.
A concrete follow-up question (anyone, feel free to answer it):
What do you think is the correct salary for some common roles, and why that number?
That a situation can even occur in which they are not “absolutely sure” is one of the major causes for worry here, regardless of what conclusions can be drawn at this point.