Ok, and any advice for reaching out to trusted-but-less-prestigious experts? It seems unlikely that reaching out to e.g. Kevin Esvelt will generate a response!
Massive Scaling Should be Frowned Upon
[Question] How to disclose a new x-risk?
Great post! I really appreciate the in-depth review of research on reducing sleep need.
I wrote some arguments for why reducing sleep is important here:
https://harsimony.wordpress.com/2021/02/05/why-sleep/
I also submitted a cause exploration application:
https://harsimony.wordpress.com/2022/07/14/cause-exploration-prize-application/
Your post includes substantially more research than mine, and I would encourage you to reformat it and submit it to Open Philanthropy's Cause Exploration Prize. I'm happy to help you with edits or to combine our efforts!
This kind of thing could be made more sophisticated by making fines proportional to the harm done, requiring more collateral for riskier projects, or setting up a system to short-sell different projects. But simpler seems better, at least initially.
Have you thought about whether it could work with a more free market, and not necessarily knowing all of the funders in advance?
Yeah, that’s a harder case. Some ideas:
- People undertaking projects could still post collateral on their own (or pre-commit to accepting a fine under certain conditions). Retroactive funders could reward this by giving such projects more consideration, and posting collateral is a costly signal of quality. But this still requires some pre-commitment from retroactive funders or a general consensus from the community.
- If contributors undertake multiple projects, it should be possible to punish bad actors after the fact by docking some of their rewards from other projects. For example, if someone participates in one beneficial project and one harmful project, their retroactive funding rewards from the beneficial project can be reduced because of their participation in the harmful one. Unfortunately, this also requires some sort of pre-commitment from funders.
I proposed a simple solution to the problem:
For a project to be considered for retroactive funding, participants must post a specific amount of money as collateral.
If a retroactive funder determines that the project was net-negative, they can burn the collateral to punish the people who participated in it. Otherwise, the project receives its collateral back.
This eliminates the “no downside” problem of retroactive funding and makes some net-negative projects unprofitable.
The amount of collateral can be chosen adaptively. Start with a small amount and increase it slowly until the number of net-negative projects is low enough. Note that setting the collateral too high can discourage net-positive but risky projects.
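To make the mechanics concrete, here is a minimal sketch in Python. The `RetroFund` class and its burn/payout rules are my own illustration of the proposal, not a worked-out design:

```python
class RetroFund:
    """Toy model of retroactive funding with collateral at stake."""

    def __init__(self):
        self.projects = {}  # name -> {"collateral": float, "settled": bool}

    def register(self, name, collateral):
        # Participants must post collateral up front to be eligible
        # for retroactive rewards. Riskier projects post more.
        self.projects[name] = {"collateral": collateral, "settled": False}

    def settle(self, name, net_positive, reward=0.0):
        """The retro funder's judgment: pay out or burn the collateral."""
        project = self.projects[name]
        if project["settled"]:
            raise ValueError(f"{name} already settled")
        project["settled"] = True
        if net_positive:
            # Collateral is returned along with any retroactive reward.
            return project["collateral"] + reward
        # Net-negative: collateral is burned, so the downside is real.
        return 0.0


fund = RetroFund()
fund.register("field-building", collateral=1_000)
fund.register("risky-bio-project", collateral=5_000)  # riskier -> more collateral
print(fund.settle("field-building", net_positive=True, reward=10_000))  # 11000
print(fund.settle("risky-bio-project", net_positive=False))  # 0.0 (burned)
```

The adaptive version would wrap this in a simple controller: raise the required collateral while the observed rate of net-negative projects stays above some target, and lower it otherwise.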
Why your charitable giving should be sustainable
I make a slightly different anti-immortality case here:
https://harsimony.wordpress.com/2020/11/27/is-immortality-ethical/
Summary: At a steady-state population, extended lifespans mean taking resources away from other potential people, so life-extension technology may not be ethical in that case. Because we are not at a steady state, this does not argue against working on life-extension technology today.
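To make the trade-off concrete, here is a toy steady-state calculation (my framing for this comment, not taken from the linked post):

```latex
% Fixed resources support a fixed population N. With lifespan L, the
% steady-state birth (and death) rate is N/L per year, so over a horizon
% T the number of distinct people who ever get to live is roughly
\[
  \text{lives}(L) \;\approx\; N + \frac{N}{L}\,T .
\]
% Doubling lifespan to 2L removes about N T / (2L) future lives:
% longer lives for the living trade off against lives never lived.
```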
One reason people make this claim is that many models of economic growth depend on population growth. As you noted, there are lots of other ways to grow the economy by making each individual more productive (lower poverty, more education, automating tasks, more focus on research, etc.).
But crucially, all of these measures have diminishing returns. Say that in the future everyone on Earth has a PhD, is highly productive, and works in an important research field. At that point, the only way to keep growing the economy is through population growth, since everything else has been maxed out. This is why Chad Jones argues that the long-run growth rate is limited by the population growth rate:
https://web.stanford.edu/~chadj/annualreview.pdf
At least, that's what the models say. Jones himself admits that AI might change these dynamics (I guess population growth of AIs would become the thing that matters if they replace human labor?).
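For reference, the standard semi-endogenous growth relation behind this claim looks like the following (I'm quoting the textbook form of Jones-style models, not the linked paper's exact notation):

```latex
% Idea production with diminishing returns to the knowledge stock A
% (\phi < 1) and research labor L_A growing at the population rate n:
\[
  \dot{A} = \delta\, A^{\phi} L_A^{\lambda}
  \quad\Longrightarrow\quad
  g_A = \frac{\dot{A}}{A} = \frac{\lambda\, n}{1-\phi}
  \text{ on the balanced growth path.}
\]
% If population growth n goes to zero, long-run growth g_A goes to zero
% no matter how productive each individual researcher becomes.
```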
A Model of Hits-Based Giving
Predicting for Good: Charity Prediction Markets
Thanks for writing this. Great to see people encouraging a sustainable approach to EA!
I want to tell you that taking care of yourself is what’s best for impact. But is it?
I claim that this is true:
- Finding personal fulfillment is a positive result in and of itself.
- It's important to prioritize personal needs; otherwise you will not be in a good position to help others (family, friends, charity, etc.).
- Ensuring one's relationship with EA is sustainable can lead to more impact over the long run (though this shouldn't be people's primary goal; personal wellbeing comes first).
- Encouraging a sustainable culture can make EA more welcoming to others.
I think another possible route around gambling restrictions on prediction markets is to ensure that all proceeds go to charity, with the winners choosing which charity the money is donated to. I wrote about this more here:
https://forum.effectivealtruism.org/posts/d43f6HCWawNSazZqb/charity-prediction-markets
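A minimal sketch of the settlement rule (my own toy illustration; `CharityMarket` and its API are invented for this example):

```python
class CharityMarket:
    """Toy binary prediction market where all stakes become donations."""

    def __init__(self, question):
        self.question = question
        self.bets = []  # (bettor, outcome, stake, chosen_charity)

    def bet(self, bettor, outcome, stake, charity):
        # Stakes are irrevocably earmarked for charity; bettors only
        # compete over *which* charity receives the pool.
        self.bets.append((bettor, outcome, stake, charity))

    def resolve(self, outcome):
        pool = sum(stake for _, _, stake, _ in self.bets)
        winners = [(b, s, c) for b, o, s, c in self.bets if o == outcome]
        winning_stake = sum(s for _, s, _ in winners)
        donations = {}
        for _, stake, charity in winners:
            # Each winner directs a share of the pool proportional
            # to their stake toward their chosen charity.
            share = pool * stake / winning_stake
            donations[charity] = donations.get(charity, 0) + share
        return donations


m = CharityMarket("Will intervention X pass an RCT by 2030?")
m.bet("alice", True, 100, "AMF")
m.bet("bob", False, 50, "GiveDirectly")
print(m.resolve(True))  # {'AMF': 150.0} -- all proceeds donated
```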
I have noticed that few people hold the view that we can readily reduce AI risk. Either they are very pessimistic (they see no viable solutions, so reducing risk is hard) or they are optimistic (they assume AI will be aligned by default, so trying to improve the situation is superfluous).
Either way, this would argue against alignment research, since alignment work would not produce much change.
Strategically, it's best to act as if alignment work does reduce AI risk, since the cost of doing too much alignment work is small relative to doing too little and allowing a catastrophe.
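A toy expected-value comparison makes the asymmetry explicit (the symbols here are my own illustration):

```latex
% Let c = cost of alignment effort (paid regardless of whether it helps),
% p = probability that the effort actually averts a catastrophe, and
% D = damage of that catastrophe. Doing the work is worth it whenever
\[
  p \cdot D \;>\; c ,
\]
% which holds even for small p, because D dwarfs any plausible c.
% Skipping the work saves c but risks the full loss D.
```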
Though I am not super familiar with the research, it seems that more indirect democracy generally functions better, because individual voters have little incentive to cast informed votes, whereas representatives are incentivized to make informed decisions on voters' behalf.
I think the book 10% Less Democracy can point you to relevant research on this topic. It was discussed briefly on MR here.
You may also want to check out Caplan’s The Myth of the Rational Voter for research along similar lines.
The Unilateralist’s Gift
Great post!
To reiterate what AppliedDivinityStudies said, I would love to hear more about proposed solutions to this problem. For example, what do you think of this paper on preventing supervolcanic eruptions?
Interventions that may prevent or mollify supervolcanic eruptions
Of course, EA funds can do all of these things, and I appreciate the work they are doing.
I think it is important to be explicit about the structure of EA funds, meta-charities, and charitable foundations: they typically involve pooling money from many donors and putting funding decisions in the hands of a few people. This is not a criticism! It makes a lot of sense to turn these decisions over to knowledgeable, committed specialists in the EA community. This approach likely improves the impact of people's donations over the counterfactual where everyone gives directly to charities without considering how others are donating.
While I appreciate this system, I don’t see why we shouldn’t at least consider other systems of collective donation. It seems worthwhile to explore other approaches before settling on one specific model of collective giving.
Also, it seems like you have more faith than I do in the collective wisdom of many non-experts, compared to a team of experts whose job is to work on these questions full-time.
Under the right circumstances, many non-experts can and do outperform experts; Tetlock's Superforecasting and prediction markets are good examples. That said, I am highly uncertain whether these conditions hold for charitable donation, so experimentation with different funding models seems valuable.
I agree that the EA funds (and meta-charities like GiveWell) are great opportunities to give and can help balance the flow of donations going to different charities. But I don't think these funds have entirely solved the collective action problem in charitable giving. Rather, they aggregate money from many donors and turn funding decisions over to a handful of experts. These experts are doing great work, and I really respect them, but it doesn't hurt to consider how we might do things even better!
If we really did have a system for small donors to coordinate their giving like large donors, things would look quite different:
- Collections of small donors would be able to fund specific research projects, found new charitable organizations, and exert significant control over the day-to-day activities of these organizations.
- Collections of donors would be able to work with mega-donors, governments, and charitable organizations to pursue much larger projects.
- Collections of small donors would be able to deliberate amongst themselves and make funding decisions based on their combined knowledge.
Charities and EA funds do this in a roundabout way by acting as representatives for many small donors, but this isn't the only way to organize giving. What about a kickstarter for EA research projects (see the sketch below)? Or a charitable fund whose managers are elected by donors? Or a prediction market on how impactful different interventions are? I'm not claiming these ideas will be better than the current instantiation of EA funds, but I want to encourage exploration and experimentation before we settle on EA funds as the only solution to collective donation.
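To make the kickstarter idea concrete, here is a minimal sketch of a threshold-pledge mechanism (the class and method names are invented for illustration):

```python
class ThresholdPledgeDrive:
    """Toy kickstarter for a charitable project: pledges are only
    collected if the funding goal is met, so no donor pays into a
    project that can't get off the ground."""

    def __init__(self, project, goal):
        self.project = project
        self.goal = goal
        self.pledges = {}  # donor -> amount

    def pledge(self, donor, amount):
        self.pledges[donor] = self.pledges.get(donor, 0) + amount

    def close(self):
        total = sum(self.pledges.values())
        if total >= self.goal:
            # Goal met: collect every pledge and fund the project.
            return {"funded": True, "collected": self.pledges}
        # Goal missed: nobody is charged.
        return {"funded": False, "collected": {}}


drive = ThresholdPledgeDrive("x-risk survey replication", goal=20_000)
drive.pledge("alice", 12_000)
drive.pledge("bob", 9_000)
print(drive.close())  # funded: True, 21,000 collected
```

The design choice worth noting: conditioning collection on the threshold removes the small donor's worry that their money is wasted on an underfunded project, which is one of the coordination failures the comment above is pointing at.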
Thanks for posting this, this seems like valuable work.
I’m particularly interested in using MLOSS to intentionally shape AI development. For example, could we identify key areas where releasing particular MLOSS can increase safety or extend the time to AGI?
Finding ways to guide AI development towards narrow and simple AI models could extend AI timelines, which is complementary to safety work:
https://www.lesswrong.com/posts/BEWdwySAgKgsyBzbC/satisf-ai-a-route-to-reducing-risks-from-ai
In your opinion, what traits of a particular piece of MLOSS determine whether it increases or decreases risk?