Awards for the Future Fund’s Project Ideas Competition
This post announces the winners of the Future Fund’s Project Ideas Competition, and reflects on the process of running the competition.
We got an overwhelming response to the competition, receiving close to 1,000 submissions. The blog post received more comments than any other post on the Forum; in fact, more than double the second-most-commented post. We were thrilled at the level of excitement that the competition generated. So that we can appropriately reward more of the submissions, we’ve decided to include a category of “honorable mentions”: these will not go onto the website, but each will receive an award of $1,000.
We will be contacting the winners individually about how to receive their prizes.
Winners
The winners—whose ideas will soon go up on our project ideas page (often in modified form)—are as follows:
gavintaylor—Infrastructure to support independent researchers
Epistemic Institutions, Empowering Exceptional People
The EA and Longtermist communities appear to contain a relatively large proportion of independent researchers compared to traditional academia. While working independently can provide the freedom to address impactful topics by liberating researchers from the perverse incentives, bureaucracy, and other constraints imposed on academics, the lack of institutional support can impose other difficulties that range from routine (e.g. difficulties accessing pay-walled publications) to restrictive (e.g. lack of mentorship, limited opportunities for professional development). Virtual independent scholarship institutes have recently emerged to provide institutional support (e.g. affiliation for submitting journal articles, grant management) for academic researchers working independently. We expect that facilitating additional and more productive independent EA and Longtermist research will increase the demographic diversity and expand the geographical inclusivity of these communities of researchers. Initially, we would like to determine the main needs and limitations independent researchers in these areas face and then support the creation of a virtual institute focussed on addressing those points.
Konstantin Pilz—EA content translation service
Effective Altruism, Movement-Building
EA-related texts often use academic language to convey complex concepts. For non-native speakers, reading and understanding those texts takes far more time than reading about the same topic in their native language would. Furthermore, many educated people in important positions today, especially in non-Western countries, speak English poorly or not at all. (This is likely part of the reason that EA currently exists mainly in English-speaking countries and consists almost exclusively of people who speak English well.)
To make EA widely known and easy to understand, there needs to be a translation service enabling, e.g., 80k content, important Forum posts, or The Precipice to be read in different languages. This would not only make EA easier to understand—and thus spread its ideas further—but would also likely increase the epistemic diversity of the community by making EA more international.
Mackenzie Arnold—A regulatory failsafe for catastrophic or existential biorisks
Biorisk and Recovery from Catastrophes
Currently, many government regulators (like the FDA in the US) apply a static set of criteria when evaluating countermeasures used to fight disease or other public harms. While these criteria may operate relatively well during normal times, during catastrophic events they would likely impose overly cautious limitations on response efforts and, in some cases, may even prohibit the development or deployment of countermeasures with minor risk profiles relative to the threat at hand. To avoid such outcomes, we would be interested in supporting work that aims to research, develop, or advocate for (through policy or legal challenges) alternative regulatory structures that would better accommodate the needs of catastrophic risk scenarios.
Thanks to Kyle Fish for discussing this and related ideas over the past year.
Marc-Everin Carauleanu—Datasets for AI alignment research
Artificial Intelligence
The success of Machine Learning experiments relies heavily on the quality and quantity of training data, which is often difficult and expensive to obtain. We would like to see an organization that has the infrastructure and capacity to provide training data for any promising AI Alignment research proposal. This could aid the development of alignment-relevant metrics in line with Open Philanthropy’s ‘Measuring and forecasting risks’ research direction, as well as potentially incentivize ML researchers to focus on alignment work, as a key bottleneck—training data—will be taken care of. We would hope that datasets developed by this organization have the potential to transform AI Alignment research, similarly to how ImageNet accelerated Computer Vision research.
Elizabeth Barnes—High-quality human data
Artificial Intelligence
Most proposals for aligning advanced AI require collecting high-quality human data on complex tasks such as evaluating whether a critique of an argument was good, breaking a difficult question into easier subquestions, or examining the outputs of interpretability tools. Collecting high-quality human data is also necessary for many current alignment research projects.
We’d like to see a human data startup that prioritizes data quality over financial cost. It would follow complex instructions, ensure high data quality and reliability, and operate with a fast feedback loop that’s optimized for researchers’ workflow. Having access to this service would make it quicker and easier for safety teams to iterate on different alignment approaches.
Some alignment research teams currently manage their own contractors because existing services (such as surge.ai and scale.ai) don’t fully address their needs; a competent human data startup could free up considerable amounts of time for top researchers.
Such an organization could also practice and build capacity for things that might be needed at ‘crunch time’ – i.e., rapidly producing moderately large amounts of human data, or checking a large volume of output from interpretability tools or adversarial probes with very high reliability.
The market for high-quality data will likely grow – as AI labs train increasingly large models at a high compute cost, they will become more willing to pay for data. As models become more competent, data needs to be more sophisticated or higher-quality to actually improve model performance.
Making it less annoying for researchers to gather high-quality human data relative to using more compute would incentivize the entire field towards doing work that’s more helpful for alignment, e.g., improving products by making them more aligned rather than by using more compute.
[Thanks to Jonas V for writing a bunch of this comment for me]
[Note from Nick: we’ll probably add just one of the above two ideas to our site, or some amalgamation.]
Mark Xu—Detailed stories about the future
Artificial Intelligence
We’re interested to see stories about how the present evolves into the future that are as specific and realistic as possible. Such stories should be set in a world that is “on trend” with respect to technological development and aim to consider realistic sets of technologies coexisting in a global economy. We think such stories might help make it easier to feel, rather than just abstractly understand, that this might be the most important century.
We will also award a $5,000 prize to Fin Moorhouse for his list of EA projects. The ideas are not in the right format to go on the website immediately, and we think they make more sense directed specifically at the EA community than on the Future Fund website. But we thought the write-up was impressive, and we want to reward it.
Honorable mentions
We are also awarding $1,000 “honorable mention” prizes for the following suggestions:
agnode—SEP for every subject
Epistemic institutions
Create free online encyclopedias for every academic subject (or those most relevant to longtermism), written by experts and regularly updated. Despite the Stanford Encyclopedia of Philosophy being widely known and well loved, there are few examples from other subjects. Often academic encyclopedias are both behind institutional paywalls and not accessible on sci-hub (e.g. https://oxfordre.com/). This would provide decision makers and the public with better access to academic views on a variety of topics.
JacksonWagner—Resilient ways to archive valuable technical / cultural / ecological information
Biorisk and recovery from catastrophe
In ancient Sumeria, clay tablets recording ordinary market transactions were considered disposable. But today’s much larger and wealthier civilization considers them priceless for the historical insight they offer. By the same logic, if human civilization millennia from now becomes a flourishing utopia, they’ll probably wish that modern-day civilization had done a better job at resiliently preserving valuable information. For example, over the past 120 years, around 1 vertebrate species has gone extinct each year, meaning we permanently lose the unique genetic info that arose in that species through millions of years of evolution.
There are many existing projects in this space—like the Internet Archive, museums storing cultural artifacts, and efforts to protect endangered species. But almost none of these projects are designed with the long-term future in mind, robustly enough to last many centuries. Museums can burn down, modern digital storage technologies like CDs and flash memory aren’t designed to last for centuries, and many critically endangered species (such as those which are “extinct in the wild” but survive in captivity) would likely go extinct if their precarious life-support breeding programs ever lost funding or were disrupted by war, disaster, etc. We’re potentially interested in funding new, resilient approaches to storing valuable information, including the DNA sequences of living creatures.
JanBrauner—Cognitive enhancement research and development (nootropics, devices, …)
Values and Reflective Processes, Economic Growth
Improving people’s ability to think has many positive effects on innovation, reflection, and potentially individual happiness. We’d like to see more rigorous research on nootropics, devices that improve cognitive performance, and similar fields. This could target any aspect of thinking ability—such as long-/short-term memory, abstract reasoning, or creativity—and any stage of the research and development pipeline, from wet-lab research or engineering, through testing in humans, to product development.
Jared Mueller—“Building vibrant EA academic communities in Africa, Asia, and Latin America”
The largest EA university communities are concentrated in Europe and North America. We would like to support the emergence of more broad-based and vibrant EA academic communities across Africa, Asia, and Latin America. This will enhance the cultural diversity of EA, and broaden the supply of students and faculty tackling the most pressing problems. The Universities of Ibadan and São Paulo, India’s IITs and IIMs, and the National Autonomous University of Mexico are only a few examples of campuses where we would be eager to see thriving EA intellectual communities.
Kat Woods—Translate EA content at scale
Reach More Potential EAs in Non-English Languages
Problem: Lots of potential EAs don’t speak English, but most EA content hasn’t been translated.
Solution: Pay people to translate the top EA content of all time into the most popular languages, then promote it to the relevant language communities.
Keiran Harris—Betting exchange focused on debates between individuals
Bryan Caplan argues that “bets are one of the best ways to (a) turn vague verbiage into precise statements, and (b) discover the extent of genuine disagreement about such precise statements.” But it isn’t easy to set up bets with people you don’t already trust, so many potential bets don’t get made. If instead one person could say, “I just set up a bet on X.com on our point of disagreement, here’s the link if you want to accept” — many more bets might get made. The site could also keep a public record of bettors’ long-run track records — which Caplan thinks is one of the best ways to assess thinkers’ credibility.
Lennart Stern and John Halstead—The Mission Innovation Initiative
The Mission Innovation Initiative tracks public spending on different categories of clean energy RD&D. For each category, a fund could be created with the mandate to use its budget to maximize the rate of progress on the specific type of clean energy RD&D through incentive payments made to countries. When a global fund directly spends money on projects in areas in which the government already spends substantial amounts, there is the possibility that the governments’ spending will be crowded out. Theoretically, there are therefore advantages to incentivising countries to spend more. Stern (2021) finds that 1 billion dollars spent through the optimal such mechanism causes an increase in aggregate clean energy RD&D of 5 billion.
Fossil fuel combustion can be curbed by restricting supply, restricting demand, and expanding substitutes. Here is an analysis suggesting that it is currently best for new global funds to focus on substitute expansion.
New scalable Global Public Good Institutions like the one proposed could be funded through novel mechanisms like the MGF mechanism proposed here.
MaxG—DIY decentralized nucleic acid observatory
Biorisk and Recovery from Catastrophes
As part of the larger effort of building an early detection center for novel pathogens, a smaller self-sustaining version is needed for remote locations. The ideal early-detection center would not only have surveillance stations in the largest hubs and airports of the world, but also in as many medium sized ones as possible. For this it is necessary to provide a ready-made, small and transportable product which allows meta-genomic surveillance of wastewater or air ventilation. One solution would be designing a workflow utilizing the easily scalable and portable technology of nanopore sequencing and combining it with a workflow to extract nucleic acids from wastewater. The sharing of instructions on how to build and use this method could lead to a “do it yourself” (DIY) and decentralized version of a nucleic acid observatory. Instead of staffing a whole lab at a central location, it would be possible to only have one or two personnel in key locations who use this product to sequence samples directly and only transmit the data to the larger surveillance effort.
Nicholas Schiefer—Special economic zones near the United States
There are tons of talented people outside of the United States who would like to live there and participate in its dynamic economy and remarkable institutions. However, US immigration law makes this very difficult, even for very talented people. Meanwhile, high productivity regions within the United States tend to have extremely high costs of living, primarily due to shortages of housing.
We’d love to solve both of these problems by working with a place near the United States (such as a Caribbean or Atlantic island, or perhaps a Canadian province or Mexican state) to set up a “special economic zone” (SEZ). An SEZ would target explicit harmonization with American law on matters related to doing business and research, making it easy for American organizations to hire people or set up offices there. It would have extremely non-restrictive immigration policies, allowing people to move there from basically anywhere in the world. It would also set policy carefully so that basic goods like housing remain affordable (for example, by forbidding restrictions on the height of buildings).
Pablo—Retrospective grant evaluations
Research That Can Help Us Improve
EA funders allocate over a hundred million dollars per year to longtermist causes, but a very small fraction of this money is spent evaluating past grantmaking decisions. We are excited to fund efforts to conduct retrospective evaluations to examine which of these decisions have stood the test of time. We hope that these evaluations will help us better score a grantmaker’s track record and generally make grantmaking more meritocratic and, in turn, more effective. We are interested in funding evaluations not just of our own grantmaking decisions (including decisions by regrantors in our regranting program), but also of decisions made by other grantmaking organizations in the longtermist EA community.
RoryFenton—Campaign to eliminate lead globally
Economic Growth
Lead exposure lowers IQ, takes over 1 million lives every year, and costs Africa alone $130 billion annually—4% of GDP: an extraordinary limit on human potential. Most lead exposure is through paint in buildings and toys. The US banned lead paint in 1978, but 60% of countries still permit it. We would like to see ideas for a global policy campaign, perhaps similar to Bloomberg’s $1 billion tobacco advocacy campaign (estimated to have saved ~30 million lives), to push for regulations and industry monitoring.
Rumtin Sepasspour—Existential risk whistleblowing mechanism
Say you’re an employee of an organization building AGI without safety measures, or a government official who sees your leadership massively increasing risk: how do you raise the alarm? You risk personal, career, or financial costs if you try to push back on these concerns internally. There need to be incentives, such as financial compensation, to bring these concerns to public attention. It may also require legal support or protection, and a clear avenue for making an impact with your revelations. This may be particularly important in authoritarian countries, where the financial compensation may be extremely high in comparison to a salary, and policy change is unlikely from inside the system. We need more mechanisms that allow people to raise these concerns and reinforce norms around existential security.
RyanCarey—EA Coworking Spaces at Scale
Effective Altruism
The EA community has created several great coworking spaces, but mostly in an ad hoc way, with large overheads. Instead, a standard EA office could be created in up to 100 towns and cities. Companies, community organizers, and individuals working full-time on EA projects would be awarded a membership that allows them to use these offices in any city. Members gain from being able to work more flexibly, in collaboration with people with similar interests (this especially helps independent researchers with motivation). EA organizations benefit from a decreased need to do office management (which can be done centrally, without special EA expertise). EA community organizers gain easier access to an event space and standard resources, such as a library and hot-desking space, and some access to the expertise of others using the office.
Toby Shevlane—Tools that facilitate structured access to powerful AI systems
Structured access is an approach to sharing AI models that allows people to use and study the model, but only within a structure that prevents misuse and undesired information leaks. There are early examples of structured access (e.g. OpenAI’s GPT-3 API) but the paradigm has not yet reached maturity. It would likely be possible to give external researchers greater flexibility to study models (including for the purposes of safety) even without significantly increasing the likelihood of misuse and unwanted proliferation. Also, tools for granting structured access to AI models are not widely available, which is a barrier for labs adopting a structured access approach. We would be excited to fund a fellowship for technical researchers to build open source tools for structured access. This could be in collaboration with OpenMined, an existing open source community focussed on similar topics.
Zdgroff—Advocacy for digital minds
Artificial Intelligence, Values and Reflective Processes, Effective Altruism
Digital sentience is likely to be widespread in the most important future scenarios. It may be possible to shape the development and deployment of artificially sentient beings in various ways, e.g. through corporate outreach and lobbying. For example, constitutions can be drafted or revised to grant personhood on the basis of sentience; corporate charters can include responsibilities to sentient subroutines; and laws regarding safe artificial intelligence can be tailored to consider the interests of a sentient system. We would like to see an organization dedicated to identifying and pursuing opportunities to protect the interests of digital minds. There could be one or multiple organizations. We expect foundational research to be crucial here; a successful effort would hinge on thorough research into potential policies and the best ways of identifying digital suffering.
Some reflections
Here are some quick notes on how this project went relative to our expectations:
Our initial best guess was that we’d add 5-10 project ideas to our website, and so that aspect of the final outcome matched our expectations.
We got about twice as many proposals as we expected.
It was notable that we didn’t see any idea that felt exceptional enough to warrant more than the $5,000 payout. Most of the ideas we liked were familiar to us, and the main value-add was either reminding us of a potential project that we’d forgotten about or providing a clear written explanation. This was somewhat surprising to us.
The project was overall a pretty significant amount of work, and we are undecided about how often to use this kind of approach in the future. Still, we are overall happy we did the experiment, and we were happy about the excitement it created around generating concrete project ideas.
Here are some quick notes on what kinds of project ideas we found more or less helpful:
We found project ideas much more useful when they were more concrete. There’s a huge difference in usefulness to us between “We should hire someone to run X to help with Y / we should do research on X” and “This particular line of attack would be helpful.”
We found project ideas more helpful when there was a clear market failure that they would correct. For example, a common category we found less helpful was “X for EAs”, where X is some service that seems like it would be best provided by normal market mechanisms.
We were excited about concrete projects that weren’t just about promoting EA or longtermism. We really want the EA community to achieve concrete wins and launch ambitious and inspiring object-level projects to improve the long-term future. We think that will help the world directly, improve our culture, and also help with recruitment in the long run.
Both “EA translation service” and “EA spaces everywhere” seem like ideas which can take good forms, but also many bad or outright harmful ones.
A few years ago, I tried to describe how to establish a robustly good “local effective altruism” in a new country or culture (outside the Anglosphere).
The super-brief summary is:
1. it’s not about translating a book, but about “transmission of a tradition of knowledge”
2. what is needed is a highly competent group of people
3. who can, apart from other things, figure out what the generative question of EA means in the given context (which may be quite different from Oxford or the Bay Area)
Point #2 is the bottleneck. Without it, efforts to have a “country/language X EA” will often be unsuccessful or worse. Doing Good Better was translated into, e.g., Japanese. It is doing a bit worse on Amazon.co.jp than the English version on Amazon.co.uk, but not by orders of magnitude. Yet you aren’t seeing a lot of content and projects from EA Japan.
So: one good version of a “translation service” seems to be basically giving functional national EA groups money to get translations as good as they need.
One bad version is a “centralized project” trying to translate centrally selected content to centrally selected languages by hiring professional contractors to do it.
Similarly EA spaces:
Hiring offices in 100 cities is easy, and you can do that centrally. Running a space with a good culture and gatekeeping is harder, and bottlenecked on point #2.
A good version of the project seems to be basically giving functional city EA groups money to get good coworking spaces, which is probably doable via CEA group support.
Many bad versions are somewhere close to “centralized megaproject to run EA spaces everywhere”.
In other words: it isn’t easy to get around the bottleneck ‘you need people on the ground’.
Footnote: there are many variations on this theme. The basic pattern roughly is:
“notice that successful groups are doing activity X” (e.g. they have a coworking space, or materials in the local language, or they organize events,...).
the next step, in which this goes wrong, is: “let’s start a project P(X) that will try to make activity X happen everywhere”.
Also: the chances of bad or explicitly harmful outcomes increase roughly in proportion to the combination of cultural, network, and geographic distance from the “centre” from which such activity is directed. E.g., a project that tries to run spaces in Stanford from Berkeley seems fine.
Regarding coworking spaces everywhere, I strongly agree that you need competent people to set the culture locally, but the number of places with such people is rapidly increasing.
And there’s also something to the fact that if you try to run a project far away from current cultural hubs, then it can drift away from your intended purpose. But if there is decent local culture, some interchange of people between hubs, and some centralised management, I think it would work pretty well.
The considerations for centralising office management and gatekeeping seem strong overall—freeing up a lot of EA organisers’ time, improving scalability, and improving ability to travel between offices.
Hi Nick, Great work getting so much interest and so many ideas.
I am super curious to know how much prioritisation and vetting is going on behind the scenes for the ideas on the FTX Fund project list and how confident you are in the specific ideas listed.
One way to express this would be: Do you see the ideas on your list as likely to be in the top 100 longtermist project ideas or as likely to be in the top 10,000 longtermist project ideas or somewhere in between?* I think knowing this could be useful for anyone looking to start a project to decide how closely to stick to the ideas you list.
I expect part of the reason why a number of people commented that they were expecting more awards/winners (and I too had this intuition) is that it may not be at all clear to the casual EA reader why the ideas listed in the 723 comments replying to the competition post are any better or worse than the ideas listed on the FTX project list. I think it is possible that there is more vetting and prioritisation going on behind the scenes at FTX than people realise. But if so, it would be good to make that transparent.
Thank you Nick!
– –
NOTE: For what it is worth, my initial intuition when reading through the list is that all the FTX ideas were very likely in the “top 2,500 longtermist ideas”* across FTX’s areas of interest. Which is decent. My intuition was driven by focusing on topics where I have done ideas research**:
The FTX list included “Alternative voting systems”. Voting is the most outwardly obvious part of the policy reflection and decision process, but is roughly just 1% of the process. So this would be a top 100 idea for improving the policy reflection and decision process. (Within the broader category of “Values and Reflective Processes”, maybe a top 300 idea.)
The FTX list included “Strengthening the Bioweapons Convention”. This is perhaps in the top 50 for ideas preventing dangerous research based on this mapping, and a top 150 idea for preventing or mitigating an existential biorisks.
– –
* By saying a collection of ideas is “top 1,000 ideas”, I mean that I could, with a bit of time, write out or collect 1,000 ideas in the relevant category(s) where it would not be obvious that they were better or worse than the ideas in question (but substantially more than 1,000 would be difficult). So if I then spent some months doing prioritisation research, there would be a roughly 10% chance that any specific idea from the collection would make it to the top 100 (the top 10%).
** E.g. through my current job which is running a team at Charity Entrepreneurship who work on listing, prioritising between and researching ideas for EA orgs.
I suspect a lot of the “very best” ideas—in terms of which things are ex ante the best to do if we don’t look at other things in the space (including things not currently done)—will look very similar to each other.
Like 10 extremely similar AI alignment proposals.
So I’d expect any list to have a lot of regularization for uniqueness/side-constraint optimizations, rather than thinking of the FTX project ideas list as a ranked list of the most important x-risk-reducing projects on the margin. Arguably, the latter ought to be closer to how altruistic individuals should be optimizing for what projects to do, after adjusting for personal fit.
Having set up the TEAMWORK EA-Coworking-Space in Berlin I’m very sympathetic to the EA Coworking Spaces at Scale idea (almost applied with something similar).
A couple of questions and thoughts on this topic though:
1. It says “The EA community has created several great coworking spaces”. Where are those other spaces?
2. It also says they were set up “in an ad hoc way, with large overheads”. In my case I agree with the “ad hoc” part (which I don’t consider particularly bad though) but am not sure with the “large overheads” part. What is this assessment based upon? Probably I spend too much of my own time on it (because of a lack of funding) but I don’t really see how a central international organization would have saved much time or money if the alternative would be to hire and pay someone locally.
3. I kind of doubt that there currently is the demand for up to 100 EA coworking spaces in the world, at least if you are thinking of them as including event space, a library, etc. The space in Berlin is pretty small (200m2) and we haven’t reached capacity yet. Could be a lack of marketing (and obviously Covid) etc., but my best guess is that there are right now fewer than 10 cities worldwide where a bigger space would make sense. If the growth of EA accelerates and the BEAHR is unleashed, that might change soon though, and it could make sense to set up the necessary infrastructure already.
Anyway, if anyone wants to open up an EA-Coworking-Space I’m happy to talk (not sure if I can provide much insight besides adding phone booths from the start but I’ll try).
Thanks so much for doing all of this work! I agree that the initiative was valuable. I also think that the new additions and mentions are exceptional ideas and worthy of more exploration.
I want to share my feedback and reflections as a competition participant and post reader. Note that these are weakly held views, shared quickly, and mainly for transparency.
I was personally surprised by how few awards and mentions were given and the relatively small overall value of the pay-out. I thought that there were easily 20+ good ideas proposed, maybe more.
I would probably have liked it if more submissions had at least been explicitly classed as having ‘high potential’, etc. Part of this view comes from recently hearing how EA is flooded with funding, which leads me to feel that foregrounding good ideas is increasingly important. I suppose I worry that we might miss out on potential value from what I perceived to be a very valuable and fruitful ideation exercise.
Related to that, I’d really like to see funders/stakeholders proactively nudge the development of any ideas that they liked. IMO many deserve a full forum post and further examination, and some should be funded for a trial. I would discourage people from assuming that anyone who proposed an idea will be proactive in trying to progress it in the absence of feedback and commitments for funding, even if it was awarded a prize.
In future rounds/similar competitions, it could be valuable to give even very short feedback on submissions to indicate your receptiveness and reasoning. At this stage, those who took time out of their work and leisure to contribute ideas but didn’t win an award may feel that they gained insufficient impact/reward/insight for that work. If so, that does not optimally motivate them to invest time in future ideation projects (though they may anyway if sufficiently intrinsically motivated, etc). I think that giving even a small amount of feedback could reduce that risk. Feedback would show the person offering the idea that it was engaged with and give them an update on its fit for funding.
I am interested to hear other people’s responses.
I was also surprised by the low number of awards. I was expecting ~5-10x as many winners (50-100).
Also, it’s interesting to note the low correlation between comment karma and awards. Among the public submissions, the winners (3 of 6) had a mean of 20 karma [as of posting this comment] and a minimum of 18, while the honourable mentions (9 of 15) had a mean of 39 and a minimum of 16 (suggesting perhaps these were somewhat weighted “by popular demand”). None of the winners were among the top 75 highest-rated comments; 8/9 of the publicly posted honourable mentions were (including 4 in the top 11).
There are 6 winners and 15 honourable mentions listed in the OP (21 total); the top 21 public submissions had a mean karma of 52 and a minimum of 38; the top 50, a mean of 40 and a minimum of 28; and the top 100, a mean of 31 and a minimum of 18. And there are 86 public submissions not among the awardees with higher karma than the lowest-karma award winner. See the spreadsheet for details.
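For anyone wanting to reproduce this kind of comparison from the spreadsheet, a minimal sketch of the mean/minimum karma summary might look like the following. The karma values here are illustrative placeholders, not the actual spreadsheet figures.

```python
# Sketch of the karma comparison: summarize each group of public
# submissions by its mean and minimum karma score.

def summarize(karma):
    """Return (mean, min) karma for a list of submission scores."""
    return sum(karma) / len(karma), min(karma)

# Hypothetical karma scores; the real figures are in the linked spreadsheet.
winners = [20, 22, 18]                            # 3 public winning submissions
mentions = [39, 45, 16, 50, 30, 41, 44, 38, 48]   # 9 public honourable mentions

w_mean, w_min = summarize(winners)
m_mean, m_min = summarize(mentions)
print(f"winners:  mean={w_mean:.0f}, min={w_min}")
print(f"mentions: mean={m_mean:.0f}, min={m_min}")
```

The same `summarize` helper can be reused for the top-21/top-50/top-100 slices of the public submissions, sorted by karma.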
Given that half of the winners were private entries (2/3 if accounting for the fact that one was only posted publicly 2 weeks after the deadline), and 40% of the honourable mentions, one explanation could be that private entries were generally higher quality.
I note that the karma figures are confounded by posting date (and possibly popularity of the poster), and a better system for showing them would likely have produced different results, as per the considerations Nathan Young outlines in the second most upvoted comment on the initial competition announcement. Also karma is an imperfect measure (so maybe the discrepancy isn’t that surprising).
Maybe giving feedback on the ideas could have been outsourced by also offering monetary rewards for especially useful responses assessing an idea’s usefulness. :D
(by the way, I read all public suggestions and I remember liking many of your ideas, Peter)
I think the lead exposure project is quite interesting, but isn’t this already done by LEEP, which spun out of Charity Entrepreneurship a while back? What’s the rationale for another organization here? Or is RoryFenton already involved in that project?
Hey!
LEEP is indeed working on this—I mentioned them in my original comment but have no connection to them. I was thinking of a campaign on the $100M/year scale, comparable to Bloomberg’s work on tobacco. That could definitely be LEEP, but my sense (from quick Googling, and based purely on the small size of their reported team) is that they would have to grow a lot to take on that kind of funding, so there could also be a place for a large existing advocacy org pivoting to lead elimination. I have not at all thought through the implementation side of things here.
Hi! We at LEEP would also be excited about a campaign at something like $100 million/year—great to see you submitted the idea Rory. We recently wrote this proposal aimed at the Biden administration with some of our ideas: https://www.dayoneproject.org/post/eliminating-childhood-lead-poisoning-worldwide
And yes, we’re currently a small team (3 FTE), but hoping to expand significantly later this year!
I think taking this forward would be awesome, and I’m potentially interested to contribute. So consider this comment an earmarking for me to come speak with you and / or Rory about this at a later date :)
I’m excited to see this proposal! Digital minds are our research focus at Sentience Institute. We published a couple papers on it (in Science and Engineering Ethics and Futures), and we’re trying to help coordinate and support more research on this. Anyone is welcome to email us info@sentienceinstitute.org if it may be useful.
A strange upside of this project: It created an unofficial list of EA ideas that is likely to contain all the high quality ideas that weren’t funded yet.
This is in turn an incentive for others to add their ideas to the same list.
Edit: Ok maybe not?
No, not at all. I agree that this list is valuable, but I expect there are many more high-quality ideas and important projects that are not mentioned in it. These are just a few obvious ideas for what we could do next.
(Btw, you apparently just received a strong downvote while I was writing this. That wasn’t me; my other comment was strongly downvoted too.)
Nice, we now have some good project ideas; next we need people to execute them.
I wouldn’t expect that to happen automatically in many cases. Therefore, I am particularly excited about projects that help as accelerators for getting other projects started, like actively finding the right people (and convincing them to start/work on a specific project) or making promising people more capable.
In particular, I’d be excited about a great headhunting organization to get the right people (EAs and non-EAs) to work on the right projects. (Like you considered in the project idea “EA ops”, though I think it would also help a lot for e.g. finding great AI safety researchers.)
The way you phrased the projects “Talent search” and “innovative educational experiments” generally sounds too narrow to me. I don’t only want to find and help talented youths, but also e.g. get great professors to work on AI safety, and support all sorts of people through e.g. leadership and productivity training.
Posting a comment because I expect people who read this to also be somewhat entrepreneurially aligned. If anyone is interested in the below areas and wants to kick ideas around, potentially cofounder match, etc. I’d love to chat as I’m considering doing something in one of these spaces after I finish my fellowship in Congress:
Charter Cities (e.g. SEZ near the U.S.); especially those that could also serve as a stopgap while other orgs advocate for domestic immigration reform for critical areas (e.g. AI researchers from India or China who would have a hard time getting a visa)
Purchasing coal mines in order to 1) preserve them for future generations in case of a catastrophic event, 2) prevent near-term coal use and the additional harmful emissions it would cause, and 3) see if there are other interesting things we could do with them… unsure what that is yet. Underground bunkers next to the mine for biological weapons shelters?
Any startup ideas around defense, helping make humanity multiplanetary, or anything far future that is closer to a venture backable company than a nonprofit.
I think another class of really important projects are research projects that try to evaluate what needs to be done. (Like priorities research, though even a bit more applied and generating and evaluating ideas and forecasting to see what seems best.)
The projects that are now on your project list are good options given what currently seems best to do. But in the game against x-risk, we want to be able to look more moves ahead, consider how our opponent may strike us down, and probably invest a lot of effort into improving our long-term position on the gameboard, because we really don’t want to lose that game.
Sadly, I don’t think there are that many people who can do that kind of research well, but finding those seems really important.
(I intend to write more about this soon.)