It’s also possible there won’t be much competition. There may only be 3-6 entities with serious chances of making an AGI. One idea is to have safety researchers in almost every entity.
It’s definitely an important question.
In this case, the equivalent is a “car safety” nonprofit that goes around to all the car companies to help them make safe cars. The AI safety initiatives would attempt to make sure that they can help or advise whatever groups do make an AGI. However, knowing how to advise those companies does require making a few cars internally for experimentation.
I believe that OpenAI basically publicly stated that they are willing to work with any groups close to AGI, but I forget where they mentioned this.
That makes sense to me. When I said “harder to scale”, I meant harder to “put a bunch on top of each other”. In some ways it’s not as elegant.
Agreed that Impact Prizes are one way that Certificates of Impact could work long-term. Like, one group places $100k of Impact Prizes for 2030, where the money will only be used to purchase Certificates of Impact.
I’d generally agree with that.
Honestly, the technical infrastructure for Certificates of Impact would be very similar to that for Impact Prizes as I discuss them above. I think both would be really interesting to test at larger scales.
Impact Prizes may need less hype, though they may be more difficult to scale.
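To illustrate the shared infrastructure, here’s a minimal sketch of how an Impact Prize pool might pay out by buying Certificates of Impact. The proportional-allocation rule and all the names here are my own illustrative assumptions, not a description of any existing system.

```python
# Hypothetical sketch: an Impact Prize pool that pays out by purchasing
# Certificates of Impact in proportion to judged impact scores.
# The proportional rule and names are illustrative assumptions only.

def allocate_prize_pool(pool: float, certificates: dict[str, float]) -> dict[str, float]:
    """Split `pool` dollars across certificates in proportion to their
    judged impact scores. `certificates` maps certificate id -> score."""
    total = sum(certificates.values())
    if total <= 0:
        # No positively-judged certificates: nothing gets purchased.
        return {cert_id: 0.0 for cert_id in certificates}
    return {cert_id: pool * score / total for cert_id, score in certificates.items()}

# Example: a $100k prize for 2030, split across three judged certificates.
payouts = allocate_prize_pool(100_000, {"cert-a": 5.0, "cert-b": 3.0, "cert-c": 2.0})
# Under this proportional rule: cert-a gets $50k, cert-b $30k, cert-c $20k.
```

The point is just that both schemes need the same pieces: a registry of certificates, a judging step that produces scores, and a payout rule; only the timing and framing differ.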
Not that I know of. Paul has a lot of stuff going on, for one thing. :)
I think some people are still excited about Certificates of Impact though.
Yep, it’s related. I’ve looked a bit into social impact bonds; they actually are quite specific and precise though (they pay with a specific interest rate, for very specific outcomes).
There have been different thoughts on how to use markets for charitable work before. The Impact Certificate Post and its comments list some.
I imagine it would select in part for sales & persuasion, but not more than for other prizes (where you need to do the same for the judges). The middlemen would focus on the financial motive, so I’d expect them to be relatively sane.
I would really want the evaluations and predictions/estimations to be very good, in order to make sure people focus on the right things.
I wrote this post recently:
Generally, I feel like there are actually pretty few regular engineering positions around for EAs (maybe 8-15), and these tend to have fairly high bars and require work in the US/UK.
Small orgs have different needs to large ones, and most of the EA groups are small. This in part means they want senior and/or entrepreneurial types.
I do suggest that programmers learn ML or intensely learn functional programming, though not that many available people seem interested in either (especially those who are doing E2G outside of EA jobs). Either would be a significant challenge, for one thing.
I’ve worked around this space (cofounded .impact), and currently I do recommend Michal’s work for this issue.
That said, I think it’s less overlooked than it would appear. Volunteers, even tech volunteers, are generally really difficult to work with. The really good ones tend to get hired by groups rather quickly, and most of the rest are quite flaky. (Though this may be less true of Eng, which is a bit less in demand than other roles now.)
There’s generally a lot of overhead for managing a tech project, and doing it with someone who has a good chance of flaking out quickly is not that great.
My impression is that while there are a bunch of EAs in tech, very few are willing to sustain a 10hr/week-plus time commitment, especially ones who don’t have a lot of other experience doing side projects of that type.
On your questions:
1. I’ve been doing a decent amount of thinking & experimentation in similar work recently. I’m personally optimistic about non-market applications like GJP and Metaculus. I think that the path for similar groups to pay forecasters is much more straightforward than the equivalent in prediction markets. I think there could be a lot more good work in this area.
2. GJP charges several thousand per question, but Metaculus is free, assuming they accept your questions. I think the answer to this is very complicated; there are many variables at play. That said, I think that with a powerful system, $50k-500k per year in predictions could get a pretty significant informational return.
3. This is also a very vague question; it’s not obvious what metrics would best answer it. That said, if a good prediction system is made, it could help answer this question in specific quantitative ways. It seems to me that a robust prediction system should be at least roughly as accurate as a non-predictive system with the same people. Long-term predictions are tricky, but I think we could have some basic estimates of bias.
4. This is also a huge question. I think there’s a lot of experimentation yet to be done here on many different kinds of questions. If we could have meta-predictions on things like, “How important will we find it was to have had this item in the system?”, then we may be able to use the system to answer and optimize here.
5. I’m not very optimistic about prediction markets. This is of course something that would be nice to formally predict in the next 1-3 years.
By that do you mean that you feel like I am offering information that would critique people not maximizing victory points?
I felt like reallyeli was explicitly asking for an honest take of impact.
Do you have advice on how to give similar information without potential negatives that could come from it? Especially in a way that doesn’t take significantly longer?
I think one assumption is that compared to the main prestigious EA positions now, most jobs are orders of magnitude lower-impact per unit time. OpenPhil has spent a lot of time exploring options and only found a few possible areas, and even some of those (prison reform) don’t seem as good as AI safety, from what I can tell, in many ways. Unless there’s some clever EA analysis that a field is really surprisingly good, I think the burden of proof is on that field to show some surprising insight; in this case, education. If you have a senior role you may be able to do 5x as much, or 15x, but I think the thinking is that the choice of industry could make a 50-200x difference.
This comment contained some honest estimates & thoughts, and then got decently downvoted. The back-and-forth doesn’t seem highly productive to me.
On that note, if you are an engineer, you may want to consider going the AI-safety route. I’ve written about this here
Quick 2c: I think it’s typically assumed among many prominent EAs that global poverty / animal issues / long-term issues are all a lot more efficient than U.S. educational issues. As such, I’d personally expect the main benefits of your doing that work, assuming you will later work in one of the three areas I mentioned (or meta-work), to come from the first two things you mentioned (learning & career capital).
I think it’s incredibly difficult to have much counterfactual impact in the for-profit world. You’re right to have considerable epistemic uncertainty.
I’m personally more interested in US structures, mainly because I have a lot more familiarity with them and expect to spend most of my career in the US. That said, this post was meant to collect general advice for others also doing similar things, so any thoughts are appreciated.
Good to see! I used ScreenFlow to record myself going through the site for the first time, and recorded my reactions and thoughts. I’m hesitant to post it publicly (though you’re welcome to at any point, if you want), but sent it in Slack. In general I encourage people to review new projects in that way and similar; text feedback can take more time and be less information-dense (unless you spend a while summarizing it).
This is really neat, I’m a big fan of the comprehensive approach and the documentation style. Will spend more time later looking into the details; I’m not an expert in the field and can’t comment on the specific methods, but the high-level work seems very reasonable.
Side note that it seems kind of dismal that wild chimps are apparently rated with higher welfare than average humans in India, though I guess the chimp lives may actually be pretty nice, especially because there aren’t many of them. On that note, are there other animal species you think are particularly happy, but didn’t include in this report?