A lot of these would be good for a small founding team, rather than individuals. What do you mean by ‘good for an EA group’?
Richard_Batty
No tasty money for you: http://effective-altruism.com/ea/18p/concrete_project_lists/
I was just looking at the EA Funds dashboard. To what extent do you think the money coming into EA Funds is EA money that was already going to be allocated to similarly effective charities?
I saw the EA Funds post on Hacker News. Are you planning to continue promoting EA Funds outside the existing EA community?
You can understand some of what people are downvoting you for by looking at which of your comments are most downvoted—ones where you’re very critical without much explanation and where you suggest that people in the community have bad motives:
http://effective-altruism.com/ea/181/introducing_ceas_guiding_principles/ah7
http://effective-altruism.com/ea/181/introducing_ceas_guiding_principles/ah6
http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8p9
Well-explained criticisms won’t get downvoted this much.
This is really helpful, thanks.
Whilst I could respond in detail, instead I think it would be better to take action. I’m going to put together an ‘open projects in EA’ spreadsheet and publish it on the EA forum by March 25th or I owe you £100.
I think we have a real problem in EA of turning ideas into work. There have been great ideas sitting around for ages (e.g. Charity Entrepreneurship’s list of potential new international development charities, OpenPhil’s desire to see a new science policy think tank, Paul Christiano’s impact certificate idea) but they just don’t get worked on.
Yes! The conversations and shallow reviews are the first place I start when researching a new area for EA purposes. They’ve saved me lots of time and blind alleys.
OpenPhil might not see these benefits directly themselves, but without information sharing individual EAs and EA orgs would keep re-researching the same topics over and over again and not be able to build on each other’s findings.
It may be possible to have information sharing through people’s networks, but this becomes increasingly difficult as the EA network grows, and it excludes competent people who might not know the right people to get information from.
Even simpler than fact posts and shallow investigations would be skyping experts in different fields and writing up the conversation. Total time per expert is about 2 hours: 1 hour for the conversation and 1 hour for writing it up.
Thanks, that clarifies things.
I think I was confused by ‘small donor’ - I was including in that category friends who donate £50k-£100k and who fund small organisations in their network after a lot of careful analysis. If the fund is targeted more at <$10k donors that makes sense.
OpenPhil officers make sense for an MVP.
On EA Ventures, points 1 and 2 seem particularly surprising when put together. You found too few exciting projects, yet even those had trouble generating funder interest? So are you saying that even for high-quality new projects, funder interest was low, suggesting risk-aversion? If so, that seems to be an important problem to solve if we want a pipeline of new, potentially high-impact projects.
On creating promising new projects, Michael Peyton Jones and I have been thinking a lot about this recently. This thinking is for the Good Technology Project—how can we create an institution that helps technology talent to search for and exploit new high-social-impact startup opportunities? But a lot of our thinking will generalise to working out how to help EA get better at exploration and experimentation.
Small donors have played a valuable role by providing seed funding to new projects in the past. They can often fund promising projects that larger donors like OpenPhil can’t, because they have special knowledge of them through their personal networks and the small projects aren’t established enough to get through a large donor’s selection process. These donors therefore act like angel investors. My concerns with the EA fund are that:
1. By pooling donations into a large fund, you increase the minimum grant that it’s worth their time to make, thus making it unable to fund small opportunities.
2. By centralising decision-making in a handful of experts, you reduce the variety of projects that get funded, because those experts have more limited networks, knowledge, and variety of values than the population of small donors.
Also, what happened to EA Ventures? Wasn’t that an attempt to pool funds to make investments in new projects?
What communities are the most novel/talented/influential people gravitating towards? How are they better?
This is really exciting, looking forward to these posts.
The Charity Entrepreneurship model is interesting to me because you’re trying to do something analogous to what we’re doing at the Good Technology Project—cause new high-impact organisations to exist. Whereas we started meta (trying to get other entrepreneurs to work on important problems), you started at the object level (setting up a charity and only later trying to get other people to start other charities). Why did you go for this depth-first approach?
Exploration through experimentation might also be neglected because it’s uncomfortable and unintuitive. EAs traditionally make a distinction between ‘work out how to do the most good’ and ‘do it’. We like to work out whether something is good through careful analysis first, and once we’re confident enough of a path we optimise for exploitation. This is comforting because we then only have to do work when we’re fairly confident it’s the right path. But perhaps we need to get more psychologically comfortable with mixing the two together in an experimental approach.
Is there an equivalent to ‘concrete problems in AI’ for strategic research? If I was a researcher interested in strategy I’d have three questions: ‘What even is AI strategy research?’, ‘What sort of skills are relevant?’, ‘What are some specific problems that I could work on?’ A ‘concrete problems’-like paper would help with all three.
What sort of discussion of leadership would you like to see? How was this done in the Army?
I know some effective altruists who see EAs like Holden Karnofsky do incredible things, and who feel a little bit of resentment at themselves and others, feeling inadequate that they can’t make such a large difference.
I think there’s a belief that people often have when looking at successful people which is really harmful, the belief that “I am fundamentally not like them—not the type of person who can be successful.” I’ve regularly had this thought, sometimes explicitly and sometimes as a hidden assumption behind other thoughts and behaviours.
It’s easy to slip into believing it when you hear the bios of successful people. For example, William MacAskill’s bio includes being one of the youngest associate professors of philosophy in the world, co-founder of CEA, co-founder of 80,000 Hours, and a published author. Or you can read profiles of Rhodes Scholars and come across lines like “built an electric car while in high school and an electric bicycle while in college”.
When you hear these bios it’s hard to imagine how these people achieved these things. Cal Newport calls this the failed simulation effect—we feel someone is impressive if we can’t simulate the steps by which they achieved their success. But even if we can’t immediately see the steps they’re still there. They achieved their success through a series of non-magic practical actions, not because they’re fundamentally a different sort of person.
So a couple of suggestions:
If you’re feeling like you fundamentally can’t be as successful as some of the people you admire, start by reading Cal Newport’s blog post. It gives the backstory behind a particularly impressive student, showing the exact (non-magical) steps he took to achieve an impressive bio. Then, when you hear an impressive achievement, remind yourself that there is a messy practical backstory to this that you’re not hearing. Maybe read full biographies of successful people to see their gradual rise. Then go work on the next little increment of your plan, because that’s the only consistent way anyone gets success.
If you’re a person others look up to as successful, start communicating some of the details of how you achieved what you did. Show the practicalities, not just the flashy bio-worthy outcomes.
An EA stackexchange would be good for this. There is one being proposed: http://area51.stackexchange.com/proposals/97583/effective-altruism
But it needs someone to take it on as a project and do all that’s necessary to make it a success. Oli Habryka has been thinking about how to do that, but he needs someone to take on the project.
Not sure; it’s really hard to make volunteer-run projects work, and often a small core team does all the work anyway.
This half-written post of mine contains some small project ideas: https://docs.google.com/document/d/1zFeSTVXqEr3qSrHdZV0oCxe8rnRD8w912lLw_tX1eoM/edit