I liked this post, but I’m interested in hearing from someone who disagrees with the authors: Do you think it would be a bad idea to try these ideas, period? Or do you just object to overhauling existing EA institutions?
There were a few points at which the authors said “EA should...” and I’m wondering if it would be productive to replace that with “We, the authors of this post, are planning to… (and we want your feedback on our plans)”
I suppose the place to start would be with some sort of giving circle that operates according to the decision-making processes the authors advocate. I think this could generate a friendly and productive rivalry with EA Funds.
Implementing a lot of these ideas comes down to funding. The suggestions are either about distributing money (in which case you need money to hand out) or about things that will take a lot of work, in which case someone needs to be paid a salary.
I also noticed that one of the suggestions was to get funding from outside EA. I have no idea how to fundraise, but anyone who knows how to fundraise can just do that, and then use that money to start working down the list.
I don’t think any suggestion to democratise OpenPhil’s money will have any traction.
I think this hits the nail on the head. Funding is the issue, it always is.
One thing I’ve been thinking about recently is that maybe we should break up OpenPhil, particularly the XRisk side (as they are basically the sole XRisk funder). This is not because I think OpenPhil is bad (afaik they are one of the best philanthropic funds out there), but because having essentially a single funder dictate everything that gets funded in a field isn’t good, whether that funder is good or not. I wouldn’t trust myself to run such a funding body either.
What would this mean exactly? I assume OpenPhil has already split up different types of funding between different teams of people. So what would it mean in practice to split up OpenPhil itself?
Making it into two legal entities? I don’t think the number of legal entities matters.
Moving the teams working on different problems to different offices?
So OpenPhil is split into different teams, but I’ll focus specifically on their grants in XRisk/Longtermism.
OpenPhil, either directly or indirectly, is essentially the only major funder of XRisk; most other funders essentially follow its lead. Even though I think they are very competent, the fact that the field has one monolithic funder isn’t great for diversity and creativity. Indeed, I’ve heard a philosopher of science describe XRisk as one of the most hierarchical fields they have seen, largely because of this.
OpenPhil/Dustin Moskovitz have assets. They could break up into a number of legal entities with their own assets, some overlapping on cause area (e.g. 2 or 3 XRisk funders). You would want these entities to be culturally distinct: working from different offices, staffed by people with different approaches to XRisk, etc. This could really help reduce the hierarchy and lack of creativity in the field.
Some other funding ideas/structures are discussed here: https://www.sciencedirect.com/science/article/abs/pii/S0039368117303278
Implementing a lot of these ideas comes down to funding.
Yep, that’s why I suggested starting with a giving circle :-)
Lots of people upvoted this post. Presumably some of them would be interested in joining.
I don’t think any suggestion to democratise OpenPhil’s money will have any traction.
My guess would be that if the authors start a giving circle and it acquires a strong reputation within the community for giving good grants, OpenPhil/Dustin Moskovitz will become interested.
Along the same lines: The authors recommend giving every user equal voting weight on the EA Forum. There is a subreddit for Effective Altruism which has this property. I’ll bet some of the authors of this post could become mods there if they wanted. Also, people could make posts on the subreddit and cross-post them here.
I agree. I would be massively more in favour of basically all of these proposals if they were to be tried in parallel with, rather than instead of (or as a “fix” to), current EA approaches. Even the worst of them I’d very much welcome seeing tried.