Incubators usually take founders and ideas together. I like the Charity Entrepreneurship approach of splitting those two up, and I think it would fit the EA community well.
I think there are opportunities for lots of high-expected-value startups if you take the approach that the goal is to do as much good as possible, for instance:
1. Proving a market for things that are good for the world, like Tesla’s strategy.
2. Identifying startups that could have high negative value if externalities are ignored, and trying to have an EA-aligned startup be a winner in that space.
3. Finding opportunities that may be small or medium in terms of profitability, but have high positive externalities.
The difference between this and any other incubator is that it would not use profitability as its main measure; it would also work to measure the externalities of its companies, and aim to create a portfolio that does the most good for the world.
Note that this is quite easy to do. Give me, or someone else who’s competent, access to the server for a few hours, and we can install Yourls or another existing URL-shortening tool.
Impact assessments. I think our ability to do impact assessments is bounded by our tools (for instance, they were on average much worse before Guesstimate). If EAs started regularly modelling complex feedback loops because there was a readily available tool for it, I think the quality of thinking and estimates would go up by quite a bit.
A tool that makes systems modelling (with sinks, flows, and feedback and feedforward loops) as easy as Guesstimate made Monte Carlo modelling.
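As a concrete illustration of the kind of computation such a tool would make trivial, here’s a minimal sketch of a stock-and-flow model with a reinforcing feedback loop. It’s in Python, and every name and parameter value is invented for illustration rather than taken from any existing tool:

```python
# Minimal sketch of a stock-and-flow simulation with a feedback loop.
# All names and parameter values are illustrative, not from any real tool.

def simulate(steps: int = 50, dt: float = 1.0) -> list[float]:
    """Simulate one stock with a reinforcing inflow and a draining sink."""
    stock = 100.0          # initial level of the stock
    growth_rate = 0.05     # inflow per unit of stock (reinforcing loop)
    drain_fraction = 0.03  # share of the stock lost each step (sink)
    history = [stock]
    for _ in range(steps):
        inflow = growth_rate * stock      # feedback: depends on current stock
        outflow = drain_fraction * stock  # sink: also depends on the stock
        stock += (inflow - outflow) * dt
        history.append(stock)
    return history

if __name__ == "__main__":
    trajectory = simulate()
    print(f"Stock after {len(trajectory) - 1} steps: {trajectory[-1]:.1f}")
```

The behaviour here only emerges from iterating the loop over time, which is exactly the structure that one-pass, spreadsheet-style models leave out; a good tool would hide the iteration the way Guesstimate hides the sampling.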
Charity Entrepreneurship, but for for-profits.
An organization dedicated to studying how to make other organizations more effective, that runs small scale experiments based on the existing literature, then helps EA orgs adopt best practices.
An early stage incubator that can provide guidance and funding for very small projects, like Charity Entrepreneurship but on a much more experimental scale.
Are you aware of any extremely efficient ways to reduce trauma?
There are several promising candidates that show high enough efficacy to justify more research. Drug therapies such as MDMA show promise, as do therapeutic techniques like RTM. (RTM is particularly promising because it appears to be quick, cheap, and highly effective.)
Is trauma something that can easily be measured?
Of course. Like most established constructs in psychology, there are both diagnostic criteria for assessment by trained professionals and self-report indexes. Most of these tend to show fairly high agreement between different measures, as well as good test-retest reliability.
One consistent pattern I’ve seen with EAs is a much higher emphasis on “How can I frame this to avoid looking bad to as many people as possible?” than on “How can I frame this to look good and interesting to as many people as possible?”
Something the “cold hard truth about the ice bucket challenge” piece did (correctly, I think) was to be deliberately controversial and polarizing. This is something that EAs in general seem to avoid, and there’s a general sense that these sorts of marketing framings are “dark arts” that one should not touch.
On one hand, I see the argument that framing the facts in the most positive light is obviously bad for an epistemic culture and could hurt EA’s reputation; on the other hand, I think EA is so allergic to this that the allergy itself hurts it. I do think this is a risk-aversion bias applied to both public perception and epistemic climate, and that EA leans irrationally far towards caution.
Another frequent mistake I see along this same vein (although rarer among the higher-status people in the movement) is confusing epistemic and emotional confidence. People often think that if they’re unsure about an opinion, they need to appear unsure of themselves when stating it.
The problem with this, in the context of the above post, is that appearing unsure of yourself signals low status. The antidote is to detach your sure-o-meter from your feeling of confidence, so you can verbally state your confidence levels without appearing unsure of yourself. If you do this in the EA community today, there can be a stigma about epistemic overconfidence that’s difficult to overcome, even though this is the correct way to maximize both epistemic modesty and outside perception.
So, to sum up my suggestions for concrete ways that people in organizations could start taking status effects more into account:
Shift more from “How can I frame the truth to avoid looking bad?” to “How can I frame the truth to look good?”
Work to detach your emotional and your epistemic confidence, especially in public settings.
I’ll note that I notice I’m feeling very adversarial in this conversation, rather than truth-seeking. For that reason I’m not going to participate further.
If you just look backwards from EAs’ priorities, then you have no good reason to claim that EAs are doing things wrong. Maybe such systemic causes actually are worse, and other causes actually are better.
Maybe, but I didn’t say that I’d expect to see lots of projects trying to fix these issues, just that I’d expect to see more research into them, which is obviously the first step to determining correct interventions.
Arguments like this don’t really go anywhere. Especially if you are talking about “thoughts not thinked”, then this is just useless speculation.
What would count as useful speculation, if you think that EAs’ cause prioritization mechanisms are biased?
What’s systemic if not voting mechanisms? Voting seems like a very root part of the government system, more so than economic and social policies for instance.
Voting mechanisms can be systemic if they’re approached that way: for instance, working backwards from the two-party system in the US, figuring out what causes it, and recommending mechanisms that fix that.
are human enhancement to eliminate suffering
This is another great example of EA bucking the trend, but I don’t see it as a mainstream EA cause.
functional decision theory to enable agents to cooperate without having to communicate, moral uncertainty to enable different moral theories to cooperate
These are certainly examples of root-cause thinking, but to be truly systems thinking they have to take the next step and ask how we can shift the current system onto these new foundations.
You can probably say that I happen to underestimate or overestimate their importance, but the idea that it’s inherently difficult to include them with EA methodology just seems clearly false, having done it. I mean, it’s pretty easy to just come up with guesstimates if nothing else.
The EA methodology systematically underestimates systemic changes and handwaves away modelling of them. Consider for instance how hard it is to incorporate a feedback loop into a Guesstimate model, not to mention flowthrough effects, and note that your response here didn’t even mention those as problems.
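To illustrate the point (this is a toy sketch, not a claim about Guesstimate’s actual internals): a spreadsheet-style Monte Carlo model samples each quantity once, so no quantity can depend on its own past value. Capturing a feedback loop means iterating through time inside every Monte Carlo sample, something like:

```python
# Toy sketch: combining Monte Carlo uncertainty with a feedback loop requires
# a time loop inside each sample, which one-pass spreadsheet-style models
# (sample every cell once, in dependency order) cannot express.
import random

def one_sample(years: int = 10) -> float:
    """One Monte Carlo draw of cumulative impact under a reinforcing loop."""
    growth = random.uniform(0.05, 0.25)      # uncertain feedback strength
    impact = random.normalvariate(1.0, 0.2)  # uncertain initial impact
    for _ in range(years):
        impact += growth * impact            # feedback: growth scales with impact
    return impact

def monte_carlo(n: int = 10_000) -> tuple[float, float]:
    samples = sorted(one_sample() for _ in range(n))
    return samples[n // 20], samples[-(n // 20)]  # rough 5th/95th percentiles

if __name__ == "__main__":
    low, high = monte_carlo()
    print(f"Rough 90% interval for 10-year impact: {low:.2f} to {high:.2f}")
```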
What would a “systemic solution” look like?
Non-systemic solution: Seeing that people are irrational, then creating an organization that teaches people to be rational.
Systemic solution: Seeing that people are irrational, asking what about the system creates irrational people, and then creating an organization that looks to change that.
I feel like you are implicitly including “big” as part of your definition of “systemic”
I’m including systems thinking as part of my definition. This often leads to “big” interventions, because systems are resilient and often stuck in local attractors, but sometimes the interventions can be small yet targeted to set off large feedback loops and flowthrough effects. The latter, however, is only possible through either dumb luck or skillful systems thinking.
Well they’re not going to change all of it. They’re going to have to try something small, and hopefully get it to catch on elsewhere.
They “have to” do that? Why? Certainly that’s one way to intervene in the system. There are many others as well.
“Hopefully” getting it to catch on elsewhere also seems silly. Perhaps they could try to look into ways to model the network effects, influence, and power structures, etc., and use systems thinking to maximize their chances of getting it to catch on elsewhere.
It’s hard to point to thoughts not thinked :). A few lines of research and intervention that I would expect to see pursued more in the EA community if this bias weren’t present:
1. More research into and experimentation with new types of governance (on a systemic level, not just the limited research funding into different ways to count votes).
2. More research and funding into what creates paradigm shifts in science, changes in governance structures, etc.
3. More research into power and influence, and how they can effect large changes.
4. Much, much more looking at trust and coordination failures, and how to handle them.
5. A research program around the problem of externalities and potential approaches to it.
Basically, I’d expect much more of a “5 whys” approach that looks into the root causes of suffering in the world, rather than attempts to fix individual instances of it.
An interesting counterexample might be CFAR and the rationality focus in the community, but this seems to be a rare instance, and at any rate it tries to fix a systemic problem with a decidedly non-systemic solution. (There are a few others that OpenPhil has led, such as looking into changing academic research, but again, the mainstream EA community mostly just doesn’t know how to think this way.)
As someone who agrees EAs aren’t focused enough on systemic change, I don’t see a single “system” that EAs are ignoring. Rather, I see a failure to use systems thinking to tackle important but hard-to-measure opportunities for interventions in general. That is, I may have particular ideas for systemic change of particular systems (academia and research, capitalism, societal trust) that I’m working on or have worked on, but my critique is simply that EAs (at least in the mainstream movement) tend to ignore this type of thinking altogether, even though historically the biggest changes in quality of life seem to have come from systemic change and the resulting feedback loops.
I’ve thought about this for the last couple of days, and I’d recommend against it. The workshop is set up to be a complete, contained experience, and isn’t really designed to be consumed only partially.
There is an opportunity cost in not having a better backdrop.
It’s possible I’m wrong. I find it unlikely that veganism wasn’t influenced by existing political arguments for veganism. I find it unlikely that the focus on institutional decision making wasn’t influenced by the existing political zeitgeist around the problems with democracy and capitalism. I find it unlikely that the global poverty focus wasn’t influenced by the existing political zeitgeist around inequality.
All this stuff is in the water supply; the arguments and positions have been refined by different political parties’ moral intuitions and battles with the opposition. This causes problems when there’s opposition to EA values, sure, but it also provides the backdrop from which EAs are reasoning.
It may be that EAs have somehow thrown off all of the existing arguments, cultural milieu, and basic stances and assumptions that have been honed over the past few generations, but if true, that to me represents more of a failure of EA than anything else.
I haven’t seen any examples of cause areas or conclusions that were discovered because of political antipathy towards EA.
Veganism is probably a good example here. Institutional decisionmaking might be another. I don’t think that political antipathy is the right way to view this, but rather just the general political climate shaping the thinking of EAs. Political antipathy is a consequence of the general system that produces both positive effects on EA thought, and political antipathy towards certain aspects of EA.
Internal debate within the EA community is far better at reaching truthful conclusions than whatever this sort of external pressure can accomplish. Empirically, it has not been the case that such external pressure has yielded benefits for EAs’ understanding of the world.
It can be the case that external pressure is helpful in shaping directions EVEN if EA has to reach conclusions internally. I would put forward that this pressure has been helpful to EA already in reaching conclusions and finding new cause areas, and will continue to be helpful to EA in the future.
Rethink Priorities seems to be the obvious organization focused on this.
An implicit problem with this sort of analysis is that it assumes the critiques are wrong, and that the current views of Effective Altruism are correct.
For instance, if we assume that systemic change towards anti-capitalist ideals actually is correct, or that taking refugees does actually have bad long-run effects on culture, then the criticism of these views, and the pressure on the community from political groups to adopt them, is actually a good thing: it benefits EA in the long term by providing incentives to adopt the correct views.