SamuelKnoche
I give the MWI a probability of greater than 0.5 of being correct, but as far as I can tell, there isn’t any way to generate more value out of it. There isn’t any way to create more branches. You can only choose to be intentional and explicit about creating new identifiable branches, but that doesn’t mean that you’ve created more branches. The branching happens regardless of human action.
Someone with a better understanding of this please weigh in.
I believe Sam Harris is working on an NFT project for people having taken the GWWC pledge, so that would be one example.
Academia seems like the highest leverage place one could focus on. Universities are to a large extent social status factories, and so aligning the status conferred by academic learning and research with EA objectives (for example, by creating an ‘EA University’) could be very high impact. Also relates to the point about ‘institutions.’
“...cryptocurrencies makes stopping the funding of terrorists basically impossible.”
No. Really, really, no. I could talk a lot more about this, but if you think terrorist groups can manage infosec well enough to overcome concerted attacks by the NSA, or Mossad, or FSB, etc., you’re fooling yourself.
“Impossible” might be an exaggeration, but it does seem to make it much easier. That’s also what the article you link to suggests. Edit: Are you skeptical because of the on/off ramps, the security of terrorists’ computer infrastructure, or something else?
Other than the misunderstanding and conflating of nodes with hash power, this is also not true. Hash power is concentrated, so you’d need to somehow convince the biggest mining groups that they don’t care about countries keeping their operations legal, and as we’ve seen, they do. That means they will continue to embrace KYC/AML regulation, and will do whatever else makes their investments go well, including cooperating with nation-states in almost any way you can imagine.
So far, the only serious KYC/AML happens at the level of centralized exchanges. Nation-states cannot enforce KYC/AML at the level of decentralized exchanges. They can also use chain analysis and put pressure on mining groups within their countries to do KYC/AML or to create address “black lists,” but so far there hasn’t been much political will for this, and it would probably lead to a big backlash from the crypto community. And chain analysis becomes impossible for privacy coins such as Zcash and Monero.
I feel like a number of these maybe could be fitted under a single very large organization. Namely:
Max-Planck Society (MPG) for EA research
EA university
Forecasting Organization
EA forecasting tournament
ML labs
Large think tank
Basically, a big EA research university with forecasting, policy research, and ML/AI safety departments.
I’d also add a non-profit and for-profit startup incubator. I think universities would be much better if they made it possible to try something entrepreneurial without having to fully drop out.
In my experience, EAs tend to be pretty dissatisfied with the higher education system, but I interpreted the muted/mixed response to my post on the topic as a sign that my experience might have been biased, or that despite the dissatisfaction, there wasn’t any real hunger for change. Or maybe a sense that change was too intractable.
Though I might also have done a poor job at making the case.
My speculative, cynical, maybe unfair take is that most senior EAs are so enmeshed in the higher education system, and have sunk so much time into succeeding within it, that they’re incentivized against doing anything too disruptive that might jeopardize their standing within current institutions. And why change how undergrad education is done if you’ve already gone through it?
The very quick summary: Japan used to be closed off from the rest of the world, until 1853, when the US forced them to open up. This triggered major reforms. The Shogun was overthrown and replaced with the emperor, and in less than a century, Japan went from an essentially medieval economic and societal structure to a modern industrial economy.
I don’t know of any books exclusively focused on it, but it’s analyzed in Why Nations Fail and Political Order and Political Decay.
I have argued for a more “mutiny” (edit: maybe “exit” is a better word for it) style theory of change in higher education, so I really like the idea of an EA university where learning would be guided by a genuine sense of purpose, curiosity, and ambition to improve the world, rather than by a zero-sum competition for prestige and a need to check boxes in order to get a piece of paper. Though I realize that many EAs probably don’t share my antipathy towards the current higher education system.
One downside of EA universities I can think of is that it might slow movement growth since EAs will be spending less time with people unfamiliar with the movement / fewer people at normal universities will come across EA.
Though if it becomes really successful and prestigious, it could also raise the profile of EA.
Another example that comes to mind is Japan’s Meiji Restoration. I don’t think it fits neatly in any of the categories. It’s a combination of mutiny, steering and rowing. But just like the American revolution, I think it illustrates that very rapid and disruptive change in political and economic systems can be undertaken successfully.
The ability to maintain or improve steering, and the ability to maintain or improve rowing, seem to be two important preconditions for a successful mutiny.
The various revolutions that swept Eastern Europe and led to the end of the Soviet Union also seem to be successful mutinies. Of course, the reason these countries ended up under Soviet communism and needed to rise up in the first place was the Bolshevik mutiny, but still.
I feel like people in EA are mostly anti-mutiny because the only people advocating for it seem to be far left, anti-capitalist types who don’t seem to have a realistic plan for how to go about it, or a coherent vision of what could replace the current system. But I don’t think EA should be closed to the idea of mutiny in principle. It’s just that any mutiny proposal has to pass a really high bar.
Thanks for clarifying. I did somewhat misinterpret the intention of your comment.
I agree that the US revolution was unusual and in many ways more conservative than other revolutions.
I guess you could think of the US revolution as being a bit like a mutiny that then kept largely the same course as the previous captain anyway.
I feel like this is really underselling what happened, though I guess it might be subjective. Sure, they didn’t try to reinvent government, culture and the economy completely from scratch, but it was still the move from a monarchy to the first modern liberal constitutional republic.
If something dangerous occurs when driving, slamming on the brakes is often a pretty good heuristic, regardless of the specific nature of the danger.
What if you’re being chased by a dragon?
I think we can make a similar analogy for Anchoring, because some of the same reasons that make Steering more attractive now than in the past also apply to Anchoring. If there are an unusually large number of icebergs up ahead, or you are afraid the Mutineers will steer us towards them, or you are attempting to moor up alongside a larger vessel, reducing speed could be a generally prudent move—and this is the case even if full speed ahead was the optimal strategy in the past when you were on the open seas.
What if you think that the people currently Steering are the ones blindly heading towards the icebergs? Wouldn’t Mutiny be an option worth considering? What if the ship is taking on water and people in the lower decks are drowning? Wouldn’t you want to Speed up and get to land as fast as possible?
This metaphor doesn’t seem too informative until we’ve made sense of what world we actually live in.
I agree with this. I was just pushing back against the “somewhere between never-before-done and impossible” characterization. Mutiny definitely goes wrong more often than not, and just blindly smashing things without understanding how they work, and with no real plan for how to replace them is a recipe for disaster.
Certainly, but I still think that it counts as an example of a successful “mutiny.” If overthrowing the government and starting a new country isn’t mutiny, I don’t know what is. And I don’t think anyone sympathetic to the mutiny theory of change wants to restart from the state of nature and reinvent all of civilization completely from scratch.
I would suggest that the feasibility of managing it once you’ve smashed all the working pieces is somewhere between never-before-done and impossible.
How does the American Revolution fit into this? Wasn’t the US basically created from scratch, and now is arguably the most successful country in the world?
Ideally, people would get the opportunity to get up to speed, “bridge the inferential gap” and get to start thinking about how to have an impact full time during their undergraduate studies. The way most university programs are set up right now, people spend years on often irrelevant content and wasteful busywork. I was thus pleased to see Ben Todd and Will MacAskill mention the idea of creating some kind of EA University during their EAG appearances.
See also my own “case for education.”
If the Everett interpretation is true, then all experiences are already amplified exponentially. Unless I’m missing something, a QC doesn’t deserve any special consideration. It all adds up to normality.
You’re right. The questions of moral realism and hedonistic utilitarianism do make me skeptical about QRI’s research (as I currently understand it), but doing research starting from uncertain premises definitely can be worthwhile.
Thanks for the response. I guess I find the idea that there is such a thing as a platonic form of qualia or valence highly dubious.
A simple thought experiment: for any formal description of “negative valence,” you could build an agent that acts to maximize this “negative valence” form and still acts exactly like a human maximizing happiness when looking from the outside (something like a “philosophical masochist”). It seems to me that it’s impossible to define positive and negative valence independently from the environment the agent is embedded in.
Disclaimer: I’m not very familiar with either QRI’s research or neuroscience, but in the spirit of Cunningham’s Law:
QRI’s research seems to be predicated on the idea that moral realism and hedonistic utilitarianism are true. I’m very skeptical about both of these, and I think QRI’s time would be better spent working on the question of whether these starting assumptions are true in the first place.
My reading of this post is that it attempts to gesture at the valley of bad rationality.