There seems to be very little precedent for founding successful new universities, partly because the perceived success of a university is so dependent on pedigree. There is even less precedent for successful “themed” universities, and the only ones I know of that have attained mainstream success (not counting women’s universities or black universities, which are identity-based rather than movement-based) are old religious institutions like Saint John’s or BYU. I think a more realistic alternative would be to buy something like EdX or a competing online academic content aggregator (“MOOC”) and give it an EA slant. The success of programs like EdX is much more recent and much more “buyable”, since it is just a matter of either licensing the right content or hiring famous academics to give independent courses.
I think the Christian Science Monitor’s popularity and reputation makes Christian Scientists (note: totally different from Scientologists) significantly more respectable than they would be otherwise.
From Britannica:
The Christian Science Monitor, American daily online newspaper that is published under the auspices of the Church of Christ, Scientist. Its original print edition was established in 1908 at the urging of Mary Baker Eddy, founder of the church, as a protest against the sensationalism of the popular press. The Monitor became famous for its thoughtful treatment of the news and for the quality of its long-range, comprehensive assessments of political, social, and economic developments. It remains one of the most respected American newspapers. Headquarters are in Boston.
So I would try to buy a dying newspaper, or another media source. Alternatively (and more likely), I would found a new newspaper with a name like “San Francisco Herald” and try to attract a core of editors from a dying media source.
This is a nice project, but as many people point out, this seems a bit fuzzy for a “FAQ” question. If it’s an ongoing debate within the community, it seems unlikely to have a good 2-minute answer for the public. There’s probably a broader consensus around the idea that if you commit to any realistic discount scheme, you see that the future deserves a lot more consideration than it is getting in the public and academic mainstream, and I wonder whether this can be phrased as a more precise question. I think a good strategy for public-facing answers would be to compare climate change (where people often have a more reasonable discount rate) to other existential risks.
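As a toy illustration of why the discount rate is the crux (my own illustrative numbers, not anything from the FAQ):

```python
# Toy illustration: how much a benefit 100 years from now is worth today
# under simple exponential discounting, for a few illustrative annual rates.
for rate in (0.0, 0.001, 0.01, 0.05):
    weight = (1 + rate) ** -100
    print(f"annual discount rate {rate:.1%}: a benefit 100 years out counts as {weight:.3f} of one today")
```

At anything like market rates the far future essentially vanishes from the calculation, while at near-zero rates it dominates, which is why pinning down a defensible rate matters so much for a public-facing answer.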
Disclaimer: this is an edited version of a much harsher review I wrote at first. I have no connection to the authors of the study or to their fields of expertise, but I am someone who enjoyed the paper critiqued here and in fact think it is very nice and very conservative in terms of its numbers (the current post claims the opposite). I disagree with this post and think it is wrong in an obvious and fundamental way, and therefore should not be in the decade review, in the interest of not posting wrong science. At the same time it is well-written and exhibits a good understanding of most of the parts of the relevant model, and a less extreme (and less wrong :) version of this post would pass muster with me. In particular, I think that the criticism
However, since this parameter is capped at 1, while there is no lower limit to the long tail of very low estimates for fl, in practise this primarily has the effect of reducing the estimated probability of life emerging spontaneously, even though it represents an additional pathway by which this could occur.
is very valid, and a model taking this into account would have a correspondingly higher credence in “life is common” scenarios. However, the authors of the paper being criticized are explicitly thinking about the likelihood of “life is not common” scenarios (which a very naive interpretation of the Drake equation would claim are all but impossible), and it is here that this post is deeply flawed.
The essential beef of the author of the post (henceforth the OP) with the authors of the paper (henceforth Sandberg et al) concerns their value fl, which is the “log standard deviation in the log uncertainty of abiogenesis” (abiogenesis being the event wherein random, non-replicating chemical processes create the first replicating life). A very rough explanation of this parameter (in the log uncertainty model which Sandberg et al use and the OP subscribes to) is the probability of the best currently known model for abiogenesis occurring on a given habitable planet. Note that this is very much not the probability of abiogenesis itself, since there can be many other mechanisms which produce abiogenesis far more frequently than the best currently known model.

The beautiful conceit of this paper (and the field it belongs to) is the idea that, absent a model for a potentially very large or very small number (in this case the probability of abiogenesis, or, in the larger paper, the probability of the emergence of life on a given planet), our best rough estimate is that our uncertainty is more or less log-uniformly distributed between the largest and smallest “theoretically possible” values (so a number between 10^-30 and 10^-40 is roughly as likely as a value between 10^-40 and 10^-50, provided these numbers are within the “theoretically possible” range; the difference between “log uniform” and “log normal” is irrelevant to a first approximation). The exact definition of “theoretically possible” is complicated, but in the case of abiogenesis the largest theoretically possible value of fl (as of any other probability) is 1, while the smallest possible value is the probability of abiogenesis given the best currently known mechanisms. The model is not perfect, but it is by far the best we have for predicting the lower tail of such distributions, i.e., in this case, the likelihood of the cosmos being mostly devoid of intelligent life. (Note that the model doesn’t tell us this probability is close to 1! Just that it isn’t close to 0.)
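To make the log uncertainty picture concrete, here is a minimal Monte Carlo sketch. The prior bounds and the planet count are made-up round numbers for illustration only, not the values Sandberg et al actually use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Log-uniform prior on f_l, the per-habitable-planet probability of abiogenesis,
# between illustrative bounds of 10^-200 and 1 (not the paper's actual bounds).
log10_fl = rng.uniform(-200, 0, size=1_000_000)
fl = 10.0 ** log10_fl

# Rough count of habitable planets in the observable universe (my own round assumption).
N = 1e22

# For each sampled f_l, the chance that no other planet develops life is
# approximately (1 - f_l)^N, computed in log space for numerical stability.
p_alone = np.exp(N * np.log1p(-fl))

print("P(f_l small enough that expected life-bearing planets < 1):", np.mean(fl < 1 / N))
print("Mean probability of an otherwise-empty observable universe:", p_alone.mean())
```

The point is just that a prior spread over many orders of magnitude puts substantial mass below 1/N, so “we are effectively alone” scenarios come out with non-negligible probability rather than being all but impossible.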
Now, the best theoretically feasible model for abiogenesis currently known is the so-called RNA world model, which is analyzed in supplement 1 of Sandberg et al. Essentially, the only sure-fire mechanism of abiogenesis we know of is spontaneously generating the genome of an archaebacterium, which has hundreds of thousands of base pairs, and this would put the probability of abiogenesis at under 10^-100,000, which is insanely small (see the back-of-the-envelope sketch at the end of this comment). However, we are fairly confident both that a much smaller self-replicating RNA sequence would be possible in certain conducive chemical environments (the putative RNA world), and that there is some redundancy in how to generate a near-minimal self-replicating RNA sequence (so you don’t have to get every base pair right). The issue is that we don’t know how small the smallest genome is and how much redundancy there is in choosing it.

By the nature of log uncertainty, if we want to get the lowest value in the range of uncertainties (what the OP and Sandberg et al call the log standard deviation), we should take the most pessimistic reasonable estimates. These are attempted in the previously mentioned supplement, though rather than actually taking pessimistic values, Sandberg et al rather liberally assume a very general model of self-replicating RNA formation, with their lower bound based on assumptions about protein folding (rather than a more restrictive model based on assuming low levels of redundancy, which I would have chosen, and which would have put the value of fl significantly lower even than the Sandberg et al paper: they explicitly say that they are trying to be conservative). Still, they estimate a value of fl equal to or lower than 10^-30 with the current best model.

In order to argue for a 10^-2 result while staying within the log normal model, the OP would have to convince me of some drastic additional knowledge: a proof, beyond all reasonable doubt, that either an RNA chain shorter than the average protein is capable of self-replicating, or that there is a lot of redundancy in how self-replicating RNA can form, so that a chemical “RNA soup” would naturally tend to self-replication under certain conditions. Both of these are plausible theories, but as such mechanisms of abiogenesis are not currently known to exist, assuming they work for your lower bounds on log probability is precisely not how log uncertainty works. In this way the OP is, quite simply, wrong. Therefore, as incorrect science, I do not recommend this post for the decade review.
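For reference, here is where numbers like 10^-100,000 come from; the genome size is a made-up round figure of my own, standing in for “hundreds of thousands of base pairs”:

```python
import math

# Back-of-the-envelope: the probability of assembling one specific genome of n
# base pairs by pure chance, with 4 possible bases at each position, is 4^(-n).
n = 200_000  # illustrative round figure for "hundreds of thousands of base pairs"
log10_p = -n * math.log10(4)
print(f"log10(probability) is roughly {log10_p:,.0f}")  # about -120,000
```

The RNA world analysis in the supplement shortens the required sequence enormously, which is how the estimate climbs from numbers like this up to the 10^-30 ballpark.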
I think that it’s not always possible to check that a project is the “best use, or at least decent use” of its resources. The issue is that these kinds of checks are really only good on the margin. If someone is doing something that jumps to a totally different part of the Pareto frontier (like building a colony on Mars or harnessing nuclear fission for the first time), conventional cost-benefit analyses aren’t that great. For example, a standard after-the-fact justification of the original US space program is that it accelerated progress in materials science and computer science in a way that paid off the investment even if you don’t believe that manned space exploration is worthwhile. Whether or not you agree with this (and I doubt this counterfactual can be quantified with any confidence), I don’t think that the people who were working on it would have been able to make this argument convincingly at the time. I imagine that if you had run a cost-benefit analysis at the time, it would have found that a better investment would be to put money into incremental materials research. But without the challenge of having to develop insulators, etc., that work in space, there would plausibly have been fewer new materials discovered.
I think that there is an important difference here between SpaceX and Facebook, since SpaceX is an experiment that just burns private money if it fails to have a long-term payoff, whereas Facebook is a global institution whose negative aspects harm billions of people. There’s also a difference between something like Mars exploration, which is a simple and popular idea that’s expensive to implement, and kookier vanity projects which consist of rich people imagining that being rich also makes them able to solve hairy problems that more qualified people have failed to solve for ages (an example that comes to mind, which thankfully doesn’t have billions of dollars riding on it, is Wolfram’s project to solve physics: https://blog.wolfram.com/2021/04/14/the-wolfram-physics-project-a-one-year-update/). I think that many big ambitious initiatives by billionaires are somewhere in between kooky ego trip and genuinely original/Pareto-optimal experiment, but it seems important to recognize that these are different things. Given this point of view, along with the general belief that large systems tend to err on the side of being conservative, I think that it’s at least defensible to support experiments like SpaceX or Zuckerberg’s big Newark school project, even when, as with the Newark project, they end up not being successful.
I agree with you that EA outreach to non-Western cultures is an important and probably neglected area — thank you for pointing that out!
There are lots of reasons to make EA more geographically (and otherwise) diverse, and also some things to be careful about, given that different cultures tend to have different ethical standards and discussion norms. See this article about translation of EA into Mandarin. Something to observe is that outreach is very language and culture-specific. I generally think that international outreach is best done in a granular manner — not just “outreach to all non-Western cultures” or “outreach to all the underprivileged”. So I think it would be wonderful for someone to post about how to best approach outreach in Malawi, but that the content might be extremely different from writing about outreach in Nigeria.
So: if you’re interested in questions like this, I think it would be great if someone were to choose a more specific question and research it! (And I appreciate that your post points out a real gap.)
On a different note, I think that the discussion around your post would be more productive if you used other terms than “social justice.” Similarly, I think that the dearth of the phrase “social justice” on the EA Forum is not necessarily a sign of a lack of desire for equity and honesty. There are many things about the “social justice” movement that EAs have become wary of. For instance, my sense is that the conventional paradigm of the contemporary Western elite is largely based on false or unfalsifiable premises. I’d guess that this makes EAs suspicious when they hear “social justice” — just like they’re often wary about certain types of sociology research (things like “grit,” etc. which don’t replicate) or psychosexual dynamics and other bits of Freud’s now-debunked research.
At the same time (just as with Freudianism), a lot of the core observations that the modern social justice paradigm makes are extremely true and extremely useful. It is profoundly obvious, both from statistics and from the anecdotal evidence of any woman, that pretty much every mixed-gender workplace has an unacceptable amount of harassment. There is abundant evidence that, e.g., non-white Americans experience some level of racism, or are at least treated differently, in many situations.
Given this, here are some things that I think it would be useful to do:
Make the experience of minorities within EA more comfortable and safe.
Continue seriously investigating translating EA concepts to other cultural paradigms (or conversely, translating useful ideas from other cultural paradigms into EA). (See also this article.)
Take some of the more concrete/actionable pieces of the social justice paradigm and analyze/harmonize them with the more consequentialist/science-based EA philosophy (with the understanding that an honest analysis sometimes finds cherished ideas to be false).
I think the last item is definitely worth engaging with more, especially with people who understand and value the social justice paradigm. Props if you can make progress on this!