I follow Crocker’s rules.
Encountered while logged in. It doesn’t happen anymore. Maybe it was because I’d opened a bunch of tabs before dismissing the notification, so the popup had already been pre-loaded on those pages? Anyway, it’s fixed now, at least for me.
Basically a bug report: The popup “Sign up for the weekly EA Forum Digest” appears on every new page, even when I’ve already clicked “No thanks” on other pages. I highly dislike this.
Yep, seems true that useful advice comes from people who were in a similar situation and then solved the problem.
Does it happen often in EA that unqualified people give a lot of advice? 80,000 Hours comes to mind, but you would hope they’re professional enough to have thought of this failure mode.
Ideally, I would include at this point some readings on how aggregation might work for building a utopia, since this seems like an obvious and important point. For instance, should the light cone be divided such that every person (or every moral patient more broadly, perhaps with the division taking moral weight into account) gets to live in a sliver of the light cone that’s optimized to fit their preferences? Should everybody’s preferences be aggregated somehow, so that everyone can live together happily in the overall light cone? Something else? However, I was unable to find any real discussion of this point. Let me know in the comments if there are writings I’m missing. For now, I’ll include the most relevant thing I could find as well as a more run-of-the-mill reading on preference aggregation theory.
It would probably be worth it for someone to write out the implications of K-complexity-weighted utilitarianism/UDASSA for how to think about far-future ethics.
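To fix ideas, the rough formalization I have in mind (this is my own gloss, stated loosely):

$$V = \sum_{x \in X} 2^{-K(x)} \cdot U(x)$$

where $X$ is the set of observer-moments, $K(x)$ is the length of the shortest program that locates $x$ (a description of a universe plus a “claw” that picks the mind out of it), and $U(x)$ is the welfare of that observer-moment.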
A few things that come to mind about this question (these are all ~hunches and maybe only semi-related, sorry for the braindump):
The description length of earlier states of the universe is probably shorter, which means that the “claw” that locates minds earlier in a simple universe is also shorter. This implies that lives earlier in time in the universe would be more important, and that we don’t have to care about exact copies as much.
This is similar to the reasons for not caring too much about Boltzmann brains.
We might have to aggregate preferences of agents with different beliefs (possible) and different ontologies/metaphysical stances (not sure about this), probably across ontological crises.
I have some preliminary writings on this, but nothing publishable yet.
The outcome of UDASSA depends on the choice of universal Turing machine. (People say it’s only up to a constant, but that constant can be pretty big.)
So we either find a way of classifying Turing machines by simplicity without relying on a single Turing machine to give us that notion, or we start out with some probability distribution over Turing machines and do some “2-level Solomonoff induction”, where we update both the probability of each Turing machine and the probabilities of the hypotheses within each Turing machine.
This leads to selfishness for whoever is computing the induction, because the Turing machine on which the empty program simply outputs their own observations receives the highest posterior probability.
If we use UDASSA/K-complexity-weighted utilitarianism to weigh minds, there’s a pressure/tradeoff towards making one’s preferences simpler.
If we endorse some kind of total utilitarianism, and there are increasing marginal returns (in degree of moral patienthood) to the energy-matter or spacetime invested in a mind, then we’d expect to end up with very few large minds; if there are decreasing marginal returns, we’d end up with many small minds (a toy illustration is at the end of this list).
Theorems like Gibbard-Satterthwaite and Hylland imply that robust preference aggregation that resists manipulation is really hard. You can circumvent this by randomly selecting a dictator, but I think this would become unnecessary if we operate in an open-source game theory context, where algorithms can inspect each others’ reasons for a vote.
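The toy illustration mentioned above (my own sketch; the two utility functions are just stand-ins for “increasing” vs. “decreasing” marginal returns):

# Compare putting a fixed resource budget into one big mind vs. many small ones,
# under increasing (x^2) and decreasing (sqrt(x)) marginal returns of moral patienthood.
resources = 100.0
n_minds = 100

total_value(f, n) = n * f(resources / n)

increasing(x) = x^2     # stand-in for increasing marginal returns
decreasing(x) = sqrt(x) # stand-in for decreasing marginal returns

println(total_value(increasing, 1), " vs. ", total_value(increasing, n_minds)) # 10000.0 vs. 100.0
println(total_value(decreasing, 1), " vs. ", total_value(decreasing, n_minds)) # 10.0 vs. 100.0

With increasing marginal returns, concentrating everything into one mind dominates; with decreasing marginal returns, spreading the resources across many minds wins.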
I’m surprised you didn’t mention reflective equilibrium! Formalising reflective equilibrium and value formation with meta-preferences would be major steps in a long reflection.
I have the intuition that Grand Futures talks about this problem somewhere[1], but I don’t remember/know where.
[1] Which, given its length, isn’t that out there.
I’ve thought a bit about this and updated to include an (admittedly minor) discount for impactful or interesting work: “$20 for impactful or interesting projects, $35 for work with a public result, $50 otherwise”.
What do you mean by “accurate estimate”? The more sophisticated version would be to create a probability distribution over the value of the marginal win, as well as over the value of the intervention, and then perform a Monte-Carlo analysis, possibly with a sensitivity analysis (a minimal sketch of what I mean is below).
But I imagine your disagreement goes deeper than that?
In general, I agree with the “just estimate everything” approach, but I imagine you have some arguments here.
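Roughly the kind of thing I have in mind (a minimal sketch; the distributions and numbers are made up purely for illustration):

# Monte-Carlo estimate of the total value of an intervention plus the marginal win it buys.
using Random, Statistics, Distributions

Random.seed!(42)
n = 100_000

# Made-up distributions for the value of the intervention and of the marginal win.
intervention_value = rand(LogNormal(log(10), 0.8), n)
marginal_win_value = rand(Normal(2, 1), n)

total = intervention_value .+ marginal_win_value
println("Expected total value: ", mean(total))
println("90% interval: ", quantile(total, [0.05, 0.95]))

# Crude sensitivity check: how much does the estimate move if the marginal win is worthless?
println("Without the marginal win: ", mean(intervention_value))

(Distributions.jl supplies LogNormal and Normal; everything else is standard library.)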
Isn’t the solution to this to quantify the value of a marginal win, and add it to the expected utility of the intervention?
I’ve found Replaceability (Paul Christiano, 2013) an interesting exploration of the different levels this question can take on. Takeaway: It’s complicated, but you’re less replaceable than you think.
Consider the problem of being automated away in a period of human history with explosive growth, and having to subsist on one’s capital. Property rights are respected, but there is no financial assistance by governments or AGI corporations.
How much wealth does one need to have to survive, ideally indefinitely?
Finding: If you lose your job at the start of the singularity, with monthly spending of $1k, you need ~$71k of capital in total. This number doesn’t look very sensitive to losing one’s job slightly later.
At the moment, the world economy is growing at a pace that leads to a doubling of GWP every ~20 years, and has done so steadily since ~1960. Explosive growth might instead be hyperbolic (continuing the trend we’ve seen through human history so far), with the economy first doubling in 20 years, then in 10, then in 5, then in 2.5, then in 15 months, and so on. I’ll assume that the shortest doubling time is 1 year.
initial_doubling_time=20 # years
final_doubling_time=1 # years
initial_growth_rate=2^(1/(initial_doubling_time*12)) # monthly growth factor
final_growth_rate=2^(1/(final_doubling_time*12))

# Linearly interpolate the monthly growth factor from its initial to its final value.
function generate_growth_rate_array(months::Int)
    growth_rate_array = zeros(Float64, months)
    growth_rate_step = (final_growth_rate - initial_growth_rate) / (months - 1)
    current_growth_rate = initial_growth_rate
    for i in 1:months
        growth_rate_array[i] = current_growth_rate
        current_growth_rate += growth_rate_step
    end
    return growth_rate_array
end
We can then generate the sequence of monthly growth rates:
# Number of months until the doubling time has shrunk to its final value.
months=12*ceil(Int, 10+5+2.5+1.25+final_doubling_time)
economic_growth_rate = generate_growth_rate_array(months)
# Pad with the final growth rate so the simulation covers 60 years in total.
economic_growth_rate=cat(economic_growth_rate, repeat([final_growth_rate], 60*12-size(economic_growth_rate)[1]), dims=1)
And we can then write a very simple model of monthly spending to figure out how our capital develops.
# Simulate all starting capitals from $1 to $250k in parallel.
capital=collect(1:250000)
monthly_spending=1000 # if we really tighten our belts

for growth_rate in economic_growth_rate
    capital=capital.*growth_rate
    capital=capital.-monthly_spending
end
capital now contains the capital we end up with after 60 years. To find the minimum amount of capital we need to start out with so as to not run out of money, we find the index of the entry closest to zero:

julia> findmin(abs.(capital))
(1.1776066747029436e13, 70789)
So, under these requirements, starting out with more than $71k should be fine.
But maybe we’ll only lose our job somewhat into the singularity! We can simulate that as losing the job once the economy’s doubling time has already shrunk to 15 years:
initial_doubling_time=15 # years
initial_growth_rate=2^(1/(initial_doubling_time*12))

months=12*ceil(Int, 10+5+2.5+1.25+final_doubling_time)
economic_growth_rate = generate_growth_rate_array(months)
economic_growth_rate=cat(economic_growth_rate, repeat([final_growth_rate], 60*12-size(economic_growth_rate)[1]), dims=1)

capital=collect(1:250000)
monthly_spending=1000 # if we really tighten our belts

for growth_rate in economic_growth_rate
    capital=capital.*growth_rate
    capital=capital.-monthly_spending
end
The amount of initially required capital doesn’t change by that much:
julia> findmin(abs.(capital))
(9.75603002635271e13, 68109)
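For completeness, here’s a small helper (not part of the original calculation, just a sketch that wraps the same model in a function) to check how sensitive the required starting capital is to the assumed monthly spending:

# Hypothetical helper: re-run the simulation for a given monthly spending level
# and return the starting capital whose end state is closest to zero.
function required_capital(monthly_spending; initial_doubling_time=20, final_doubling_time=1)
    initial_growth_rate = 2^(1/(initial_doubling_time*12))
    final_growth_rate = 2^(1/(final_doubling_time*12))
    months = 12*ceil(Int, 10+5+2.5+1.25+final_doubling_time)
    growth_rates = collect(LinRange(initial_growth_rate, final_growth_rate, months))
    growth_rates = vcat(growth_rates, fill(final_growth_rate, 60*12-months))
    capital = collect(1.0:250000.0)
    for growth_rate in growth_rates
        capital = capital.*growth_rate .- monthly_spending
    end
    return findmin(abs.(capital))[2]
end

# e.g. required_capital(1000), required_capital(2000), required_capital(5000)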
Ah, makes sense. I don’t know whether others do this. I will have to think on how I handle this myself, but I want to make it cheaper for individuals & EA topics.
Reach heaven through research consulting.
People other than Arb who also offer it (at various rates):
Alok Singh (~$300/hr)
Elizabeth van Nostrand (~$300/hr)
Niplav ($20 for impactful or interesting projects, $35 for work with a public result, $50 otherwise)
Nuño Sempere (~$250/hr, at marginally decreasing price)
Vasco Grilo (~$20/hr)
I remember Sarah Constantin having been available for this too, but I don’t know whether she still does research consulting.
Thank you! His name was somewhat hard to google, because of another (apparently more Google-famous) David Goldberg.
I don’t know. Which EA organisation did he found?
I believe that was a joke.
See also Tomasik 2017.
No consensus as far as I know, but there’s Trophic Cascades Caused by Fishing (Brian Tomasik, 2015). Summary:
One of the ecological effects of human fishing is to change the distribution of prey animals in the food web. Some evidence suggests that harvesting of big predatory fish may increase populations of smaller forage fish and decrease zooplankton populations. Meanwhile, harvesting forage fish directly (to eat as sardines/anchovies or to feed to farmed fish, pigs, or chickens) should tend to decrease forage-fish populations and increase zooplankton populations. On the other hand, it may also be that harvesting more fish reduces total fish biomass in the ocean, without significantly increasing smaller fish populations. There are many other trends that might be observed, and generalization is difficult.
Was this practice clearly delineated as an experiment to the participants?
Related question: How does one become someone like Carl Shulman (or Wei Dai, for that matter)?
The story I know is that if you can change the course of such an object by a slight amount early enough, that should be sufficient to cause significant deviations later in its course. Am I mistaken about this, i.e. is the achievable force too weak, so that the deviation would be far too small?
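For a rough sense of scale (my own back-of-the-envelope, with made-up round numbers): a small velocity change applied early translates into a displacement that grows roughly linearly with lead time,

$$\Delta x \approx \Delta v \cdot t \approx 0.01\,\mathrm{m/s} \times 10\,\mathrm{yr} \approx 0.01\,\mathrm{m/s} \times 3.15 \times 10^{8}\,\mathrm{s} \approx 3 \times 10^{6}\,\mathrm{m},$$

so a 1 cm/s nudge applied ten years in advance shifts the object by roughly 3,000 km, on the order of half an Earth radius.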
I wonder whether the lives of those moths were net negative. If the population was rising, then the number of moths dying as larvae might’ve been fairly small. I assume that OP’s apartment doesn’t have many predatory insects or animals that eat insects, so the risk of predation was fairly small. That leaves five causes of death: old age, hunger, thirst, disease, and crushing.
Death by old age is probably not that bad for moths? They don’t have very long lives, so dying of old age probably doesn’t take very long either, and shouldn’t outweigh the quality of their lives.
Hunger and thirst are likely worse, but I don’t know by how much. Do starved moths die from heart problems? (Do moths have hearts?)
Disease in house moth colonies is probably fairly rare.
Crushing can be very fast, or it can lead to a long, painful death. It seems like the worst of those options.
I think those moths probably had better lives than they would have had outside, just given the number of predatory insects out there; but I don’t think that this was enough to make their lives net-positive. It’s been a while since I’ve read about insect welfare, though, so if most young insects die from predation, I’d increase my credence in those moths having had net-positive lives.
More:
Speculations on Invertebrate Population Dynamics Relevant to Reducing Suffering (Brian Tomasik, 2019), sections “Should we err on the side of not squishing healthy insects due to r-selection?” and “Should we (humanely) squish non-predator insects?”
Killing Animals and Turnover (Brian Tomasik, 2014)