Since this is tagged “Existential risk”: What does this have to do with existential risk? Or is it not supposed to be about existential risk, not even indirectly? As far as I can tell, the article does not talk about existential risk. I could make my own guesses about how this topic relates to existential risk, but I would prefer it to be spelled out.
Here is another argument against Geometric utility: It does not work if negative utilities are involved: $\log(u)$ is undefined if $u$ is negative. And I think some real-world experiences that involve suffering have negative utility.
FHI has shut down yesterday: https://www.futureofhumanityinstitute.org/
Charging your phone uses a very small amount of energy. The CO2 emissions from that are less than what a human breathes out during an hour.
(Some math with example values found through quick Google searches; the actual values depend on a lot of factors and differ from source to source: 20 Wh to charge a phone, 1 kg CO2 per kWh of electricity, 0.66 kg CO2 per day per human from breathing. So a phone charge is 20 g of CO2, which is equivalent to 20 * 24 / 660 ≈ 0.73 hours of human breathing.)
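Here is the same back-of-the-envelope calculation as a small Python sketch; the numbers are just the illustrative example values from above, not authoritative figures:

```python
# Example values from quick searches; actual figures vary a lot between sources.
phone_charge_kwh = 0.02          # ~20 Wh per full phone charge
grid_co2_kg_per_kwh = 1.0        # ~1 kg CO2 per kWh of electricity
breathing_co2_kg_per_day = 0.66  # ~0.66 kg CO2 exhaled per person per day

charge_co2_kg = phone_charge_kwh * grid_co2_kg_per_kwh      # 0.02 kg = 20 g
breathing_co2_kg_per_hour = breathing_co2_kg_per_day / 24   # ~0.0275 kg/hour

print(charge_co2_kg / breathing_co2_kg_per_hour)  # ~0.73 hours of breathing
```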
A relevant (imo) piece of information not in this post: The EA forum post that you are talking about was down-voted a lot. (I have down-voted too, although I don’t remember why I did so at the time.)
This makes me less worried than I otherwise would have been.
edit: I did not see Jason’s comment prior to posting mine, sorry for duplicate information.
I think this requires more elaboration on how exactly the suggested system is supposed to work.
You are right, the page does contain the phrase “Give 10% of your income each year”. (Somehow Google has not picked it up, so I did not find it.) I think GWWC has made a mistake here. The text of the actual pledge does not have this constraint.
Maybe @graceadams or someone else from GWWC can clarify things and fix the formulation on their website?
Why are you under the impression that you have to give each year? I tried to google your quoted string but could not find an exact match.
As I interpret the GWWC pledge formulation, there is no condition on when you have to donate, just how much.
eg people do care about climate change, nuclear war, rogue AI, deadly pandemics
But those things are also important without longtermism. So you can make non-longtermist PR for things that longtermists like, but it would feel dishonest if you hide the part about large numbers of people millions of years into the future.
If the utility function U(t) does not diminish to zero—people keep living happy lives on earth—at a rate faster than the rate at which the survival probability function decreases, then the integral that defines the change in world utility can diverge, implying an infinite loss. In short, expected values don’t fare well in the presence of infinities.
It is unclear to me what you mean here. I have two possible understandings:
- (1): You claim that if $U(t)$ does not go to zero (e.g. it is a constant because people keep living happy lives on earth), then the integral diverges. If this is your claim, then given your choice of survival probability function I think this is just wrong on a mathematical level (see the worked example after (2)).
- (2): You claim that if $U(t)$ grows exponentially quickly (and grows at a faster rate than the survival probability function decreases), then the integral diverges. I think this would be mathematically correct. But I think exponential growth here is not realistic: there are finite limits to the energy and matter achievable on earth, and utility per unit of energy or matter is probably bounded. Even if you leave earth and spread out in a 3-dimensional sphere at light speed, this only gives you cubic growth.
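To illustrate why (1) seems wrong to me mathematically: take a constant utility $U(t) = U_0$ and, purely as my own illustrative choice (the exact survival function in the post may differ), an exponentially decaying survival probability $e^{-\lambda t}$ with $\lambda > 0$. Then

$$\int_0^\infty U_0 \, e^{-\lambda t} \, dt = \frac{U_0}{\lambda} < \infty,$$

so a non-vanishing $U(t)$ alone does not produce divergence; for that, $U(t)$ would have to grow at least as fast as the survival probability decays, which is case (2).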
I still think that one should be careful when trying to work with infinities in an EV-framework. But this particular presentation was not convincing to me.
-
So it also makes sense to say “I have a 95% credence that the “true” P(heads) is between X and Y”
I might also say “The ‘true’ P(heads) is either $1$ or $0$”. If the coin comes up heads, the “‘true’ P(heads)” is $1$, otherwise the “‘true’ P(heads)” is $0$. Uncertainty exists in the map, not in the territory.
Okay, in this specific context I can guess what you mean by the statement, but I think this is an inaccurate way to express yourself, and the previous “I have a 95% credence that the bias is between X and Y” is a more accurate way to express the same thing.
To explain, let's formalize things using probability theory. One way to formalize the above would be to use a random variable $B$ which describes the bias of the coin, and then you could say $P(X \leq B \leq Y) = 0.95$. I think this would be a good way to formalize things. A wrong way to formalize things would be to say $P(X \leq P(\text{heads}) \leq Y) = 0.95$. And people might think of the latter when you say “I have a 95% credence that the ‘true’ P(heads) is between X and Y”. People might mistakenly think that “‘true’ P(heads)” is a real number, but what you actually mean is that it is a random variable.
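As a concrete sketch of the “good” formalization (this is my own illustration with an arbitrarily chosen Beta prior, not something from your post):

```python
# Minimal sketch (my own illustration): the coin's bias is a random variable B,
# and the "95% credence" statement is about B, not about a single number P(heads).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior over the bias; Beta(5, 5) is an arbitrary choice for illustration.
bias_samples = rng.beta(5, 5, size=100_000)

X, Y = 0.3, 0.7
credence = np.mean((bias_samples >= X) & (bias_samples <= Y))
print(f"P({X} <= B <= {Y}) ~= {credence:.2f}")   # credence about the bias

# P(heads) itself is a single real number, namely E[B] under this model.
print(f"P(heads) = E[B] ~= {bias_samples.mean():.2f}")
```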
So, can you have a probability of probabilities?
If you do things formally, I think you should avoid “probability of probabilities” statements. You can have a probability of events, but probabilities are real numbers, and in almost all useful formalizations, events are not real numbers. Making sense of such a statement always requires some kind of interpretation (e.g. random variables that refer to biased coins, or other context), and I think it is better to avoid such statements. If sufficient context is provided, I can guess what you mean, but otherwise I cannot parse such statements meaningfully.
On a related note, a “median of probabilities” also does not make sense to me.
What about P(doom)?
It does make sense to model your uncertainty about doom by looking at different scenarios. I am not opposed to modeling P(doom) in more detail rather than just conveying a single number. If you have three scenarios, you can consider a random variable $S$ with values in $\{1, 2, 3\}$ which describes which scenario will happen. But in the end, the following formula is the correct way to calculate the probability of doom: $$P(\text{doom}) = \sum_{i=1}^{3} P(\text{doom} \mid S = i) \cdot P(S = i).$$
Talking about a “median $P(\text{doom})$” does not make sense in my opinion. Two people can have the same beliefs about the world, but if they model their beliefs with different but equivalent models, they can end up with different “median $P(\text{doom})$” values (assuming a certain, imo naive, calculation of that median).
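A small sketch with made-up numbers illustrating this last point: two equivalent scenario decompositions give the same $P(\text{doom})$ via the formula above, but a naive unweighted median of the per-scenario probabilities differs between them.

```python
# Made-up numbers for illustration only.
import numpy as np

# Model A: three scenarios
p_scenario_a = np.array([0.5, 0.3, 0.2])
p_doom_given_a = np.array([0.1, 0.5, 0.9])

# Model B: same beliefs, but the first scenario is split into two equal halves
p_scenario_b = np.array([0.25, 0.25, 0.3, 0.2])
p_doom_given_b = np.array([0.1, 0.1, 0.5, 0.9])

# Law of total probability: both models agree on P(doom)
print(np.dot(p_scenario_a, p_doom_given_a))  # 0.38
print(np.dot(p_scenario_b, p_doom_given_b))  # 0.38

# Naive "median P(doom)" over scenarios: the two models disagree
print(np.median(p_doom_given_a))  # 0.5
print(np.median(p_doom_given_b))  # 0.3
```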
What about the cosmologist’s belief in simulation shutdown?
As you might guess by now, I side with the journalist here. If we were to formalize the expressed beliefs of the cosmologist using different scenarios (as above in the AI x-risk example), then the resulting $P(\text{simulation shutdown})$ would be what the journalist reports.
It is fine to say “I don’t know” when asked for a probability estimate. But the cosmologist’s beliefs look incoherent to me as soon as they enter the domain of probability theory.
I suspect it feels like a paradox because he gives way higher estimates for simulation shutdown than he actually believes, in particular the 1% slice where doom is between 10% and 100%.
if AIs can own property and earn income by selling their labor on an open market, then they can simply work a job and use their income to purchase whatever it is they want, without any need to violently “take over the world” to satisfy their goals.
If an individual AI’s relative skill-level is extremely high, then this could simply translate into higher wages for them, obviating the need for them to take part in a violent coup to achieve their objectives.
For example, one can imagine a human hiring a paperclip maximizer AI to perform work, paying them a wage. In return the paperclip maximizer could use their wages to buy more paperclips.
It could be that the AI can achieve much more of its objectives if it takes over (violently or non-violently) than it can achieve by playing by the rules. To use your paperclip example, the AI might think it can get 10^22 paperclips if it takes over the world, but can only achieve 10^18 paperclips with the strategy of making money through legal means and buying paperclips on the open market. In this case, the AI would prefer the takeover plan even if it has only a 10% chance of success.
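Spelling out the expected-value comparison with these illustrative numbers (and assuming, for simplicity, that a failed takeover yields roughly zero paperclips):

$$0.1 \times 10^{22} = 10^{21} \gg 10^{18},$$

so under these assumptions the risky takeover beats the legal strategy in expectation by roughly a factor of 1000.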
Also, the objectives of the AI must be designed in such a way that they can be achieved in a legal way. For example, if an AI strongly prefers a higher average temperature of the planet, but the humans put a cap on the global average temperature, then that objective will be hard to achieve without breaking laws or bribing lawmakers.
There are lots of ways for AIs to have objectives that are shaped in a bad way. Obtaining guarantees that the objectives of AIs do not take these bad shapes is still a very difficult thing to do.
This has “AI alignment” in the headline, but it is not clear how this specifically relates to alignment; it sounds like capabilities research instead. Maybe the message you wanted to convey is something like “consider seeking funding from IARPA for your alignment project under this program, which is otherwise not focused on alignment”? In any case, an explanation of what this has to do with alignment is missing from the post.
In the context of a misaligned AI takeover, negotiating and making contracts with a misaligned AI in order to allow it to take over does not seem useful to me at all.
A misaligned AI that is in power could simply decide to walk back any promises and ignore any contracts it agreed to. Humans could not do anything about it, because they would have lost all the power at that point.
Perhaps more cost-effectiveness analyses?
I have seen very little discussion about these things in EA circles, and I don’t know of a thorough investigation. Maybe some EAs have briefly thought about it but think that the cause is not as important/tractable/neglected as other causes, without doing a long write-up.
As for the extinction of a single species, I imagine that moral factors are also at play here. Many people (including me) consider the extinction of homo sapiens to be much worse than the extinction of the “wandering albatross” (which I pulled from your linked list).
Could you give some examples of animal x-risks? And what could I do about them? How much to prioritize an issue depends on these more concrete things, not just abstract considerations.
Also, do you have in mind extinction scenarios of a single species, extinction scenarios of all mammals, or of all non-human animal life?
It sounds like you think that the other 19 employees of Nonlinear had the same arrangement (travel with them and be paid $12k/year). I doubt this is true. Probably many of the 19 are employed remotely.
They got to pocket $12k/year into savings and live like a king.
Many people spend money on things besides rent, food, and travel, so this sounds exaggerated.
Do you read the “First they came for one EA leader” poem as ironic? When I read it, I saw it as an argument against “EA leader lynching”, and as a request for people to speak up to protect EA leaders.
I think in general it is fine to use this poem in a joking manner (see the comment by Guy Raveh below), and I don’t expect John G. Halstead to be against all repurposing of the Holocaust poem.
I haven’t checked your sources on Twitter, because your link doesn’t work for people without an account. But I don’t consider random tweets to be a reliable source of what is considered insensitive anyway.
Consider donating all or most of your Mana on Manifold to charity before May 1.
Manifold is making multiple changes to the way the platform works. You can read their announcement here. The main reason for donating now is that Mana will be devalued from the current 1 USD : 100 Mana rate to 1 USD : 1000 Mana on May 1. Thankfully, the 10k USD/month charity cap will not be in place until then.
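To spell out what the devaluation means for donations (my own arithmetic based on the rates above): donating 10,000 Mana before May 1 converts to 10,000 / 100 = 100 USD for charity, while the same 10,000 Mana after May 1 would only convert to 10,000 / 1000 = 10 USD, a tenfold reduction.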
Also this part might be relevant for people with large positions they want to sell now: