The difficulty is in the name “longtermist”. It asserts ownership over concern for the future.
People who disagree with the ideas that carry this banner are also concerned for the future.
The founding premise of EA was that you need to weigh evidence. This distinction amounts to saying the longtermists have abandoned the founding premise of the movement.
As it currently stands, the AI is dependent on us for everything: we supply the power and we maintain the hardware and software. I think you are asking the wrong question.
>How do you propose we as humans make ourselves beneficial to an “overlord” AI so that we become indispensable?
We are already indispensable to the AI. The question should be: how could we possibly lose that?
Is this kind of replaceability compatible with current practices in Longtermism?
What is the consequence of the claim that if I fail to take an action to preserve Future People, Other Future People will likely replace them?
Let’s say I give money to MIRI instead of saving current people, based on some calculation of the future people I might save. Should we discount those Future People by the Other Future People who would exist in their place? Why wouldn’t we value Other Future People just as much as Future People? Of course we do.
Perhaps that is the point of this thought experiment. Perhaps “of course you don’t pull the switch” is the only right answer precisely because of replaceability.
Had Bostrom left out the paragraph quoted by the OP, the apology would read very differently. In the prior paragraph he wrote:
I also think that it is deeply unfair that unequal access to education, nutrients, and basic healthcare leads to inequality in social outcomes, including sometimes disparities in skills and cognitive capacity.
This, if left to its own, would have stood as a strong statement on equity.
By adding the following paragraph on genetics, Bostrom implies the opposite of his claimed indifference to the genetics of race. Announcing that you “leave to others” the question of genetic factors in intelligence by race rings of an invitation to weigh the matter.
We need a different name to distinguish the people who will in fact exist from merely potential future people. We can be concerned for the former and indifferent to the latter.
MacAskill has a Gedanken (thought experiment) about imagining yourself as all the future people. He drifts from future people to potential future people. By conflating the two he attempts to fool us into concern for the latter. https://www.nytimes.com/2022/08/05/opinion/the-case-for-longtermism.html
Since these hypothetically large number of people are the product of this thought experiment, I like to call them Gedanken people, as distinguished from future people.
Each love letter, if delivered, leads to the conception and birth of a new human. The trolley, unswitched, would destroy the love letters. If your ethos says you would pull the lever to save the love letters, how many love letters would it take?
The average American drives 10^4 miles per year. The seatbelt comparison is off.
In footnote 2 you ask:
>do you really assign probability of less than 0.1^10^10^10, or are you just rationalizing why you aren’t going to give the money?
I find myself asking the opposite whenever someone claims a very, very large good outcome: didn’t they just make up the amount of really large bigness to preempt whatever small probability you assign?
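To see why such made-up magnitudes defeat ordinary calculation, here is a small sketch of my own (the probability is the footnote’s; the framing and numbers around it are illustrative): a probability like 0.1^(10^10) already underflows the smallest positive IEEE-754 double, and the trick in these offers is to quote a utility whose exponent cancels whatever probability exponent you choose.

```python
# A probability like 0.1**(10**10) is far below the smallest positive
# IEEE-754 double (~5e-324), so naive arithmetic silently returns 0.0.
tiny_p = 0.1 ** (10 ** 10)

# To reason at these scales you must carry exponents symbolically.
log10_p = -(10 ** 10)        # skeptic's probability, as a base-10 exponent
log10_u = 10 ** 10           # claimed utility, inflated to match
log10_ev = log10_u + log10_p # log10 of the expected value

# The claimed bigness exactly cancels the assigned smallness:
# whatever exponent you pick, the mugger can just quote a bigger one.
print(tiny_p, log10_ev)
```

The point of the sketch: once both sides are just exponents, the “really large bigness” is doing all the work, and no evidence constrains it.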
There is an author who has done a top-down analysis using a modification of Gott’s estimator: Willard H. Wells, Prospects for Human Survival. It is grounded in numerical methods. He also has an earlier treatment, Apocalypse When?: Calculating How Long the Human Race Will Survive.
His methods are more refined than any I have checked in your table.
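For readers unfamiliar with the baseline Wells refines, here is a minimal sketch of Gott’s “delta t” argument (my own illustration, not Wells’s method): if you observe a phenomenon at a uniformly random point in its lifetime, then with confidence c its remaining duration lies between t_past·(1−c)/(1+c) and t_past·(1+c)/(1−c).

```python
def gott_interval(t_past, confidence=0.95):
    """Gott's 'delta t' bounds on remaining lifetime, assuming the
    observation falls at a uniformly random point in the total lifespan."""
    lo = t_past * (1 - confidence) / (1 + confidence)
    hi = t_past * (1 + confidence) / (1 - confidence)
    return lo, hi

# At 95% confidence the future duration is between 1/39 and 39 times
# the past duration, e.g. for Homo sapiens at roughly 200,000 years old:
lo, hi = gott_interval(200_000)
print(f"{lo:,.0f} to {hi:,.0f} more years")
```

Wells’s modification adjusts this bare estimator with hazard-rate arguments; the sketch only shows the starting point.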
In “Eternity in six hours”, Armstrong and Sandberg ignore a fundamental physical law known as the constant radiance theorem: the radiant flux from a thermal source cannot be concentrated above its value at the emitter. They proceed to imagine exponential growth of energy delivered to their system that exceeds this fundamental limit by at least 6 orders of magnitude.
This doesn’t work, because the power delivered is more than enough to melt the apparatus mining the planet.
If you direct the sun’s energy onto Mercury with a flux as high as that leaving the sun, it will get white hot. This video and the paper it is based on assume an energy flux a million times that large.
Concentrating power onto a target as small as Mercury multiplies the flux by an enormous geometric factor, here roughly 2×10^8. So when the power directed at disassembling Mercury reaches the total solar output divided by that factor, 4×10^26 W / (2×10^8) = 2×10^18 W, you are melting everything on Mercury’s surface. That is 6 orders of magnitude below figure 2 of this paper.
It also violates the constant radiance theorem (a consequence of the second law of thermodynamics).
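To make the melting claim concrete, here is a back-of-envelope check of my own (round numbers and a simple blackbody balance, not figures taken from the paper): treat Mercury as a blackbody that absorbs the delivered power and re-radiates it over its full surface.

```python
from math import pi

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
R_MERCURY = 2.44e6   # Mercury's radius, m

def equilibrium_temp(power_w):
    """Blackbody temperature at which Mercury re-radiates `power_w`
    over its full surface area."""
    flux = power_w / (4 * pi * R_MERCURY ** 2)  # W/m^2
    return (flux / SIGMA) ** 0.25               # K

# At 2e18 W the surface already sits above 800 K, near the regime where
# rock softens. Temperature scales as power^(1/4), so the paper's
# ~10^6-fold larger power multiplies this by 10^1.5 ~ 32x,
# far past the failure point of any mining apparatus.
print(round(equilibrium_temp(2e18)), "K")
print(round(equilibrium_temp(2e24)), "K")
```

The fourth-root scaling is the key point: even generous assumptions about radiating the heat away only delay the problem by a couple of orders of magnitude, not six.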
Population growth is an existential risk. The HANDY model shows many regimes where self-organizing societies grow to the point of catastrophic failure. These models attempt to explain the fall of prior isolated civilizations, including complete loss of population (Easter Island) and loss of civilization (the collapse of Teotihuacan).
https://www.sciencedirect.com/science/article/pii/S0921800914000615
It would be worthwhile to factor in such risks.
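As a toy illustration of the overshoot-and-collapse regime (a much-simplified sketch of my own with made-up parameters, not the actual four-variable HANDY equations, which track elites, commoners, regenerating nature, and wealth): a population grows while per-capita harvest exceeds subsistence, then crashes once it has drawn down the resource stock.

```python
def simulate(steps=500):
    """Euler integration of a minimal overshoot model: population x grows
    when per-capita harvest d*y exceeds maintenance cost m, while harvest
    depletes a nonrenewable stock y."""
    x, y = 1.0, 100.0    # population, resource stock (arbitrary units)
    d, m = 0.001, 0.02   # harvest efficiency, per-capita maintenance cost
    xs, ys = [x], [y]
    for _ in range(steps):
        x += x * (d * y - m)  # growth tracks surplus per-capita harvest
        y -= d * x * y        # extraction draws down the stock
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = simulate()
# Population overshoots far above its starting size, then collapses once
# the stock falls below the break-even level y = m/d.
```

The HANDY paper’s contribution is showing that collapse regimes like this persist even with regenerating nature, and that inequality between classes widens the parameter range in which they occur.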
I must disagree. I roasted a large plane for Thanksgiving yesterday and it was incomparable to a bird. For tips on brining your plane, see here: https://en.wikipedia.org/wiki/US_Airways_Flight_1549
>It seems extremely unlikely to me that global poverty is just as good at …
Wealth inequality is an x-risk factor. See the HANDY model.
https://www.sciencedirect.com/science/article/pii/S0921800914000615
The camp that advocates for a moon-shot that exhausts resources is gambling on some future reward for which they have made only a tenuous plausibility argument. This strategy is more likely to lead to extinction.
The ML engineer is developing an automation technology for coding and is aware of AI risks. The engineer’s polite acknowledgment of the concerns is met with your long derivation of how many current and future people she will kill with this project.
Automating an aspect of coding is part of a long history of using computers to help design better computers, starting with Carver Mead’s realization that you don’t need humans to cut rubylith film to form each transistor.
You haven’t shown an argument that this particular project will accelerate the scenario you describe. Perhaps the engineer is brushing you off because your reasoning is broad enough to apply to any improvement in computing technology. You will get more traction if you can show more specifically how this project is “bad for the world”.