The difficulty is in the name “longtermist”. It asserts ownership over concern for the future.
People who disagree with the ideas that carry this banner are also concerned for the future.
Yudkowsky claims that we AI developers are plunging headlong into our research in spite of believing we are about to kill all of humanity. He says each of us continues this work because we believe the herd will simply outrun us if any one of us were to stop.
The truth is nothing like this. The truth is that we do not subscribe to Yudkowsky’s doomsday predictions. We work on artificial intelligence because we believe it will have great benefits for humanity and we want to do good for humankind.
We are not the monsters that Yudkowsky makes us out to be.
We need different names for the people who actually will exist and for merely potential future people. We can be concerned for the former and indifferent to the latter.
MacAskill has a Gedankenexperiment in which you imagine yourself as all the future people. He drifts from future people to potential future people, and by conflating the two he attempts to fool us into concern for the latter. https://www.nytimes.com/2022/08/05/opinion/the-case-for-longtermism.html
Since this hypothetically large number of people is the product of a thought experiment, I like to call them Gedanken people, as distinguished from future people.
In “Eternity in Six Hours”, Armstrong and Sandberg ignore a fundamental physical law known as the constant-radiance theorem: the radiant flux from a thermal source cannot be concentrated above its value at the emitter. They nevertheless imagine exponential growth in the energy delivered to their system that exceeds this fundamental limit by at least six orders of magnitude.
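As a rough sanity check on that bound (my own back-of-the-envelope numbers using the Sun’s photospheric temperature, not figures from the paper): a blackbody at $T \approx 5778\,\mathrm{K}$ radiates at most

$$\sigma T^4 = (5.67\times 10^{-8}\,\mathrm{W\,m^{-2}\,K^{-4}})\times(5778\,\mathrm{K})^4 \approx 63\,\mathrm{MW/m^2},$$

and the constant-radiance theorem says no passive optical system, however large its collector, can deliver a higher irradiance than this onto a receiver.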
@Linch Have you ever met any of these engineers who work on advancing AI in spite of thinking that the “most likely result … is that literally everyone on Earth will die”?
I have never met anyone so thoroughly depraved.
Mr. Yudkowsky and @RobBensinger think our field has many such people.
I wonder if there is a disconnect in the polls. I wonder if people at MIRI have actually talked to AI engineers who admit to this abomination. What do you even say to someone so contemptible? Perhaps there are no such people.
I think it is much more likely that these MIRI folks have worked themselves into a corner of an echo chamber than it is that our field has attracted so many low-lifes who would sooner kill every last human than walk away from a job.
Among the analogies to the various types of “-washing”, we should include Altruism-Washing.
The ML engineer is developing an automation technology for coding and is aware of AI risks. The engineer’s polite acknowledgment of the concerns is met with your long derivation of how many current and future people she will kill with this project.
Automating an aspect of coding is part of a long history of using computers to help design better computers, starting with Carver Mead’s realization that you don’t need humans to cut rubylith film to form each transistor.
You haven’t shown an argument that this project will accelerate the scenario you describe. Perhaps the engineer is brushing you off because your reasoning is broad enough to apply to all improvements in computing technology. You will get more traction if you can show more specifically how this project is “bad for the world”.
I do not believe @RobBensinger ’s and Yudkowsky’s claim that “there are also lots of people in ML who do think AGI is likely to kill us all, and choose to work on advancing capabilities anyway.”
What experiences tell you that “there are also lots of people in ML who do think AGI is likely to kill us all, and choose to work on advancing capabilities anyway”?
Each love letter, if delivered, leads to the conception and birth of a new human. The trolley, if not switched, would destroy the love letters. If your ethos says you should pull the lever to save the love letters, how many love letters would it take?
>It seems extremely unlikely to me that global poverty is just as good at …
Wealth inequality is an x-risk factor. See the HANDY model.
https://www.sciencedirect.com/science/article/pii/S0921800914000615
“when I know a bunch of excellent forecasters...”
Perhaps your sampling techniques are better than Tetlock’s, then.
Daylight Saving Time fix: the real problem is losing the hour of sleep in the spring. The solution is to set clocks back an hour in the fall, then move them forward by 40 seconds every day for 90 days in the spring. No one is going to miss 40 seconds of sleep. Most clocks are digital and set themselves, so you don’t need to adjust them and you won’t notice anything in the spring.
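The arithmetic comes out exactly even:

$$40\,\mathrm{s/day} \times 90\,\mathrm{days} = 3600\,\mathrm{s} = 1\,\mathrm{hour},$$

so the gradual spring advance exactly cancels the hour set back in the fall.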
You could substitute “work” where you write “fight”. The latter evokes violence.
As it currently stands, the AI is dependent on us for everything. We supply the power and we maintain the hardware and software. I think you ask the wrong question.
How do you propose we as humans make ourselves beneficial to an “overlord” AI so that we become indispensable?
We are already indispensable to the AI. The question should be how is it possible that we lose that?
Is this kind of replaceability compatible with current practices in Longtermism?
What is the consequence of the claim that, if I fail to take an action to preserve Future People, Other Future People will likely replace them?
Let’s say I give money to MIRI instead of saving current people, based on some calculations of future people I might save. Are we discounting those Future People by the Other Future People? Why don’t we value Other Future People just as much as Future People? Of course we do.
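Spelled out in expected-value terms (my formalization, not anything from the original post): if the Future People my donation saves would otherwise be replaced by Other Future People whom we value equally, then the net value of the intervention is

$$\Delta V = V(\text{Future People}) - V(\text{Other Future People}) \approx 0.$$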
Perhaps that is the point of this thought experiment. Perhaps “of course you don’t pull the switch” is the only right answer precisely because of replaceability.
The love letter leads to the conception and birth of a new human.
The average American drives about 10^4 miles per year, so the seatbelt comparison is off.
In footnote 2 you ask:
> do you really assign probability of less than 0.1^10^10^10, or are you just rationalizing why you aren’t going to give the money?
I find myself asking the opposite question whenever someone claims a very, very large good outcome. Didn’t they just make up that astronomically large payoff as an attempt to preempt your assignment of a small probability?
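That is the structure of the pitch, as I read it (my gloss, not a quote): for any small probability $p > 0$ you might assign, the claimant can simply assert a payoff $U > 1/p$, so that the expected value $pU > 1$ swamps your skepticism. The payoff is chosen after the fact to beat whatever probability you were going to give.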
I must disagree. I roasted a large plane for Thanksgiving yesterday and it was incomparable to a bird. For tips on brining your plane, see here: https://en.wikipedia.org/wiki/US_Airways_Flight_1549