I dropped out of an MSc in mathematics at a top university in order to focus my time on AI safety.
Knight Lee
Disclaimer: I have no expertise; I just want to share random thoughts. Only read this if you’re willing to risk wasting time.
I think civilians theoretically have the power to stop the war in Ukraine (or at least make a successful peace deal/ceasefire more likely), if they pledge that the country they live in will not depend on which country wins.
This means that if they want to live in the other country, they move there right now. If they want to stay in their current country, they evacuate away from the front lines, and if they ever end up in the other country, they promise to try their best to leave. They do not stay where they are, waiting to be liberated.
There are obvious downsides to this, as leaving one’s house and hometown is extremely costly. But then again, so is war. The war in Ukraine is not only costly in human lives, but makes the world a more hostile and uncooperative place, and reduces humanity’s ability to survive existential risks.
If enough civilians follow this strategy, there will be less political pressure for the Ukrainian government to liberate people trapped in Russian occupied territories, and Putin will have far less to gain by conquering large parts of Ukraine, because any people he hopes to add are more likely to flee. He still gains land and resources, but they won’t offset the enormous costs of war. A war over land and resources (instead of people) is still very hard to stop, but may end relatively sooner.
It sounds bad to let civilians join the other side, but it’s possible that a deal could be reached, similar to the prisoner swaps which have occurred in the past. It’s costly to keep civilians who want to leave for the other side, since they may be a liability, and keeping them in your territory means the other side is motivated to take those territories, either to liberate the civilians or to increase its own population.
There is definitely a large population of civilians who won’t follow this strategy no matter what, and a large population of civilians who already follow this strategy anyway. But I think there are still lots of war-weary people who are undecided and would be swayed if they knew its influence on ending the war.
Sorry if this idea is naive, but I’m interested in these things, and would like to learn why this idea probably won’t work (or why all such “shower thought” ideas are so certain to fail that they’re not worth asking about).
You’re probably right, and I admit I’m not planning to devote very much of my life to solving this problem either :/
But from the point of view of costs and benefits, it (probably) isn’t that costly to make a few policymakers aware (of the enormous economic waste due to high rent). And even if the new city idea is stupid, it’s plausible policymakers will change their behaviour sufficiently that rent prices actually will go down by a bit.
I think it’s currently intractable because a lot of people (and probably policymakers) have the wrong intuition that high rent isn’t extremely wasteful, that it’s just money flowing from tenants to landlords, and that if they offset that with wealth redistribution in the other direction then little is lost.
If they see that a lot of work is lost, they might change things.
Maybe say, “I strongly believe in the principles[1] of EA.”
- ^
The EA principles I follow do not include “the ends always justify the means.”
Instead, they include:
Comparing charities and prosocial careers quantitatively, not by warm fuzzy feelings
Animal rights, judged by the subjective experience of animals, not by how cute they look
Existential risk, because someday in the future we’ll realize how irrational it was to neglect it
Consequentialists should be strong longtermists
Technically I agree that 100% consequentialists should be strong longtermists, but I think if you are moderately consequentialist, you should only sometimes be a longtermist. When it comes to choosing your career, yes, focus on the far future. When it comes to abandoning family members to squeeze out another hour of work, no. We’re humans, not machines.
At Google, Larry Page and Sergey Brin control 51% of the shareholder vote thanks to their supervoting stock.
At Meta/Facebook, Zuckerberg controls 61% of the vote.
Anthropic is a public benefit corporation.
OpenAI was supposed to be controlled by a nonprofit board, though Sam Altman is trying to convert it into a public benefit corporation.
Shareholders are also unlikely to remove Elon Musk from Tesla even if he does a lot of things against Tesla’s interests.
Executives are under intense pressure to make a profit to keep the business from going bankrupt, and maybe to earn bonuses or reputation, but the pressure to avoid being voted out by shareholders is comparatively weaker.
Charities have a lot of the same pressures (minus the bonuses).
I don’t have any expertise; I may be totally wrong.
Can you describe yourself as “moderately EA,” or something like that, to distinguish yourself from the most extreme views?
The fact that we have strong disagreements on this forum feels like evidence that EA is more like a dimension of the political spectrum than a united category of people.
How would you rate current AI labs by their bad influence or good influence? E.g. Anthropic, OpenAI, Google DeepMind, DeepSeek, xAI, Meta AI.
Suppose that the worst lab has a −100 influence on the future for each $1 it spends. A lab half as bad has a −50 influence on the future for each $1 it spends. A lab that’s actually good (with half as much influence in magnitude) might have a +50 influence for each $1.
What numbers would you give to these labs?[1]
- ^
It’s possible this rating is biased against smaller labs, since spending even a tiny amount increases “the number of labs” by 1, which is a somewhat fixed cost. Maybe pretend each lab were scaled to the same size to avoid this bias.
(Kind of crossposted from LessWrong)
My silly idea is that your voting power should not scale with your karma directly, but should scale with the number of unique upvotes minus the number of unique downvotes you received. This prevents circular feedback.
Reasons
Hypothetically, suppose you had two factions which consistently upvote themselves: A with 67 people and B with 33 people. People in A will have twice as many unique upvotes as people in B, and their comments can have up to 4 times more karma (in the simplistic case where voting power scales linearly with unique upvotes).
However, if voting power depends not on unique upvotes but on karma, then at first people in A will still have twice as many unique upvotes as people in B, and their comments will still have more than 4 times more karma. But then (in the simplistic case where voting power scales linearly with karma), their comments will have 8 times more karma, which further causes their comments to have 16 times more karma, and so on.
This doesn’t happen in practice because voting power doesn’t scale linearly with karma (thank goodness), but circular feedback is still partially a problem.
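To make the compounding concrete, here is a minimal toy simulation of the 67/33 example above, assuming voting power scales linearly with whichever metric is chosen. The function name and update rule are my illustrative assumptions, not the Forum’s actual karma algorithm.

```python
def karma_ratio(power_source: str, rounds: int = 5) -> list[float]:
    """Return faction A's comment karma divided by faction B's, over several voting rounds."""
    a_members, b_members = 67, 33        # the two hypothetical factions from the example
    a_power, b_power = 1.0, 1.0          # per-member voting power, initially equal
    ratios = []
    for _ in range(rounds):
        # Each faction's comments receive one vote from every member of that faction.
        a_karma = a_members * a_power
        b_karma = b_members * b_power
        ratios.append(round(a_karma / b_karma, 1))
        if power_source == "karma":
            # Circular feedback: next round's voting power is proportional to karma earned.
            a_power, b_power = a_karma, b_karma
        else:  # "unique_upvotes"
            # Voting power is fixed by headcount (unique upvoters), so the ratio stabilises.
            a_power, b_power = float(a_members), float(b_members)
    return ratios

print(karma_ratio("karma"))           # ~[2.0, 4.1, 8.4, 17.0, 34.5] -- keeps diverging
print(karma_ratio("unique_upvotes"))  # ~[2.0, 4.1, 4.1, 4.1, 4.1] -- capped near 4x
```

Under the karma-based rule the majority faction’s advantage roughly doubles every round, while the unique-upvote rule caps it near 4×.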
Can you do a back-of-the-envelope calculation on the costs and benefits of doing more back-of-the-envelope calculations? E.g. getting a different person to independently replicate a back-of-the-envelope calculation, in order to average out the errors and biases specific to one individual and make it more robust.
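Here is a minimal sketch of why independent replication could help, under the optimistic assumption that each person’s estimate is unbiased and the errors are independent; the true value and error size below are made up for illustration.

```python
import random

true_value = 100.0
individual_error_sd = 30.0  # assumed spread of a single person's estimate (made up)

def botec() -> float:
    """One person's noisy back-of-the-envelope estimate."""
    return random.gauss(true_value, individual_error_sd)

def averaged_botec(n: int) -> float:
    """Average of n independent estimates."""
    return sum(botec() for _ in range(n)) / n

trials = 10_000
for n in (1, 2, 4):
    rmse = (sum((averaged_botec(n) - true_value) ** 2 for _ in range(trials)) / trials) ** 0.5
    print(f"{n} estimate(s): typical error ≈ {rmse:.1f}")
# Expect roughly 30, 21, 15: the error shrinks by about 1/sqrt(n).
# Shared biases (assumptions everyone copies from the same source) do not average out this way.
```

So the benefit of a second estimator depends a lot on how independent their errors really are, which is exactly the part a cost-benefit calculation would need to estimate.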