Thinking about AI. Trying to build a Rat, EA, TPOT meetup scene in Morristown, New Jersey
Matt Brooks
Btw, two small suggestions for the chatbot:
Use a smaller max-width on the container div, a 16px font size, and 150% line height.
Before: https://i.imgur.com/AHjaJHD.png
After: https://i.imgur.com/jAO4ozG.png
Ask the LLM to use standard markdown in its output. This will automatically create headings and bolded elements that make it much easier to skim/read (rough sketch of both tweaks below).
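Purely as an illustration of what I mean (I don't know your stack, so this assumes a React frontend using the react-markdown package; the ChatTranscript/ChatMessage names and the 42rem width are just made up):

```tsx
import React from "react";
import ReactMarkdown from "react-markdown"; // renders markdown strings as real HTML elements

// Hypothetical message shape; adapt to whatever your backend actually returns.
type ChatMessage = { role: "user" | "assistant"; content: string };

export function ChatTranscript({ messages }: { messages: ChatMessage[] }) {
  return (
    // Narrower container, 16px text, 150% line height for easier reading.
    <div style={{ maxWidth: "42rem", margin: "0 auto", fontSize: "16px", lineHeight: 1.5 }}>
      {messages.map((m, i) => (
        // Rendering replies as markdown turns the LLM's headings and bold
        // text into real elements, which makes long answers easy to skim.
        <ReactMarkdown key={i}>{m.content}</ReactMarkdown>
      ))}
    </div>
  );
}
```

On the prompt side, a line in the system prompt along the lines of "Format your answers in standard Markdown, using headings and bold for key points" is usually enough to get consistently structured output.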
I like this idea and it looks great!
I had a similar concept in mind that I wanted to build, but with more of a questionnaire/survey design rather than solely text articles or an open-ended chatbot: more of a hand-holding, guided experience through the concerns/debate points.
How’s it going so far? How many daily active users do you have?
Maybe they're training "GPT-4.5", maybe they've come up with a new name and they're training "Assistant-1".
But he’s said elsewhere publicly that they’re not training GPT-5
Maybe they’re going to focus on plugins, fine-tuning, visual processing, etc.
Do you not trust Ilya when he says they have plenty more data?
I don't really have the time, skills, or contacts to make this happen; if you want to pick up the torch, I would gladly pass it to you.
Tyler seems keen although worried about censors: https://twitter.com/tylercowen/status/1614402492518785025
It seems from the podcast that he wanted to release the book only in Chinese (maybe especially at this point, given the West's declining willingness to work with China), but I'm not sure; maybe the book would help Westerners understand China's culture as much as it would help Chinese readers understand the West. A lot of EAs concerned about great-power war would probably buy the book to get better insight.
If I had to guess, I don't think he needs help finding a translator, turning the book into an audiobook, or any other single task; I think it's bigger than that.
If we could get someone with contacts and backing like OpenPhil to say to Tyler “We will pay all costs to publish the book and assign a project manager to do all of the annoying bits for you” it seems harder for Tyler to turn down, but I’m just guessing.
Happy to chat more, if you’d like
From the way Tyler was talking about the book and its topics, it did not seem to me like a politically controversial book: "it was a book designed to explain America to the Chinese, and make it more explicable, more understandable".
Or at least the controversial parts could be taken out if required and a lot of the value could remain.

"Though I covered a lot of basic differences across the economies, the policies, why are the economies different?
Why is there so little state ownership in America?
Why are so many parts of America so bad at infrastructure?
Why do Americans save less?
How is religion different in America?"
EA should help Tyler Cowen publish his drafted book in China
Hey Robbert-Jan,
Sorry, somehow I missed your comment but saw it once Simon replied and I got a notification.
We’re likely staying in the web2 world for now, but there is a chance we graduate to web3/crypto in the future.
Check out our website here: https://impactmarkets.io/
Join our Discord here: https://discord.gg/7zMNNDSxWv
Read (or skim) our long EA post here: https://forum.effectivealtruism.org/posts/7kqL4G5badqjskYQs/toward-impact-markets-1
Hey Simon,
We’ve been funded by the FTX Future Fund regrantor program!
Check out our website here: https://impactmarkets.io/
Join our Discord here: https://discord.gg/7zMNNDSxWv
Read (or skim) our long EA post here: https://forum.effectivealtruism.org/posts/7kqL4G5badqjskYQs/toward-impact-markets-1
Exciting! I just filled out the form.
Experiment in Retroactive Funding: An EA Forum Prize Contest
I think this is really difficult to truly assess because there's a huge confounder: the more you age, the worse your memory gets, the more your creativity declines, the harder it is to focus, and so on.
If all of that were fixed with anti-aging, it might not be true that science progresses one funeral at a time, because people at the top of their game could keep producing great work instead of becoming geriatric while still holding status/power in the system.
Also, it could be a subconscious thing: "Why bother truly investigating my beliefs at age 70? I'm going to die soon anyway; let me just coast on inertia until I retire."
Also, this seems possible to fix with better institutional structures/incentives. Academia is broken in many ways, this is just one of them.
This is a good comment. I’d like to respond but it feels like a lot of typing… haha
"but that's not the same as seeing improvements in leaders' quality"
I just mean the world is trending towards democracies and away from totalitarianism.
"It's inherently easier to attain and keep power by any means necessary with zero ethics"
Yes, but 100x easier? Probably not. What if the great minds have 100x the numbers and resources? Network effects are strong
"There's another asymmetry where it's often easier to destroy/attack/kill than build something."
Same response as above
"I think it's ambiguous whether Putin supports your point. The world is in a very precarious situation now because of one tyrant."
My point is that the vast majority of the world immediately pushed back on Putin much harder than people expected. That supports the trend I'm pointing to: people are less tolerant of totalitarianism than they were 100 years ago. We are globally trying (and succeeding) to set stronger norms against inflicting violence and oppression.
"Some personality pathologies like narcissism and psychopathy seem to be increasing lately, tracking urbanization rates and probably other factors."

I'm guessing it will be somewhat easier to reverse these trends in a less scarcity-based society in the future, especially when we have a better handle on mental health from all angles. And the increases are probably not enough to matter in the wider question of great minds vs. dictators.
"People can be 'brilliant' on some cognitive dimensions but fail at defense against dark personality types. For instance, some otherwise brilliant people may be socially naive."
The great minds can simply outnumber the dictators in numbers and resources, and network effects counter that vulnerability too: each individual person doesn't have to succeed against dictators; the whole global fight for good only has to succeed collectively.
"Outside of our EA bubble, it doesn't look like the world is particularly sane or stable."
The world definitely seems to be trending toward being saner and more stable, though.
I agree, it feels like a stakesy decision! And I'm pretty aligned with longtermist thinking; I just think that "the entire future is at risk because removing death from aging enables totalitarian lock-in" seems really unlikely to me. But I haven't really thought about it too much, so I guess I'm really uncertain here, as we all seem to be.
“what year you guess it would first have been good to grant people immortality?”
I kind of reject the question because of "immortality", since that isn't the decision we're currently faced with (unless you're only interested in that specific hypothetical world). The decision we're faced with is: do we speed up anti-aging efforts to reduce age-related death and suffering? You can still kill (or incapacitate) people who don't age; that's my whole point about great minds vs. dictators.
But to consider the risks in the past vs today:
Before the internet and the modern society/technology/economy, it was much, much harder for great minds to coordinate globally against evil (thinking of the Cultural Revolution, as you mentioned). So my "great minds counter dictators" theory doesn't hold up well in the past, but I think it does in modern times.
The population 200 years ago was about 1/8 of what it is today and was growing much more slowly, so the premature deaths you could have prevented per year with anti-aging would have been far fewer, and you'd get less benefit.
The general population's moral sense and demand for democracy are improving, so I think tolerance for evil/totalitarianism is dropping fairly quickly.
So you'd have to come up with an equation with at least the following (a rough sketch follows the list):
- How many premature deaths you’d save with anti-aging
- How likely and in what numbers will people, in general, oppose totalitarianism
- If there was opposition, how easily could the global good coordinate to fight totalitarianism
- If there was coordinated opposition would their numbers/resources outweigh the numbers/resources of totalitarianism
- If the coordinated opposition was to fail, how long would this totalitarian society last (could it last forever and totally consume the future or is it unstable?)
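Purely as an illustrative sketch (the variable names and the functional form here are mine, nothing settled), those pieces combine into something like:

$$\text{Net benefit of starting now} \;\approx\; \underbrace{D \cdot T}_{\text{premature deaths averted}} \;-\; \underbrace{\Delta p \cdot H}_{\text{added lock-in risk}}$$

where $D$ is premature deaths averted per year, $T$ is how many years sooner anti-aging arrives, $\Delta p$ is the increase in the probability of a lasting totalitarian lock-in from starting now rather than waiting, and $H$ is the expected harm (in lives or life-years) if lock-in actually happens. With the made-up numbers below, $D \cdot T \approx 10^{10}$ (billions of lives) and $\Delta p = 10^{-4}$ (0.4% to 0.39%), waiting only wins if $H \gtrsim 10^{10}/10^{-4} = 10^{14}$ lives, so the whole disagreement comes down to how big $\Delta p$ and $H$ really are.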
Of course it would, but if you're reducing the risk of totalitarian lock-in from 0.4% to 0.39% (obviously made-up numbers) by waiting 200 years, I'd consider that a mistake that costs billions of lives.
The thing that’s hard to internalize (at least I think) is that by waiting 200 years to start anti-aging efforts you are condemning billions of people to an early death with a lifespan of ~80 years.
You’d have to convince me that waiting 200 years would reduce the risk of totalitarian lock-in so much that it offsets billions of lives that would be guaranteed to “prematurely end”.
Totalitarian lock-in is scary to think about, while billions of people's lives ending prematurely is just text on a screen. I'd guess the human brain can fairly easily simulate the everyday horror of a fully totalitarian world, but it's impossible for your brain to digest even 100,000,000 premature deaths, let alone billions and billions.
But we're not debating whether immortality over the last thousand years would have been better or not; we're looking at current times and estimating forward, right? (I agree that a thousand years ago immortality would have been much, much riskier than starting today.)
In today's economy/society, great minds can coordinate instantly and outnumber the dictators by a large margin. I believe this trend will continue, and that if you let all minds keep going, the great minds will outgrow the dictator minds and dominate the equation.
Dictators are much more likely to die (not from aging) than the average great mind (more than 50x?). This means that great minds will continue to multiply in numbers and resources while dictators sometimes die off (from their risky lifestyle of power-grabbing).
Once there are 10,000x more brilliant minds, with 1,000x more resources, than the evil dictators, how do you expect an evil dictator to successfully grab power over a whole country or the whole world?
When thinking about the tail of dictators don’t you also have to think of the tail of good people with truly great minds you would be saving from death? (People like John von Neumann, Benjamin Franklin, etc.)
Overall, dictators operate in a very tough environment, with power struggles, backstabbing, lots of defection, etc., while great minds tend to cooperate, share resources, and build on each other.
Obviously, there are a lot more great minds doing good than "great minds" wishing to be world dictators. And it seems to be trending in the right direction: compare how many great, smart democratic leaders there are now vs. 100 years ago. Extend that line another 100 years and it seems like we'll keep improving.
In a world in which a long-tail dictator could theoretically secure an ironclad grip on their country for evil, wouldn't there be thousands of truly brilliant minds, with lots of globally coordinated resources, pushing back against this? (See Russia vs. Ukraine for a very, very simple real-world example of "1 evil guy vs. the world".)
So this long-tail dictator has to worry not only about intense internal struggle/pressure but also about most of the world pressuring them externally? I don't see how the moral, brilliant minds don't just outmaneuver this dictator, given that they have 100x+ more people, resources, and coordination (in this theoretical future).
Congrats on winning the hackathon! Very impressive! I'm excited to see how this project progresses; it seems like a great opportunity to improve traditional funding and the nonprofit sector without taking huge, crazy leaps.
What’s the “whole manifund debacle”? People complaining about Curtis Yarvin or something?