Should EA avoid using AI art for non-research purposes?
Seems somewhat epistemically toxic to give in to a populist backlash against AI art if I don't buy the arguments for it being bad myself.
I just remembered another sub-category that seems important to me: highly accurate AI-enabled lie detection. This could be useful for many things, but most of all for helping make credible commitments in high-stakes US-China ASI negotiations.
Thanks Caleb, very useful. @ConnorA I'm interested in your thoughts on how to balance comms on catastrophic/existential risks against things like deepfakes. (I don't know about the particular past efforts Caleb mentioned, and I think I am more open to comms on deepfakes being useful for developing a broader coalition, even though deepfakes are a tiny fraction of what I care about wrt AI.)
Have you applied to LTFF? Seems like the sort of thing they would/should fund. @Linch @calebp if you have actually already evaluated this project I would be interested in your thoughts, as would others I imagine! (Of course, if you decided not to fund it, I'm not saying the rest of us should defer to you, but it would be interesting to know and take into account.)
Unclear: as they note early on, many people have even shorter timelines than Ege, so it is not representative in that sense. But probably many of the debates are at least relevant axes people disagree on.
o1-pro!
Here is a long AI summary of the podcast.
If these people weren't really helping the companies, it seems surprising that the salaries are so high?
I think I directionally agree!
One example of timelines feeling very decision-relevant: for people looking to specialise in partisan influence, the larger your credence in TAI/ASI by Jan 2029, the more you might want to specialise in Republicans. Whereas on longer timelines, Democrats on priors have a ~50% chance of controlling the presidency from 2029, so specialising in Dem political comms could make more sense.
Of course criticism only partially overlaps with advice, but this post reminded me a bit of this take on giving and receiving criticism.
I overall agree we should prefer USG to be better AI-integrated. I think this isn't a particularly controversial or surprising conclusion though, so the main question is how high a priority it is, and I am somewhat skeptical it is on the ITN pareto frontier. E.g. I would assume plenty of people care about government efficiency and state capacity generally, and a lot of these interventions are about making USG more capable in general rather than being targeted at longtermist priorities in particular.
So this felt like neither the sort of piece targeted at mainstream US policy folks, nor that convincing on why this should be an EA/longtermist focus area. Still, I hadn't thought much about this before, so doing this level of medium-depth investigation feels potentially valuable, but I'm unconvinced that e.g. OP should spin up a grantmaker focused on this (not that you were necessarily recommending this).
Also, a few reasons govts may have a better time adopting AI come to mind:
Access to large amounts of internal private data
Large institutions can better afford one-time upfront costs to train or finetune specialised models, compared to small businesses
But I agree the opposing reasons you give are probably stronger.
we should do what we normally do when juggling different priorities: evaluate the merits and costs of specific interventions, looking for "win-win" opportunities
If only this were how USG juggled its priorities!
Yes, this seems right; hard to know which effect will dominate. I'm guessing you could assemble pretty useful training data of past R&D breakthroughs which might help, but that will only get you so far.
Clearly only IBBIS should be allowed to advertise on the job board from now on; impeccable marketing skills @Tessa A :)
This seems to be out of context?
Yeah, I think I agree with all this. I suppose since "we" have the AI policy/strategy training data anyway, that seems relatively low-effort and high-value to do, but yes, if we could somehow get access to the private notes of a bunch of international negotiators, that also seems very valuable! Perhaps actually asking top forecasters to record their working and meetings to use as training data later would be valuable, and I assume many people already do this by default (tagging @NunoSempere). Although of course having better forecasting AIs seems more dual-use than some of the other AI tools.
Yes, I suppose I am trying to divide tasks/projects up into two buckets based on whether they require high context and value-alignment and strategic thinking and EA-ness. And I think my claim was/is that UI design is comparatively easy to outsource to someone without much of the relevant context and values. And therefore the comparative advantage of the higher-context people is to do things that are harder to outsource to lower-context people. But I know ~nothing about UI design; maybe being higher context is actually super useful there.
Nice post! I agree moral errors aren't only a worry for moral realists. But they do seem especially concerning for realists, as the moral truth may be very hard to discover, even for superintelligences. For antirealists, the first 100 years of a long reflection may get you most of the way to wherever your views will converge after a billion years of reflecting on your values. But the first 100 years of a long reflection are less guaranteed to get you close to the realist moral truth. So a 100-year reflection might be, e.g., 90% likely to avoid massive moral errors for antirealists, but maybe only 40% likely to do so for realists.
--
Often when there are long lists like this, I find it useful for my conceptual understanding to try to create some structure to fit each item into; here is my attempt.
A moral error is making a moral decision that is quite suboptimal. This can happen if:
The agent has correct moral views, but makes a failure of judgement/rationality/empirics/decision theory and so chooses badly by their own lights.
The agent is adequately rational, but has incorrect views about ethics, namely the mapping from {possible universe trajectories} to {impartial value}. This could take the form of:
A mistake in picking out who counts as a moral patient, {universe trajectory} --> {moral patients}. (animals, digital beings)
A mistake in assigning lifetime wellbeing scores to each moral patient, {moral patients} --> {list of lifetime wellbeings}. (theories of wellbeing, happiness vs suffering)
A mistake in aggregating correct wellbeing scores over the correct list of moral patients into the overall impartial value of the universe, {list of lifetime wellbeings + possibly other relevant facts} --> {impartial value}. (population ethics, diversity, interestingness) One way to write this composition out is sketched just below.
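A minimal way to write out the composition, using my own labels (not from the original post): $P$ for the patient-picking step, $w$ for the wellbeing assignment, and $\mathrm{Agg}$ for the aggregation step, so the impartial value of a universe trajectory $\tau$ is

$$ V(\tau) \;=\; \mathrm{Agg}\Big(\big(w(p)\big)_{p \in P(\tau)},\ \tau\Big), $$

where the second argument to $\mathrm{Agg}$ allows for the "possibly other relevant facts". The three mistake types above are then just errors in $P$, $w$, and $\mathrm{Agg}$ respectively.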
--
Some minor points:
I think the fact that people wouldn't take bets involving near-certain death and a 1-in-a-billion chance of a long amazing life is more evidence of people being risk averse than of lifetime wellbeing being bounded above. (I spell this out with toy numbers after this list.)
As currently written, choosing Variety over Homogeneity would only be a small moral error, not a massive one, as epsilon is small.
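To put toy numbers (mine, purely illustrative) on that bet: if lifetime wellbeing were unbounded and people simply maximised expected wellbeing, then a gamble with a $1-10^{-9}$ chance of death (value roughly 0) and a $10^{-9}$ chance of a life worth, say, $10^{10}$ normal lives (value 1 each) would look clearly worth taking, since

$$ \mathbb{E}[\text{value}] \;\approx\; 10^{-9} \times 10^{10} \;=\; 10 \;\gg\; 1. $$

Refusing such gambles therefore tells against the package of unbounded wellbeing plus risk-neutral expected-value maximisation; my claim is just that dropping the risk-neutrality part is the more natural diagnosis than bounding wellbeing.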
This seems right to me: personally I am more likely to read a post if it is by someone I know (in person or by reputation). I think selfishly this is the right choice as those posts are more likely to be interesting/valuable to me. But it is also perhaps a bad norm, as we want new writers to have an easy route in, even if no-one recognises their name. So I try to not index too heavily on whether I know the person.