Interested in AI safety talent search and development.
Peter
I think politics can seem very opaque, incomprehensible, and lacking clear positive payoffs, but after volunteering, studying, and working on campaigns for a few years, I think it’s simpler than it seems, just difficult.
I think politics is an area where there are a lot of entrenched ways of doing things as well as a lot of pitfalls that often require experience to navigate well. And even then, the chance of failure is still high. A moment’s slip-up, a bad assumption, or a random event can undermine months or years of work. This doesn’t happen as often in other areas.
For animal welfare, I think the outcomes show that it’s something people are more willing to vote for than pay for, so I think ballot initiatives are generally a good route to try out. I think the pork industry challenge to the MA law is pretty weak, but even if the initiative is struck down, it was probably good to try and see if it worked, and that may still open up some new opportunity. Winning by a large margin is good in that it may discourage special interests from trying to run a counter ballot initiative next time to repeal it.
I think it’s important not to become naive about anyone elected to office. Just because they have a similar background to you, say things you agree with, or belong to a group you like doesn’t mean they’re going to actually do good things or that the things they do are good. Just because they seem right about one or even many topics doesn’t mean they know what they are doing on other topics.
Politics is about coalition building and that often means various kinds of deal making. This is not for everyone, and not every deal is good or even necessarily clearly good or bad. It also involves constant tradeoffs and high uncertainty that will often make a lot of people unhappy.
Politicians spend most of their careers fundraising—even when in office—and not nearly enough time talking to groups of constituents that represent the diversity of experiences in their districts. This means a lot of popular ideas get ignored, some of which are good and others of which maybe are not. Being a good representative means knowing when, how, and how much to defer to people.
If you click on your name in the top right corner, then click edit profile, you can scroll down and delete tags under “my activity” by clicking the x on the right side of each block.
What things would make people less worried about AI safety if they happened? What developments in the next 0-5 years should make people more worried if they happen?
What are good ways to test your fit for technical AI Alignment research? And which ways are best if you have no technical background?
Well, squad-esque seems like an odd litmus test since there are many progressive members of Congress besides them, but POF did support Maxwell Frost, who won.
Well, to be fair, I didn’t say it was impossible, just that the outcome probably had more to do with the fundamentals of the race. It may have had a negative effect, yes, but plenty of candidates win races despite being supported by all kinds of PACs and having negative press about it.
Having more connections within the state for support and donations and highlighting those would have helped blunt negative attacks about PAC funding, for example.
I like the idea of Protect Our Future being more transparent about how and why they make endorsements. Giving a specific list of ways they evaluate candidates would be helpful for people to understand their actions. I also worry a little bit that this would make it easy to game their endorsement process or encourage political stunts that are more about drawing attention than doing something useful. But I’m not sure how big of a worry this should be.
Not sure but I think the Flynn campaign result was more likely an outcome of the fundamentals of the race: a popular, progressive, woman of color with local party support who already represented part of the district as a state rep and helped draw the new congressional district was way more likely to win over someone who hadn’t lived there in years and had never run a political campaign before.
Hi everyone, I’m a psychology graduate interested in learning about ways of dealing with infohazards, especially in online journal publications, as well as what skills and projects are important in bioethics.
Messaged you
In terms of goal directedness, I think a lot of the danger hinges on whether and what kinds of internal models of the world will emerge in different systems, and not knowing what those will look like. Many capabilities people didn’t necessarily expect or foresee suddenly emerged after more training—for example the jumps in abilities from GPT to GPT-2 and GPT-3. A similar jump to the emergence of internal models of the world may happen at another threshold.
I think I would feel better if we had some way of concretely and robustly specifying “goal directedness that doesn’t go out of control” for the training methods currently in use. Or at least something that shows a robust model of how these systems currently work, like: “current models achieve xyz abilities by manipulating data in this way, which will never produce that testable ability in the next 1-3 years but will likely produce these abilities in that timeframe.”
In terms of an AI vs. all human intelligence combined: even assuming all humans combined are more intelligent than an AGI, and that this AGI is for whatever reason not able to drastically self-improve, it could still make copies of itself. And each one could think thousands of times faster than any person. Current trends show it takes far more compute and resources to train a model than to run it, so an AGI that copies itself thousands or millions of times, with each copy modifying itself to be better and better at specific tasks, would still be really dangerous if their goals are misaligned. As far as I can tell, it would be pretty easy for a group of AGIs, all smarter than the smartest humans, to hack into and take over any systems they needed through the internet, and to deceive people to acquire resources when that’s insufficient by itself. And given that they will all be created with the same goals, their coordination will likely be much better than humans’.
Oh thanks! I think I will once I get them a bit more organized.
I have some ideas and I’m starting to flesh them out so yeah!
Yeah I can see what you mean—they could have taken a less flashy and more straightforward approach. It would be interesting to think about what else they could have made or done with what they made instead that might have been better.
Yes—I didn’t think about looking at how humane laws for people correlate with animals though. That’s really interesting.
Do you write fiction at all?
Thanks for sharing. Yeah, I can see what you mean—Senku can be a bit annoying. I think that also makes him a more realistic character too, though maybe they overdo it a bit at times. I found Gen’s syllable switching (like saying the second half first) on certain words really grating and it seemed to come out of nowhere. I didn’t remember him doing that at the beginning. I think it would be super cool to try and make more stories like Dr. Stone where it’s entertainment but also learning real stuff is a tool the characters use to progress the plot.
Oh, that’s an interesting point about the tech. I think the phone was more important since they needed to use it earlier and it was key to their plan for trying to win without killing, but they could have been in trouble if they ran out of time.
Those are probably both true. I’m kind of surprised sometimes when I see people who don’t care about animal rights at all. To me it seems like it would make it likelier/easier to dismiss other forms of suffering too, both individually and as a society. I remember some research indicating a connection between speciesism and other forms of prejudice, but I’m not sure how much that link has been explored. Maybe more stories about animals would help.
Oh yeah that makes sense, I agree. Yeah, FMAB is often a good “first anime” to recommend since it does lots of things pretty well.
I’m really curious, how would you improve Dr. Stone? I think it could be improved but I’m not overflowing with ideas on how to do it at the moment.

Oh, I forgot about him being vegetarian. I think the reason the AI angle is more popular is because of how much more similar he seems to be to humans than animals. There are so many qualities people think of as being human capabilities/behavior that he does even if not all of them are 100% human monopolies. Talking, walking on two feet, planning, reasoning, taking strong moral stances, etc.
I think it’s also harder to tell a story about animals without anthropomorphizing them into something else whereas with AI we don’t really know the possibilities so you can kind of do anything easily.
Hmm I don’t think it matters—we’re the only ones keeping this post afloat right now. But we can if you want.
“They genuinely weren’t surprised by anything that happened. They didn’t necessarily predict everything perfectly, but everything that happened matched their model well enough. Their deep insight into ML progress enables them to clearly explain why AGI isn’t coming soon, and they can provide rough predictions about the shape of progress over the coming years.”
Would definitely like to hear from people like this and see them make lots of predictions about AI progress in the short term (months and years). Seems like a very promising profile for identifying those with a valuable counter perspective/feedback for improving ideas.
FMAB is pretty widely liked. It used to be my favorite, but nowadays I think I put more emphasis on things that change the way I think. It has been a long time since I watched it though so I might change my mind if I rewatched.
Frankenstein makes me think about AI as well since it’s all about creating something with greater capabilities than a human.
I’ve been meaning to read The Dispossessed. Will have to check out those other ones.
What would you say is your favorite scifi or anime?
Yeah there are a lot of “fairweather friends” in politics who won’t feel inclined to return any favors when it matters most. The opposite of that is having a committed constituency that votes enough in elections to not be worth upsetting—aka a base of people power. These take serious effort to create and not all groups are distributed geographically the same way so some have more/easier influence than others. One reason the NRA is so powerful and not abandoned despite negative media coverage is that they have tight relationships with Republican politicians and they turn out big time in any primary where someone opposes them or something they want. It’s not so much about the campaign contributions as far as I can tell (other groups spend far more and are much less influential) though campaign contributions are certainly a part of their system of carrots and sticks.
The lack of broader participation in primaries is a problem for representation and responsive, good government. It’s an opportunity for groups that aren’t all that representative to magnify their influence. Alaska’s top-4 primary election seems like a step in the right direction since it opens up primaries to more voters and then lets voters rank the top 4 candidates in November. It increases the chances that someone can run and win as a more representative candidate instead of being filtered out by small, highly partisan groups.
It’s often easier to stick to established narratives, group identifiers, and allies, or even make up new conspiracies, than to be measured and nuanced. Something inflammatory and/or conspiratorial is more likely to hook into human brains, be amplified by engagement-seeking algorithms, and, if it’s obscure but rapidly repeated, not have any better sources of information competing with it when people look up its key words.