I work primarily on AI Alignment. My main direction at the moment is to accelerate alignment work via language models and interpretability.
jacquesthibs
Honestly, I’m happy with this compromise. I want to hear more about what ‘leadership’ is thinking, but I also understand the constraints you all have.
This obviously doesn’t answer the questions people have, but at least communicating this instead of radio silence is very much appreciated. For me at least, it feels like it helps reduce feelings of disconnectedness and makes the situation a little less frustrating.
Quillette’s founder seems to be planning to write an article regarding EA’s impact on tech:
“If anyone with insider knowledge wants to write about the impact of Effective Altruism in the technology industry please get in touch with me claire@quillette.com. We pay our writers and can protect authors’ anonymity if desired.”
It would probably be impactful if someone in the know provided a counterbalance to whoever will undoubtedly email her to disparage EA with half-truths/lies.
Since I expect some readers to come away from this post a bit confused about what exactly the bad thing was, I think it would be great if the community health team could write a post explaining and pointing out exactly what was bad here and in other similar instances.
I think there is value in being crystal clear about which of the things that happened were bad, because I expect people will take away different things from this post.
Personally, I’ve mostly seen people confused and trying to demonstrate a willingness to re-evaluate what might have led to these bad outcomes. They may sway too far in one direction, but this only just happened and they are re-assessing their worldview in real time. Some are just asking questions about how decisions were made in the past so that we have more information and can improve things going forward (which might mean doing nothing differently in some instances). My impression is that a lot of the criticism of EA leadership is overblown and that most (if not all) were blindsided.
That said, I haven’t had the impression it’s as bad and widespread as this post makes it seem. Maybe I just haven’t read the same posts, comments, and tweets.
I do think that working together so we can land on our feet and continue to help those in need sounds nice, and I hope you’ll still be around, since critical posts like this are obviously needed.
If you work at a social media website or YouTube (or know anyone who does), please read the text below:
Community Notes is one of the best features to come out on social media apps in a long time. The code is even open source. Why haven’t other social media websites picked it up yet? If they care about truth, adopting it would be a considerable step forward. Notes like “this video is funded by x nation” or “this video talks about health info; go here to learn more” are simply not good enough.
If you work at companies like YouTube or know someone who does, let’s figure out who we need to talk to in order to make it happen. Naïvely, you could spend a weekend DMing a bunch of employees (PMs, engineers) at various social media websites to persuade them that this is worth their time and probably the biggest impact they could have in their entire career.
If you have any connections, let me know. We could also set up a shared doc where we draft a persuasive DM to send.
I think the information you are sharing is useful (some parts less so, I agree with pseudonym), just don’t deadname/misgender them. EA is better than that.
One thing that may backfire with the slow rollout of talking to journalists is that people who mean to write about EA in bad faith will be the ones at the top of the search results. If you search something like “ea longtermism”, you might find bad faith articles many of us are familiar with. I’m concerned we are setting ourselves up to give people unaware of EA a very bad faith introduction.
Note: when I say “bad faith” here, it may just be a matter of semantics; I may not have the vocabulary to articulate exactly what I mean by it. I actually agree with pretty much everything David has said in response to this comment.
More information about the alleged manipulative behaviour of Sam Altman
From what I understand, Amazon does not get a board seat for this investment. Figured that should be highlighted. Seems like Amazon just gets to use Anthropic’s models and maybe make back their investment later on. Am I understanding this correctly?
As part of the investment, Amazon will take a minority stake in Anthropic. Our corporate governance structure remains unchanged, with the Long Term Benefit Trust continuing to guide Anthropic in accordance with our Responsible Scaling Policy. As outlined in this policy, we will conduct pre-deployment tests of new models to help us manage the risks of increasingly capable AI systems.
Here’s a comment I shared on my LessWrong shortform.
——
I’m still thinking this through, but I am deeply concerned about Eliezer’s new article for a combination of reasons:
I don’t think it will work.
Given that it won’t work, I expect we lose credibility and it now becomes much harder to work with people who were sympathetic to alignment, but still wanted to use AI to improve the world.
I am not as convinced as he is about doom, and I am not as cynical about the main orgs as he is.
In the end, I expect this will just alienate people. And stuff like this concerns me.
I think it’s possible that the most memetically powerful approach will be to accelerate alignment rather than suggesting long-term bans or effectively antagonizing all AI use.
So, things have blown up way more than I expected, and it’s chaotic. I’m still not sure what will happen or whether a treaty is actually in the cards, but I’m beginning to see a path to tons more investment in alignment. One example: Jeff Bezos just followed Eliezer on Twitter, and I think this may catch the attention of pretty powerful and rich people who want to see AI go well. We are so off-distribution that this could go in any direction.
In case we have very different feeds, here’s a set of tweets critical about the article:
https://twitter.com/mattparlmer/status/1641230149663203330?s=61&t=ryK3X96D_TkGJtvu2rm0uw (lots of quote-tweets on this one)
https://twitter.com/jachiam0/status/1641271197316055041?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/finbarrtimbers/status/1641266526014803968?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/plinz/status/1641256720864530432?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/perrymetzger/status/1641280544007675904?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/post_alchemist/status/1641274166966996992?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/keerthanpg/status/1641268756071718913?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/levi7hart/status/1641261194903445504?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/luke_metro/status/1641232090036600832?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/gfodor/status/1641236230611562496?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/luke_metro/status/1641263301169680386?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/perrymetzger/status/1641259371568005120?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/elaifresh/status/1641252322230808577?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/markovmagnifico/status/1641249417088098304?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/interpretantion/status/1641274843692691463?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/lan_dao_/status/1641248437139300352?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/lan_dao_/status/1641249458053861377?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/growing_daniel/status/1641246902363766784?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/alexandrosm/status/1641259179955601408?s=61&t=ryK3X96D_TkGJtvu2rm0uw
It’s a good project because, you know, doing good is important and we should want to do good better rather than worse. It’s utterly absurd because everyone who has ever wanted to do good has wanted to do good well, and acting as though you and your friends alone are the first to hit upon the idea of trying to do it is the kind of galactic hubris that only subcultures that have metastasized on the internet can really achieve.
This seems wrong to me. Just this week, I went on a date with someone who told me the only reason she volunteers is that it makes her feel good about herself; she doesn’t particularly care about the impact. And you know what, props to her for admitting something that I expect is true of a lot of other people as well. I don’t think there’s anything wrong with that; I’m just saying that “everyone who has ever wanted to do good has wanted to do good well” seems wrong to me.
The following tweet is being shared now: https://twitter.com/autismcapital/status/1590551673721991168?s=46&t=q60fxwumlq0Mq8CpGV3bxQ
This is obviously just a random unverified source, but I think it will be worth reflecting on this deeply once this is all said and done. It feeds directly into how EA’s maximizing behaviour can lead to these outcomes. Whether the above is true or not, it will certainly be painted as such by those who have been critical of EA.
I want to say that I appreciate posts like this by parents in the community. I’m an alignment researcher and given how fast things are moving, I do worry that I’m under-weighting the amount of impact I could lose in the next 10 years if I have kids. I feel like ‘short timelines’ make my decision harder even though I’m convinced I want kids in 5 or so years from now.
Some considerations I’ve been having lately:
Should I move far away from my parents, which would make it harder to depend on someone for childcare on the weekends and evenings? Will we be close to my future wife’s parents?
Should I be putting in some time to make additional income I can eventually use to make my life easier in 5 years? Maybe it’s easier for me to do so now before AGI crunch time?
The all-encompassing nature of AGI makes things like the share of household work a potential issue for a couple of years. I feel bad for thinking that I may have to ask my future wife if I can reduce housework in those couple of years of crunch time (let’s say 2 years max). It feels selfish… Ultimately, this will just be a decision my future wife and I will have to make. I do want to do at least 50% of the housework outside of the crunch time.
It feels particularly bizarre in the context of some wild AGI scenario we aren’t even confident about. But if someone is the CEO of a startup, it seems reasonable for their partner to take on additional housework if things get intense for a while. Or maybe a better example: if a pandemic were starting and one of the parents were head of a biorisk org, I would find it odd if they tried to keep the household dynamic the same throughout the crucial window for limiting the pandemic’s impact.
Overall, I’m trying to be a good future husband, and stuff like this weighs on me. I don’t want to make the decision in some terrible and naive way like “my career is more important than yours.” :/
Another data point: I got my start in alignment through the AISC. I had just left my job, so I spent 4 months skilling up and working hard on my AISC project. I started hanging out on EleutherAI because my mentors spent a lot of time there. This led me to do AGISF in parallel.
After those 4 months, I attended MATS 2.0 and 2.1. I’ve been doing independent research for ~1 year and have about 8.5 more months of funding left.
I would, however, not downplay their talent density.
As an example: because of SBF, I specifically chose to start working on AI alignment rather than trying to build startups to fund EA. I would probably be making a lot more money had I taken a different route, and I likely wouldn’t have to deal with being in such a shaky, intense field where I’ve had to put parts of my life on hold.
My main concerns regarding vegan diets are the lack of creatine (and its potential effect on IQ) and the effects on children raised vegan (based on my minimal research, it seems that vegan kids tend to be shorter).
As someone who doesn’t eat meat at the moment, I’ve been debating eating meat again because 1) I don’t want it to negatively impact my intelligence/memory and thereby make me less productive on AI alignment, and 2) I’m concerned it could negatively impact the growth (in all respects) of my future children.
In general, I’ve been quite underwhelmed by the level of research (and written-up analysis) on the above concerns. It seems that a lack of creatine does lower IQ, and I’d like a better understanding of whether supplements actually resolve that issue (or is absorption a problem?). That said, I’ve read that meat eaters typically get only about 1g/day of creatine, and I supplement 5g/day (my guess is that beyond 3g you probably don’t get an additional IQ boost).
For children, I’m having a hard time imagining that the quality of research will be sufficient by the time I have kids, so I will likely default to having them eat a Mediterranean diet.
People have some strong opinions about things like polyamory, but I figured I’d still voice my concern as someone who has been in EA since 2015, but has mostly only interacted with the community online (aside from 2 months in the Bay and 2 in London):
I have nothing against polyamory, but polyamory within the community gives me bad vibes. And the mixing of work and fun seems to go much further than I think it should. It feels like there’s an aspect of “free love” and I am a little concerned about doing cuddle puddles with career colleagues. I feel like all these dynamics lead to weird behaviour people do not want to acknowledge.
I repeat, I am not against polyamory; I just don’t expect some of this bad behaviour would happen as much in a monogamous setting, since I expect there would be less sliding into sexual situations.
I’ve avoided saying this because I did not want to criticize people for being polyamorous, and I expected a lot of people would disagree with me without it leading to anything. But I do think the “free love” nature of polyamory with career colleagues opens the door to things we might not want.
Whatever it is (poly within the community might not be part of the issue at all!), I feel like there needs to be a conversation about work and play (that people seem to be avoiding).