My name is pronounced somewhat like ‘yuh-roon’.
I think you’re right (I don’t mind the serif titles within the blog posts, nor do I mind the sans serif use on Substack and Medium). I am likely just too attached to the previous look; the most important opinion is that of new users :) Thank you for the work you’ve done!
Yeah I like most of the UI changes but not a big fan of the sans serif font.
Indeed weird that the usage isn’t consistent either. (ETA: I don’t agree with this sentence anymore.) If people are divided on this, perhaps add a setting to bring the old font back so people can choose?
Hi Joe, I read your posts twice and liked many of the points raised, but I have a difficult time figuring out your exact positions on these topics. Would it be possible to just write down your views in a few lines? You can leave out the arguments.
I would change point 2 under “against a boycott” from not just donations to having an impact in general. Just as an airplane flight could be offset by, say, giving a talk on veganism.
MacAskill declined to answer a list of detailed questions from TIME for this story. “An independent investigation has been commissioned to look into these issues; I don’t want to front-run or undermine that process by discussing my own recollections publicly,” he wrote in an email. “I look forward to the results of the investigation and hope to be able to respond more fully after then.” Citing the same investigation, Beckstead also declined to answer detailed questions.
How long do investigations like these typically take?
My patience is running out. If the response of EA leaders after the investigation is lacking, I’m not sure I would still want to be part of this community. I’m also not sure what to do or feel in the meantime while we wait for their responses.
Is it a pure coincidence that 3 prominent LLMs are announced on the same day?
I personally like Will’s writing and I think he’s a good speaker. But I do find it weird that millions were spent on promoting WWOTF. I find that weird on its own (how can you be so confident it’s impactful?), but even more so when comparing WWOTF to The Precipice, which in my opinion (and, from my impression, in many others’ as well) is a much better and more impactful book. I don’t know if Ben shares these thoughts or has any others.
Edit to add: I vaguely remember seeing a source other than Torres. But as long as I can’t find it you can disregard this comment. I do think promoting the book was/is a lot more likely to be net positive than net negative, I’m still even promoting the book myself. It’s just the amount of money I’m concerned about compared to other causes. But as long as I don’t have a figure, I can’t comment.
Can’t find the source for this, so correct me if I’m wrong!
I feel quite worried that the alignment plan of Anthropic currently basically boils down to “we are the good guys, and by doing a lot of capabilities research we will have a seat at the table when AI gets really dangerous, and then we will just be better/more-careful/more-reasonable than the existing people, and that will somehow make the difference between AI going well and going badly”. That plan isn’t inherently doomed, but man does it rely on trusting Anthropic’s leadership, and I genuinely only have marginally better ability to distinguish the moral character of Anthropic’s leadership from the moral character of FTX’s leadership, and in the absence of that trust the only thing we are doing with Anthropic is adding another player to an AI arms race.
More broadly, I think AI Alignment ideas/the EA community/the rationality community played a pretty substantial role in the founding of the three leading AGI labs (Deepmind, OpenAI, Anthropic), and man, I sure would feel better about a world where none of these would exist, though I also feel quite uncertain here. But it does sure feel like we had a quite large counterfactual effect on AI timelines.
Thank you so much for voicing these concerns. I share them too and they need to be said more loudly. I’m extremely worried the EA/LessWrong community has had a net negative impact on the world simply because of the increased AI risk. I haven’t heard any good arguments against this.
If we exclude AI-related work, I do think EA has been net positive.
Thanks for sharing all these details!
To me this seems like either a scam or an example of the unilateralist’s curse. I would urge people not to invest in this. For something like this to have any potential, it has to be started by a team of people 1) with lots of relevant experience / a good public track record and 2) who have been actively involved in the EA community for at least a while (a year or more). Even then I would be skeptical, as this seems way too complex for a broad audience, and as a strong prior I would not touch anything blockchain/crypto/NFT related with a 10-foot pole.
One reason I think the subforums didn’t work well is that there isn’t a big difference between having that feature and simply customizing your front page to see more of the topics you like.
This test is a great idea and I hope something like it will get implemented. I’m not a big fan of the tab idea, since community posts would then still be very prominent/accessible. But I do think it’s better than what we have today. And in the case of the section, it would still be great if we could remove that section entirely. Maybe neither a tab nor a section is necessary; just show that community is hidden under ‘customize feed’. But that might make community posts too hidden.
Added a transcript to this post! Will do so for my other videos as well.
Loved this post, thanks for writing it! I like the reframing to inside/outside games. I guess my main worry is whether outside games are effective. I can imagine them being effective once veganism becomes more popular/mainstream, but at the moment I’m worried they are more aversive than helpful. I remember Tobias Leenaert, in his book “How to Create a Vegan World”, talking about the need to adopt different strategies at different stages of a movement.