Do you think it would be possible to edit this post to make it less harmful/bad/wrong while still letting me get feedback on what’s wrong with my thinking? (I think something is wrong, and I posted asking for feedback/thoughts.) E.g., keeping feedback like the exchange below.
It’s easy to talk about the importance of x-risks without making poverty and health charities the direct comparison.
For me, though, it is the direct comparison that matters; I need to choose between those two.
I believe so.
I don’t understand; you believe which one?
I still presume you care about people who suffer from systemic issues in the world. A post like this would not make anyone in that position feel respected.
Does that also apply to any post about e.g. animal welfare and climate change?
As for damage: maybe I can say more clearly that I’m probably wrong and that I’m a random anonymous account? I’d be happy to edit this post!
This would apply to a post titled “Reducing carbon emissions by X may be equivalent to 500M in donations to GiveWell charities.”
On the question of deleting:
I don’t think this post will be particularly good at sparking productive conversations.
I think it would be better to write a different post that puts more effort into the proposed estimate and clearly asks a question in the title.
Relatedly, I think the large majority of this post’s potential downside comes from the title. Someone like Torres may have no interest in reading the actual post or taking any nuance into account when commenting on it. They likely wouldn’t read anything beyond the title; they’d just do their thing as a punditry troll, and the title gives exactly the kind of ammunition they want.
Edited the title. Do you think this is good enough?
Could you please point out your estimate? At the end of the day, we do need to decide what to work on.
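For concreteness, here is roughly the shape of calculation I have in mind; a minimal sketch in which every number (the doom probability, the risk reduction bought per billion dollars, GiveWell’s cost per life) is an illustrative placeholder, not anyone’s actual estimate:

```python
# Back-of-the-envelope comparison of expected lives saved per dollar.
# Every constant below is a placeholder assumption for illustration,
# not a real estimate.

POPULATION = 8_000_000_000           # people alive today (current lives only)
P_DOOM = 0.05                        # assumed probability of AI catastrophe
RISK_REDUCTION_PER_BILLION = 0.001   # assumed relative risk reduction bought
                                     # by $1B of AI safety funding
GIVEWELL_COST_PER_LIFE = 5_000       # assumed dollars to save one life

def lives_per_dollar_ai_safety() -> float:
    """Expected current lives saved per dollar given to AI safety."""
    lives_at_stake = P_DOOM * POPULATION
    return lives_at_stake * RISK_REDUCTION_PER_BILLION / 1_000_000_000

def lives_per_dollar_givewell() -> float:
    """Expected lives saved per dollar given to a GiveWell top charity."""
    return 1 / GIVEWELL_COST_PER_LIFE

ai = lives_per_dollar_ai_safety()
gw = lives_per_dollar_givewell()
print(f"AI safety: {ai:.1e} lives per dollar")
print(f"GiveWell:  {gw:.1e} lives per dollar")
print(f"Ratio (AI safety / GiveWell): {ai / gw:.1f}")
```

The whole disagreement then lives in which values get plugged in, especially the risk-reduction term, which is the one I have no idea how to pin down.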
I believe this is a big improvement.
I work on AI safety tools. FWIW, I believe this might be the most important thing for someone like me to do. I think AI doom is unlikely, but likely enough to be my personal top priority. When I give money away, though, I give to GiveWell charities, for reasons involving epistemic humility, moral uncertainty, and my belief in the importance of a balanced set of EA priorities.
I’m interested in why you don’t think AI doom is likely, given that many people in the AI safety space seem to suggest it’s reasonably likely (>10% in the next 10 or 20 years).
My guess is something like 5-10%.
Thank you for the pushback on the title!
I wonder what your thoughts are on delaying timelines instead of working on tooling, though I guess it might hinge on being more longtermist and on personal fit.
I very badly want to delay timelines, especially because doing so gives us more time to develop responses, governance strategies, and tools to handle rapid change. I think this is underemphasized. Lately, I’ve been thinking that the thing most likely to shift my focus is the appeal of work that makes it harder to build risky AI or that improves our ability to respond to or endure threats. This contrasts with my current work, which is mostly about making alignment easier.