I'm not trying to get dignity points. I'm just trying to have a positive impact. At this point, if AI is hard to align, we all die (or worse!). I spent years trying to avoid contributing to the problem and helping where I could. But at this point it's better to just hope alignment isn't that hard (writing off the hard-alignment timelines as a lost cause) and try to steer the trajectory positively.
i don’t think that’s how dignity points work.
for me, p(alignment hard) is still big enough that when weighing
    p(alignment hard) × (value of my work if alignment hard)
vs.
    p(alignment easy) × (value of my work if alignment easy)
it’s still better to keep working on hard alignment (see my plan). that’s where the dignity points are.
“shut up and multiply”, one might say.
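a minimal sketch of that multiplication in Python, where every number is a purely illustrative placeholder rather than anyone’s actual estimate:

    # expected-value weighing from above; all numbers are hypothetical placeholders
    p_hard = 0.7               # assumed P(alignment is hard)
    p_easy = 1.0 - p_hard      # assumed P(alignment is easy)
    value_if_hard = 100.0      # assumed value of my work if alignment is hard
    value_if_easy = 1.0        # assumed value of my work if alignment is easy

    ev_hard_work = p_hard * value_if_hard   # 70.0
    ev_easy_work = p_easy * value_if_easy   # 0.3

    # under these assumed numbers, working on hard alignment dominates
    print(ev_hard_work > ev_easy_work)      # True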
“dignity points” means “having a positive impact”.
if alignment is hard, we need my plan. and it’s still very likely that alignment is hard.
and “alignment is hard” is a logical fact, not an indexical location: it’s either hard everywhere or easy everywhere, so there’s no separate set of “those timelines” to save or write off.