What do you consider the purpose or theory of change of this project? I assumed it was to put pressure on the AI labs to improve along these criteria, which presumably requires some level of public attention. Do you see it more as a way for AI safety people to keep tabs on the status of these labs?
The original goal involved getting attention. Weeks ago, I realized I was not on track to get attention. I launched without a sharp object-level goal but largely to get feedback to figure out whether to continue working on this project and what goals it should have.
I think getting attention would increase the impact of this project a lot and is probably pretty doable if you are able to find an institutional home for it. I agree with Yanni’s sentiment that it is probably better to improve on this project than to wait for another one that is more optimized for public attention to come along (though I am curious why you think the latter is better).
Not necessarily. But:
1. There are opportunity costs and other tradeoffs involved in making the project better along public-attention dimensions.
2. The current version is bad at getting public attention; even a 1000x improvement in the attention it gets would still leave it with little. It is likely better to wait for a different project that is better positioned and more focused on getting public attention. And as I said, I expect such a project to appear soon.
“And as I said, I expect such a project to appear soon.”
I don’t know whether to read this as “Zach has some inside information that gives him high confidence it will exist” or “Zach is doing wishful thinking” or something else!