I am confused about what your claims are, exactly (or what you’re trying to say).
One interpretation, which makes sense to me, is the following:
“Starting an AI safety lab is really hard and we should have a lot of appreciation for people who are doing it. We should also cut them some more slack when they make mistakes because it is really hard and some of the things they are trying to do have never been done before.” (This isn’t a direct quote)
I really like and appreciate this point. Speaking for myself, I too often fall into the trap of criticising someone for doing something imperfectly while failing to (1) appreciate that they tried at all and that it was potentially really hard, and (2) criticise all the people who didn’t do anything and chose the safe route. There is a good post about this: Invisible impact loss (and why we can be too error-averse).
In addition, I think it could be a valid point that we should be more understanding if, e.g., the research agendas of AIS labs are or were off in the past, since this is a problem that no one really knows how to solve and that is just very hard. I don’t really feel qualified to comment on that.
Your post could also be claiming something else:
“We should not criticise / should have a very high bar for criticizing AI safety labs and their founders (especially not if you yourself have not started an AIS lab). They are doing something that no one else has done before, and when they make mistakes, that is way understandable because they don’t have anyone to learn from.” (This isn’t a direct quote)
For instance, you seem to claim that the reference class of people who can advise people working on AI safety is some group whose size is the number of AI safety labs multiplied by 3. (This is what I understand your point to be if I look at the passage that starts with “Some new organizations are very similar to existing organizations. The founders of the new org can go look at all the previous closeby examples, learn from them, copy their playbook and avoid their mistakes.” and ends in “That is the roughly the number of people who are not the subject of this post.”)
If this is what you want to say, I think the message is wrong in important ways. In brief:
I agree that when people work on hard and important things, we should appreciate them, but I disagree that we should avoid criticising work like this. Criticism is important precisely when the work matters, and precisely when the problems are strange enough that people are probably making mistakes.
The strong version of “they’re doing something that no one else has done before … they don’t have anyone to learn from” relies on a very narrow reference class and ignores the broad set of ways to learn from people. You can learn from people who aren’t doing the exact thing that you’re doing.
1. A claim like: “We should not criticise / should have a very high bar for criticizing AI safety labs / their founders (especially not if you yourself have not started an AIS lab).”
As stated above, I think it is important to appreciate people for trying at all, and it’s useful to notice that work not getting done is a loss. That being said, criticism is still useful. People are making mistakes that others can notice. Some organizations are less promising than others, and it’s useful to make those distinctions so that we know which to work in or donate to.
In a healthy EA/LT/AIS community, I want people to criticise other organisations, even if what they are doing is very hard and has never been done before. E.g. you could make the case that what OP, GiveWell, and ACE are doing has never been done before (although it is slightly unclear to me what exactly “doing something that has never been done before” means), and I don’t think anyone would say that those organisations should be beyond criticism.
This ties nicely into the second point I think is wrong:
2. A claim like: “they’re doing something that no one else has done before … they don’t have anyone to learn from”
A quote from your post:
The founders of the new org can go look at all the previous closeby examples, learn from them, copy their playbook and avoid their mistakes. If your org is shaped like a Y-combinator company, you can spend dozens of hours absorbing high-quality, expert-crafted content which has been tested and tweaked and improved over hundreds of companies and more than a decade. You can do a 15 minute interview to go work next to a bunch of the best people who are also building your type of org, and learn by looking over their shoulder and troubleshooting together. You get to talk to a bunch of people who have actually succeeded building an org-like-yours. … How does this look for AI safety? … Apply these updates to our starting reference class success rate of ONE. IN. TWENTY. Now count the AI safety labs. Multiply by ~3.
A point I think you’re making:
“They are doing something that no one else has done before [build a successful AI safety lab], and therefore, if they make mistakes, that is way understandable because they don’t have anyone to learn from.”
It is true that the closer your organisation is to an existing org or cluster of orgs, the more you will be able to copy. But working on something new that no one has worked on before (or that differs in other important ways) doesn’t mean you cannot learn from other organisations, their successes, and their failures. For things like maintaining a healthy work culture, retaining talent, and setting up good governance structures, there are examples in the world that even AIS labs can learn from.
I don’t understand the research side of things well enough to comment on whether/how much AIS labs could learn from e.g. academic research or for-profit research labs working on problems different from AIS.
Hey, sorry I’m in a rush and couldn’t read your whole comment. I wanted to jump in anyway to clarify that you’re totally right to be confused about what my claims are. I wasn’t trying to make claims, really, I was channelling an emotion I had late at night into a post that I felt compelled to hit submit on. Hence: “loveletter to the demeaning occupation of desperately trying”
I really value the norms of discourse here: the carefulness, modesty, and earnestness. From my skim of your comment, I’m guessing that after a closer read I’d think it was a great example of that, which I appreciate.
I don’t expect I’ll manage to rewrite this post in a way that makes everything I believe clear (and I’m not sure that would be very valuable for others if I did).
FWIW, I mostly read the core message of this post as: “you should start an AI safety lab. What are you waiting for? ;)”.
The post felt to me like debunking reasons people might feel they aren’t qualified to start an AI safety lab.
I don’t think this was the primary intention though. I feel like I came away with that impression because of the Twitter contexts in which I saw this post referenced.