I am confused about what your claims are, exactly (or what you're trying to say).
One interpretation, which makes sense to me, is the following:
"Starting an AI safety lab is really hard and we should have a lot of appreciation for people who are doing it. We should also cut them some more slack when they make mistakes because it is really hard and some of the things they are trying to do have never been done before." (This isn't a direct quote.)
I really like and appreciate this point. Speaking for myself, I too often fall into the trap of criticising someone for doing something imperfectly and not (1) appreciating that they tried at all and that it was potentially really hard, and (2) criticising all the people who didn't do anything and chose the safe route. There is a good post about this: Invisible impact loss (and why we can be too error-averse).
In addition, I think it could be a valid point to say that we should be more understanding if e.g. the research agendas of AIS labs were off in the past, as this is a problem that no one really knows how to solve and that is just very hard. I don't really feel qualified to comment on that.
Your post could also be claiming something else:
"We should not criticise / should have a very high bar for criticizing AI safety labs and their founders (especially not if you yourself have not started an AIS lab). They are doing something that no one else has done before, and when they make mistakes, that is way understandable because they don't have anyone to learn from." (This isn't a direct quote.)
For instance, you seem to claim that the reference class of people who can advise people working on AI safety is some group whose size is the number of AI safety labs multiplied by ~3. (This is what I understand your point to be if I look at the passage that starts with "Some new organizations are very similar to existing organizations. The founders of the new org can go look at all the previous closeby examples, learn from them, copy their playbook and avoid their mistakes." and ends in "That is the roughly the number of people who are not the subject of this post.")
If this is what you want to say, I think the message is wrong in important ways. In brief:
I agree that when people work on hard and important things, we should appreciate them, but I disagree that we should avoid criticism of work like this. Criticism is important precisely when the work matters, and when the problems are strange enough that people are probably making mistakes.
The strong version of "they're doing something that no one else has done before … they don't have anyone to learn from" seems to take a very narrow reference class for a broad set of ways to learn from people. You can learn from people who aren't doing the exact thing that you're doing.
1. A claim like: "We should not criticise / should have a very high bar for criticizing AI safety labs / their founders (especially not if you yourself have not started an AIS lab)."
As stated above, I think it is important to appreciate people for trying at all, and it's useful to notice that work not getting done is a loss. That being said, criticism is still useful. People are making mistakes that others can notice. Some organizations are less promising than others, and it's useful to make those distinctions so that we know which to work in or donate to.
In a healthy EA/LT/AIS community, I want people to criticise other organisations, even if what they are doing is very hard and has never been done before. E.g. you could make the case that what OP, GiveWell, and ACE are doing has never been done before (although it is slightly unclear to me what exactly "doing something that has never been done before" means), and I don't think anyone would say that those organisations should be beyond criticism.
This ties nicely into the second point I think is wrong:
2. A claim like: "they're doing something that no one else has done before … they don't have anyone to learn from"
A quote from your post:
The founders of the new org can go look at all the previous closeby examples, learn from them, copy their playbook and avoid their mistakes. If your org is shaped like a Y-combinator company, you can spend dozens of hours absorbing high-quality, expert-crafted content which has been tested and tweaked and improved over hundreds of companies and more than a decade. You can do a 15 minute interview to go work next to a bunch of the best people who are also building your type of org, and learn by looking over their shoulder and troubleshooting together. You get to talk to a bunch of people who have actually succeeded building an org-like-yours. … How does this look for AI safety? … Apply these updates to our starting reference class success rate of ONE. IN. TWENTY. Now count the AI safety labs. Multiply by ~3.
A point I think you're making:
"They are doing something that no one else has done before [build a successful AI safety lab], and therefore, if they make mistakes, that is way understandable because they don't have anyone to learn from."
It is true that the closer your organisation is to an already existing org or cluster of orgs, the more you will be able to copy. But just because you're working on something new that no one has worked on (or your work is different in other important ways), it doesn't mean that you cannot learn from other organisations' successes and failures. For things like building a healthy work culture, talent retention, and good governance structures, there are examples in the world that even AIS labs can learn from.
I don't understand the research side of things well enough to comment on whether/how much AIS labs could learn from e.g. academic research or for-profit research labs working on problems different from AIS.
Hey, sorry I'm in a rush and couldn't read your whole comment. I wanted to jump in anyway to clarify that you're totally right to be confused about what my claims are. I wasn't trying to make claims, really; I was channelling an emotion I had late at night into a post that I felt compelled to hit submit on. Hence: "loveletter to the demeaning occupation of desperately trying".
I really value the norms of discourse here: their carefulness, modesty, and earnestness. From a skim of your comment, I'm guessing that after a closer read I'd think it was a great example of that, which I appreciate.
I don't expect I'll manage to rewrite this post in a way which makes everything I believe clear (and I'm not sure that would be very valuable for others if I did).
FWIW, I mostly read the core message of this post as: "you should start an AI safety lab. What are you waiting for? ;)".
The post felt to me like debunking reasons people might feel they aren't qualified to start an AI safety lab.
I don't think this was the primary intention, though. I feel like I came away with that impression because of the Twitter contexts in which I saw this post referenced.