The reason everything is so confusing is that this is a fast-growing field, with lots of people doing lots of things. As someone already pointed out, different orgs often have the same end goal (i.e. reducing X-risk from AI) but different ideas for how to achieve it.
But differing ideas are not even the main reason there are several orgs. Orgs are just legal entities for hiring people to do the work. One reason to have several orgs is that there are researchers in more than one country, and it's easier to organise this under different orgs. Another reason is that different orgs have different funding models, leadership styles, etc.
But also, most orgs don't grow very fast, probably for good reasons, though I don't know; this is just an empirical observation. This means there are lots of researchers wanting to help, and some of them get funding, and some of them decide to start new orgs.
So we end up with this mess of lots of orgs doing their own thing, and no one really knows everything that is going on. This has some costs, e.g. there are probably people doing almost the same research without knowing of each other. And as you say, it is confusing, especially when you are new. But I'd rather have this mess than a well-ordered, centrally controlled research ecosystem. Central coordination might seem good in theory, but in practice it's not worth it. A centralised system is slow and can't spot its own blind spots.
So what to do? How to navigate this mess?
There are some people who are creating information resources which might be helpful.
There's AI Safety Support's Lots of Links page, which is both too long and too abbreviated, because making a good list is hard.
AI Safety Support—Lots of Links
Alignment Ecosystem is working on some better resources, but they're all still under construction.
Other resources · Alignment Ecosystem Development (coda.io)
Currently I think the best way to get oriented is to talk to someone who is more acquainted with the AI Safety career landscape than you: either someone you know, or book a call with AI Safety Support.
AI Safety Support—Career Coaching
This is both very informative and very helpful, thank you for the advice! That does seem like a very reasonable way of thinking about the current situation, and I’m happy to see that there already exist resources that try to compile this information.
I was already referred to AISS in private, but your recommendation helped me take the step of actually applying for their coaching. Looking forward to seeing what comes of it, thanks again!