I’m not sure I understood the first part, or what f(A,B) is. In the example you gave, B is only relevant in terms of how much it affects A (“damage the reputability of the AI risk ideas in the eye of anyone who hasn’t yet seriously engaged with them and is deciding whether or not to”). So, in a way, you are still trying to maximize |A| (or more likely a subset of it, |A’|: the people who can also make progress on the problem). But from the “among other things” I guess you could also be thinking of ways in which B could actively oppose A, and maybe that’s why you want to shrink B too. The thing is, I have trouble picturing most of B opposing A, or what that subset (B’) could actually do (as I said, beyond reducing |A|). That is my main point: B’ is a really small subset of B, and I don’t fear them.
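Just to make my reading concrete, here is my guess at the kind of objective you might have in mind (the weight λ is my own made-up notation, not something from your comment):

f(A, B) ≈ |A’| − λ · (expected number of potential A-members lost because of B’s influence), for some λ > 0

i.e. value gained from people seriously engaging, minus a penalty for dismissers making the ideas look less reputable to everyone who hasn’t engaged yet. If that’s roughly it, then B still only matters through its effect on A.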
Now, if your point is that to maximize |A| you have to keep B in mind, and so it would be better for the alignment problem to have more ‘legitimacy’ before making it go viral, then you are right. So, is there progress on that? Is the community-building plan to convert authorities in the field to A before reaching the mainstream, then?
Also, are people who try to disprove the alignment problem in B? If so, I’m not sure our objective should be to maximize |A’|. I’m not sure we can reach superintelligence with AI, so it might be better to think about maximizing the number of people trying to either solve OR dissolve the alignment problem. If we assume that most people wouldn’t feel strongly about one side or the other (debatable), then I don’t think bringing the discussion more into the mainstream is that big of a deal. And if the AI risk arguments include the point that, no matter how uncertain researchers are about the problem, given what’s at stake we should lower the chances, then I see B and B’ as even smaller. But maybe I’m too much of an optimist / marketplace-of-ideas believer / memer.
Lastly, the shorter the timelines, the smaller the maximum size of A. Are people with short timelines the ones trying to reach the most people in the short term?
Interesting.